Accurate automatic estimation of total intracranial volume: a nuisance variable with less nuisance.
Malone, Ian B; Leung, Kelvin K; Clegg, Shona; Barnes, Josephine; Whitwell, Jennifer L; Ashburner, John; Fox, Nick C; Ridgway, Gerard R
2015-01-01
Total intracranial volume (TIV/ICV) is an important covariate for volumetric analyses of the brain and brain regions, especially in the study of neurodegenerative diseases, where it can provide a proxy of maximum pre-morbid brain volume. The gold-standard method is manual delineation of brain scans, but this requires careful work by trained operators. We evaluated Statistical Parametric Mapping 12 (SPM12) automated segmentation for TIV measurement in place of manual segmentation and also compared it with SPM8 and FreeSurfer 5.3.0. For T1-weighted MRI acquired from 288 participants in a multi-centre clinical trial in Alzheimer's disease we find a high correlation between SPM12 TIV and manual TIV (R²=0.940, 95% Confidence Interval (0.924, 0.953)), with a small mean difference (SPM12 40.4±35.4 ml lower than manual, amounting to 2.8% of the overall mean TIV in the study). The correlation with manual measurements (the key aspect when using TIV as a covariate) for SPM12 was significantly higher (p<0.001) than for either SPM8 (R²=0.577, CI (0.500, 0.644)) or FreeSurfer (R²=0.801, CI (0.744, 0.843)). These results suggest that SPM12 TIV estimates are an acceptable substitute for labour-intensive manual estimates even in the challenging context of multiple centres and the presence of neurodegenerative pathology. We also briefly discuss some aspects of the statistical modelling approaches to adjust for TIV. PMID:25255942
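The TIV adjustment the abstract alludes to is commonly done either by including TIV as a regression covariate or by residualising regional volumes against it. A minimal sketch of the residual (ANCOVA-style) approach — not the authors' code; the function name and the re-centring at the group mean are illustrative assumptions:

```python
import numpy as np

def tiv_adjusted_volumes(regional_vol, tiv):
    """Remove the TIV-related component of a regional volume by
    regressing on TIV and subtracting the fitted slope term,
    re-centred so the group mean is preserved."""
    regional_vol = np.asarray(regional_vol, dtype=float)
    tiv = np.asarray(tiv, dtype=float)
    X = np.column_stack([np.ones_like(tiv), tiv])  # intercept + TIV
    beta, *_ = np.linalg.lstsq(X, regional_vol, rcond=None)
    slope = beta[1]
    # Subtract only the TIV-predicted variation around the mean TIV
    return regional_vol - slope * (tiv - tiv.mean())
```

By construction the adjusted volumes are uncorrelated with TIV, which is exactly the property wanted from a nuisance covariate.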
Palmstrom, Christin R.
2015-01-01
There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimates of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. External skull measurements could not accurately predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858
Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne
2016-08-01
Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS, between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow were collected. Results indicate that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variations in milk composition on standardised 24 h estimates for milk volume and components, before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency for the herd (NZ Herd Test Standard 8100:2015). PMID:27600967
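The core of any standardised 24 h estimate is scaling the yield recorded over an arbitrary window to a 24 h basis. This is a deliberate simplification of the actual herd-test standard (which also handles composition and diurnal variation); the function and its (hours, litres) input format are assumptions for illustration:

```python
def standardised_24h_yield(milkings):
    """Scale one cow's recorded milk to a 24 h estimate.

    `milkings` is a list of (hours_since_window_start, litres) tuples;
    the recording window length is taken as the last milking time.
    """
    times = [t for t, _ in milkings]
    total = sum(y for _, y in milkings)
    window_h = max(times)
    if window_h <= 0:
        raise ValueError("need a positive recording window")
    return total * 24.0 / window_h
```

For example, four 8 L milkings over a 48 h window standardise to 16 L per 24 h.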
Organ volume estimation using SPECT
Zaidi, H.
1996-06-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual window method was used for scatter subtraction. The author used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of (1) fixed thresholding, (2) automatic thresholding, (3) attenuation, (4) scatter, and (5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are performed. The relative error is within 7% for the GLH method combined with attenuation and scatter corrections.
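The fixed-threshold voxel-summing step described above can be sketched directly: count the voxels whose intensity exceeds a fixed fraction of the image maximum and multiply by the voxel volume. The function name and threshold convention are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def thresholded_volume(image, threshold_frac, voxel_volume_ml):
    """Estimate organ volume by counting voxels at or above a fixed
    fraction of the image maximum (simple fixed-threshold contour)."""
    image = np.asarray(image, dtype=float)
    mask = image >= threshold_frac * image.max()
    return mask.sum() * voxel_volume_ml
```

The GLH method in the paper instead derives the threshold from the gray-level histogram, but the voxel-summing step is the same.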
Age estimation from canine volumes.
De Angelis, Danilo; Gaudio, Daniel; Guercini, Nicola; Cipriani, Filippo; Gibelli, Daniele; Caputi, Sergio; Cattaneo, Cristina
2015-08-01
Techniques for the estimation of biological age are constantly evolving and find daily application in the forensic radiology field, both in estimating the chronological age of a corpse in order to reconstruct the biological profile and in assessing living subjects, for example immigrants without identity papers from a civil registry. The deposition of secondary dentine in teeth, with the consequent reduction in pulp chamber size, is a well-known ageing phenomenon, and it has been applied in the forensic context through the development of age estimation procedures such as the Kvaal-Solheim and Cameriere methods. The present study considers canine pulp chamber volume relative to whole-tooth volume, with the aim of proposing new regression formulae for age estimation based on 91 cone-beam computed tomography scans and a freeware open-source software package, in order to permit affordable, reproducible volume calculation.
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
31 CFR 205.24 - How are accurate estimates maintained?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...
Micromagnetometer calibration for accurate orientation estimation.
Zhang, Zhi-Qiang; Yang, Guang-Zhong
2015-02-01
Micromagnetometers, together with inertial sensors, are widely used for attitude estimation in a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors as a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that can best fit the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and less than 3° root-mean-square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
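The first step above — fitting a quadric to the raw readings to recover the combined bias — reduces, in the spherical special case, to a linear least-squares sphere fit. This is a simplified sketch of that special case (the paper fits a full ellipsoid and a transformation matrix); the function name is mine:

```python
import numpy as np

def fit_combined_bias(readings):
    """Estimate the combined magnetometer bias as the centre of the
    best-fit sphere through the readings.

    Uses the linearisation ||m||^2 = 2 b.m + (r^2 - ||b||^2), which
    turns the sphere fit into an ordinary least-squares problem.
    """
    m = np.asarray(readings, dtype=float)
    A = np.column_stack([2.0 * m, np.ones(len(m))])
    y = (m ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    bias = sol[:3]
    radius = np.sqrt(sol[3] + bias @ bias)
    return bias, radius
```

Scale and nonorthogonality errors deform the sphere into an ellipsoid, which is why the paper's full method fits a transformation matrix as well.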
Modeling of landslide volume estimation
NASA Astrophysics Data System (ADS)
Amirahmadi, Abolghasem; Pourhashemi, Sima; Karami, Mokhtar; Akbari, Elahe
2016-06-01
Mass displacement of material, such as landslides, is a problematic phenomenon in the Baqi Basin on the southern slopes of Binaloud, Iran, since it destroys agricultural lands and pastures and also increases deposits at the basin outlet. It is therefore necessary to identify areas that are sensitive to landslides and to estimate the volumes involved. In the present study, in order to estimate landslide volume, information about the depth and area of slides was collected; then, considering the regression assumptions, a power regression model was fitted and compared with 17 models suggested for various regions in different countries. The results showed that mass values estimated by the proposed model were consistent with the observed data (P value = 0.000, R = 0.692) and with some of the existing relations, which indicates the efficiency of the proposed model. Relations derived from small-area landslides were also more suitable for use in the Baqi Basin than those derived from large-area landslides. According to the proposed relation, the average depth of landslides in the Baqi Basin was estimated at 3.314 m, which was close to the observed value of 4.609 m.
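Landslide area-volume relations of this kind typically take the power-law form V = a·A^b, fitted by ordinary least squares in log-log space. A minimal sketch under that assumption (the specific coefficients for the Baqi Basin are not given in the abstract, so the values below are purely illustrative):

```python
import numpy as np

def fit_power_law(area, volume):
    """Fit V = a * A**b by ordinary least squares in log-log space,
    the usual form for landslide area-volume scaling relations."""
    logA, logV = np.log(area), np.log(volume)
    b, loga = np.polyfit(logA, logV, 1)  # slope = exponent b
    return np.exp(loga), b
```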
New simple method for fast and accurate measurement of volumes
NASA Astrophysics Data System (ADS)
Frattolillo, Antonio
2006-04-01
A new simple method is presented, which allows us to measure in just a few minutes but with reasonable accuracy (less than 1%) the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, thus contracting the variable volume until the pressure exactly retrieves its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The method proposed actually does not require the test gas to be rigorously held at a constant temperature, thus resulting in a huge simplification as compared to complex arrangements commonly used in metrology (gas expansion method), which can grant extremely accurate measurements but require rather expensive equipment and time-consuming procedures, being therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process.
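In the idealised isothermal case the cycle above has a clean closed form: since PV is conserved, restoring the initial pressure forces the contraction of the variable volume to equal the unknown volume exactly. A sketch of that identity, assuming an ideal gas at constant temperature (the actual method tolerates temperature drift, which this simplification ignores):

```python
def unknown_volume_from_contraction(p0, v_var, v_unknown):
    """Simulate one isothermal cycle: expand gas from the variable
    volume into the evacuated unknown volume, then contract the
    variable volume until the pressure returns to p0.

    Returns (pressure after expansion, contraction required).
    """
    n_rt = p0 * v_var                    # ideal gas: P V = n R T, constant
    p_drop = n_rt / (v_var + v_unknown)  # pressure after expansion
    # Restore p0: p0 * (v_var - delta + v_unknown) = n_rt  =>  delta = v_unknown
    delta = v_var + v_unknown - n_rt / p0
    return p_drop, delta
```

The returned contraction `delta` equals `v_unknown` regardless of `p0` and `v_var`, which is why dead spaces drop out of the measurement.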
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454
Estimation of feline renal volume using computed tomography and ultrasound.
Tyson, Reid; Logsdon, Stacy A; Werre, Stephen R; Daniel, Gregory B
2013-01-01
Renal volume estimation is an important parameter for clinical evaluation of kidneys and research applications. A time efficient, repeatable, and accurate method for volume estimation is required. The purpose of this study was to describe the accuracy of ultrasound and computed tomography (CT) for estimating feline renal volume. Standardized ultrasound and CT scans were acquired for kidneys of 12 cadaver cats, in situ. Ultrasound and CT multiplanar reconstructions were used to record renal length measurements that were then used to calculate volume using the prolate ellipsoid formula for volume estimation. In addition, CT studies were reconstructed at 1 mm, 5 mm, and 1 cm, and transferred to a workstation where the renal volume was calculated using the voxel count method (hand drawn regions of interest). The reference standard kidney volume was then determined ex vivo using water displacement with the Archimedes' principle. Ultrasound measurement of renal length accounted for approximately 87% of the variability in renal volume for the study population. The prolate ellipsoid formula exhibited proportional bias and underestimated renal volume by a median of 18.9%. Computed tomography volume estimates using the voxel count method with hand-traced regions of interest provided the most accurate results, with increasing accuracy for smaller voxel sizes in grossly normal kidneys (-10.1 to 0.6%). Findings from this study supported the use of CT and the voxel count method for estimating feline renal volume in future clinical and research studies. PMID:23278991
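The two estimators compared in this study are simple enough to state directly: the prolate ellipsoid formula from three orthogonal length measurements, and the voxel count method from a traced region of interest. A sketch using the standard formulas (the helper names are mine):

```python
import math

def prolate_ellipsoid_volume(length, width, height):
    """Prolate ellipsoid approximation used for kidney volume:
    V = (pi/6) * L * W * H, from the three orthogonal axis lengths."""
    return math.pi / 6.0 * length * width * height

def voxel_count_volume(n_voxels, voxel_volume):
    """Voxel count method: volume of a hand-traced region is the
    number of included voxels times the volume of one voxel."""
    return n_voxels * voxel_volume
```

The study found the voxel count method far more accurate, consistent with the ellipsoid formula's proportional bias reported above.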
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Practical do-it-yourself device for accurate volume measurement of breast.
Tezel, E; Numanoğlu, A
2000-03-01
A simple and accurate method of measuring differences in breast volume based on Archimedes' principle is described. In this method, a plastic container is placed on the breast of the patient who is lying in supine position. While the breast occupies part of the container, the remaining part is filled with water and the volume is measured. This method allows the measurement of the volume differences of asymmetric breasts and also helps the surgeon to estimate the size of the prosthesis to be used in augmentation mammaplasty. PMID:10724264
Partial volume estimation using continuous representations
NASA Astrophysics Data System (ADS)
Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid
2001-07-01
This paper presents a new method for partial volume estimation using the standard eigenimage method and B-splines. The proposed method is applied to multi-parameter volumetric images such as MRI. The proposed approach uses the B-spline bases (kernels) to interpolate a continuous 2D surface or 3D density function for a sampled image dataset. It uses the Fourier domain to calculate the interpolation coefficients for each data point. Then, the above interpolation is incorporated into the standard eigenimage method. This incorporation provides a particular mask depending on the B-spline basis used. To estimate the partial volumes, this mask is convolved with the interpolation coefficients and then the eigenimage transformation is applied on the convolution result. To evaluate the method, images scanned from a 3D simulation model are used. The simulation provides images similar to CSF, white matter, and gray matter of the human brain in T1-, T2-, and PD-weighted MRI. The performance of the new method is also compared to that of the polynomial estimators. The results show that the new estimators have standard deviations less than that of the eigenimage method (up to 25%) and larger than those of the polynomial estimators (up to 45%). The new estimators have superior capabilities compared to those of the polynomial ones in that they provide an arbitrary degree of continuity at the boundaries of pixels/voxels. As a result, employing the new method, a continuous, smooth, and very accurate contour/surface of the desired object can be generated. The new B-spline estimators are faster than the polynomial estimators but they are slower than the standard eigenimage method.
Accurate Parameter Estimation for Unbalanced Three-Phase System
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the α β-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
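The first step of the estimator — converting the three-phase waveforms into an orthogonal pair — is the Clarke (αβ) transform. A sketch of the amplitude-invariant form (the NLS/Newton-Raphson stage is not shown; the function name is mine):

```python
import math

def alpha_beta_transform(va, vb, vc):
    """Amplitude-invariant Clarke transform: map three-phase samples
    (va, vb, vc) to the orthogonal alpha-beta pair."""
    alpha = (2.0 * va - vb - vc) / 3.0
    beta = (vb - vc) / math.sqrt(3.0)
    return alpha, beta
```

For a balanced system with phases cos(θ), cos(θ − 2π/3), cos(θ + 2π/3), the transform yields exactly (cos θ, sin θ), so the instantaneous phase is recoverable as atan2(beta, alpha) — the quantity the NLS estimator then fits for frequency, phase, and voltage.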
Estimating the volume of the First Dorsal Interossoeus using ultrasound.
Infantolino, Benjamin W; Challis, John H
2011-04-01
Accurate in vivo estimation of muscle volume is important as it indicates the amount of power a muscle can produce. By tracking muscle volume changes in vivo, a muscle's response to disease or rehabilitation training can be quantified. The purpose of this study was to validate the use of imaging ultrasound to estimate the volume of a small muscle, specifically the First Dorsal Interosseous (FDI) muscle. The perimeter of the FDI was imaged using ultrasound in 22 cadaver hands. For each FDI, serial cross-sectional areas were determined by manual digitization, volumes were then estimated using the Cavalieri principle. The muscles were then dissected from the cadavers, and muscle volume was determined via the water displacement method. The water displacement measures of muscle volumes were used as the criterion, and compared with those estimated via ultrasound. A Bland-Altman plot illustrated that all measures fell within the 95% confidence interval, with no statistical evidence of changes in measurement accuracy with size of specimen, or of a constant deviation in the accuracy of estimated volumes. For superficial muscles these results indicate that ultrasound imaging is an accurate method for determining muscle volumes in vivo even for a relatively small muscle (volume ∼4 mL). PMID:21112233
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Accurate photometric redshift probability density estimation - method comparison and application
NASA Astrophysics Data System (ADS)
Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-10-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
Accurate estimators of correlation functions in Fourier space
NASA Astrophysics Data System (ADS)
Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.
2016-08-01
Efficient estimators of Fourier-space statistics for large number of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
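The Cloud-In-Cell assignment mentioned above deposits each particle's mass onto its nearest grid points with weights linear in distance. A 1D sketch for intuition (the paper works in 3D and with higher-order kernels plus interlacing, neither shown here):

```python
import numpy as np

def cic_assign(positions, n_grid, box_size):
    """Cloud-In-Cell mass assignment in 1D: each particle deposits
    unit mass onto its two nearest grid points, with weights linear
    in the distance to each point. Periodic boundaries."""
    x = np.asarray(positions, dtype=float) / box_size * n_grid
    i = np.floor(x).astype(int)
    frac = x - i
    grid = np.zeros(n_grid)
    # np.add.at accumulates correctly even when indices repeat
    np.add.at(grid, i % n_grid, 1.0 - frac)
    np.add.at(grid, (i + 1) % n_grid, frac)
    return grid
```

Higher-order kernels (triangular-shaped cloud, piecewise-cubic) spread each particle over more points, suppressing the aliased images at the cost of extra smoothing that must be deconvolved.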
Accurate Determination of the Volume of an Irregular Helium Balloon
NASA Astrophysics Data System (ADS)
Blumenthal, Jack; Bradvica, Rafaela; Karl, Katherine
2013-02-01
In a recent paper, Zable described an experiment with a near-spherical balloon filled with impure helium. Measuring the temperature and the pressure inside and outside the balloon, the lift of the balloon, and the mass of the balloon materials, he described how to use the ideal gas laws and Archimedes' principle to compute the average molecular mass and density of the impure helium. This experiment required that the volume of the near-spherical balloon be determined by some approach, such as measuring the girth. The accuracy of the experiment was largely determined by the balloon volume, which had a reported uncertainty of about 4%.
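The ideal-gas/Archimedes bookkeeping the experiment relies on can be written down compactly. A sketch under stated assumptions: the variable names and the force balance below are my reading of the described procedure, not code from the paper.

```python
R = 8.314   # J/(mol*K), ideal gas constant
G = 9.81    # m/s^2

def helium_molar_mass(lift_N, volume_m3, m_balloon_kg,
                      p_in, t_in, p_out, t_out, m_air=0.02897):
    """Recover the average molar mass of the (impure) helium from the
    measured net lift, using Archimedes' principle for the buoyancy
    and the ideal gas law for the amount of gas inside."""
    rho_air = p_out * m_air / (R * t_out)                 # outside air density
    # net lift = buoyancy - weight of gas - weight of balloon materials
    m_gas = (rho_air * volume_m3 * G - m_balloon_kg * G - lift_N) / G
    n_mol = p_in * volume_m3 / (R * t_in)                 # moles of gas inside
    return m_gas / n_mol
```

Because the volume enters both the buoyancy and the mole count, its 4% uncertainty propagates directly into the recovered molar mass, which is why the paper's accurate volume determination matters.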
Accurate heart rate estimation from camera recording via MUSIC algorithm.
Fouladi, Seyyed Hamed; Balasingham, Ilangko; Ramstad, Tor Audun; Kansanen, Kimmo
2015-01-01
In this paper, we propose an algorithm to extract the heart rate frequency from video camera recordings using the Multiple SIgnal Classification (MUSIC) algorithm. This improves the accuracy of the estimated heart rate frequency in cases where the performance is limited by the number of samples and the frame rate. Monitoring vital signs remotely can be exploited for both non-contact physiological and psychological diagnosis. The color variation recorded by ordinary cameras is used for heart rate monitoring. The orthogonality between signal space and noise space is used to find a more accurate heart rate frequency in comparison with traditional methods. It is shown via experimental results that the limitations of previous methods can be overcome by using subspace methods. PMID:26738015
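The signal/noise-subspace orthogonality that MUSIC exploits can be shown in a minimal sketch. Assumptions are mine: a single real sinusoid (signal-subspace dimension 2), covariance built from overlapping snapshots, and an illustrative heart-rate search band; the paper's pipeline for extracting the color signal from video is not reproduced here.

```python
import numpy as np

def music_peak_freq(x, fs, order=2, m=20, freqs=None):
    """Estimate the dominant frequency of x via the MUSIC pseudospectrum.
    order: assumed signal-subspace dimension (2 for one real sinusoid)."""
    if freqs is None:
        freqs = np.linspace(0.5, 4.0, 400)       # plausible heart-rate band, Hz
    # sample covariance from overlapping length-m snapshots
    snaps = np.stack([x[i:i + m] for i in range(len(x) - m)])
    Rxx = snaps.T @ snaps / len(snaps)
    w, v = np.linalg.eigh(Rxx)                   # eigenvalues ascending
    En = v[:, : m - order]                       # noise subspace
    n = np.arange(m)
    p = [1.0 / np.linalg.norm(En.conj().T @ np.exp(2j * np.pi * f / fs * n)) ** 2
         for f in freqs]                         # peaks where steering vector
    return freqs[int(np.argmax(p))]              # is orthogonal to noise space
```

Because the pseudospectrum peaks are set by subspace orthogonality rather than by FFT bin width, the estimate is not limited to a resolution of fs/N samples, which is the advantage the abstract claims over traditional methods.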
Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion
Yadav, Nagesh; Bleakley, Chris
2014-01-01
Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
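The detection criterion described above (flagging a magnetometer sample when either field strength or dip angle departs from reference) can be sketched as follows. The thresholds, names, and sign conventions (gravity pointing "down") are illustrative assumptions, not the authors' implementation, and the particle-filter correction stage is not shown.

```python
import numpy as np

def magnetic_disturbance(mag, grav, ref_strength, ref_dip_deg,
                         tol_strength=0.1, tol_dip_deg=5.0):
    """Flag a magnetometer sample as distorted when either the field
    strength or the dip angle, measured against the gravity vector,
    departs from its reference value (thresholds are illustrative)."""
    strength = float(np.linalg.norm(mag))
    # component of the field along "down"; equals sin(dip from horizontal)
    sin_dip = np.dot(mag, grav) / (strength * np.linalg.norm(grav))
    dip = np.degrees(np.arcsin(np.clip(sin_dip, -1.0, 1.0)))
    return bool(abs(strength - ref_strength) / ref_strength > tol_strength
                or abs(dip - ref_dip_deg) > tol_dip_deg)
```

Using both tests matters because some disturbances change the field direction while leaving its magnitude nearly unchanged, and vice versa.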
Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua
2012-06-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m(-1)), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm(3) voxel volume with isotropic resolution; 13.5-mm(3) volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m(-1)), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
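The fitting step can be sketched compactly: fit each coordinate of a tract to a second-order polynomial in a path parameter and read off the curvature from the standard parametric formula. The chord-length parameterisation and the midpoint evaluation are my own illustrative choices, not details taken from the paper.

```python
import numpy as np

def polyfit_curvature(points, order=2):
    """Fit each coordinate of a fiber tract (N x 3 array) to a polynomial
    in a chord-length parameter and evaluate curvature at the midpoint."""
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(steps)])     # chord-length parameter
    coeffs = [np.polyfit(s, points[:, k], order) for k in range(points.shape[1])]
    s_mid = s[len(s) // 2]
    d1 = np.array([np.polyval(np.polyder(c, 1), s_mid) for c in coeffs])
    d2 = np.array([np.polyval(np.polyder(c, 2), s_mid) for c in coeffs])
    # curvature of a parametric curve: kappa = |r' x r''| / |r'|^3
    return float(np.linalg.norm(np.cross(d1, d2)) / np.linalg.norm(d1) ** 3)
```

Because the polynomial smooths point-to-point jitter before differentiation, noise no longer inflates the second derivative, which is the mechanism behind the accuracy gains the study reports.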
CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY
Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.
2003-02-27
The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume
Be the Volume: A Classroom Activity to Visualize Volume Estimation
ERIC Educational Resources Information Center
Mikhaylov, Jessica
2011-01-01
A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…
Photogrammetry and Laser Imagery Tests for Tank Waste Volume Estimates: Summary Report
Field, Jim G.
2013-03-27
Feasibility tests were conducted using photogrammetry and laser technologies to estimate the volume of waste in a tank. These technologies were compared with video Camera/CAD Modeling System (CCMS) estimates; the current method used for post-retrieval waste volume estimates. This report summarizes test results and presents recommendations for further development and deployment of technologies to provide more accurate and faster waste volume estimates in support of tank retrieval and closure.
Tofts, P S; Silver, N C; Barker, G J; Gass, A
2005-07-01
There are currently four problems in characterising small nonuniform lesions or other objects in Magnetic Resonance images where partial volume effects are significant. Object size is over- or under-estimated; boundaries are often not reproducible; mean object value cannot be measured; and fuzzy borders cannot be accommodated. A new measure, Object Strength, is proposed. This is the sum of all abnormal intensities, above a uniform background value. For a uniform object, this is simply the product of the increase in intensity and the size of the object. Biologically, this could be at least as relevant as existing measures of size or mean intensity. We hypothesise that Object Strength will perform better than traditional area measurements in characterising small objects. In a pilot study, the reproducibility of object strength measurements was investigated using MR images of small multiple sclerosis (MS) lesions. In addition, accuracy was investigated using artificial lesions of known volume (0.3-6.2 ml) and realistic appearance. Reproducibility approached that of area measurements (in 33/90 lesion reports the difference between repeats was less than for area measurements). Total lesion volume was accurate to 0.2%. In conclusion, Object Strength has potential for improved characterisation of small lesions and objects in imaging and possibly spectroscopy.
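The proposed measure is simple enough to state in a few lines. A minimal sketch, with my own function names; the crude thresholding fallback is an assumption (a real analysis would use a drawn region of interest):

```python
import numpy as np

def object_strength(image, background, mask=None):
    """Object Strength: the sum of all above-background intensities.
    For a uniform object this equals (intensity increase) * (object size)."""
    excess = np.asarray(image, dtype=float) - background
    if mask is None:
        mask = excess > 0        # crude 'abnormal' region; illustrative only
    return float(np.sum(excess[mask]))
```

Because the measure integrates intensity rather than counting pixels inside a hard boundary, it sidesteps the boundary-reproducibility and fuzzy-border problems listed in the abstract.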
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
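Histogram matching, the core of the intensity-correction step, maps one image's intensity distribution onto another's via their cumulative distributions. A global sketch (the paper applies the idea locally and iteratively, which is not reproduced here):

```python
import numpy as np

def histogram_match(source, reference):
    """Map source intensities onto the reference distribution by
    matching cumulative histograms (global version)."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # CDF-to-CDF lookup
    return matched[s_idx].reshape(source.shape)
```

Bringing the CBCT intensities closer to the CT's distribution makes intensity-based similarity metrics in the subsequent DIR step better behaved in the presence of CBCT artifacts.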
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
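The two decision functions the participants fell between have a compact computational form. A sketch with illustrative names, assuming a discrete posterior over candidate counts:

```python
import numpy as np

def respond(posterior, values, rule, rng=None):
    """Turn a discrete posterior over candidate values into a response:
    'sample' draws from the posterior, 'map' takes its maximum."""
    if rule == "sample":
        rng = rng or np.random.default_rng()
        return rng.choice(values, p=posterior)
    return values[int(np.argmax(posterior))]
```

Behavior between these two extremes, as observed in the experiments, is often modeled by exponentiating the posterior before sampling, with the exponent interpolating between pure sampling (exponent 1) and maximization (exponent → ∞).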
Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.
2016-03-01
Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of the previous two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
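Triangulating a 3D point from corresponding rays in two views reduces to finding the closest points on two (generally skew) lines and taking their midpoint. A geometric sketch with my own parameterisation (each ray given as an origin and a direction), not the paper's full epipolar pipeline:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint 'triangulation' of two rays x = c + t*d: find the
    closest points on each ray and average them.  Degenerate when
    the rays are (nearly) parallel, i.e. denom -> 0."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    a = d1 @ d2
    denom = 1.0 - a * a
    t1 = (b @ d1 - a * (b @ d2)) / denom
    t2 = (a * (b @ d1) - b @ d2) / denom
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

The small angular separation between the two C-arm views explains the error trend the authors report: a 1° baseline makes the two rays nearly parallel (an ill-conditioned intersection), while a separation above 10° conditions the triangulation well.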
Does more accurate exposure prediction necessarily improve health effect estimates?
Szpiro, Adam A; Paciorek, Christopher J; Sheppard, Lianne
2011-09-01
A unique challenge in air pollution cohort studies and similar applications in environmental epidemiology is that exposure is not measured directly at subjects' locations. Instead, pollution data from monitoring stations at some distance from the study subjects are used to predict exposures, and these predicted exposures are used to estimate the health effect parameter of interest. It is usually assumed that minimizing the error in predicting the true exposure will improve health effect estimation. We show in a simulation study that this is not always the case. We interpret our results in light of recently developed statistical theory for measurement error, and we discuss implications for the design and analysis of epidemiologic research.
Accurate feature detection and estimation using nonlinear and multiresolution analysis
NASA Astrophysics Data System (ADS)
Rudin, Leonid; Osher, Stanley
1994-11-01
A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part, on this work.
Simulation model accurately estimates total dietary iodine intake.
Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C
2009-07-01
One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children (<5%) were at risk of intakes that were too low. In the scenario of a potential future situation using lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods will contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
Can student health professionals accurately estimate alcohol content in commonly occurring drinks?
Sinclair, Julia; Searle, Emma
2016-01-01
Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344
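The calculation the students were implicitly asked to perform is the standard UK alcohol-unit formula: units = volume (ml) × ABV (%) / 1000, i.e. one unit is 10 ml of pure ethanol. A minimal sketch:

```python
def alcohol_units(volume_ml, abv_percent):
    """UK alcohol units: one unit = 10 ml (about 8 g) of pure ethanol,
    so units = volume_ml * ABV% / 1000."""
    return volume_ml * abv_percent / 1000.0
```

For example, a 750 ml bottle of 12% wine contains 9 units, which is the kind of estimate the surveyed students frequently got wrong by more than the 10% margin of error.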
Hatt, Mathieu; Cheze le Rest, Catherine; Descourt, Patrice; Dekker, Andre; De Ruysscher, Dirk; Oellers, Michel; Lambin, Philippe; Pradier, Olivier; Visvikis, Dimitris
2010-05-01
Purpose: Accurate contouring of positron emission tomography (PET) functional volumes is now considered crucial in image-guided radiotherapy and other oncology applications because the use of functional imaging allows for biological target definition. In addition, the definition of variable uptake regions within the tumor itself may facilitate dose painting for dosimetry optimization. Methods and Materials: Current state-of-the-art algorithms for functional volume segmentation use adaptive thresholding. We developed an approach called fuzzy locally adaptive Bayesian (FLAB), validated on homogeneous objects, and then improved it by allowing the use of up to three tumor classes for the delineation of inhomogeneous tumors (3-FLAB). Simulated and real tumors with histology data containing homogeneous and heterogeneous activity distributions were used to assess the algorithm's accuracy. Results: The new 3-FLAB algorithm is able to extract the overall tumor from the background tissues and delineate variable uptake regions within the tumors, with higher accuracy and robustness compared with adaptive threshold (T_bckg) and fuzzy C-means (FCM). 3-FLAB performed with a mean classification error of less than 9% ± 8% on the simulated tumors, whereas binary-only implementation led to errors of 15% ± 11%. T_bckg and FCM led to mean errors of 20% ± 12% and 17% ± 14%, respectively. 3-FLAB also led to more robust estimation of the maximum diameters of tumors with histology measurements, with <6% standard deviation, whereas binary FLAB, T_bckg and FCM lead to 10%, 12%, and 13%, respectively. Conclusion: These encouraging results warrant further investigation in future studies that will investigate the impact of 3-FLAB in radiotherapy treatment planning, diagnosis, and therapy response evaluation.
Bioaccessibility tests accurately estimate bioavailability of lead to quail.
Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S
2016-09-01
Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of
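The validation step in this study, regressing in vivo bioavailability on each in vitro bioaccessibility test and judging the tests by slope and coefficient of determination, is ordinary least-squares calibration. A sketch with illustrative names and data (the study's actual values are not reproduced):

```python
import numpy as np

def calibrate_bioaccessibility(bioaccessible_pct, bioavailable_pct):
    """Least-squares line relating an in vitro bioaccessibility measure
    to in vivo relative bioavailability; returns (slope, intercept, R^2)."""
    x = np.asarray(bioaccessible_pct, dtype=float)
    y = np.asarray(bioavailable_pct, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

A well-performing in vitro test, on these criteria, pairs a clearly positive slope with an R² near 1, which is the basis on which the pH 2.5 leaching procedure and the OSU in vitro gastrointestinal test were rated highly.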
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
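The sphere-in-cylinder relationship the method builds on is that a sphere occupies two thirds of its circumscribing cylinder. A minimal sketch of one plausible reading of the described estimator (the exact formula and the definition of the 'unellipticity' coefficient are in the paper; the names here are mine):

```python
import math

def biovolume(cross_section_area, width, unellipticity=1.0):
    """Biovolume from 2D measurements: a cylinder of the measured
    cross-sectional area and width, scaled by 2/3 (sphere-in-cylinder)
    and by an 'unellipticity' coefficient (1.0 for ellipsoids)."""
    return (2.0 / 3.0) * cross_section_area * width * unellipticity
```

For a sphere the formula is exact: a circle of area πr² times a width of 2r times 2/3 gives (4/3)πr³.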
[Definition of accurate planning target volume margins for esophageal cancer radiotherapy].
Lesueur, P; Servagi-Vernat, S
2016-10-01
More than 4000 cases of esophageal neoplasms are diagnosed every year in France. Radiotherapy, which can be delivered preoperatively or as exclusive treatment with concomitant chemotherapy, plays a central role in the treatment of esophageal cancer. Although the efficacy of radiotherapy is well established, the prognosis of esophageal cancer unfortunately remains poor, with a high recurrence rate. The toxicity of esophageal radiotherapy is correlated with the irradiated volume and limits dose escalation and local control. The esophagus is a deep thoracic organ that undergoes cardiac and respiratory motion, making radiotherapy delivery more difficult and increasing the planning target volume margins. Defining accurate planning target volume margins, taking into account the esophagus' intrafraction motion and set-up margins, is essential to ensure coverage of the clinical target volume while restraining acute and late radiotoxicity. In this article, based on a review of the literature, we propose planning target volume margins adapted to esophageal radiotherapy.
A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Hsu, Andrew T.
1989-01-01
A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.
Kronenberg, M.W.; Parrish, M.D.; Jenkins, D.W. Jr.; Sandler, M.P.; Friesinger, G.C.
1985-11-01
Estimation of left ventricular end-systolic pressure-volume relations depends on the accurate measurement of small changes in ventricular volume. To study the accuracy of radionuclide ventriculography, paired radionuclide and contrast ventriculograms were obtained in seven dogs during a control period and when blood pressure was increased in increments of 30 mm Hg by phenylephrine infusion. The heart rate was held constant by atropine infusion. The correlation between radionuclide and contrast ventriculography was excellent. The systolic pressure-volume relations were linear for both radionuclide and contrast ventriculography. The mean slope for radionuclide ventriculography was lower than the mean slope for contrast ventriculography; however, the slopes correlated well. The radionuclide-contrast volume relation was compared using background subtraction, attenuation correction, neither of these or both. By each method, radionuclide ventriculography was valid for measuring small changes in left ventricular volume and for defining end-systolic pressure-volume relations.
Accurate method to study static volume-pressure relationships in small fetal and neonatal animals.
Suen, H C; Losty, P D; Donahoe, P K; Schnitzer, J J
1994-08-01
We designed an accurate method to study respiratory static volume-pressure relationships in small fetal and neonatal animals on the basis of Archimedes' principle. Our method eliminates the error caused by the compressibility of air (Boyle's law) and is sensitive to a volume change of as little as 1 microliter. Fetal and neonatal rats during the period of rapid lung development from day 19.5 of gestation (term = day 22) to day 3.5 postnatum were studied. The absolute lung volume at a transrespiratory pressure of 30-40 cmH2O increased 28-fold from 0.036 +/- 0.006 (SE) to 0.994 +/- 0.042 ml, the volume per gram of lung increased 14-fold from 0.39 +/- 0.07 to 5.59 +/- 0.66 ml/g, compliance increased 12-fold from 2.3 +/- 0.4 to 27.3 +/- 2.7 microliters/cmH2O, and specific compliance increased 6-fold from 24.9 +/- 4.5 to 152.3 +/- 22.8 microliters.cmH2O-1.g lung-1. This technique, which allowed us to compare changes during late gestation and the early neonatal period in small rodents, can be used to monitor and evaluate pulmonary functional changes after in utero pharmacological therapies in experimentally induced abnormalities such as pulmonary hypoplasia, surfactant deficiency, and congenital diaphragmatic hernia. PMID:8002489
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
Measurement of testicular volume in smaller testes: how accurate is the conventional orchidometer?
Lin, Chih-Chieh; Huang, William J S; Chen, Kuang-Kuo
2009-01-01
The aim of this study was to evaluate the accuracy of different methods, including the Seager orchidometer (SO) and ultrasonography (US), for assessing testicular volume of smaller testes (testes volume less than 18 mL). Moreover, the equations used for the calculations (the Hansen formula, length [L] x width [W]² x 0.52, equation A; the prolate ellipsoid formula, L x W x height [H] x 0.52, equation B; and the Lambert equation, L x W x H x 0.71, equation C) were also examined and compared with the gold standard testicular volume obtained by water displacement (Archimedes' principle). In this study, 30 testes from 15 men, mean age 75.3 (+/-8.3) years, were included. They all had advanced prostate cancer and were admitted for orchiectomy. Before the procedure, all the testes were assessed using SO and US. The dimensions were then input into each equation to obtain the volume estimates. The testicular volume by water displacement was 8.1 +/- 3.5 mL. Correlation coefficients (R²) of the 2 different methods (SO, US) to the gold standard were 0.70 and 0.85, respectively. The calculated testicular volumes were 9.2 +/- 3.9 mL (measured by SO, equation A), 11.9 +/- 5.2 mL (measured by SO, equation C), 7.3 +/- 4.2 mL (measured by US, equation A), 6.5 +/- 3.3 mL (measured by US, equation B) and 8.9 +/- 4.5 mL (measured by US, equation C). Only the mean size measured by US and volume calculated with the Hansen equation (equation A) and the mean size measured by US and volume calculated with the Lambert equation (equation C) showed no significant differences when compared with the volumes estimated by water displacement (mean difference 0.81 mL, P = .053, and 0.81 mL, P = .056, respectively). Based on our measurements, we categorized testicular volume by different cutoff values (7.0 mL, 7.5 mL, 8.0 mL, and 8.5 mL) to calculate a new constant for use in the Hansen equation. The new constant was 0.59. We then reexamined the equations using the new 0.59 constant, and found
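The three formulas compared in this abstract are simple products of caliper dimensions, so they can be reproduced directly; dimensions in centimeters yield volumes in milliliters. The sample dimensions below are illustrative, not taken from the study.

```python
def hansen(L, W):
    """Hansen formula (equation A): L x W^2 x 0.52."""
    return L * W ** 2 * 0.52

def prolate_ellipsoid(L, W, H):
    """Prolate ellipsoid formula (equation B): L x W x H x 0.52."""
    return L * W * H * 0.52

def lambert(L, W, H):
    """Lambert equation (equation C): L x W x H x 0.71."""
    return L * W * H * 0.71

# Illustrative dimensions in cm: length 4.0, width 2.0, height 1.9.
print(hansen(4.0, 2.0), prolate_ellipsoid(4.0, 2.0, 1.9), lambert(4.0, 2.0, 1.9))
```

The spread between equations B and C for identical dimensions (factor 0.71 versus 0.52) illustrates why the study's comparison against water displacement matters.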
Accurately measuring volume of soil samples using low cost Kinect 3D scanner
NASA Astrophysics Data System (ADS)
van der Sterre, Boy-Santhos; Hut, Rolf; van de Giesen, Nick
2013-04-01
The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture using samples is determining the volume of the sample. Currently this is mostly done by the very time-consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.
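The before/after scan idea reduces to subtracting two gridded height maps and summing over the cell area. A minimal sketch, assuming both scans are already registered to the same regular grid (the registration step itself is not shown):

```python
import numpy as np

def excavated_volume(before, after, cell_area):
    """Volume (m^3) removed between two registered height-map scans (m).

    before, after: 2D arrays of surface heights on the same grid.
    cell_area: horizontal area (m^2) of one grid cell.
    """
    dh = np.asarray(before, float) - np.asarray(after, float)
    # Count only lowered cells; raised cells are scan noise or spilled soil.
    return float(np.clip(dh, 0.0, None).sum() * cell_area)

# A 10 x 10 cm hole, 5 cm deep, scanned on a 1 cm grid:
before = np.zeros((10, 10))
after = before - 0.05
print(excavated_volume(before, after, cell_area=0.01 ** 2))
```

The printed result, 0.0005 m^3 (half a liter), is the excavated sample volume.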
Accurately measuring volume of soil samples using low cost Kinect 3D scanner
NASA Astrophysics Data System (ADS)
van der Sterre, B.; Hut, R.; Van De Giesen, N.
2012-12-01
The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture using samples is determining the volume of the sample. Currently this is mostly done by the very time-consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.
Using Photogrammetry to Estimate Tank Waste Volumes from Video
Field, Jim G.
2013-03-27
Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos and the results of using photogrammetry to estimate the volume of waste piles in the CCMS test video.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for achieving the coarse frequency estimate (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the experimental data grow. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
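The coarse-plus-fine structure described here can be sketched as follows. The paper's modified zero-crossing technique is not reproduced; this sketch uses a plain zero-crossing refinement of the FFT-peak estimate and keeps the refinement only when it lands inside the coarse FFT bin.

```python
import numpy as np

def estimate_frequency(x, fs):
    """Two-step frequency estimate: coarse FFT peak, zero-crossing refinement.

    Illustrative sketch only; assumes a single dominant sinusoid.
    """
    x = np.asarray(x, float)
    n = len(x)
    # Coarse step: locate the peak of the FFT amplitude spectrum.
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0  # ignore the DC component
    f_coarse = np.argmax(spec) * fs / n
    # Fine step: mean spacing of zero crossings (two crossings per period).
    s = np.sign(x - x.mean())
    crossings = np.nonzero(np.diff(s) != 0)[0]
    if len(crossings) > 1:
        f_fine = fs / (2.0 * np.mean(np.diff(crossings)))
        # Accept the refinement only if it falls within the coarse FFT bin.
        if abs(f_fine - f_coarse) < fs / n:
            return float(f_fine)
    return float(f_coarse)
```

For a 50.3 Hz tone sampled at 1 kHz for one second, the coarse step alone is limited to the 1 Hz bin spacing, while the refinement recovers the off-bin frequency.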
Development of Classification and Story Building Data for Accurate Earthquake Damage Estimation
NASA Astrophysics Data System (ADS)
Sakai, Yuki; Fukukawa, Noriko; Arai, Kensuke
We investigated a method of developing classification and story building data from a census population database in order to estimate earthquake damage more accurately, especially in urban areas, presuming that there is a correlation between the number of non-wooden or high-rise buildings and the population. We formulated equations for estimating the numbers of wooden houses, low-to-mid-rise (1-9 story) and high-rise (over 10 story) non-wooden buildings in a 1 km mesh from night-time and daytime population databases, based on the building data we investigated and collected in 20 selected meshes in the Kanto area. We could accurately estimate the numbers of the three building classes with the formulated equations, but in some special cases, such as apartment-block meshes, the estimated values differ considerably from the actual values.
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Hu, Weigang
2016-01-01
Purpose: The 4DCT-delineated internal target volume (ITV) is applied to determine tumor motion and used as the planning target in treatment planning for lung cancer stereotactic body radiotherapy (SBRT). This work studies the accuracy of using the ITV to predict the real target dose in lung cancer SBRT. Materials and methods: For both phantom and patient cases, the ITV and gross tumor volumes (GTVs) were contoured on the maximum intensity projection (MIP) CT and the ten CT phases, respectively. An SBRT plan was designed using the ITV as the planning target on the average projection (AVG) CT. This plan was copied to each CT phase and the dose distribution was recalculated. The GTV_4D dose was acquired by accumulating the GTV doses over all ten phases and regarded as the real target dose. To analyze the ITV dose error, the ITV dose was compared to the real target dose using the endpoints D99, D95, and D1 (doses received by 99%, 95%, and 1% of the target volume) and the dose coverage endpoint V100 (relative volume receiving at least the prescription dose). Results: The phantom study shows that the ITV underestimates the real target dose by 9.47%∼19.8% in D99 and 4.43%∼15.99% in D95, and underestimates the dose coverage by 5% in V100. The patient cases show that the ITV underestimates the real target dose and dose coverage by 3.8%∼10.7% in D99, 4.7%∼7.2% in D95, and 3.96%∼6.59% in V100 in moving-target cases. Conclusions: Caution should be taken that the ITV is not accurate enough to predict the real target dose in lung cancer SBRT with large tumor motions. Restricting the target motion or reducing the target dose heterogeneity could reduce the ITV dose underestimation effect in lung SBRT. PMID:26968812
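The endpoints used in this abstract (Dxx, the minimum dose received by the hottest xx% of the volume, and V100, the percent of the volume at or above the prescription dose) can be computed directly from per-voxel doses. A sketch with a made-up dose array, not study data:

```python
import numpy as np

def dose_endpoints(voxel_doses, prescription):
    """Dose-volume endpoints from a flat array of per-voxel doses.

    Dxx = minimum dose received by the hottest xx% of the volume;
    V100 = percent of the volume receiving at least the prescription dose.
    """
    d = np.sort(np.asarray(voxel_doses, float))[::-1]  # hottest voxel first
    n = len(d)

    def D(pct):
        return float(d[int(np.ceil(pct / 100.0 * n)) - 1])

    v100 = 100.0 * float(np.mean(np.asarray(voxel_doses, float) >= prescription))
    return {"D99": D(99), "D95": D(95), "D1": D(1), "V100": v100}

# Made-up dose ramp from 1 to 100 Gy across 100 voxels, 50 Gy prescription:
print(dose_endpoints(np.linspace(1.0, 100.0, 100), prescription=50.0))
```

Comparing these endpoints between the ITV dose and the phase-accumulated GTV dose is exactly the comparison the study performs.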
An evaluation of tympanometric estimates of ear canal volume.
Shanks, J E; Lilly, D J
1981-12-01
The accuracy of tympanometric estimates of ear canal volume was evaluated by testing the following two assumptions on which the procedure is based: (a) ear canal volume does not change when ear canal pressure is varied, and (b) an ear canal pressure of 200 daPa drives the impedance of the middle ear transmission system to infinity so the immittance measured at 200 daPa can be attributed to the ear canal volume alone. The first assumption was tested by measuring the changes in ear canal volume in eight normal subjects for ear canal pressures between +/- 400 daPa using a manometric procedure based on Boyle's gas law. The data did not support the first assumption. Ear canal volume changed by a mean of .113 ml over the +/- 400 daPa pressure range with slightly larger volume changes occurring for negative ear canal pressures than for positive ear canal pressures. Most of the volume change was attributed to movement of the probe and to movement of the cartilaginous walls of the ear canal. The second assumption was tested by comparing estimates of ear canal volume from susceptance tympanograms with a direct measurement of ear canal volume adjusted for changes in volume due to changes in ear canal pressure between +/- 400 daPa. These data failed to support the second assumption. All tympanometric estimates of ear canal volume were larger than the measured volumes. The largest error (39%) occurred for an ear canal pressure of 200 daPa at 220 Hz, whereas the smallest error (10%) occurred for an ear canal pressure of -400 daPa at 660 Hz. This latter susceptance value (-400 daPa at 660 Hz) divided by three is suggested to correct the 220-Hz tympanogram to the plane of the tympanic membrane. Finally, the effects of errors in estimating ear canal volume on static immittance and on tympanometry are discussed. PMID:7329051
Estimating Residual Solids Volume In Underground Storage Tanks
Clark, Jason L.; Worthy, S. Jason; Martin, Bruce A.; Tihey, John R.
2014-01-08
The Savannah River Site liquid waste system consists of multiple facilities to safely receive and store legacy radioactive waste, treat it, and permanently dispose of it. The large underground storage tanks and associated equipment, known as the 'tank farms', include a complex interconnected transfer system of underground pipelines and ancillary equipment to direct the flow of waste. The waste in the tanks is present in three forms: supernatant, sludge, and salt. The supernatant is a multi-component aqueous mixture, while sludge is a gel-like substance which consists of insoluble solids and entrapped supernatant. The waste from these tanks is retrieved and treated as sludge or salt. The high-level (radioactive) fraction of the waste is vitrified into a glass waste form, while the low-level waste is immobilized in a cementitious grout waste form called saltstone. Once the waste is retrieved and processed, the tanks are closed by removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations, and severing/sealing external penetrations. The comprehensive liquid waste disposition system, currently managed by Savannah River Remediation, consists of (1) safe storage and retrieval of the waste as it is prepared for permanent disposition; (2) definition of the waste processing techniques utilized to separate the high-level waste fraction/low-level waste fraction; (3) disposition of LLW in saltstone; (4) disposition of the HLW in glass; and (5) closure state of the facilities, including tanks. This paper focuses on determining the effectiveness of waste removal campaigns through monitoring the volume of residual solids in the waste tanks. Volume estimates of the residual solids are performed by creating a map of the residual solids on the waste tank bottom using video and still digital images. The map is then used to calculate the volume of solids remaining in the waste tank. The ability to
Trabant, Dennis C.
1999-01-01
The volume of four of the largest glaciers on Iliamna Volcano was estimated using the volume model developed for evaluating glacier volumes on Redoubt Volcano. The volume model is controlled by simulated valley cross sections that are constructed by fitting third-order polynomials to the shape of the valley walls exposed above the glacier surface. Critical cross sections were field checked by sounding with ice-penetrating radar during July 1998. The estimated volumes of perennial snow and glacier ice for Tuxedni, Lateral, Red, and Umbrella Glaciers are 8.6, 0.85, 4.7, and 0.60 cubic kilometers, respectively. The estimated volume of snow and ice on the upper 1,000 meters of the volcano is about 1 cubic kilometer. The volume estimates are thought to have errors of no more than ±25 percent. The volumes estimated for the four largest glaciers are more than three times the total volume of snow and ice on Mount Rainier and about 82 times the total volume of snow and ice that was on Mount St. Helens before its May 18, 1980 eruption. Volcanoes mantled by substantial snow and ice covers have produced the largest and most catastrophic lahars and floods. Therefore, it is prudent to expect that, during an eruptive episode, flooding and lahars threaten all of the drainages heading on Iliamna Volcano. On the other hand, debris avalanches can happen any time. Fortunately, their influence is generally limited to the area within a few kilometers of the summit.
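The volume model described here obtains cross-sectional areas from polynomials fitted to the exposed valley walls; the glacier volume then follows by integrating those areas along the glacier centerline. A minimal sketch of the integration step (the distances and areas below are illustrative, not the Iliamna data):

```python
import numpy as np

def glacier_volume(centerline_dist_m, cross_section_area_m2):
    """Integrate simulated cross-section areas (m^2) along the centerline (m)
    with the trapezoid rule; returns volume in m^3."""
    s = np.asarray(centerline_dist_m, float)
    a = np.asarray(cross_section_area_m2, float)
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(s)))

# Illustrative: a 2 km glacier whose cross section peaks at 1e5 m^2 mid-length.
print(glacier_volume([0.0, 1000.0, 2000.0], [0.0, 1.0e5, 0.0]))
```

More cross sections along the centerline tighten the estimate; the field-checked radar soundings mentioned in the abstract constrain the fitted areas themselves.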
Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?
Hey, Spencer Phillips; Kimmelman, Jonathan
2016-10-01
Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable direct and effective measure of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach. PMID:27197044
Budget estimates fiscal year 1995: Volume 10
Not Available
1994-02-01
This report contains the Nuclear Regulatory Commission (NRC) fiscal year budget justification to Congress. The budget provides estimates for salaries and expenses and for the Office of the Inspector General for fiscal year 1995. The NRC 1995 budget request is $546,497,000. This is an increase of $11,497,000 above the proposed level for FY 1994. The NRC FY 1995 budget request is 3,218 FTEs. This is a decrease of 75 FTEs below the 1994 proposed level.
NASA Astrophysics Data System (ADS)
Gutenko, Ievgeniia; Peng, Hao; Gu, Xianfeng; Barish, Mathew; Kaufman, Arie
2016-03-01
Accurate estimation of splenic volume is crucial for the determination of disease progression and response to treatment for diseases that result in enlargement of the spleen. However, there is no consensus with respect to the use of single or multiple one-dimensional, or volumetric measurement. Existing methods for human reviewers focus on measurement of cross diameters on a representative axial slice and craniocaudal length of the organ. We propose two heuristics for the selection of the optimal axial plane for splenic volume estimation: the maximal area axial measurement heuristic and the novel conformal welding shape-based heuristic. We evaluate these heuristics on time-variant data derived from both healthy and sick subjects and contrast them to established heuristics. Under certain conditions our heuristics are superior to standard practice volumetric estimation methods. We conclude by providing guidance on selecting the optimal heuristic for splenic volume estimation.
Simplified Volume-Area-Depth Method for Estimating Water Storage of Isolated Prairie Wetlands
NASA Astrophysics Data System (ADS)
Minke, A. G.; Westbrook, C. J.; van der Kamp, G.
2009-05-01
There are millions of wetlands in shallow depressions on the North American prairies but the quantity of water stored in these depressions remains poorly understood. Hayashi and van der Kamp (2000) used the relationship between volume (V), area (A) and depth (h) to develop an equation for estimating wetland storage. We tested the robustness of their full and simplified V-A-h methods to accurately estimate volume for the range of wetland shapes occurring across the Prairie Pothole Region. These results were contrasted with two commonly implemented V-A regression equations to determine which method estimates volume most accurately. We used detailed topographic data for 27 wetlands in Smith Creek and St. Denis watersheds, Saskatchewan, that ranged in surface area and basin shape. The full V-A-h method was found to accurately estimate storage (errors <3%) across wetlands of various shapes, and is therefore suitable for calculating water storage in the variety of wetland surface shapes found in the prairies. Both V-A equations performed poorly, with volume underestimated by an average of 15% and 50%, respectively. Analysis of the simplified V-A-h method showed that volume errors of <10% can be achieved if the basin and shape coefficients are derived properly. This would involve measuring depth and area twice, with sufficient time between measurements that the natural fluctuations in water storage are reflected. Practically, wetland area and depth should be measured in spring, following snowmelt when water levels are near the peak, and also in late summer prior to water depths dropping below 10 cm. These guidelines for applying the simplified V-A-h method will allow for accurate volume estimations when detailed topographic data are not available. Since the V-A equations were outperformed by the full and simplified V-A-h methods, we conclude that wetland depth and basin morphology should be considered when estimating volume. This will improve storage estimations of natural and human
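The V-A-h methods referenced here rest on Hayashi and van der Kamp's power-law basin profile, A(h) = s·(h/h0)^(2/p); integrating that profile gives V(h) = A(h)·h/(1 + 2/p). A sketch of the simplified method, fitting s and p from the two (area, depth) surveys the abstract recommends (the function and variable names are mine, and h0 is an arbitrary unit depth):

```python
import math

def vah_coefficients(area1, depth1, area2, depth2, h0=1.0):
    """Fit the power-law profile A(h) = s * (h / h0)**(2 / p) through two
    (area, depth) surveys, as the simplified V-A-h method requires."""
    two_over_p = math.log(area1 / area2) / math.log(depth1 / depth2)
    p = 2.0 / two_over_p
    s = area1 / (depth1 / h0) ** two_over_p
    return s, p

def wetland_volume(area, depth, p):
    """V(h) = A(h) * h / (1 + 2 / p), the integral of the power-law profile."""
    return area * depth / (1.0 + 2.0 / p)

# A basin with A(h) = 100 h (so p = 2), surveyed at depths 0.5 m and 0.25 m:
s, p = vah_coefficients(50.0, 0.5, 25.0, 0.25)
print(s, p, wetland_volume(50.0, 0.5, p))
```

For the synthetic basin above the fit recovers s = 100 and p = 2 exactly, and the volume matches the analytic integral of A(h) from 0 to 0.5 m (12.5 m^3).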
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated raw DCP image which is corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
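Once the scattering correction is applied, the gap-fraction computation itself is a pixel count: the fraction of image pixels classified as sky. A minimal sketch of that final step (the paper's raw-image correction and sky reconstruction are not reproduced; the use of the blue band is an assumption based on common DCP practice):

```python
import numpy as np

def gap_fraction(blue_channel, sky_threshold):
    """Fraction of pixels brighter than the sky threshold in an upward photo.

    blue_channel: 2D array of pixel intensities (the blue band is often used
    because canopy/sky contrast is highest there; an assumption here).
    """
    return float(np.mean(np.asarray(blue_channel, float) > sky_threshold))

# Synthetic 4-pixel image: two sky pixels (240), two canopy pixels (20).
print(gap_fraction(np.array([[240.0, 20.0], [20.0, 240.0]]), 128.0))
```

The abstract's contribution is precisely in making the threshold and exposure choices objective, so that this simple count becomes reproducible.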
Tuck, L.K.; Pearson, Daniel K.; Cannon, M.R.; Dutton, DeAnn M.
2013-01-01
The Tongue River Member of the Tertiary Fort Union Formation is the primary source of groundwater in the Northern Cheyenne Indian Reservation in southeastern Montana. Coal beds within this formation generally contain the most laterally extensive aquifers in much of the reservation. The U.S. Geological Survey, in cooperation with the Northern Cheyenne Tribe, conducted a study to estimate the volume of water in five coal aquifers. This report presents estimates of the volume of water in five coal aquifers in the eastern and southern parts of the Northern Cheyenne Indian Reservation: the Canyon, Wall, Pawnee, Knobloch, and Flowers-Goodale coal beds in the Tongue River Member of the Tertiary Fort Union Formation. Only conservative estimates of the volume of water in these coal aquifers are presented. The volume of water in the Canyon coal was estimated to range from about 10,400 acre-feet (75 percent saturated) to 3,450 acre-feet (25 percent saturated). The volume of water in the Wall coal was estimated to range from about 14,200 acre-feet (100 percent saturated) to 3,560 acre-feet (25 percent saturated). The volume of water in the Pawnee coal was estimated to range from about 9,440 acre-feet (100 percent saturated) to 2,360 acre-feet (25 percent saturated). The volume of water in the Knobloch coal was estimated to range from about 38,700 acre-feet (100 percent saturated) to 9,680 acre-feet (25 percent saturated). The volume of water in the Flowers-Goodale coal was estimated to be about 35,800 acre-feet (100 percent saturated). Sufficient data are needed to accurately characterize coal-bed horizontal and vertical variability, which is highly complex both locally and regionally. Where data points are widely spaced, the reliability of estimates of the volume of coal beds is decreased. Additionally, reliable estimates of the volume of water in coal aquifers depend heavily on data about water levels and data about coal-aquifer characteristics. Because the data needed to
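The report's "percent saturated" ranges suggest a straightforward pore-volume calculation. A hedged sketch: the relation below is the standard area × thickness × porosity × saturation product, not necessarily the report's exact methodology, and the numbers are illustrative rather than taken from the report.

```python
def aquifer_water_volume(area_acres, thickness_ft, porosity, saturation):
    """Water stored in a coal aquifer, in acre-feet.

    area_acres x thickness_ft gives the bulk coal volume in acre-feet;
    porosity and saturation (both fractions, 0-1) reduce it to the
    water-filled pore space.
    """
    return area_acres * thickness_ft * porosity * saturation

# Illustrative: 10,000 acres, 20 ft thick, 10% porosity, fully saturated.
print(aquifer_water_volume(10_000, 20.0, 0.10, 1.0))
```

Scaling the saturation from 1.0 down to 0.25 reproduces the kind of high/low range the report gives for each coal bed.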
Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.
Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro
2016-01-12
The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
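For context, the nearest-neighbor entropy framework the authors build on is the Kozachenko-Leonenko estimator, which turns each sample's nearest-neighbor distance into a local density estimate. A generic sketch for points in plain Euclidean coordinates; the paper's rotation-translation treatment, which needs the proper joint metric on orientations and positions, is not reproduced here.

```python
import math
import numpy as np

def knn_entropy(samples):
    """Kozachenko-Leonenko 1-NN differential entropy estimate, in nats.

    samples: (n, d) array of points in R^d.
    H_hat = [psi(n) - psi(1)] + ln(c_d) + (d/n) * sum_i ln(eps_i),
    where eps_i is the distance from sample i to its nearest neighbour
    and c_d is the volume of the d-dimensional unit ball.
    """
    x = np.asarray(samples, float)
    n, d = x.shape
    # Pairwise distances; brute force is fine at this sketch's scale.
    dist = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    eps = dist.min(axis=1)
    log_cd = (d / 2.0) * math.log(math.pi) - math.lgamma(d / 2.0 + 1.0)
    harmonic = sum(1.0 / k for k in range(1, n))  # psi(n) - psi(1)
    return harmonic + log_cd + d * float(np.mean(np.log(eps)))

# The true entropy of a standard 2-D Gaussian is ln(2*pi*e), about 2.84 nats.
rng = np.random.default_rng(0)
print(knn_entropy(rng.standard_normal((1000, 2))))
```

As the abstract notes, treating all six rotation-translation dimensions jointly (rather than separately) is what preserves the correlations; the estimator above extends to that case only once a suitable distance on the joint space is defined.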
NASA Astrophysics Data System (ADS)
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of dental implant-supported prostheses is directly linked to the accuracy obtained during implant pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34μm and 108μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Contaminated Soil Volume Estimation at the Maywood Site - 12292
Johnson, Robert; Quinn, John; Durham, Lisa; Moore, James; Hays, David
2012-07-01
As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program properties, Argonne National Laboratory assisted the U.S. Army Corps of Engineers (USACE) New York District in revising contaminated soil volume estimates for the remaining areas of the Stepan/Sears properties that require soil remediation. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations; both sampling results and down-hole gamma data were coded to identify whether the results indicated the presence of contamination above site cleanup requirements. Significant effort was invested in developing complete electronic data sets for the site by incorporating data contained in various scanned documents, maps, etc. The updating process yielded both a best guess estimate of contamination volumes and upper and lower bounds on the volume estimate that reflected the estimate's uncertainty. The site-wide contaminated volume estimate (with associated uncertainty) was adjusted to reflect areas where remediation was complete; the result was a revised estimate of the remaining soil volumes requiring remediation that the USACE could use for planning. Other environmental projects may benefit from this process for estimating the volume of contaminated soil. A comparison of sample and DHG results for various stations with the site ICSM provides
Comparison of volume estimation methods for pancreatic islet cells
NASA Astrophysics Data System (ADS)
Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan
2016-03-01
In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality-control step prior to islet transplantation. The total islet volume is an important criterion in the quality control. The individual islet volume distribution is also of interest -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets. The segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
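The shape-based estimates described above reduce to closed-form formulas. The sketch below is generic geometry, not the authors' code (the nucleator is omitted): the sphere model treats a segmented 2-D islet profile as an equatorial cross-section, and the ellipsoid model uses measured semi-axes.

```python
import math

def sphere_volume_from_area(area_2d):
    """Volume of a sphere whose equatorial cross-section has the given area."""
    radius = math.sqrt(area_2d / math.pi)
    return (4.0 / 3.0) * math.pi * radius ** 3

def ellipsoid_volume(a, b, c):
    """Volume of an ellipsoid with semi-axes a, b, c."""
    return (4.0 / 3.0) * math.pi * a * b * c
```

Summing such per-islet estimates over all segmented profiles gives the total-volume criterion used in quality control.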
Kerker, Bonnie D.; Owens, Pamela L.; Zigler, Edward; Horwitz, Sarah M.
2004-01-01
OBJECTIVES: The objectives of this literature review were to assess current challenges to estimating the prevalence of mental health disorders among individuals with mental retardation (MR) and to develop recommendations to improve such estimates for this population. METHODS: The authors identified 200 peer-reviewed articles, book chapters, government documents, or reports from national and international organizations on the mental health status of people with MR. Based on the study's inclusion criteria, 52 articles were included in the review. RESULTS: Available data reveal inconsistent estimates of the prevalence of mental health disorders among those with MR, but suggest that some mental health conditions are more common among these individuals than in the general population. Two main challenges to identifying accurate prevalence estimates were found: (1) health care providers have difficulty diagnosing mental health conditions among individuals with MR; and (2) methodological limitations of previous research inhibit confidence in study results. CONCLUSIONS: Accurate prevalence estimates are necessary to ensure the availability of appropriate treatment services. To this end, health care providers should receive more training regarding the mental health treatment of individuals with MR. Further, government officials should discuss mechanisms of collecting nationally representative data, and the research community should utilize consistent methods with representative samples when studying mental health conditions in this population. PMID:15219798
Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.
Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide
2003-03-15
Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks.
A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors
Hwang, Beomsoo; Jeon, Doyoung
2015-01-01
In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately in both relaxed and activated muscle conditions. PMID:25860074
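The torque decomposition described in the abstract can be sketched for a single joint. The model below is a hypothetical 1-DOF pendulum limb with illustrative parameters, not the EXOwheel dynamics; a real multi-joint limb also needs the Coriolis coupling terms.

```python
import numpy as np

def muscular_torque(tau_measured, q, dq, ddq, inertia, mass, com_length,
                    damping=0.0, g=9.81):
    """Active muscular torque = sensor reading minus the limb's passive dynamics.

    The passive torque of a 1-DOF rigid pendulum is inertial + viscous +
    gravitational; subtracting it from the joint-torque-sensor measurement
    leaves the torque generated by active muscular effort.
    """
    tau_passive = inertia * ddq + damping * dq + mass * g * com_length * np.sin(q)
    return tau_measured - tau_passive
```

With a fully relaxed muscle the sensor reads only the passive dynamics, so the estimate returns approximately zero, which matches the relaxed-condition check reported in the abstract.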
Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan
2015-01-01
Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
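The radiating-flow observation above has a simple least-squares consequence: every flow vector (u, v) at image point (x, y) must be collinear with the ray from the focus of expansion, so u*(y - yf) - v*(x - xf) = 0. A generic sketch of locating the FOE from that constraint (not the paper's full epipolar-constraint pipeline):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion (FOE) of a radiating optical-flow field.

    Each flow vector (u, v) at (x, y) points along the ray from the FOE,
    so u*(y - yf) - v*(x - xf) = 0, i.e. v*xf - u*yf = v*x - u*y per point.
    """
    p = np.asarray(points, dtype=float)
    f = np.asarray(flows, dtype=float)
    a = np.column_stack([f[:, 1], -f[:, 0]])      # rows [v, -u]
    b = f[:, 1] * p[:, 0] - f[:, 0] * p[:, 1]     # v*x - u*y
    foe, *_ = np.linalg.lstsq(a, b, rcond=None)
    return foe
```

On a noise-free synthetic field the recovered FOE is exact; with noisy flow the overdetermined system averages the error across all tracked points.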
Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar
NASA Astrophysics Data System (ADS)
Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru
Orbit estimation of space debris, objects of no inherent value orbiting the earth, is a task that is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect the smaller debris, coherent integration is effective in improving SNR (Signal-to-Noise Ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris. The algorithm exploits the characteristics of the evaluation function. Experiments show the proposed algorithm improves SNR by 8.32 dB and enables accurate estimation of orbital parameters, allowing re-tracking with a single radar.
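Coherent integration trades observation time for SNR: summing N phase-aligned pulses grows the signal power by N² while the independent noise power grows only by N, for an ideal gain of 10·log10(N) dB (the reported 8.32 dB would correspond to roughly 7 pulses at the ideal rate). A small simulation sketch of that gain, not the proposed detection algorithm itself:

```python
import numpy as np

def ideal_coherent_gain_db(n_pulses):
    """Ideal SNR improvement from coherently summing n phase-aligned pulses."""
    return 10.0 * np.log10(n_pulses)

# Simulation sketch: a weak, phase-aligned echo in unit-power complex noise.
rng = np.random.default_rng(1)
n_pulses, n_samples = 16, 1000
echo = 0.5 * np.ones((n_pulses, n_samples))
noise = (rng.normal(size=(n_pulses, n_samples))
         + 1j * rng.normal(size=(n_pulses, n_samples))) / np.sqrt(2.0)
pulses = echo + noise

integrated = pulses.mean(axis=0)              # coherent average over pulses
snr_single = np.mean(np.abs(echo[0]) ** 2)    # noise power is 1 by construction
snr_integrated = snr_single / np.var(integrated - echo[0])
gain_db = 10.0 * np.log10(snr_integrated / snr_single)
```

The empirical gain for 16 pulses comes out near the ideal 12.04 dB; the hard part the abstract addresses is keeping the pulses phase-aligned when the target motion is unknown.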
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
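The coupling described above -- a particle swarm exploring globally, with a trust-region-reflective polish of the swarm's best candidate in each iteration -- can be sketched on a toy one-current problem. Everything here (the exponential model, the PSO constants, the bounds) is illustrative, not the paper's setup:

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    """Toy ion-current-like model: amplitude g decaying with time constant tau."""
    g, tau = params
    return g * np.exp(-t / tau)

def hybrid_fit(t, data, lo, hi, n_particles=20, iters=10, seed=0):
    """Hybrid optimizer: PSO for global exploration, TRF polish each iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    cost = lambda p: float(np.sum((model(p, t) - data) ** 2))
    pos = rng.uniform(lo, hi, size=(n_particles, lo.size))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    g_pos = pbest[pbest_cost.argmin()].copy()
    g_cost = pbest_cost.min()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g_pos - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(p) for p in pos])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], c[better]
        # gradient-based (trust-region-reflective) polish of the swarm's best
        res = least_squares(lambda p: model(p, t) - data,
                            pbest[pbest_cost.argmin()], bounds=(lo, hi),
                            method="trf")
        if cost(res.x) < g_cost:
            g_pos, g_cost = res.x, cost(res.x)
    return g_pos
```

The swarm keeps the search from stalling in a poor basin while the trust-region step supplies the fast local convergence; on noise-free synthetic data the fit recovers the ground-truth parameters closely.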
Method for estimating absolute lung volumes at constant inflation pressure.
Hills, B A; Barrow, R E
1979-10-01
A method has been devised for measuring functional residual capacity in the intact killed animal or absolute lung volumes in any excised lung preparation without changing the inflation pressure. This is achieved by titrating the absolute pressure of a chamber in which the preparation is compressed until a known volume of air has entered the lungs. This technique was used to estimate the volumes of five intact rabbit lungs and five rigid containers of known dimensions by means of Boyle's law. Results were found to agree to within +/- 1% with values determined by alternative methods. In the discussion the advantage of determining absolute lung volumes at almost any stage in a study of lung mechanics without the determination itself changing inflation pressure and, hence, lung volume is emphasized. PMID:511699
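One consistent reading of the titration, assuming isothermal compression: raising the chamber from absolute pressure p1 to p2 compresses the trapped lung gas, and the known volume delta_v of air that enters restores the original inflation state, so Boyle's law gives p1*V = p2*(V - delta_v) and hence V = delta_v*p2/(p2 - p1). A hypothetical sketch of that arithmetic, not the authors' stated formula:

```python
def absolute_lung_volume(p1, p2, delta_v):
    """Absolute lung gas volume from a Boyle's-law titration (hypothetical reading).

    p1, p2  : absolute chamber pressures before and after compression
    delta_v : known volume of air that entered the lungs during compression
    Solves p1 * V = p2 * (V - delta_v) for V, assuming isothermal compression.
    """
    return delta_v * p2 / (p2 - p1)
```

Because only the ratio of absolute pressures enters, the inflation pressure itself never has to change, which is the point of the method.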
NASA Technical Reports Server (NTRS)
Schlosser, Herbert; Ferrante, John
1989-01-01
An accurate analytic expression for the nonlinear change of the volume of a solid as a function of applied pressure is of great interest in high-pressure experimentation. It is found that a two-parameter analytic expression fits the experimental volume-change data to within a few percent over the entire experimentally attainable pressure range. Results are presented for 24 different materials including metals, ceramic semiconductors, polymers, and ionic and rare-gas solids.
Estimating carbon stocks based on forest volume-age relationship
NASA Astrophysics Data System (ADS)
Hangnan, Y.; Lee, W.; Son, Y.; Kwak, D.; Nam, K.; Moonil, K.; Taesung, K.
2012-12-01
This research attempted to estimate the potential change of forest carbon stocks between 2010 and 2110 in South Korea, using the forest cover map and National Forest Inventory (NFI) data. Allometric functions (logistic regression models) of volume-age relationships were developed to estimate carbon stock change during the upcoming 100 years for Pinus densiflora, Pinus koraiensis, Pinus rigida, Larix kaempferi, and Quercus spp. The current forest volume was estimated with the developed regression models and the 4th forest cover map. The future volume was predicted by the developed volume-age models, adding n years to the current age. As a result, we found that the total forest volume would increase from 126.89 m^3/ha to 246.61 m^3/ha and the carbon stocks would increase from 90.55 Mg C ha^(-1) to 174.62 Mg C ha^(-1) over 100 years if the current forest remains unchanged. The carbon stocks would increase by approximately 0.84 Mg C ha^(-1) yr^(-1), which is high compared with the -0.10 ~ 0.28 Mg C ha^(-1) yr^(-1) reported for other northern countries (Canada, Russia, China) in previous studies. This can be attributed to the fact that mixed forest and bamboo forest were not considered in this study. Moreover, the estimates are also influenced by the fact that the change of carbon stocks was estimated without considering mortality, thinning, and tree species change.
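The modelling chain above is: fit a logistic volume-age curve per species, project volume forward by adding n years, then convert volume to carbon. A generic sketch with illustrative coefficients (the defaults below are typical IPCC-style conversion factors, not the paper's fitted species values):

```python
import math

def stand_volume(age, v_max, k, age_mid):
    """Logistic volume-age curve, m^3/ha (illustrative coefficients)."""
    return v_max / (1.0 + math.exp(-k * (age - age_mid)))

def carbon_stock(volume_m3_ha, wood_density=0.47, bef=1.4, carbon_fraction=0.5):
    """Stem volume (m^3/ha) -> carbon (Mg C/ha): volume x basic wood density
    x biomass expansion factor x carbon fraction (generic default values)."""
    return volume_m3_ha * wood_density * bef * carbon_fraction
```

Projecting a stand to age + 100 with the same curve and differencing the two carbon stocks reproduces the kind of per-hectare accumulation rate the abstract reports.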
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgery, such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation[1] or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations were available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the exactly measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and another post-LASIK patient agreed well with their visual outcomes after cataract surgery.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming. It did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and
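In a simplified 1-D analogue, the estimator family discussed above reduces to a table lookup: pre-compute (e.g. by Monte-Carlo simulation) the likelihood of each measured value given each true value, then report the true value that maximizes it. This sketch assumes a uniform prior and ignores the joint dependence on SNR that the paper models:

```python
import numpy as np

def map_estimate(measurement, true_grid, meas_grid, likelihood):
    """MAP lookup from a precomputed likelihood table (uniform prior assumed).

    likelihood[i, j] ~ P(measurement in bin j | true value true_grid[i]),
    tabulated beforehand, e.g. by Monte-Carlo simulation of the forward model.
    """
    j = int(np.argmin(np.abs(meas_grid - measurement)))  # nearest measurement bin
    return true_grid[int(np.argmax(likelihood[:, j]))]
```

Because the likelihood table encodes the measurement bias, the MAP lookup undoes it, which is exactly the motivation given in the abstract for replacing the plain mean estimator.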
Stereological estimation of particle shape and orientation from volume tensors.
Rafati, A H; Ziegel, J F; Nyengaard, J R; Jensen, E B Vedel
2016-09-01
In the present paper, we describe new robust methods of estimating cell shape and orientation in 3D from sections. The descriptors of 3D cell shape and orientation are based on volume tensors which are used to construct an ellipsoid, the Miles ellipsoid, approximating the average cell shape and orientation in 3D. The estimators of volume tensors are based on observations in several optical planes through sampled cells. This type of geometric sampling design is known as the optical rotator. The statistical behaviour of the estimator of the Miles ellipsoid is studied under a flexible model for 3D cell shape and orientation. In a simulation study, the lengths of the axes of the Miles ellipsoid can be estimated with coefficients of variation of about 2% if 100 cells are sampled. Finally, we illustrate the use of the developed methods in an example, involving neurons in the medial prefrontal cortex of rat. PMID:26823192
Rashid, Mamoon; Pain, Arnab
2013-01-01
Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222
Estimating Volumes of Near-Spherical Molded Artifacts
Gilsinn, David E.; Borchardt, Bruce R.; Tebbe, Amelia
2010-01-01
The Food and Drug Administration (FDA) is conducting research on developing reference lung cancer lesions, called phantoms, to test computed tomography (CT) scanners and their software. FDA loaned two semi-spherical phantoms to the National Institute of Standards and Technology (NIST), called Green and Pink, and asked to have the phantoms’ volumes estimated. This report describes in detail both the metrology and computational methods used to estimate the phantoms’ volumes. Three sets of coordinate measuring machine (CMM) measured data were produced. One set of data involved reference surface data measurements of a known calibrated metal sphere. The other two sets were measurements of the two FDA phantoms at two densities, called the coarse set and the dense set. Two computational approaches were applied to the data. In the first approach spherical models were fit to the calibrated sphere data and to the phantom data. The second approach was to model the data points on the boundaries of the spheres with surface B-splines and then use the Divergence Theorem to estimate the volumes. Fitting a B-spline model to the calibrated sphere data was done as a reference check on the algorithm performance. It gave assurance that the volumes estimated for the phantoms would be meaningful. The results for the coarse and dense data sets tended to predict the volumes as expected and the results did show that the Green phantom was very near spherical. This was confirmed by both computational methods. The spherical model did not fit the Pink phantom as well and the B-spline approach provided a better estimate of the volume in that case. PMID:27134783
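The Divergence Theorem route to volume used for the B-spline surfaces has a simple discrete counterpart on a closed triangulated surface: each triangle contributes the signed volume of the tetrahedron it spans with the origin. A generic sketch (triangle mesh rather than the report's B-spline surface):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh.

    By the Divergence Theorem each triangle (v0, v1, v2) contributes the
    signed volume v0 . (v1 x v2) / 6 of the tetrahedron it spans with the
    origin; the magnitude of the summed contributions is the enclosed volume.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    v0, v1, v2 = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0
    return abs(signed)
```

Unlike a fitted-sphere model, this estimate makes no shape assumption, which is why a surface-integral approach suits the less spherical Pink phantom better.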
Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti
2016-01-01
The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397
Accurate Estimation of Carotid Luminal Surface Roughness Using Ultrasonic Radio-Frequency Echo
NASA Astrophysics Data System (ADS)
Kitamura, Kosuke; Hasegawa, Hideyuki; Kanai, Hiroshi
2012-07-01
It would be useful to measure the minute surface roughness of the carotid arterial wall to detect the early stage of atherosclerosis. In conventional ultrasonography, the axial resolution of a B-mode image depends on the ultrasonic wavelength of 150 µm at 10 MHz because a B-mode image is constructed using the amplitude of the radio-frequency (RF) echo. Therefore, the surface roughness caused by atherosclerosis in an early stage cannot be measured using a conventional B-mode image obtained by ultrasonography because the roughness is 10-20 µm. We have realized accurate transcutaneous estimation of such a minute surface profile using the lateral motion of the carotid arterial wall, which is estimated by block matching of received ultrasonic signals. However, the width of the region where the surface profile is estimated depends on the magnitude of the lateral displacement of the carotid arterial wall (i.e., if the lateral displacement of the arterial wall is 1 mm, the surface profile is estimated in a region of 1 mm in width). In this study, the width was increased by combining surface profiles estimated using several ultrasonic beams. In the present study, we first measured a fine wire, whose diameter was 13 µm, using ultrasonic equipment to obtain an ultrasonic beam profile for determination of the optimal kernel size for block matching based on the correlation between RF echoes. Second, we estimated the lateral displacement and surface profile of a phantom, which had a saw tooth profile on its surface, and compared the surface profile measured by ultrasound with that measured by a laser profilometer. Finally, we estimated the lateral displacement and surface roughness of the carotid arterial wall of three healthy subjects (24-, 23-, and 23-year-old males) using the proposed method.
Soil volume estimation in debris flow areas using lidar data in the 2014 Hiroshima, Japan rainstorm
NASA Astrophysics Data System (ADS)
Miura, H.
2015-10-01
Debris flows triggered by the rainstorm in Hiroshima, Japan on August 20th, 2014 caused extensive damage to built-up areas in the northern part of Hiroshima city. To support emergency response activities and early-stage recovery planning, it is important to evaluate the distribution of soil volumes in the debris flow areas immediately after the disaster. In this study, an automated nonlinear mapping technique is applied to light detection and ranging (LiDAR)-derived digital elevation models (DEMs) observed before and after the disaster to quickly and accurately correct geometric locational errors in the data. The soil volumes generated by the debris flows are estimated by differencing the pre- and post-event DEMs. The geomorphologic characteristics of the debris flow areas are discussed on the basis of the distribution of the estimated soil volumes.
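Once the DEMs are co-registered, the volume computation itself is a straightforward grid differencing; the toy grids, cell size, and sign convention below are assumptions, and the nonlinear mapping/co-registration step is omitted:

```python
import numpy as np

def debris_volumes(dem_pre, dem_post, cell_area):
    """Deposition and erosion volumes from co-registered pre/post-event
    DEMs on the same grid (elevations in metres, cell_area in m^2)."""
    dz = dem_post - dem_pre
    deposited = dz[dz > 0].sum() * cell_area   # positive change: soil gained
    eroded = -dz[dz < 0].sum() * cell_area     # negative change: soil lost
    return deposited, eroded

# Toy 3x3 DEMs on a 2 m grid (cell area 4 m^2)
pre = np.zeros((3, 3))
post = np.array([[0.5, 0.5, 0.0],
                 [0.0, -1.0, 0.0],
                 [0.0, 0.0, 0.0]])
dep, ero = debris_volumes(pre, post, cell_area=4.0)  # 4.0 m^3 each
```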
Estimation of myocardial volume at risk from CT angiography
NASA Astrophysics Data System (ADS)
Zhu, Liangjia; Gao, Yi; Mohan, Vandana; Stillman, Arthur; Faber, Tracy; Tannenbaum, Allen
2011-03-01
The determination of myocardial volume at risk distal to coronary stenosis provides important information for prognosis and treatment of coronary artery disease. In this paper, we present a novel computational framework for estimating the myocardial volume at risk in computed tomography angiography (CTA) imagery. Initially, epicardial and endocardial surfaces, and coronary arteries are extracted using an active contour method. Then, the extracted coronary arteries are projected onto the epicardial surface, and each point on this surface is associated with its closest coronary artery using the geodesic distance measurement. The likely myocardial region at risk on the epicardial surface caused by a stenosis is approximated by the region in which all its inner points are associated with the sub-branches distal to the stenosis on the coronary artery tree. Finally, the likely myocardial volume at risk is approximated by the volume in between the region at risk on the epicardial surface and its projection on the endocardial surface, which is expected to yield computational savings over risk volume estimation using the entire image volume. Furthermore, we expect increased accuracy since, as compared to prior work using the Euclidean distance, we employ the geodesic distance in this work. The experimental results demonstrate the effectiveness of the proposed approach on pig heart CTA datasets.
Volume estimation of multidensity nodules with thoracic computed tomography.
Gavrielides, Marios A; Li, Qin; Zeng, Rongping; Myers, Kyle J; Sahiner, Berkman; Petrick, Nicholas
2016-01-01
This work focuses on volume estimation of "multidensity" lung nodules in a phantom computed tomography study. Eight objects were manufactured by enclosing spherical cores within larger spheres of double the diameter but with a different density. Different combinations of outer-shell/inner-core diameters and densities were created. The nodules were placed within an anthropomorphic phantom and scanned with various acquisition and reconstruction parameters. The volumes of the entire multidensity object as well as the inner core of the object were estimated using a model-based volume estimator. Results showed percent volume bias across all nodules and imaging protocols with slice thicknesses [Formula: see text] ranging from [Formula: see text] to 6.6% for the entire object (standard deviation ranged from 1.5% to 7.6%), and within [Formula: see text] to 5.7% for the inner-core measurement (standard deviation ranged from 2.0% to 17.7%). Overall, the estimation error was larger for the inner-core measurements, which was expected due to the smaller size of the core. Reconstructed slice thickness was found to substantially affect volumetric error for both tasks; exposure and reconstruction kernel were not. These findings provide information for understanding uncertainty in volumetry of nodules that include multiple densities such as ground glass opacities with a solid component. PMID:26844235
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses. Their performance is compared using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 under liquid loading was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
NASA Astrophysics Data System (ADS)
Granata, Daniele; Carnevale, Vincenzo
2016-08-01
The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset.
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical loads exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can therefore be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
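The correction itself reduces to subtracting ΔHRT before applying an individual HR-VO2 calibration. A minimal sketch; the linear calibration form and all numbers are hypothetical, not values from the study:

```python
import numpy as np

def fit_hr_vo2(hr, vo2):
    """Individual linear HR-VO2 calibration from a submaximal step test."""
    slope, intercept = np.polyfit(hr, vo2, 1)
    return slope, intercept

def estimate_work_vo2(hr_work, delta_hr_thermal, slope, intercept):
    """Remove the thermal component from HR, then apply the calibration."""
    hr_corrected = hr_work - delta_hr_thermal
    return slope * hr_corrected + intercept

# Hypothetical step-test calibration points: HR (bpm) vs VO2 (L/min)
hr_cal = np.array([90.0, 110.0, 130.0, 150.0])
vo2_cal = np.array([1.0, 1.5, 2.0, 2.5])
m, b = fit_hr_vo2(hr_cal, vo2_cal)

# Work HR of 140 bpm with a 20 bpm thermal component: calibration is
# applied at the corrected HR of 120 bpm
vo2_work = estimate_work_vo2(140.0, 20.0, m, b)
```

Using the raw 140 bpm instead of the corrected 120 bpm in the same calibration is exactly the overestimation the study quantifies.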
MIDAS robust trend estimator for accurate GPS station velocities without step detection
NASA Astrophysics Data System (ADS)
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
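The core of the estimator can be sketched in a few lines: take slopes only from data pairs about one year apart, then trim outliers around the median and recompute. This is a simplified sketch, not the published algorithm (the exact pair-selection rule, the uncertainty estimate, and several refinements are omitted):

```python
import numpy as np

def midas_like_velocity(t, x, tol=0.001):
    """Simplified MIDAS-style trend: median of one-year-apart slopes,
    with outlier trimming and a recomputed median (sketch only)."""
    t, x = np.asarray(t), np.asarray(x)
    slopes = []
    for i in range(len(t)):
        j = int(np.argmin(np.abs(t - (t[i] + 1.0))))  # partner ~1 yr later
        dt = t[j] - t[i]
        if j != i and abs(dt - 1.0) < tol:
            slopes.append((x[j] - x[i]) / dt)
    slopes = np.array(slopes)
    med = np.median(slopes)
    mad = 1.4826 * np.median(np.abs(slopes - med))   # robust scale
    if mad > 0:
        slopes = slopes[np.abs(slopes - med) <= 2.0 * mad]  # drop outliers
    return np.median(slopes)

# Daily series over 4 years: 3 mm/yr trend + annual cycle + a 10 mm step
t = np.arange(0.0, 4.0, 1.0 / 365.25)
x = 3.0 * t + 2.0 * np.sin(2.0 * np.pi * t) + 10.0 * (t > 2.0)
v = midas_like_velocity(t, x)  # close to 3 mm/yr despite the step
```

The one-year pairing largely cancels the seasonal term, and the trimming step removes the one-sided outliers produced by pairs that span the step.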
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water-slope measurements often yielded errors as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments to measure a representative 'index' velocity used for in situ estimation of mean water velocity, and 2) the use of the acoustic Doppler discharge measurement system to calibrate the index velocity measurement data. The method used to calibrate (rate) the index velocity against the channel velocity measured with an Acoustic Doppler Current Profiler is the most critical factor affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods: two sets during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
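The rating-and-filtering chain can be sketched as follows; the linear rating form, channel area, moving-average stand-in for the tidal low-pass filter, and all numbers are illustrative assumptions, not values from the study:

```python
import numpy as np

def rate_index_velocity(v_index, v_channel):
    """Linear rating of index velocity against ADCP-derived mean
    channel velocity (sketch of the calibration step)."""
    slope, intercept = np.polyfit(v_index, v_channel, 1)
    return slope, intercept

def net_discharge(v_index, area, slope, intercept, window):
    """Instantaneous discharge from the rated velocity, then a moving
    average standing in for the low-pass tidal filter."""
    q = (slope * v_index + intercept) * area
    return np.convolve(q, np.ones(window) / window, mode="valid")

# Synthetic hourly data: ~M2 tide (12.42 h) around a small residual flow
t = np.arange(240.0)                                   # ten days, hourly
v_true = 0.05 + 0.8 * np.sin(2.0 * np.pi * t / 12.42)  # mean velocity, m/s
v_idx = (v_true - 0.01) / 0.9                          # index meter reading
m, b = rate_index_velocity(v_idx, v_true)              # recovers 0.9, 0.01
q_net = net_discharge(v_idx, area=500.0, slope=m, intercept=b, window=25)
# q_net hovers near the true residual discharge of 25 m^3/s
```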
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2016-10-01
modified velocity gradients reduce the residuals, the relative location uncertainties, and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
ERIC Educational Resources Information Center
Hughes, Stephen W.
2005-01-01
A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
A technique for fast and accurate measurement of hand volumes using Archimedes' principle.
Hughes, S; Lau, J
2008-03-01
A new technique for measuring hand volumes using Archimedes' principle is described. The technique involves immersing a hand in a water container placed on an electronic balance; the volume is given by the change in weight divided by the density of water. This technique was compared with the more conventional approach of immersing the hand in a container with an overflow spout and collecting and weighing the overflow water. The hand volumes of two subjects were measured: 494 +/- 6 ml and 312 +/- 7 ml with the immersion method versus 476 +/- 14 ml and 302 +/- 8 ml with the overflow method. Using plastic test objects, the mean difference between actual and measured volume was -0.3% for the immersion technique and 2.0% for the overflow technique. This study shows that hand volumes can be obtained more quickly with the immersion method than with the overflow method. The technique could find an application in clinics where frequent hand-volume measurements are required.
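The calculation behind the immersion technique is a single division, V = Δm/ρ_water, since the balance registers the weight of the displaced water. A sketch with hypothetical balance readings (water density taken at roughly 20 °C):

```python
def immersion_volume_ml(reading_before_g, reading_after_g,
                        water_density_g_per_ml=0.9982):
    """Hand volume from the rise in balance reading when the hand is
    suspended in the water container (Archimedes' principle)."""
    return (reading_after_g - reading_before_g) / water_density_g_per_ml

# Hypothetical readings: container alone, then with the hand immersed
v_ml = immersion_volume_ml(2000.0, 2493.1)  # about 494 ml
```

The overflow variant instead weighs the spilled water directly; both methods reduce to the same division by water density.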
Kasabova, Boryana E; Holliday, Trenton W
2015-04-01
A new model for estimating human body surface area and body volume/mass from standard skeletal metrics is presented. This model is then tested against both 1) "independently estimated" body surface areas and "independently estimated" body volume/mass (both derived from anthropometric data) and 2) the cylindrical model of Ruff. The model is found to be more accurate in estimating both body surface area and body volume/mass than the cylindrical model, but it is more accurate in estimating body surface area than it is for estimating body volume/mass (as reflected by the standard error of the estimate when "independently estimated" surface area or volume/mass is regressed on estimates derived from the present model). Two practical applications of the model are tested. In the first test, the relative contribution of the limbs versus the trunk to the body's volume and surface area is compared between "heat-adapted" and "cold-adapted" populations. As expected, the "cold-adapted" group has significantly more of its body surface area and volume in its trunk than does the "heat-adapted" group. In the second test, we evaluate the effect of variation in bi-iliac breadth, elongated or foreshortened limbs, and differences in crural index on the body's surface area to volume ratio (SA:V). Results indicate that the effects of bi-iliac breadth on SA:V are substantial, while those of limb lengths and (especially) the crural index are minor, which suggests that factors other than surface area relative to volume are driving morphological variation and ecogeographical patterning in limb proportions.
BEECH, D. J.; ROCHE, E. D.; SIBBONS, P. D.; ROSSDALE, P. D.; OUSEY, J. C.
2000-01-01
Mean glomerular volume has previously been estimated, using stereological techniques, specifically the point-sampled intercept (PSI), either from isotropic or from vertical sections. As glomeruli are approximately spherical structures, the same stereological technique was carried out on vertical and arbitrary sections to determine whether section orientation had any effect on mean glomerular volume estimation. Equine kidneys from 10 individuals were analysed using the PSI method of estimating volume-weighted mean glomerular volume (MGV); for each kidney, arbitrary and vertical sections were analysed. MGVs were not significantly different between arbitrary and vertical sections (P = 0.691) when analysing the data with the paired t test; when plotting MGV estimates from arbitrary sections against those from vertical sections the intercept was found not to be significantly different from zero (P > 0.8) and the slope of the regression line not to be significantly different from 1.0 (P > 0.4). For the estimation of MGV in equine kidneys using PSI, arbitrary sections may be used if it is not possible to use isotropic or vertical sections, but some caution must be exercised in the interpretation of results so gained. PMID:11005722
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. To verify the proposed method, we built an RGB-D-based human body orientation dataset covering a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method. PMID:23893759
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite's sensor and the imaged objects is one of the most common causes of remote sensing image degradation, and it seriously hinders image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so accurately identifying the motion blur direction and length is crucial for determining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the image spectrum can be used to obtain these parameters via the Radon transform. However, the heavy noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive, graph-based image segmentation method (GrabCut) is adopted to extract the edge of the bright central region of the spectrum. The motion blur direction is then estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, after the blur parameters are estimated, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon. The experimental results verify the effectiveness and robustness of the algorithm.
Efficient and accurate estimation of relative order tensors from λ-maps
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Rishi; Miao, Xijiang; Shealy, Paul; Valafar, Homayoun
2009-06-01
The rapid increase in the availability of RDC data from multiple alignment media in recent years has necessitated the development of more sophisticated analyses that extract the RDC data's full information content. This article presents an analysis of the distribution of RDCs from two media (2D-RDC data), using the information obtained from a λ-map. This article also introduces an efficient algorithm, which leverages these findings to extract the order tensors for each alignment medium using unassigned RDC data in the absence of any structural information. The results of applying this 2D-RDC analysis method to synthetic and experimental data are reported in this article. The relative order tensor estimates obtained from the 2D-RDC analysis are compared to order tensors obtained from the program REDCAT after using assignment and structural information. The final comparisons indicate that the relative order tensors estimated from the unassigned 2D-RDC method very closely match the results from methods that require assignment and structural information. The presented method is successful even in cases with small datasets. The results of analyzing experimental RDC data for the protein 1P7E are presented to demonstrate the potential of the presented work in accurately estimating the principal order parameters from RDC data that incompletely sample the RDC space. In addition to the new algorithm, a discussion of the uniqueness of the solutions is presented; no more than two clusters of distinct solutions have been shown to satisfy each λ-map.
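For readers who want the assigned, structure-based baseline that tools like REDCAT implement, an order tensor can be fit by linear least squares from assigned RDCs and bond-vector direction cosines. This sketch is that standard linear formulation, not the unassigned 2D-RDC algorithm of the paper; the vectors and tensor values below are invented for illustration.

```python
import math

def unit(v):
    """Normalize a 3-vector to direction cosines."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rdc_row(v):
    """Design-matrix row: D = Syy(y^2-x^2) + Szz(z^2-x^2) + 2Sxy xy + 2Sxz xz + 2Syz yz,
    with the traceless condition Sxx = -Syy - Szz folded in."""
    x, y, z = v
    return [y * y - x * x, z * z - x * x, 2 * x * y, 2 * x * z, 2 * y * z]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def fit_order_tensor(vectors, rdcs):
    """Least-squares fit of (Syy, Szz, Sxy, Sxz, Syz) via the normal equations."""
    rows = [rdc_row(v) for v in vectors]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(5)] for i in range(5)]
    atb = [sum(r[i] * d for r, d in zip(rows, rdcs)) for i in range(5)]
    return solve(ata, atb)
```

Five independent bond vectors suffice in principle; more rows make the fit overdetermined and noise-tolerant.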
Accurate estimation of the RMS emittance from single current amplifier data
Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.
2002-05-31
This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data-reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory H⁻ ion source and an ISIS H⁻ ion source.
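Once the background has been subtracted, the rms emittance itself is the standard second-moment determinant. A minimal sketch of that final step (the weights stand in for background-corrected current densities; the boundary search and plateau detection of SCUBEEx are not shown):

```python
import math

def rms_emittance(xs, xps, weights):
    """rms emittance sqrt(<x^2><x'^2> - <x x'>^2) from weighted samples."""
    w = sum(weights)
    mx = sum(wi * x for wi, x in zip(weights, xs)) / w
    mxp = sum(wi * xp for wi, xp in zip(weights, xps)) / w
    # Central second moments of position and divergence
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(weights, xs)) / w
    spp = sum(wi * (xp - mxp) ** 2 for wi, xp in zip(weights, xps)) / w
    sxp = sum(wi * (x - mx) * (xp - mxp) for wi, x, xp in zip(weights, xs, xps)) / w
    return math.sqrt(sxx * spp - sxp * sxp)
```

An uncorrelated, uniform 3x3 grid over {-1, 0, 1} x {-1, 0, 1} has <x²> = <x'²> = 2/3 and <xx'> = 0, hence emittance 2/3.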
Quick and accurate estimation of the elastic constants using the minimum image method
NASA Astrophysics Data System (ADS)
Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.
2015-04-01
A method for determining elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting via the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that the contribution of long-range interactions cannot be ignored when determining the elastic constants, because doing so would lead to erroneous results. In addition, the simulations reveal that including the interactions of each particle with all of its minimum image neighbors, even for small systems, yields results very close to the elastic constants in the thermodynamic limit. This enables quick and accurate estimation of the elastic constants from very small samples.
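The minimum image convention at the heart of the method is compact. A sketch of an LJ energy evaluation with minimum imaging in a cubic periodic box (reduced units; an illustrative reconstruction, not the authors' MC code):

```python
def lj_energy_mic(positions, box, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy with the minimum image convention
    in a cubic periodic box of side `box` (reduced units)."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = 0.0
            for a in range(3):
                d = positions[i][a] - positions[j][a]
                d -= box * round(d / box)   # minimum image: nearest periodic copy
                r2 += d * d
            sr6 = (sigma * sigma / r2) ** 3
            e += 4.0 * eps * sr6 * (sr6 - 1.0)
    return e
```

Two particles separated by 2^(1/6) σ across the periodic boundary sit at the LJ minimum, giving energy -ε.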
Pitfalls in accurate estimation of overdiagnosis: implications for screening policy and compliance.
Feig, Stephen A
2013-01-01
Stories in the public media claiming that 30 to 50% of screen-detected breast cancers are overdiagnosed dissuade women from being screened, because overdiagnosed cancers would never result in death if undetected yet do result in unnecessary treatment. However, such concerns are unwarranted because the frequency of overdiagnosis, when properly calculated, is only 0 to 5%. In the previous issue of Breast Cancer Research, Duffy and Parmar report that accurate estimation of the rate of overdiagnosis requires recognizing the effect of lead time on detection rates and the consequent need for an adequate number of years of follow-up. These indispensable elements were absent from highly publicized studies that overestimated the frequency of overdiagnosis.
Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks
NASA Astrophysics Data System (ADS)
Bouchaala, F.; Ali, M. Y.
2014-12-01
The attenuation experienced by a seismic wave during its propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of the energy due to medium heterogeneities. Intrinsic attenuation, by contrast, is an inelastic phenomenon, mainly due to fluid-grain friction during the wave passage. Intrinsic attenuation is directly related to the physical characteristics of the medium, so this parameter can be used for media characterization and fluid detection, which is beneficial for the oil and gas industry. The intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; therefore the accuracy of the intrinsic attenuation depends directly on the accuracy of the total attenuation and the scattering. The total attenuation can be estimated from the recorded waves using in situ methods such as the spectral ratio and frequency shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate depends strongly on the layer thicknesses, especially for media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies gave some assumptions for the choice of the layer thickness, but they showed limitations, especially in the case of carbonate rocks. In this study we established a relationship between the layer thicknesses and the frequency of propagation, after some mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship through synthetic tests and real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi in the United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering.
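The spectral ratio method mentioned above reduces to fitting a line: under the standard constant-Q model, for two receivers separated by travel time Δt, ln(A1(f)/A2(f)) = -π f Δt / Q + const, so Q follows from the fitted slope. A sketch under that textbook model (the setup and numbers are generic, not taken from the paper's data):

```python
import math

def fit_q_spectral_ratio(freqs, log_ratios, dt):
    """Least-squares slope of ln(A1/A2) versus frequency; Q = -pi*dt/slope."""
    n = len(freqs)
    fm = sum(freqs) / n
    rm = sum(log_ratios) / n
    slope = (sum((f - fm) * (r - rm) for f, r in zip(freqs, log_ratios))
             / sum((f - fm) ** 2 for f in freqs))
    return -math.pi * dt / slope
```

Synthetic spectra generated with a known Q are recovered exactly in the noiseless case.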
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray
2016-06-01
Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple-scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
mBEEF: An accurate semi-local Bayesian error estimation density functional
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas
2014-04-01
We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.
Greater contrast in Martian hydrological history from more accurate estimates of paleodischarge
NASA Astrophysics Data System (ADS)
Jacobsen, R. E.; Burr, D. M.
2016-09-01
Correlative width-discharge relationships from the Missouri River Basin are commonly used to estimate fluvial paleodischarge on Mars. However, hydraulic geometry provides alternative, and causal, width-discharge relationships derived from broader samples of channels, including those in reduced-gravity (submarine) environments. Comparison of these relationships implies that causal relationships from hydraulic geometry should yield more accurate and more precise discharge estimates. Our remote analysis of a Martian-terrestrial analog channel, combined with in situ discharge data, substantiates this implication. Applied to Martian features, these results imply that paleodischarges of interior channels of Noachian-Hesperian (~3.7 Ga) valley networks have been underestimated by a factor of several, whereas paleodischarges for smaller fluvial deposits of the Late Hesperian-Early Amazonian (~3.0 Ga) have been overestimated. Thus, these new paleodischarges significantly magnify the contrast between early and late Martian hydrologic activity. Width-discharge relationships from hydraulic geometry represent validated tools for quantifying fluvial input near candidate landing sites of upcoming missions.
NASA Astrophysics Data System (ADS)
Mittag, Anja; Lenz, Dominik; Smith, Paul J.; Pach, Susanne; Tarnok, Attila
2005-04-01
Aim: In patients, e.g. with congenital heart diseases, a differential blood count is needed for diagnosis. For standard automatic analyzers, 500 μl of blood is required from the patients. In the case of newborns and infants this is a substantial volume, especially after operations associated with blood loss. Therefore, the aim of this study was to develop a method to determine a differential blood picture with a substantially reduced specimen volume. Methods: To generate a differential blood picture, 10 μl EDTA blood was mixed with 10 μl of a DRAQ5 solution (500 μM, Biostatus) and 10 μl of an antibody mixture (CD45-FITC, CD14-PE, diluted with PBS). 20 μl of this cell suspension was filled into a Neubauer counting chamber. Due to the defined volume of the chamber, it is possible to determine the cell count per volume. The trigger for leukocyte counting was set on the DRAQ5 signal in order to distinguish nucleated white blood cells from erythrocytes. Different leukocyte subsets could be distinguished using the fluorescence-labeled antibodies. For erythrocyte counting, the cell suspension was diluted a further 150 times. 20 μl of this dilution was analyzed in a microchamber by LSC with the trigger set on the forward scatter signal. Results: This method allows a substantial decrease of the blood sample volume needed to generate a differential blood picture (10 μl instead of 500 μl). There was a high correlation between our method and the results of the routine laboratory (r² = 0.96, p < 0.0001, n = 40). For all parameters, intra-assay variance was less than 7%. Conclusions: In patients with low blood volume, such as neonates and critically ill infants, every effort has to be taken to reduce the blood volume needed for diagnostics. With this method only 2% of the standard sample volume is needed to generate a differential blood picture. Costs are below those of the routine laboratory. We suggest this method to be established in paediatric cardiology for routine diagnostics and for
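The chamber arithmetic behind counts like these is a one-line conversion: concentration in the original sample equals the count divided by the counted chamber volume, scaled back up by the dilution. The helper below is an illustrative reconstruction of that general hemocytometer calculation, not the authors' software, and the example volumes are hypothetical.

```python
def cells_per_microliter(counted_cells, counted_volume_ul, dilution_factor):
    """Cell concentration in the undiluted sample from a counting-chamber count.

    counted_volume_ul: chamber volume actually evaluated, in microliters.
    dilution_factor: total dilution of the original blood (e.g. 3x for
    10 ul blood + 10 ul dye + 10 ul antibody mix, as in the protocol above).
    """
    return counted_cells / counted_volume_ul * dilution_factor
```

For example, 300 cells counted in a 0.1 μl chamber volume at 3x dilution corresponds to 9000 cells/μl of whole blood.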
Uncertainties in peat volume and soil carbon estimated using ground penetrating radar and probing
Parsekian, Andrew D.; Slater, Lee; Ntarlagiannis, Dimitrios; Nolan, James; Sebestyen, Stephen D; Kolka, Randall K; Hanson, Paul J
2012-01-01
We evaluate the uncertainty in calculations of peat basin volume using high-resolution data to resolve the three-dimensional structure of a peat basin, using both direct (push probes) and indirect geophysical (ground penetrating radar) measurements. We compared volumetric estimates from both approaches with values from the literature. We identified subsurface features that can introduce uncertainties into direct peat thickness measurements, including the presence of woody peat and soft clay or gyttja. We demonstrate that a simple geophysical technique that is easily scalable to larger peatlands can be used to rapidly and cost-effectively obtain more accurate and less uncertain estimates of peat basin volumes, which are critical to improving understanding of the total terrestrial carbon pool in peatlands.
NASA Astrophysics Data System (ADS)
Browning, J.; Drymoni, K.; Gudmundsson, A.
2015-12-01
An understanding of the amount of magma available to supply any given eruption is useful for determining the potential eruption magnitude and duration. Geodetic measurements and inversion techniques are often used to constrain volume changes within magma chambers, as well as their location and depth, but such models cannot calculate total magma storage. For example, during the 2012 unrest period at Santorini volcano, approximately 0.021 km3 of new magma entered a shallow chamber residing at around 4 km below the surface. This type of event is not unusual, and is in fact a necessary condition for the formation of a long-lived shallow chamber, which Santorini must possess. The period of unrest ended without culminating in eruption, i.e. the amount of magma which entered the chamber was insufficient to rupture the chamber and force magma further towards the surface. We combine previously published data on the volume of recent eruptions at Santorini with geodetic measurements. Measurements of dykes within the caldera wall provide an estimate of the volume of magma transported during eruptions, assuming the dyke does not become arrested. When the combined volume of a dyke and an eruption (Ve) is known, fracture mechanics principles and poro-elastic constraints can be used to estimate the size of the underlying shallow magma chamber. We present field measurements of dykes within the Santorini caldera and provide an analytical method to estimate the volume of magma contained underneath the caldera. In addition, we postulate the potential volume of magma required as input from deeper sources to switch the shallow magma chamber from an equilibrium state to one where the pressure inside the chamber exceeds the tensile strength of the surrounding host rock, a condition necessary to form a dyke and possibly an eruption.
Bioimpedance spectroscopy for the estimation of body fluid volumes in mice.
Chapman, M E; Hu, L; Plato, C F; Kohan, D E
2010-07-01
Conventional indicator dilution techniques for measuring body fluid volume are laborious, expensive, and highly invasive. Bioimpedance spectroscopy (BIS) may be a useful alternative due to being rapid, minimally invasive, and allowing repeated measurements. BIS has not been reported in mice; hence we examined how well BIS estimates body fluid volume in mice. Using C57/Bl6 mice, the BIS system demonstrated <5% intermouse variation in total body water (TBW) and extracellular (ECFV) and intracellular fluid volume (ICFV) between animals of similar body weight. TBW, ECFV, and ICFV differed between heavier male and lighter female mice; however, the ratio of TBW, ECFV, and ICFV to body weight did not differ between mice and corresponded closely to values in the literature. Furthermore, repeat measurements over 1 wk demonstrated <5% intramouse variation. Default resistance coefficients used by the BIS system, defined for rats, produced body composition values for TBW that exceeded body weight in mice. Therefore, body composition was measured in mice using a range of resistance coefficients. Resistance values at 10% of those defined for rats provided TBW, ECFV, and ICFV ratios to body weight that were similar to those obtained by conventional isotope dilution. Further evaluation of the sensitivity of the BIS system was determined by its ability to detect volume changes after saline infusion; saline provided the predicted changes in compartmental fluid volumes. In summary, BIS is a noninvasive and accurate method for the estimation of body composition in mice. The ability to perform serial measurements will be a useful tool for future studies.
Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)
1995-01-01
It has been claimed that either oculomotor or static depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (±16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
Gas Flaring Volume Estimates with Multiple Satellite Observations
NASA Astrophysics Data System (ADS)
Ziskin, D. C.; Elvidge, C.; Baugh, K.; Ghosh, T.; Hsu, F. C.
2010-12-01
Flammable gases (primarily methane) are a common by-product of oil wells. Where there is no infrastructure to use the gas or bring it to market, the gases are typically flared off. This practice is more common at remote sites, such as offshore drilling platforms. The Defense Meteorological Satellite Program (DMSP) is a series of satellites carrying a low-light imager called the Operational Linescan System (OLS). The OLS, which detects the flares at night, has been a valuable tool in the estimation of flared gas volume [Elvidge et al, 2009]. The Moderate Resolution Imaging Spectroradiometer (MODIS) fire product has been processed to create products suitable for an independent estimate of gas flaring on land. We present the MODIS flare product, the results of our MODIS gas flare volume analysis, and independent validation of the published DMSP estimates. Elvidge, C. D., Ziskin, D., Baugh, K. E., Tuttle, B. T., Ghosh, T., Pack, D. W., Erwin, E. H., Zhizhin, M., 2009, "A Fifteen Year Record of Global Natural Gas Flaring Derived from Satellite Data", Energies, 2 (3), 595-622
NASA Astrophysics Data System (ADS)
Honjo, Yasunori; Hasegawa, Hideyuki; Kanai, Hiroshi
2012-07-01
rates estimated using different kernel sizes were examined using the normalized mean-squared error of the estimated strain rate relative to the actual one obtained by the 1D phase-sensitive method. Compared with conventional kernel sizes, this result shows that the proposed correlation kernel enables more accurate measurement of the strain rate. In in vivo measurement, the regional instantaneous velocities and strain rates in the radial direction of the heart wall were analyzed in detail at an extremely high temporal resolution (frame rate of 860 Hz). In this study, transitions between contraction and relaxation could be detected by 2D tracking. These results indicate the potential of this method for high-accuracy estimation of strain rates and detailed analyses of the physiological function of the myocardium.
Validity of estimating limb muscle volume by bioelectrical impedance.
Miyatani, M; Kanehisa, H; Masuo, Y; Ito, M; Fukunaga, T
2001-07-01
The present study aimed to investigate the validity of estimating muscle volume by bioelectrical impedance analysis. Bioelectrical impedance and series cross-sectional images of the forearm, upper arm, lower leg, and thigh on the right side were determined in 22 healthy young adult men using a specially designed bioelectrical impedance acquisition system and magnetic resonance imaging (MRI), respectively. The impedance index (L²/Z) for every segment, calculated as the ratio of segment length squared to the impedance, was significantly correlated to the muscle volume measured by MRI, with r = 0.902-0.976 (P < 0.05). In these relationships, the SE of estimation was 38.4 cm³ for the forearm, 40.9 cm³ for the upper arm, 107.2 cm³ for the lower leg, and 362.3 cm³ for the thigh. Moreover, isometric torque developed in elbow flexion or extension and knee flexion or extension was significantly correlated to the L²/Z values of the upper arm and thigh, respectively, with correlation coefficients of 0.770-0.937 (P < 0.05), which differed insignificantly from those (0.799-0.958; P < 0.05) in the corresponding relationships with the muscle volume measured by MRI of elbow flexors or extensors and knee flexors or extensors. Thus the present study indicates that bioelectrical impedance analysis may be useful to predict muscle volume and to investigate possible relations between muscle size and strength capability in a limited segment of the upper and lower limbs.
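The predictor used throughout is simply the impedance index L²/Z, and the MRI-calibrated prediction is a straight line in that index. A sketch of both steps (the slope and intercept below are placeholders for illustration; the abstract reports correlations and standard errors, not the regression coefficients):

```python
def impedance_index(segment_length_cm, impedance_ohm):
    """L^2/Z, the predictor correlated with segment muscle volume."""
    return segment_length_cm ** 2 / impedance_ohm

def predict_muscle_volume(l2z, slope, intercept):
    """Linear prediction of muscle volume (cm^3) from the impedance index.
    slope/intercept are hypothetical calibration coefficients that would
    come from regressing MRI-measured volume on L^2/Z."""
    return slope * l2z + intercept
```

For a 30 cm segment with 45 Ω impedance, L²/Z = 20; with hypothetical coefficients (slope 2, intercept 5) the predicted volume would be 45 cm³.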
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (fH-VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH-VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH-VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH-VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24 log(fH) + 0.0237 t - 0.0157 log(fH) t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH-VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH-VO2 prediction equations, is explained. PMID:17363231
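Prediction equation (1) above can be applied directly. A sketch, with t and the units of fH and VO2 as defined in the original study (they are not restated in the abstract):

```python
import math

def predict_vo2(f_h, t):
    """VO2 from heart rate via equation (1) of the abstract:
    log10(VO2) = -0.279 + 1.24*log10(fH) + 0.0237*t - 0.0157*log10(fH)*t."""
    log_fh = math.log10(f_h)
    return 10 ** (-0.279 + 1.24 * log_fh + 0.0237 * t - 0.0157 * log_fh * t)
```

Note that the interaction term means the effective exponent on fH is 1.24 - 0.0157 t, so the fH sensitivity declines as t grows.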
Benchmarking of a New Finite Volume Shallow Water Code for Accurate Tsunami Modelling
NASA Astrophysics Data System (ADS)
Reis, Claudia; Clain, Stephane; Figueiredo, Jorge; Baptista, Maria Ana; Miranda, Jorge Miguel
2015-04-01
Finite volume methods used to solve the shallow-water equations with source terms have received great attention over the last two decades due to their fundamental properties: the built-in conservation property, the capacity to treat discontinuities correctly, and the ability to handle complex bathymetry configurations while preserving steady-state configurations (well-balanced schemes). Nevertheless, it is still a challenge to build an efficient numerical scheme with very few numerical artifacts (e.g. numerical diffusion) that can be used in an operational environment and is able to better capture the dynamics of the wet-dry interface and the physical phenomena that occur in the inundation area. We present here a new finite volume code, benchmark it against analytical and experimental results, and test the performance of the code in the complex topography of the Tagus Estuary, close to Lisbon, Portugal. This work is funded by the Portugal-France research agreement, through the research project FCT-ANR/MAT-NAN/0122/2012.
Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.
2014-01-01
An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention, since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or on high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrates that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlights the benefits of the current approach. PMID:26158022
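For intuition, the simplest PV model expresses a boundary voxel's intensity as a linear mix of two pure-tissue means, so the PV fraction inverts that mix. This is the generic two-class mixing model only, not the paper's edge-based estimator:

```python
def pv_fraction(intensity, mu_a, mu_b):
    """Fraction of tissue B in a voxel under the linear two-class mixing model
    I = (1 - f) * mu_a + f * mu_b, clamped to the physical range [0, 1]."""
    f = (intensity - mu_a) / (mu_b - mu_a)
    return min(1.0, max(0.0, f))
```

A voxel exactly halfway between the two class means gets fraction 0.5; intensities outside the two means clamp to pure tissue.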
Imani, Farsad; Karimi Rouzbahani, Hamid Reza; Goudarzi, Mehrdad; Tarrahi, Mohammad Javad; Ebrahim Soltani, Alireza
2016-01-01
Background: During anesthesia, continuous body temperature monitoring is essential, especially in children. Anesthesia can increase the risk of loss of body temperature three- to four-fold. Hypothermia in children results in increased morbidity and mortality. Since the measurement points of core body temperature are not easily accessible, near-core sites, such as the rectum, are used. Objectives: The purpose of this study was to measure the skin temperature over the carotid artery and compare it with the rectal temperature, in order to propose a model for accurate estimation of near-core body temperature. Patients and Methods: In total, 124 patients within the age range of 2 - 6 years, undergoing elective surgery, were selected. The temperature of the rectum and of the skin over the carotid artery was measured. The patients were then randomly divided into two groups (each including 62 subjects), namely the modeling group (MG) and the validation group (VG). First, in the modeling group, the average temperatures of the rectum and of the skin over the carotid artery were measured separately. The appropriate model was determined according to the significance of the model's coefficients. The obtained model was used to predict the rectal temperature in the second group (VG). The correlation of the predicted values with the real values (the measured rectal temperature) in the second group was investigated, and the difference in the average values of the two groups was examined for significance. Results: In the modeling group, the average rectal and carotid temperatures were 36.47 ± 0.54°C and 35.45 ± 0.62°C, respectively. The final model was: rectal temperature = 0.561 × carotid temperature + 16.583. The predicted value was calculated based on the regression model and then compared with the measured rectal value, which showed no significant difference (P = 0.361). Conclusions: The present study was the first research, in which rectum temperature was compared with that
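The reported model is a one-line predictor; a sketch (temperatures in °C, coefficients exactly as given in the abstract):

```python
def estimate_rectal_temp(carotid_skin_temp_c):
    """Rectal temperature from carotid skin temperature via the regression
    model reported in the abstract (modeling group, n = 62)."""
    return 0.561 * carotid_skin_temp_c + 16.583
```

As a consistency check, plugging in the modeling group's mean carotid temperature (35.45°C) returns approximately the mean rectal temperature (36.47°C) reported in the abstract.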
Improvement in volume estimation from confocal sections after image deconvolution.
Difato, F; Mazzone, F; Scaglione, S; Fato, M; Beltrame, F; Kubínová, L; Janácek, J; Ramoino, P; Vicidomini, G; Diaspro, A
2004-06-01
The confocal microscope can image a specimen in its natural environment, forming a 3D image of the whole structure by scanning it and collecting light through a small aperture (pinhole), allowing in vivo and in vitro observations. So far, the confocal fluorescence microscope (CFM) has been considered a true volume imager because the pinhole rejects information coming from out-of-focus planes. Unfortunately, the intrinsic imaging properties of the optical scheme presently employed yield a corrupted image that can hamper quantitative analysis of successive image planes. By post-collection image restoration, it is possible to obtain an estimate, with respect to a given optimization criterion, of the true object, utilizing the impulse response of the system, or point spread function (PSF). The PSF can be measured or predicted, so as to have a mathematical and physical model of the image-formation process. Modelling the recording noise as an additive Gaussian process, we used the regularized iterative constrained Tikhonov-Miller (ICTM) restoration algorithm to solve the inverse problem. This algorithm finds the best estimate by iteratively searching among the possible positive solutions; in the Fourier domain, such an approach is relatively fast and elegant. In order to assess the effective improvement in quantitative image analysis, we measured the volume of reference objects before and after image restoration, using the isotropic Fakir method.
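The restoration step can be illustrated with a toy 1-D version. The sketch below, assuming a known Gaussian PSF, runs a gradient-descent Tikhonov-regularized deconvolution with a positivity constraint in the spirit of ICTM; the array sizes, regularization weight and iteration count are illustrative, not taken from the paper:

```python
import numpy as np

def ictm_deconvolve(image, psf, reg=1e-2, iterations=200):
    """Iterative constrained Tikhonov-Miller-style deconvolution (1-D toy).
    Gradient steps minimise ||h*x - y||^2 + reg*||x||^2 in the Fourier
    domain; negative values are clipped after each step (positivity)."""
    H = np.fft.fft(psf, n=image.size)          # transfer function of the PSF
    Y = np.fft.fft(image)
    x = image.copy()
    step = 1.0 / (np.max(np.abs(H)) ** 2 + reg)  # stable gradient step size
    for _ in range(iterations):
        X = np.fft.fft(x)
        grad = np.fft.ifft(np.conj(H) * (H * X - Y) + reg * X).real
        x = np.clip(x - step * grad, 0.0, None)  # positivity constraint
    return x

# Blur two spikes with a small Gaussian PSF, then restore them.
psf = np.exp(-0.5 * np.arange(-3, 4) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[20] = 1.0
truth[40] = 0.5
blurred = np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf, n=64)).real
restored = ictm_deconvolve(blurred, psf)     # main peak recovered at index 20
```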
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of the two error variances is not precisely known.
Volume estimation using food specific shape templates in mobile image-based dietary assessment
NASA Astrophysics Data System (ADS)
Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.
2011-03-01
As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However, even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability provides flexibility for complex geometries, and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes, from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
NASA Astrophysics Data System (ADS)
Sollberger, S.; Perez, K.; Schubert, C. J.; Eugster, W.; Wehrli, B.; Del Sontro, T.
2013-12-01
layer results based on 'manual' sampling. The closest flux approximation was obtained using the river width-dependent model. The higher fluxes obtained with the chambers could be partially explained by enhanced turbulence created in the chambers themselves, especially because the ratio between the water surface area and the chamber volume was rather small. The high-resolution combined sampling approach helped constrain K and determine which river model best fits Aare River emissions. This experimental setup ultimately allows us to (1) define the dependence of K, (2) measure CH4 and CO2 fluxes from the main river and different tributaries more accurately, (3) estimate more spatially resolved fluxes via either models or water sampling and the newly found K, and (4) determine one of the fates of carbon in the Aare River.
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Vonderhaar, T. H.; Johnson, L. R.; Laybe, P.; Reinke, D.
1987-01-01
The analysis of 18 convective clusters demonstrates that the extension of the Area-Time-Integral (ATI) technique to the use of satellite data is possible. Differences between the internal structures of the radar reflectivity features and the satellite features give rise to differences in the rain volumes estimated from the delineated areas; however, by focusing upon the area integrated over the lifetime of the storm, some of the errors produced by the differences in the cloud geometries as viewed by radar or satellite are minimized. The results are good, and future developments should consider data from different climatic regions and should allow for implementation of the technique in a general circulation model.
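The ATI idea reduces to multiplying a climate-dependent rain-rate constant by the echo (or cloud) area integrated over the storm's lifetime. A minimal sketch; the constant k is an illustrative value from the radar ATI literature, not a number given in this abstract:

```python
def rain_volume_ati(areas_km2, dt_h, k_mm_per_h=3.68):
    """Estimate storm rain volume from the Area-Time Integral (ATI):
    V = k * sum_i(A_i * dt), where A_i is the area above a reflectivity
    (or brightness) threshold at scan i and dt is the scan interval.
    k = 3.68 mm/h is an illustrative, climate-dependent constant."""
    ati = sum(a * dt_h for a in areas_km2)   # area-time integral, km^2 * h
    return k_mm_per_h * ati * 1e3            # 1 mm over 1 km^2 = 1e3 m^3

# A storm sampled every 12 min (0.2 h) whose echo area grows then decays:
areas = [50, 120, 200, 150, 60]              # km^2 per scan
volume_m3 = rain_volume_ati(areas, dt_h=0.2)
```

The convenience of the method is visible here: only the thresholded areas per scan are needed, not the internal rain-rate structure of the storm.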
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Vonderhaar, T. H.
1985-01-01
The feasibility of rain volume estimation over fixed and floating areas was investigated using rapid scan satellite data, following a technique recently developed with radar data, called the Area-Time Integral (ATI) technique. The radar and rapid scan GOES satellite data were collected during the Cooperative Convective Precipitation Experiment (CCOPE) and the North Dakota Cloud Modification Project (NDCMP). Six multicell clusters and cells have been analyzed to date. A two-cycle oscillation emphasizing the multicell character of the clusters is demonstrated. Three clusters were selected on each day, 12 June and 2 July. The 12 June clusters occurred during the daytime, while the 2 July clusters occurred during the nighttime. A total of 86 time steps of radar and 79 time steps of satellite images were analyzed, with approximately 12-min time intervals between radar scans on average.
Unmanned Aerial Vehicle Use for Wood Chips Pile Volume Estimation
NASA Astrophysics Data System (ADS)
Mokroš, M.; Tabačák, M.; Lieskovský, M.; Fabrika, M.
2016-06-01
The rapid development of unmanned aerial vehicles poses a challenge for applied research: many technologies are developed first, and researchers then look for their applications in different sectors. We therefore decided to verify the use of an unmanned aerial vehicle (UAV) for wood chip pile monitoring. We compared a GNSS device and a UAV for volume estimation of four wood chip piles. We used a DJI Phantom 3 Professional with its built-in camera and a GNSS device (GeoExplorer 6000). We used Agisoft PhotoScan to process the photos and ArcGIS to process the points. Volumes calculated from the images were not statistically significantly different from volumes calculated from GNSS data, and a high correlation between them was found (r = 0.9993). We conclude that using the UAV instead of the GNSS device does not lead to significantly different results, while data collection took almost 12 to 20 times less time with the UAV. Additionally, the UAV provides documentation through an orthomosaic.
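Once a photogrammetric surface model has been built from the UAV images, the pile volume computation itself is straightforward. A minimal sketch with a toy raster DEM and an assumed flat base plane (all values are illustrative, not from the study):

```python
import numpy as np

def pile_volume(dem, base_level, cell_area_m2):
    """Volume of a pile from a raster DEM (e.g. a photogrammetric surface
    derived from UAV images): sum the height of each cell above a base
    plane and multiply by the cell footprint area."""
    heights = np.clip(dem - base_level, 0.0, None)  # ignore cells below base
    return float(heights.sum() * cell_area_m2)

# Toy DEM: 4 cells of 0.25 m^2, heights 1 m and 2 m above a 100 m base.
dem = np.array([[101.0, 102.0],
                [101.0, 102.0]])
print(pile_volume(dem, base_level=100.0, cell_area_m2=0.25))  # → 1.5
```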
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod and validate the effectiveness of RACE in improving the motion estimation performance of SinMod.
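The mean-shift component of such a CF search can be illustrated in one dimension. The sketch below seeks the mode of a synthetic magnitude spectrum; it is a generic mean-shift iteration under assumed toy data, not the RACE algorithm itself:

```python
import numpy as np

def mean_shift_peak(freqs, magnitudes, start, bandwidth, iters=100):
    """Locate a spectral peak (e.g. a tag centre frequency) by mean shift:
    repeatedly move to the magnitude-weighted mean of the frequencies
    lying within `bandwidth` of the current estimate."""
    x = start
    for _ in range(iters):
        w = magnitudes * (np.abs(freqs - x) <= bandwidth)  # windowed weights
        x_new = np.sum(w * freqs) / np.sum(w)
        if abs(x_new - x) < 1e-9:   # converged to a mode
            break
        x = x_new
    return x

# Synthetic spectrum with a dominant component at 0.25 cycles/pixel:
freqs = np.linspace(0.0, 0.5, 501)
mags = np.exp(-0.5 * ((freqs - 0.25) / 0.02) ** 2)
cf = mean_shift_peak(freqs, mags, start=0.2, bandwidth=0.05)
```

Starting from a rough initial guess (0.2), the iteration climbs to the spectral mode near 0.25, which is the behaviour the abstract relies on for rapid CF estimation.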
Moore, James; Hays, David; Quinn, John; Johnson, Robert; Durham, Lisa
2013-07-01
As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best-guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best-guess results. Where contamination was uncharacterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)
Estimation of Local Bone Loads for the Volume of Interest.
Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun
2016-07-01
Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models, due to the excessive computational costs of the latter. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models severely limits reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOIs in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude exhibits the greatest absolute changes in the SED distribution because it is itself the greatest deviation, whereas angular deviation perpendicular to the HCF produces the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations. PMID:27109554
Shen, Yan; Lou, Shuqin; Wang, Xin
2014-03-20
The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters.
Plethysmographic estimation of thoracic gas volume in apneic mice.
Jánosi, Tibor Z; Adamicza, Agnes; Zosky, Graeme R; Asztalos, Tibor; Sly, Peter D; Hantos, Zoltán
2006-08-01
Electrical stimulation of intercostal muscles was employed to measure thoracic gas volume (TGV) during airway occlusion in the absence of respiratory effort at different levels of lung inflation. In 15 tracheostomized and mechanically ventilated CBA/Ca mice, the value of TGV obtained from the spontaneous breathing effort available in the early phase of the experiments (TGVsp) was compared with those resulting from muscle stimulation (TGVst) at transrespiratory pressures of 0, 10, and 20 cmH2O. A very strong correlation (r2= 0.97) was found, although with a systematically (approximately 16%) higher estimation of TGVst relative to TGVsp, attributable to the different durations of the stimulated (approximately 50 ms) and spontaneous (approximately 200 ms) contractions. Measurements of TGVst before and after injections of 0.2, 0.4, and 0.6 ml of nitrogen into the lungs in six mice resulted in good agreement between the change in TGVst and the injected volume (r2= 0.98). In four mice, TGVsp and TGVst were compared at end expiration with air or a helium-oxygen mixture to confirm the validity of isothermal compression in the alveolar gas. The TGVst values measured at zero transrespiratory pressure in all CBA/Ca mice [0.29 +/- 0.05 (SD) ml] and in C57BL/6 (N = 6; 0.34 +/- 0.08 ml) and BALB/c (N = 6; 0.28 +/- 0.06 ml) mice were in agreement with functional residual capacity values from previous studies in which different techniques were used. This method is particularly useful when TGV is to be determined in the absence of breathing activity, when it must be known at any level of lung inflation or under non-steady-state conditions, such as during pharmaceutical interventions. PMID:16645196
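Plethysmographic TGV estimation rests on Boyle's law for isothermal compression of the alveolar gas, which the abstract validates with the helium-oxygen comparison. A minimal sketch of the underlying calculation; the barometric-pressure default and the sample numbers are illustrative assumptions, not values from the study:

```python
def tgv_boyle(dv_ml, dp_cmh2o, p_alv_cmh2o=970.0):
    """Thoracic gas volume from Boyle's law during an occluded effort:
    isothermal compression gives P*V = (P + dP)*(V - dV), so to first
    order V ≈ P * dV/dP, with P the absolute alveolar pressure
    (≈ water-vapour-corrected barometric pressure, here an assumed
    970 cmH2O). dv_ml and dp_cmh2o are the measured gas-compression
    volume and pressure swings during the occlusion."""
    return p_alv_cmh2o * dv_ml / dp_cmh2o

# A hypothetical mouse: 0.003 ml compression per 10 cmH2O pressure swing
# gives a TGV close to the ~0.29 ml reported for CBA/Ca mice.
print(round(tgv_boyle(dv_ml=0.003, dp_cmh2o=10.0), 3))  # → 0.291
```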
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
NASA Astrophysics Data System (ADS)
Crouch, Stephen; Kaylor, Brant M.; Barber, Zeb W.; Reibel, Randy R.
2015-09-01
Currently, large-volume, high-accuracy three-dimensional (3D) metrology is dominated by laser trackers, which typically utilize a laser scanner and a cooperative reflector to estimate points on a given surface. The dependence upon the placement of cooperative targets dramatically limits the speed at which metrology can be conducted. To increase speed, laser scanners or structured illumination systems can be used directly on the surface of interest. Both approaches are restricted in their axial and lateral resolution at longer stand-off distances due to the diffraction limit of the optics used. Holographic aperture ladar (HAL) and synthetic aperture ladar (SAL) can enhance the lateral resolution of an imaging system by synthesizing much larger apertures, digitally combining measurements from multiple smaller apertures. Both of these approaches produce only two-dimensional imagery and are therefore not suitable for large-volume 3D metrology. We combined the SAL and HAL approaches to create a swept-frequency digital holographic 3D imaging system that provides rapid measurement speed for surface coverage with unprecedented axial and lateral resolution at longer standoff ranges. The technique yields a "data cube" of Fourier-domain data, which can be processed with a 3D Fourier transform to reveal a 3D estimate of the surface. In this paper, we provide the theoretical background for the technique and show experimental results based on an ultra-wideband frequency modulated continuous wave (FMCW) chirped heterodyne ranging system, showing ~100 micron lateral and axial precisions at >2 m standoff distances.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing.
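The cumulative DVHs compared in the study can be computed directly from a voxelized dose map. A minimal sketch with toy dose values (the data are illustrative, not from the phantom study):

```python
import numpy as np

def cumulative_dvh(dose, dose_levels):
    """Cumulative dose-volume histogram: for each dose level d, the
    fraction of the volume (here, of the voxels) receiving at least d."""
    dose = np.asarray(dose, dtype=float).ravel()
    return np.array([(dose >= d).mean() for d in dose_levels])

# Four voxels receiving 0, 1, 2 and 3 Gy:
dvh = cumulative_dvh([0.0, 1.0, 2.0, 3.0], dose_levels=[0.0, 1.0, 2.0, 3.0])
print(dvh.tolist())  # → [1.0, 0.75, 0.5, 0.25]
```

Because each bin is a fraction of voxels at or above a dose level, noise and partial volume effects in the reconstructed activity map propagate directly into the histogram, which is the degradation the abstract quantifies.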
Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2016-06-01
State estimation in hydraulic actuators is a fundamental tool for fault detection and a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear and linearization-based state estimation techniques is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent Riccati Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zones and friction.
The GFR and GFR decline cannot be accurately estimated in type 2 diabetics.
Gaspari, Flavio; Ruggenenti, Piero; Porrini, Esteban; Motterlini, Nicola; Cannata, Antonio; Carrara, Fabiola; Jiménez Sosa, Alejandro; Cella, Claudia; Ferrari, Silvia; Stucchi, Nadia; Parvanova, Aneliya; Iliev, Ilian; Trevisan, Roberto; Bossi, Antonio; Zaletel, Jelka; Remuzzi, Giuseppe
2013-07-01
There are no adequate studies that have formally tested the performance of different estimating formulas in patients with type 2 diabetes, both with and without overt nephropathy. Here we evaluated the agreement between baseline GFRs, GFR changes at month 6, and long-term GFR decline measured by iohexol plasma clearance or estimated by 15 creatinine-based formulas in 600 type 2 diabetics followed for a median of 4.0 years. Ninety patients were hyperfiltering. The number of those identified by the estimation formulas ranged from 0 to 24; 58 were not identified by any formula. Baseline GFR was significantly underestimated, and a 6-month GFR reduction was missed, in hyperfiltering patients. Long-term GFR decline was also underestimated by all formulas in the whole study group and in hyper-, normo-, and hypofiltering patients considered separately. Five formulas generated positive slopes in hyperfiltering patients. Baseline concordance correlation coefficients and total deviation indexes ranged from 32.1% to 92.6% and from 0.21 to 0.53, respectively. Concordance correlation coefficients between estimated and measured long-term GFR decline ranged from -0.21 to 0.35. The agreement between estimated and measured values was also poor within each subgroup considered separately. Thus, our study questions the use of any estimation formula to identify hyperfiltering patients and to monitor renal disease progression and response to treatment in type 2 diabetics without overt nephropathy.
Performance benchmarking of liver CT image segmentation and volume estimation
NASA Astrophysics Data System (ADS)
Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang
2008-03-01
In recent years more and more computer-aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used for liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the number of CT scans to about 300 sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.
Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle
NASA Technical Reports Server (NTRS)
VanEepoel, John; Thienel, Julie; Sanner, Robert M.
2006-01-01
In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.
Some recommendations for an accurate estimation of Lanice conchilega density based on tube counts
NASA Astrophysics Data System (ADS)
van Hoey, Gert; Vincx, Magda; Degraer, Steven
2006-12-01
The tube building polychaete Lanice conchilega is a common and ecologically important species in intertidal and shallow subtidal sands. It builds a characteristic tube with ragged fringes and can retract rapidly into its tube to depths of more than 20 cm. Therefore, it is very difficult to sample L. conchilega individuals, especially with a Van Veen grab. Consequently, many studies have used tube counts as estimates of real densities. This study reports on some aspects to be considered when using tube counts as a density estimate of L. conchilega, based on intertidal and subtidal samples. Due to its accuracy and independence of sampling depth, the tube method is considered the prime method to estimate the density of L. conchilega. However, caution is needed when analyzing samples with fragile young individuals and samples from areas where temporary physical disturbance is likely to occur.
Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2006-01-01
Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
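The authors' BASH-table implementation is in C++ and is not reproduced in the abstract. As a rough illustration of the core idea only (function names and structure are ours, not the paper's code), a histogram estimator backed by a hash table allocates storage per occupied bin rather than for the full dense grid:

```python
from collections import defaultdict

def bash_table(points, bin_width):
    """Bin points into a hash table keyed by bin-index tuples.

    Memory scales with the number of occupied bins, not with
    bins_per_axis ** n_dimensions as a dense array would.
    """
    table = defaultdict(int)
    for p in points:
        key = tuple(int(x // bin_width) for x in p)
        table[key] += 1
    return table

def density(table, point, bin_width, n_total):
    """Histogram density estimate at `point`."""
    key = tuple(int(x // bin_width) for x in point)
    bin_volume = bin_width ** len(point)
    return table[key] / (n_total * bin_volume)

pts = [(0.1, 0.1), (0.2, 0.3), (0.9, 0.9), (1.5, 1.5)]
table = bash_table(pts, 1.0)
print(density(table, (0.5, 0.5), 1.0, len(pts)))  # 3 of 4 points share this unit bin
```

An empty bin simply returns density zero; a dense d-dimensional array would have had to allocate it regardless, which is the memory saving the abstract describes.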
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
Spectral estimation from laser scanner data for accurate color rendering of objects
NASA Astrophysics Data System (ADS)
Baribeau, Rejean
2002-06-01
Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are evaluated using the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
Estimating Marine Aerosol Particle Volume and Number from Maritime Aerosol Network Data
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Smirnov, A.; Hsu, N. C.; Munchak, L. A.; Holben, B. N.
2012-01-01
As well as spectral aerosol optical depth (AOD), aerosol composition and concentration (number, volume, or mass) are of interest for a variety of applications. However, remote sensing of these quantities is more difficult than for AOD, as it is more sensitive to assumptions relating to aerosol composition. This study uses spectral AOD measured on Maritime Aerosol Network (MAN) cruises, with the additional constraint of a microphysical model for unpolluted maritime aerosol based on analysis of Aerosol Robotic Network (AERONET) inversions, to estimate these quantities over open ocean. When the MAN data are subset to those likely to be comprised of maritime aerosol, number and volume concentrations obtained are physically reasonable. Attempts to estimate surface concentration from columnar abundance, however, are shown to be limited by uncertainties in vertical distribution. Columnar AOD at 550 nm and aerosol number for unpolluted maritime cases are also compared with Moderate Resolution Imaging Spectroradiometer (MODIS) data, for both the present Collection 5.1 and forthcoming Collection 6. MODIS provides a best-fitting retrieval solution, as well as the average for several different solutions, with different aerosol microphysical models. The average solution MODIS dataset agrees more closely with MAN than the best solution dataset. Terra tends to retrieve lower aerosol number than MAN, and Aqua higher, linked with differences in the aerosol models commonly chosen. Collection 6 AOD is likely to agree more closely with MAN over open ocean than Collection 5.1. In situations where spectral AOD is measured accurately, and aerosol microphysical properties are reasonably well-constrained, estimates of aerosol number and volume using MAN or similar data would provide for a greater variety of potential comparisons with aerosol properties derived from satellite or chemistry transport model data.
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. Discussing the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans requires a massive dataset covering a wide area. Several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make the interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and thus usable in chronological research alongside recent determinations. Here, we introduce the methodological framework and archaeological applications.
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Takao, Seishin; Tadano, Shigeru; Taguchi, Hiroshi; Yasuda, Koichi; Onimaru, Rikiya; Ishikawa, Masayori; Bengua, Gerard; Suzuki, Ryusuke; Shirato, Hiroki
2011-11-01
Purpose: To establish a method for the accurate acquisition and analysis of the variations in tumor volume, location, and three-dimensional (3D) shape of tumors during radiotherapy in the era of image-guided radiotherapy. Methods and Materials: Finite element models of lymph nodes were developed based on computed tomography (CT) images taken before the start of treatment and every week during the treatment period. A surface geometry map with a volumetric scale was adopted and used for the analysis. Six metastatic cervical lymph nodes, 3.5 to 55.1 cm³ before treatment, in 6 patients with head and neck carcinomas were analyzed in this study. Three fiducial markers implanted in mouthpieces were used for the fusion of CT images. Changes in the location of the lymph nodes were measured on the basis of these fiducial markers. Results: The surface geometry maps showed convex regions in red and concave regions in blue so that the characteristics of the 3D tumor geometries could be understood visually at a glance. After the irradiation of 66 to 70 Gy in 2 Gy daily doses, the patterns of the colors had not changed significantly, and the maps before and during treatment were strongly correlated (average correlation coefficient was 0.808), suggesting that the tumors shrank uniformly, maintaining the original characteristics of the shapes in all 6 patients. The movement of the gravitational center of the lymph nodes during the treatment period was everywhere less than ±5 mm except in 1 patient, in whom the change reached nearly 10 mm. Conclusions: The surface geometry map was useful for an accurate evaluation of the changes in volume and 3D shapes of metastatic lymph nodes. The fusion of the initial and follow-up CT images based on fiducial markers enabled an analysis of changes in the location of the targets. Metastatic cervical lymph nodes in patients were suggested to decrease in size without significant changes in the 3D shape during radiotherapy. The movements of the
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
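The full ARGO model combines many autoregressive lags of flu activity with many search-query predictors under regularization; none of that code appears in the abstract. As a heavily simplified sketch of the model structure only (our illustration, with hypothetical names: one autoregressive lag, one search predictor, plain ordinary least squares), the skeleton looks like this:

```python
def ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    b = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):                         # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):               # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def argo_like_design(flu, search, p):
    """Design matrix: intercept, p lags of flu activity, current search volume."""
    X, y = [], []
    for t in range(p, len(flu)):
        X.append([1.0] + flu[t - p:t] + [search[t]])
        y.append(flu[t])
    return X, y
```

On synthetic data generated as `flu[t] = 0.5 * flu[t-1] + search[t]`, this skeleton recovers the coefficients exactly; the real ARGO adds regularization and dynamic retraining to cope with noisy, changing search behavior.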
Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle
NASA Astrophysics Data System (ADS)
Timinis, Constantinos; Pitris, Costas
2016-03-01
The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts sample age far more accurately than any previous report in the literature.
Cumulative Ocean Volume Estimates of the Solar System
NASA Astrophysics Data System (ADS)
Frank, E. A.; Mojzsis, S. J.
2010-12-01
Although there has been much consideration of habitability in silicate planets and icy bodies, this information has never been quantitatively gathered into a single approximation encompassing our solar system from star to cometary halo. Here we present an estimate for the total habitable volume of the solar system by constraining our definition of habitable environments to those to which terrestrial microbial extremophiles could theoretically be transplanted and yet survive. The documented terrestrial extremophile inventory stretches environmental constraints for habitable temperature and pH space to T ~ -15°C to 121°C and pH ~ 0 to 13.5, salinities >35% NaCl, and gamma radiation doses of 10,000 to 11,000 grays [1]. Pressure is likely not a limiting factor for life [2]. We applied these criteria in our analysis of the geophysical habitable potential of the icy satellites and small icy bodies. Given the broad spectrum of environmental tolerance, we are optimistic that our pessimistic estimates are conservative. Beyond the reaches of our inner solar system's conventional habitable zone (Earth, Mars, and perhaps Venus) is Ceres, a dwarf planet that could possess a significant liquid water ocean if that water contains anti-freezing species [3]. Yet further out, Europa is a small icy satellite that has generated much excitement for its astrobiological potential due to its putative subsurface liquid water ocean. The icy moons Enceladus, Triton, Callisto, Ganymede, and Titan are likewise widely thought to sustain liquid water oceans. If the oceans of Europa, Enceladus, and Triton have direct contact with a rocky mantle hot enough to melt, hydrothermal vents could provide an energy source for chemotrophic organisms. Although oceans in the remaining icy satellites may be wedged between two layers of ice, their potential for life cannot be precluded. Relative to the Jovian style of icy satellites, trans-neptunian objects (TNOs) - icy bodies
Danjon, Frédéric; Caplan, Joshua S; Fortin, Mathieu; Meredieu, Céline
2013-01-01
Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately.
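The near-linear relationship between CSD and the square root of descendant volume suggests a simple estimator for the unrecovered volume at broken root ends. The sketch below is our own illustration of that idea (function and variable names are hypothetical, and the study's actual models additionally accounted for root type, which this sketch omits):

```python
def fit_sqrt_volume(diameters, volumes):
    """Fit sqrt(Vd) = a + b * CSD by simple linear regression."""
    n = len(diameters)
    ys = [v ** 0.5 for v in volumes]
    mean_x = sum(diameters) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(diameters, ys)) / \
        sum((x - mean_x) ** 2 for x in diameters)
    a = mean_y - b * mean_x
    return a, b

def missing_volume(broken_end_diameters, a, b):
    """Estimate total unrecovered volume from the diameters of broken ends."""
    return sum(max(a + b * d, 0.0) ** 2 for d in broken_end_diameters)
```

In practice one would fit `a, b` separately per root type (sinker, deep, distal shallow, etc.), since the paper found the slope varies greatly among them.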
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and supports 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models have been developed that estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity). These crop models are useful (i) in ex ante analysis to quantify the impact on yields of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance, or medium-range weather forecasts - (ii) for early warning systems, and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is thus a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements offer long time series but suffer from insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not direct measurements but rather estimations of precipitation. Used as input to crop models, their quality determines the performance of the simulated yields; hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
Techniques for accurate estimation of net discharge in a tidal channel
Simpson, Michael R.; Bland, Roger
1999-01-01
An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
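Once the index-velocity rating is calibrated, the net discharge over a tidal record is the time-weighted mean of the signed instantaneous discharge. Because flood and ebb flows largely cancel, the net value is a small residual of large opposing numbers, which is consistent with the study's finding that calibration error dominates. A minimal sketch of the averaging step (our illustration, not the study's code):

```python
def net_discharge(timestamps_s, discharges_m3s):
    """Time-weighted mean of signed discharge over a record (trapezoidal rule).

    Positive values are taken as flood flow, negative as ebb; the small
    residual of their near-cancellation is the net discharge.
    """
    total = 0.0
    for i in range(len(timestamps_s) - 1):
        dt = timestamps_s[i + 1] - timestamps_s[i]
        total += 0.5 * (discharges_m3s[i] + discharges_m3s[i + 1]) * dt
    return total / (timestamps_s[-1] - timestamps_s[0])

# Flows of hundreds of m^3/s in alternating directions leave a net of a few m^3/s.
print(net_discharge([0.0, 1.0, 2.0, 3.0], [100.0, -90.0, 80.0, -70.0]))
```

Because the residual is so small relative to the instantaneous flows, even a percent-level bias in the velocity rating can exceed the 4 m³/s spread reported above, which is why the comprehensive calibration matters.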
Plant DNA Barcodes Can Accurately Estimate Species Richness in Poorly Known Floras
Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew
2011-01-01
Background Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Methodology/Principal Findings Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation that, in some angiosperm families, occurs as an inversion obscuring the monophyly of species. Conclusions/Significance We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways. PMID:22096501
Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan
2009-01-01
In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for estimating the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible punctured turbo (RCPT) code rate and symbol constellation. Results show a substantial performance gain from using different symbol constellations across the scalable layers as compared to a fixed constellation.
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but it is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre functions remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
Endres, M I; Lobeck-Luchterhand, K M; Espejo, L A; Tucker, C B
2014-01-01
Dairy welfare assessment programs are becoming more common on US farms. Outcome-based measurements, such as locomotion, hock lesion, hygiene, and body condition scores (BCS), are included in these assessments. The objective of the current study was to investigate the proportion of cows in the pen or subsamples of pens on a farm needed to provide an accurate estimate of the previously mentioned measurements. In experiment 1, we evaluated cows in 52 high pens (50 farms) for lameness using a 1- to 5-scale locomotion scoring system (1 = normal and 5 = severely lame; 24.4 and 6% of animals were scored ≥ 3 or ≥ 4, respectively). Cows were also given a BCS using a 1- to 5-scale, where 1 = emaciated and 5 = obese; cows were rarely thin (BCS ≤ 2; 0.10% of cows) or fat (BCS ≥ 4; 0.11% of cows). Hygiene scores were assessed on a 1- to 5-scale with 1 = clean and 5 = severely dirty; 54.9% of cows had a hygiene score ≥ 3. Hock injuries were classified as 1 = no lesion, 2 = mild lesion, and 3 = severe lesion; 10.6% of cows had a score of 3. Subsets of data were created with 10 replicates of random sampling that represented 100, 90, 80, 70, 60, 50, 40, 30, 20, 15, 10, 5, and 3% of the cows measured/pen. In experiment 2, we scored the same outcome measures on all cows in lactating pens from 12 farms and evaluated using pen subsamples: high; high and fresh; high, fresh, and hospital; and high, low, and hospital. For both experiments, the association between the estimates derived from all subsamples and entire pen (experiment 1) or herd (experiment 2) prevalence was evaluated using linear regression. To be considered a good estimate, 3 criteria must be met: R(2)>0.9, slope = 1, and intercept = 0. In experiment 1, on average, recording 15% of the pen represented the percentage of clinically lame cows (score ≥ 3), whereas 30% needed to be measured to estimate severe lameness (score ≥ 4). Only 15% of the pen was needed to estimate the percentage of the herd with a hygiene
Strømmen, Kenneth; Stormark, Tor André; Iversen, Bjarne M; Matre, Knut
2004-09-01
To evaluate the accuracy of small volume estimation, both in vivo and in vitro, measurements with a three-dimensional (3D) ultrasound (US) system were carried out. A position sensor was used and the transmitting frequency was 10 MHz. Balloons with known volumes were scanned while rat kidneys were scanned in vivo and in vitro. The Archimedes' principle was used to estimate the true volume. For balloons, the 3D US system gave very good agreement with true volumes in the volume range 0.1 to 10.0 mL (r = 0.999, n = 45, mean difference +/- 2SD = 0.245 +/- 0.370 mL). For rat kidneys in vivo (volume range 0.6 to 2.7 mL) the method was less accurate (r = 0.800, n = 10, mean difference +/- 2SD = -0.288 +/- 0.676 mL). For rat kidneys in vitro (volume range 0.3 to 2.7 mL) the results showed good agreement (r = 0.981, n = 23, mean difference +/- 2SD = 0.039 +/- 0.254 mL). For balloons, kidneys in vivo and in vitro, the mean percentage error was 9.3 +/- 4.8%, -17.1 +/- 17.4%, and 4.6 +/- 11.5%, respectively. This method can estimate the volume of small phantoms and rat kidneys and opens new possibilities for volume measurements of small objects and the study of organ function in small animals. PMID:15550315
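The agreement statistics quoted above (mean difference +/- 2SD between the US volumes and the true Archimedes volumes, i.e. Bland-Altman limits of agreement) are straightforward to compute; a small sketch with hypothetical names:

```python
def agreement_stats(method_a, method_b):
    """Mean difference and Bland-Altman limits of agreement (mean +/- 2 SD)
    between paired measurements from two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = (sum((d - mean_d) ** 2 for d in diffs) / (n - 1)) ** 0.5  # sample SD
    return mean_d, mean_d - 2 * sd, mean_d + 2 * sd
```

The mean difference captures systematic bias (e.g. the -0.288 mL underestimate for kidneys in vivo), while the +/- 2SD band captures the random scatter a single future measurement is likely to fall within.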
Luo, Xiongbiao
2014-06-15
An electromagnetically navigated bronchoscopy system was constructed with accurate registration of an electromagnetic tracker and the CT volume, on the basis of an improved marker-free registration approach that uses the bronchial centerlines and bronchoscope tip center information. The fiducial and target registration errors of our electromagnetic navigation system were about 6.6 and 4.5 mm in dynamic bronchial phantom validation.
Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes
Sarabia, Esther G.; Llata, Jose R.; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P.
2013-01-01
In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
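A matched-filter (cross-correlation) baseline for time-of-flight estimation with overlapping echoes can be sketched as follows; the sample rate, burst parameters, and echo delays are hypothetical, and the authors' model-based method is more elaborate than this correlation baseline:

```python
import math

FS = 1_000_000          # sample rate (Hz), hypothetical
F0 = 40_000             # transducer resonance (Hz), hypothetical
CYCLES = 8

# Transmitted burst: a windowed sine at the resonant frequency.
n_pulse = int(FS * CYCLES / F0)
pulse = [math.sin(2 * math.pi * F0 * i / FS) *
         math.sin(math.pi * i / n_pulse) ** 2       # smooth envelope
         for i in range(n_pulse)]

# Received signal: two overlapping echoes with known delays and amplitudes.
delays, amps = [500, 620], [1.0, 0.6]
n_sig = 2000
signal = [0.0] * n_sig
for d, a in zip(delays, amps):
    for i, s in enumerate(pulse):
        signal[d + i] += a * s

# Cross-correlate with the pulse template (matched filter).
def xcorr_at(lag):
    return sum(signal[lag + i] * s for i, s in enumerate(pulse))

corr = [xcorr_at(lag) for lag in range(n_sig - n_pulse)]

# Strongest-echo delay: global correlation maximum.
tof1 = max(range(len(corr)), key=corr.__getitem__)
print("estimated first-echo delay:", tof1, "samples")
```

When echoes overlap, the correlation peaks interfere, which is precisely the regime where the model-based approach of the paper outperforms simple thresholding or correlation.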
An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks
Tarrío, Paula; Bernardos, Ana M.; Casar, José R.
2012-01-01
In line with recent research efforts made to conceive energy saving protocols and algorithms and power sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios providing energy savings with respect to other transmission strategies, and also guaranteeing a given accuracy in the distance estimations, which will serve to guarantee a desired accuracy in the localization result. PMID:23202218
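RSS-based ranging of the kind this strategy supports typically inverts a log-distance path-loss model; a minimal sketch, with illustrative parameter values rather than the paper's calibration:

```python
import math

# Log-distance path-loss model (hypothetical parameters):
# RSS(d) = P0 - 10 * n * log10(d / d0), with d0 = 1 m reference distance.
P0 = -40.0   # RSS at 1 m (dBm), assumed
N_EXP = 2.5  # path-loss exponent, assumed

def rss_from_distance(d):
    return P0 - 10 * N_EXP * math.log10(d)

def distance_from_rss(rss):
    # Invert the model to estimate range from a measured RSS.
    return 10 ** ((P0 - rss) / (10 * N_EXP))

measured = rss_from_distance(7.0)   # noiseless for illustration
print(distance_from_rss(measured))  # recovers ~7.0 m
```

In practice, shadowing and packet loss perturb the measured RSS, which is why the paper trades transmission power and packet rate against the variance of this estimate.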
Deepwater Horizon - Estimating surface oil volume distribution in real time
NASA Astrophysics Data System (ADS)
Lehr, B.; Simecek-Beatty, D.; Leifer, I.
2011-12-01
Spill responders to the Deepwater Horizon (DWH) oil spill required both the relative spatial distribution and the total volume of the surface oil. The former was needed on a daily basis to plan and direct local surface recovery and treatment operations. The latter was needed less frequently to provide information for strategic response planning. Unfortunately, the standard spill observation methods were inadequate for an oil spill this size, and new, experimental methods were not ready to meet the operational demands of near real-time results. Traditional surface oil estimation tools for large spills include satellite-based sensors to define the spatial extent (but not thickness) of the oil, complemented with trained observers in small aircraft, sometimes supplemented by active or passive remote sensing equipment, to determine surface percent coverage of the 'thick' part of the slick, where the vast majority of the surface oil exists. These tools were also applied to DWH in the early days of the spill, but the sheer size of the spill prevented synoptic information on the surface slick from being obtained with small aircraft. Also, satellite images of the spill, while large in number, varied considerably in image quality, requiring skilled interpretation to identify oil and eliminate false positives. Qualified staff to perform this task were soon in short supply. However, large spills are often events that overcome organizational inertia to the use of new technology. Two prime examples in DWH were the application of hyper-spectral scans from a high-altitude aircraft and more traditional fixed-wing aircraft using multi-spectral scans processed by a neural network to determine, respectively, absolute or relative oil thickness. But with new technology come new challenges. The hyper-spectral instrument required special viewing conditions that were not present on a daily basis and analysis infrastructure to process the data that was not available at the command
NASA Technical Reports Server (NTRS)
Ferguson, Connor R.; Lee, Stuart M. C.; Stenger, Michael B.; Platts, Steven H.; Laurie, Steven S.
2014-01-01
Orthostatic intolerance affects 60-80% of astronauts returning from long-duration missions, representing a significant risk to completing mission-critical tasks. While likely multifactorial, a reduction in stroke volume (SV) represents one factor contributing to orthostatic intolerance during stand and head-up tilt (HUT) tests. Current measures of SV during stand or HUT tests use Doppler ultrasound and require a trained operator and specialized equipment, restricting its use in the field. BeatScope (Finapres Medical Systems BV, The Netherlands) uses a Modelflow algorithm to estimate SV from continuous blood pressure waveforms in supine subjects; however, evidence supporting the use of Modelflow to estimate SV in subjects completing stand or HUT tests remains scarce. Furthermore, because the blood pressure device is held extended at heart level during HUT tests, but allowed to rest at the side during stand tests, changes in the finger arterial pressure waveform resulting from arm positioning could alter Modelflow-estimated SV. The purpose of this project was to compare Doppler ultrasound and BeatScope estimations of SV to determine if BeatScope can be used during stand or HUT tests. Finger photoplethysmography was used to acquire arterial pressure waveforms corrected for hydrostatic finger-to-heart height using the Finometer (FM) and Portapres (PP) arterial pressure devices in 10 subjects (5 men and 5 women) during a stand test, while simultaneous estimates of SV were collected using Doppler ultrasound. Measures were made after 5 minutes of supine rest and while subjects stood for 5 minutes. Next, SV estimates were reacquired while each arm was independently raised to heart level, a position similar to tilt testing. Supine SV estimates were not significantly different between all three devices (FM: 68+/-20, PP: 71+/-21, US: 73+/-21 ml/beat). Upon standing, the change in SV estimated by FM (-18+/-8 ml) was not different from PP (-21+/-12), but both were significantly
[Research on maize multispectral image accurate segmentation and chlorophyll index estimation].
Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e
2015-01-01
In order to rapidly acquire maize growth information in the field, a non-destructive method of maize chlorophyll content index measurement was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China; the crop was Zheng-dan 958 planted in an experimental field of about 1 000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically above the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to provide the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter, the NIR maize image was selected to segment the maize leaves from the background, because the gray histogram showed a large difference between plant and soil background. The NIR image segmentation was conducted in two steps, preliminary and accurate segmentation: (1) The results of the Otsu image segmentation method and the variable threshold algorithm were compared, revealing that the latter was the better one for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation, and dilation and erosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from the soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were extracted from the segmented visible and NIR images. The average gray
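The Otsu method mentioned as the baseline above chooses the gray level maximizing the between-class variance of a histogram; a self-contained sketch on a synthetic bimodal NIR image (the pixel distributions are illustrative, not the study's data):

```python
import random

def otsu_threshold(gray_values, levels=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist = [0] * levels
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic NIR image: dark soil (~40) vs bright plant (~180) pixels.
random.seed(0)
pixels = [min(255, max(0, int(random.gauss(40, 10)))) for _ in range(5000)] + \
         [min(255, max(0, int(random.gauss(180, 15)))) for _ in range(2000)]
t = otsu_threshold(pixels)
plant = sum(1 for p in pixels if p > t)
print("threshold:", t, "plant pixels:", plant)
```

A global Otsu threshold works well for a histogram this cleanly bimodal; the study's variable threshold based on local statistics handles uneven field illumination, where a single global threshold fails.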
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
NASA Astrophysics Data System (ADS)
Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick
2009-06-01
Optical measurements of the sound field inside a glass tube, near the material under test, can be used to estimate not only the reflection and absorption coefficients but also their confidence intervals. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
Using GIS to Estimate Lake Volume from Limited Data (Lake and Reservoir Management)
Estimates of lake volume are necessary for calculating residence time and modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of...
Driver, Nancy E.; Tasker, Gary D.
1990-01-01
Urban planners and managers need information on the quantity of precipitation and the quality and quantity of runoff in their cities and towns if they are to adequately plan for the effects of storm runoff from urban areas. As a result of this need, four sets of linear regression models were developed for estimating storm-runoff constituent loads, storm-runoff volumes, storm-runoff mean concentrations of constituents, and mean seasonal or mean annual constituent loads from physical, land-use, and climatic characteristics of urban watersheds in the United States. Thirty-four regression models of storm-runoff constituent loads and storm-runoff volumes were developed, and 31 models of storm-runoff mean concentrations were developed. Ten models of mean seasonal or mean annual constituent loads were developed by analyzing long-term storm-rainfall records using at-site linear regression models. Three statistically different regions, delineated on the basis of mean annual rainfall, were used to improve linear regression models where adequate data were available. Multiple regression analyses, including ordinary least squares and generalized least squares, were used to determine the optimum linear regression models. These models can be used to estimate storm-runoff constituent loads, storm-runoff volumes, storm-runoff mean concentrations of constituents, and mean seasonal or mean annual constituent loads at gaged and ungaged urban watersheds. The most significant explanatory variables in all linear regression models were total storm rainfall and total contributing drainage area. Impervious area, land use, and mean annual climatic characteristics also were significant in some models. Models for estimating loads of dissolved solids, total nitrogen, and total ammonia plus organic nitrogen as nitrogen generally were the most accurate, whereas models for suspended solids were the least accurate. The most accurate models were those for application in the more arid Western
Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.
NASA Astrophysics Data System (ADS)
Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke
2013-04-01
temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
Yamagishi, Junya; Okimoto, Noriaki; Morimoto, Gentaro; Taiji, Makoto
2014-11-01
The Poisson-Boltzmann implicit solvent (PB) is widely used to estimate the solvation free energies of biomolecules in molecular simulations. An optimized set of atomic radii (PB radii) is an important parameter for PB calculations, which determines the distribution of dielectric constants around the solute. We here present new PB radii for the AMBER protein force field to accurately reproduce the solvation free energies obtained from explicit solvent simulations. The presented PB radii were optimized using results from explicit solvent simulations of the large systems. In addition, we discriminated PB radii for N- and C-terminal residues from those for nonterminal residues. The performances using our PB radii showed high accuracy for the estimation of solvation free energies at the level of the molecular fragment. The obtained PB radii are effective for the detailed analysis of the solvation effects of biomolecules.
Calibration Experiments for a Computer Vision Oyster Volume Estimation System
ERIC Educational Resources Information Center
Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.
2009-01-01
Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
NASA Astrophysics Data System (ADS)
Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.
2011-12-01
Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). As a result of the importance of an accurate DTM in using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence the DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots that were surveyed at 0.5 m spacing in a semi-arid catchment were used for training the Random Forests algorithm along with a series of 35 variables in order to spatially predict vertical error within a LiDAR derived DTM. The final model was utilized to predict the combined error resulting from snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially-distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
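The snow-volume estimate underlying this analysis is a per-cell DTM difference, with the modeled vertical error propagated into the volume total; a minimal sketch with hypothetical grid values (the study predicts per-cell error with Random Forests, which is not reimplemented here):

```python
# Snow depth per grid cell = snow-on surface minus snow-free DTM; volume is
# depth summed over the cell area. A per-cell DTM error estimate propagates
# into the volume total.
CELL_AREA = 1.0  # m^2, hypothetical grid resolution

dtm_snow_free = [100.2, 100.5, 101.0, 101.4]   # ground elevations (m)
dtm_snow_on   = [100.9, 101.3, 101.6, 102.3]   # snow surface elevations (m)
dtm_error     = [0.05, 0.10, 0.04, 0.20]       # predicted vertical error (m)

depths = [s - g for s, g in zip(dtm_snow_on, dtm_snow_free)]
volume = sum(d * CELL_AREA for d in depths)

# Worst-case volume error if DTM errors add coherently across cells.
volume_err = sum(e * CELL_AREA for e in dtm_error)
print(f"snow volume {volume:.2f} m^3 +/- {volume_err:.2f} m^3")
```

Because the error map is spatially distributed rather than uniform, cells with dense vegetation or steep slopes can dominate the uncertainty of the volume total.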
System Development of Estimated Figures of Volume Production Plan
ERIC Educational Resources Information Center
Brazhnikov, Maksim A.; Khorina, Irina V.; Minina, Yulia I.; Kolyasnikova, Lyudmila V.; Streltsov, Aleksey V.
2016-01-01
The relevance of this problem is primarily determined by a necessity of improving production efficiency in conditions of innovative development of the economy and implementation of Import Substitution Program. The purpose of the article is development of set of criteria and procedures for the comparative assessment of alternative volume production…
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single-engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). This program, LFILTER, also uses the U-D factorized form of the Kalman filter, as in the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used in this application for dynamics and measurements, the program description, and program operation are presented.
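The predict/update cycle at the heart of the PFILTER and LFILTER programs can be illustrated with a minimal scalar Kalman filter; the flight programs use the numerically robust U-D factorized form, so this plain covariance form is only a sketch with made-up numbers:

```python
import random

random.seed(2)
TRUE_VALUE, MEAS_VAR = 10.0, 4.0   # hypothetical static state and noise

x, p = 0.0, 100.0                  # state estimate and its variance (vague prior)
for _ in range(200):
    z = TRUE_VALUE + random.gauss(0.0, MEAS_VAR ** 0.5)   # noisy measurement
    k = p / (p + MEAS_VAR)         # Kalman gain
    x = x + k * (z - x)            # measurement update
    p = (1 - k) * p                # variance update (static state, no process noise)

print(f"estimate {x:.2f}, 1-sigma bound {p ** 0.5:.3f}")
```

The shrinking variance p is exactly the source of the 1-sigma bounds reported alongside the estimated propulsion performance.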
Glacier volume estimation of Cascade Volcanoes—an analysis and comparison with other methods
Driedger, Carolyn L.; Kennard, P.M.
1986-01-01
During the 1980 eruption of Mount St. Helens, the occurrence of floods and mudflows made apparent a need to assess mudflow hazards on other Cascade volcanoes. A basic requirement for such analysis is information about the volume and distribution of snow and ice on these volcanoes. An analysis was made of the volume-estimation methods developed by previous authors and a volume estimation method was developed for use in the Cascade Range. A radio echo-sounder, carried in a backpack, was used to make point measurements of ice thickness on major glaciers of four Cascade volcanoes (Mount Rainier, Washington; Mount Hood and the Three Sisters, Oregon; and Mount Shasta, California). These data were used to generate ice-thickness maps and bedrock topographic maps for developing and testing volume-estimation methods. Subsequently, the methods were applied to the unmeasured glaciers on those mountains and, as a test of the geographical extent of applicability, to glaciers beyond the Cascades having measured volumes. Two empirical relationships were required in order to predict volumes for all the glaciers. Generally, for glaciers less than 2.6 km in length, volume was found to be estimated best by using glacier area, raised to a power. For longer glaciers, volume was found to be estimated best by using a power law relationship, including slope and shear stress. The necessary variables can be estimated from topographic maps and aerial photographs.
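A volume-area power law of the form used for the shorter glaciers can be sketched directly; the coefficients below are illustrative placeholders in the spirit of published volume-area scaling, not the paper's fitted values:

```python
# V = c * A^gamma, with V in km^3 and A in km^2 (coefficients are assumptions).
C_SHORT, GAMMA = 0.03, 1.36

def volume_from_area(area_km2):
    return C_SHORT * area_km2 ** GAMMA

for area in [0.5, 1.0, 2.0, 5.0]:
    print(f"A = {area:4.1f} km^2  ->  V ~ {volume_from_area(area):.4f} km^3")
```

For longer glaciers the study instead uses a power law in slope and basal shear stress, since area alone under-constrains thickness once the glacier geometry is elongated.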
Preoperative TRAM free flap volume estimation for breast reconstruction in lean patients.
Minn, Kyung Won; Hong, Ki Yong; Lee, Sang Woo
2010-04-01
To obtain pleasing symmetry in breast reconstruction with a transverse rectus abdominis myocutaneous (TRAM) free flap, a large amount of abdominal flap is elevated and remnant tissue is trimmed in most cases. However, elevation of an abundant abdominal flap can cause excessive tension in donor site closure and increase the possibility of hypertrophic scarring, especially in lean patients. The TRAM flap was divided into 4 zones in the routine manner; the depth and dimension of the 4 zones were obtained using ultrasound and AutoCAD (Autodesk Inc., San Rafael, CA), respectively. The acquired numbers were then multiplied to obtain an estimate of the volume of each zone, and the zone volumes were summed. To confirm the relation between the estimated volume and the actual volume, the authors compared intraoperative actual TRAM flap volumes with preoperative estimated volumes in 30 consecutive TRAM free flap breast reconstructions. The estimated volumes and the actual elevated flap volumes were found to be correlated by regression analysis (r = 0.9258, P < 0.01). According to this result, we could confirm the reliability of the preoperative volume estimation using our method. Afterward, the authors applied this method to 7 lean patients by estimating and revising the design and obtained symmetric results with minimal donor site morbidity. Preoperative estimation of TRAM flap volume with ultrasound and AutoCAD (Autodesk Inc.) allows the authors to attain the precise volume desired for elevation. This method provides advantages in terms of minimal flap trimming, easier closure of donor sites, reduced scar widening and symmetry, especially in lean patients.
Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul
2015-01-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
A method of estimating flood volumes in western Kansas
Perry, C.A.
1984-01-01
Relationships between flood volume and peak discharge in western Kansas were developed considering basin and climatic characteristics in order to evaluate the availability of surface water in the area. Multiple-regression analyses revealed a relationship between flood volume, peak discharge, channel slope, and storm duration for basins smaller than 1,503 square miles. The equation VOL = 0.536 PEAK^1.71 SLOPE^-0.85 DUR^0.24 had a correlation coefficient of R=0.94 and a standard error of 0.33 log units (-53 and +113 percent). A better relationship for basins smaller than 228 square miles resulted in the equation VOL = 0.483 PEAK^0.98 SLOPE^-0.74 AREA^0.30, which had a correlation coefficient of R=0.90 and a standard error of 0.23 log units (-41 and +70 percent). (USGS)
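As a worked illustration of the two regression equations above (with exponents restored), the sketch below simply evaluates both forms; the input values are hypothetical and units follow the study's conventions for peak discharge, channel slope, storm duration, and drainage area:

```python
def flood_volume(peak, slope, dur):
    """Flood volume for basins smaller than 1,503 sq mi:
    VOL = 0.536 * PEAK^1.71 * SLOPE^-0.85 * DUR^0.24."""
    return 0.536 * peak**1.71 * slope**-0.85 * dur**0.24

def flood_volume_small(peak, slope, area):
    """Alternative form for basins smaller than 228 sq mi:
    VOL = 0.483 * PEAK^0.98 * SLOPE^-0.74 * AREA^0.30."""
    return 0.483 * peak**0.98 * slope**-0.74 * area**0.30
```

Note how the signs of the exponents encode the physical relationships: volume grows with peak discharge and storm duration (or drainage area) and shrinks as channel slope steepens.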
NASA Astrophysics Data System (ADS)
Iyatomi, Hitoshi; Hashimoto, Jun; Yoshii, Fumuhito; Kazama, Toshiki; Kawada, Shuichi; Imai, Yutaka
2014-03-01
Discrimination between Alzheimer's disease and other dementias is clinically significant but often difficult. In this study, we developed classification models among Alzheimer's disease (AD), other dementia (OD) and/or normal subjects (NC) using patient factors and indices obtained by brain perfusion SPECT. SPECT is commonly used to assess cerebral blood flow (CBF) and allows the evaluation of the severity of hypoperfusion by introducing statistical parametric mapping (SPM). We investigated a total of 150 cases (50 cases each for AD, OD, and NC) from Tokai University Hospital, Japan. In each case, we obtained a total of 127 candidate parameters from: (A) 2 patient factors (age and sex), (B) 12 CBF parameters, and 113 SPM parameters including (C) 3 from specific volume analysis (SVA) and (D) 110 from voxel-based analysis stereotactic extraction estimation (vbSEE). We built linear classifiers with statistical stepwise feature selection and evaluated performance with the leave-one-out cross-validation strategy. Our classifiers achieved very high classification performance with a reasonable number of selected parameters. In the clinically most important discrimination, that of AD from OD, our classifier achieved both a sensitivity (SE) and a specificity (SP) of 96%. Similarly, our classifiers achieved an SE of 90% and an SP of 98% for AD vs. NC, as well as an SE of 88% and an SP of 86% for AD vs. OD and NC cases. Introducing SPM indices such as SVA and vbSEE improved classification performance by around 7-15%. We confirmed that these SPM factors are quite important for diagnosing Alzheimer's disease.
NASA Astrophysics Data System (ADS)
Arroyo, Renaldo Josue Salazar
The Mississippi Institute for Forest Inventory (MIFI) is the only cost-effective large-scale forest inventory system in the United States with sufficient precision for producing reliable volume/weight/biomass estimates for small working circle areas (procurement areas). When forest industry is recruited to Mississippi, proposed working circles may overlap existing boundaries of bordering states, leaving a gap of inventory information, and a remote sensing-based system for augmenting missing ground inventory data is desirable. The feasibility of obtaining acceptable cubic-foot volume estimates from a Landsat-derived volume estimation model (Wilkinson 2011) was assessed by: 1) an initial study to temporally validate Landsat-derived estimates of cubic-foot volume outside bark to a pulpwood top in comparison with MIFI ground truth inventory plot estimates at two separate time periods, and 2) re-developing a regression model based on remotely sensed imagery in combination with available MIFI plot data. Initial results failed to confirm the relationships shown in past research between radiance values and volume estimation. The complete lack of influence of radiance values in the model led to a re-assessment of volume estimation schemes. Data outlier trimming manipulation was discovered to lead to false relationships with radiance values reported in past research. Two revised volume estimation models, using age, average stand height, and trees per acre and using age and height alone as independent variables, were found sufficient to explain variation of volume across the image. These results were used to develop a procedure for other remote sensing technologies that could produce data with sufficient precision for volume estimation where inventory data are sparse or non-existent.
Space Station Furnace Facility. Volume 3: Program cost estimate
NASA Technical Reports Server (NTRS)
1992-01-01
The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF Concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. This program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBS's. Input consists of a list of similar components for which cost data exist, the number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous items (travel, off-site assignments). Output is program cost in labor hours and material dollars, for each component, broken down by generic WBS task and program schedule phase.
Space tug economic analysis study. Volume 3: Cost estimates
NASA Technical Reports Server (NTRS)
1972-01-01
Cost estimates for the space tug operation are presented. The subjects discussed are: (1) research and development costs, (2) investment costs, (3) operations costs, and (4) funding requirements. The emphasis is placed on the single stage tug configuration using various types of liquid propellants.
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
NASA Astrophysics Data System (ADS)
Montes-Hugo, M.; Bouakba, H.; Arnone, R.
2014-06-01
The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical, QAA, and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m^2 mg^-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.
Multi-model ensemble estimation of volume transport through the straits of the East/Japan Sea
NASA Astrophysics Data System (ADS)
Han, Sooyeon; Hirose, Naoki; Usui, Norihisa; Miyazawa, Yasumasa
2016-01-01
The volume transports measured at the Korea/Tsushima, Tsugaru, and Soya/La Perouse Straits remain quantitatively inconsistent. However, data assimilation models at least provide a self-consistent budget despite subtle differences among the models. This study examined the seasonal variation of the volume transport using multiple linear regression and ridge regression multi-model ensemble (MME) methods to estimate transport through these straits more accurately from four different data assimilation models. The MME outperformed all of the single models by reducing uncertainties, with the ridge regression in particular mitigating the multicollinearity problem. However, the regression constants turned out to be inconsistent with each other if the MME was applied separately for each strait. The MME for a connected system was thus performed to find common constants for these straits. The estimate from this MME was found to be similar to the MME result for sea level difference (SLD). The estimated mean transport (2.43 Sv) was smaller than the measured value at the Korea/Tsushima Strait, but the calibrated transport of the Tsugaru Strait (1.63 Sv) was larger than the observed data. The MME results for transport and SLD also suggested that the standard deviation (STD) of the Korea/Tsushima Strait transport is larger than the STD of the observation, whereas the estimated results were almost identical to those observed for the Tsugaru and Soya/La Perouse Straits. The similarity between MME results enhances the reliability of the present MME estimation.
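A minimal sketch of the ridge-regression ensemble idea described above, using synthetic data in place of the four assimilation models and strait observations (the model matrix, noise level, and penalty value are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def ridge_weights(X, y, alpha=1.0):
    """Ridge-regression weights for combining ensemble members.
    X: (n_times, n_models) matrix of model transport estimates;
    y: (n_times,) reference transports. The penalty alpha damps the
    multicollinearity that arises when the models are highly correlated."""
    n_models = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_models), X.T @ y)

# Hypothetical illustration: three correlated "models" of a seasonal signal.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 48)
truth = 2.4 + 0.5 * np.sin(t)  # illustrative seasonal transport (Sv)
X = np.column_stack([truth + 0.2 * rng.standard_normal(48) for _ in range(3)])
w = ridge_weights(X, truth, alpha=0.1)
mme = X @ w  # ensemble estimate combining all members
```

Because the members share the same underlying signal and differ mainly in noise, the weighted combination averages out member errors while the ridge penalty keeps the weights stable despite the strong inter-model correlation.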
Budget estimates: Fiscal year 1994. Volume 2: Construction of facilities
NASA Technical Reports Server (NTRS)
1994-01-01
The Construction of Facilities (CoF) appropriation provides contractual services for the repair, rehabilitation, and modification of existing facilities; the construction of new facilities and the acquisition of related collateral equipment; the acquisition or condemnation of real property; environmental compliance and restoration activities; the design of facilities projects; and advanced planning related to future facilities needs. Fiscal year 1994 budget estimates are broken down according to facility location of project and by purpose.
Estimation of Surface Area and Volume of a Nematode from Morphometric Data
Brown, Simon; Pedley, Kevin C.; Simcock, David C.
2016-01-01
Nematode volume and surface area are usually based on the inappropriate assumption that the animal is cylindrical. While nematodes are approximately circular in cross section, the radius varies longitudinally. We use standard morphometric data to obtain improved estimates of volume and surface area based on (i) a geometrical approach and (ii) a Bézier representation of the nematode. These new estimators require only the morphometric data available from Cobb's ratios, but if fewer coordinates are available the geometric approach reduces to the standard estimates. Consequently, these new estimators are better than the standard alternatives. PMID:27110427
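The geometric approach described above can be sketched by treating the animal as a chain of conical frustums between radii measured along its length; the function below is a minimal illustration of that idea, not the authors' exact estimator (their Bézier variant additionally smooths the radius profile), and with a single segment of constant radius it degenerates to the standard cylinder estimate:

```python
import math

def frustum_volume_surface(radii, positions):
    """Approximate volume and lateral surface area of a nematode modeled
    as a chain of conical frustums, given radii at longitudinal positions
    (e.g., coordinates derived from Cobb's ratios)."""
    vol, area = 0.0, 0.0
    segments = zip(zip(radii, positions), zip(radii[1:], positions[1:]))
    for (r1, x1), (r2, x2) in segments:
        h = x2 - x1                     # segment length
        slant = math.hypot(h, r2 - r1)  # slant height of the frustum
        vol += math.pi * h * (r1 * r1 + r1 * r2 + r2 * r2) / 3.0
        area += math.pi * (r1 + r2) * slant
    return vol, area
```

With more measured radii, the chain follows the longitudinal variation in radius that the cylindrical assumption ignores, which is precisely why the geometric estimator improves on the standard one.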
Lunar Architecture Team - Phase 2 Habitat Volume Estimation: "Caution When Using Analogs"
NASA Technical Reports Server (NTRS)
Rudisill, Marianne; Howard, Robert; Griffin, Brand; Green, Jennifer; Toups, Larry; Kennedy, Kriss
2008-01-01
The lunar surface habitat will serve as the astronauts' home on the moon, providing a pressurized facility for all crew living functions and serving as the primary location for a number of crew work functions. Adequate volume is required for each of these functions in addition to that devoted to housing the habitat systems and crew consumables. The time constraints of the LAT-2 schedule precluded the Habitation Team from conducting a complete "bottoms-up" design of a lunar surface habitation system from which to derive true volumetric requirements. The objective of this analysis was to quickly derive an estimated total pressurized volume and pressurized net habitable volume per crewmember for a lunar surface habitat, using a principled, methodical approach in the absence of a detailed design. Five "heuristic methods" were used: historical spacecraft volumes, human/spacecraft integration standards and design guidance, Earth-based analogs, parametric "sizing" tools, and conceptual point designs. Estimates for total pressurized volume, total habitable volume, and volume per crewmember were derived using these methods. All method were found to provide some basis for volume estimates, but values were highly variable across a wide range, with no obvious convergence of values. Best current assumptions for required crew volume were provided as a range. Results of these analyses and future work are discussed.
NASA Astrophysics Data System (ADS)
Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.
2006-03-01
In this paper, an improved framework for estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-Spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.
Danjon, Frédéric; Caplan, Joshua S.; Fortin, Mathieu; Meredieu, Céline
2013-01-01
Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately. PMID
Volume-based thermodynamics: estimations for 2:2 salts.
Jenkins, H Donald Brooke; Glasser, Leslie
2006-02-20
The lattice energy of an ionic crystal, U_POT, can be expressed as a linear function of the inverse cube root of its formula unit volume (i.e., Vm^(-1/3)); thus, U_POT ≈ 2I(alpha/Vm^(1/3) + beta), where alpha and beta are fitted constants and I is the readily calculated ionic strength factor of the lattice. The standard entropy, S, is a linear function of Vm itself: S ≈ kVm + c, with fitted constants k and c. The constants alpha and beta have previously been evaluated for salts with charge ratios of 1:1, 1:2, and 2:1 and for the general case q:p, while values of k and c applicable to ionic solids generally have earlier been reported. In this paper, we obtain alpha and beta, k and c, specifically for 2:2 salts (by studying the ionic oxides, sulfates, and carbonates), finding that U_POT[MX 2:2]/(kJ mol^-1) ≈ 8(119/Vm^(1/3) + 60) and S°[MX 2:2]/(J K^-1 mol^-1) ≈ 1382Vm + 16. PMID:16471990
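The fitted 2:2 relations can be applied directly from a formula unit volume. The sketch below encodes them as given in the abstract (2I = 8 for a 2:2 salt, with Vm in nm^3 per formula unit); it is a transcription of the fitted correlations, not a first-principles calculation:

```python
def lattice_energy_22(vm_nm3):
    """Lattice energy (kJ/mol) of a 2:2 salt from the fitted relation
    U_POT ≈ 2I(alpha/Vm^(1/3) + beta), which with I = 4 for 2:2 salts
    becomes U_POT ≈ 8(119/Vm^(1/3) + 60); Vm in nm^3 per formula unit."""
    return 8.0 * (119.0 / vm_nm3 ** (1.0 / 3.0) + 60.0)

def standard_entropy_22(vm_nm3):
    """Standard entropy (J K^-1 mol^-1): S ≈ 1382*Vm + 16."""
    return 1382.0 * vm_nm3 + 16.0
```

As expected from the inverse cube-root form, smaller formula unit volumes (more tightly packed lattices) give larger lattice energies, while the entropy grows linearly with volume.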
Budget estimates: Fiscal year 1994. Volume 1: Agency summary
NASA Technical Reports Server (NTRS)
1994-01-01
The NASA FY 1994 budget request of $15,265 million concentrates on (1) investing in the development of new technologies including a particularly aggressive program in aeronautical technology to improve the competitive position of the United States, through shared involvement with industry and other government agencies; (2) continuing the nation's premier program of space exploration, to expand our knowledge of the solar system and the universe as well as the earth; and (3) providing safe and assured access to space using both the space shuttle and expendable launch vehicles. Budget estimates are presented for (1) research and development, including space station, space transportation capability development, space science and applications programs, space science, life and microgravity sciences and applications, mission to planet earth, space research and technology, commercial programs, aeronautics technology programs, safety and mission quality, academic programs, and tracking and data advanced systems; and (2) space operations, including space transportation programs, launch services, and space communications.
1998-02-01
This volume contains information on cost estimates, planning schedules, yearly cost flowcharts, and life-cycle costs for the six options described in Volume 1, Section 2: Option 1 -- Total removal clean closure; No subsequent use; Option 2 -- Risk-based clean closure; LLW fill; Option 3 -- Risk-based clean closure; CERCLA fill; Option 4 -- Close to RCRA landfill standards; LLW fill; Option 5 -- Close to RCRA landfill standards; CERCLA fill; and Option 6 -- Close to RCRA landfill standards; Clean fill. This volume is divided into two portions. The first portion contains the cost and planning schedule estimates while the second portion contains life-cycle costs and yearly cash flow information for each option.
Estimation of adipose compartment volumes in CT images of a mastectomy specimen
NASA Astrophysics Data System (ADS)
Imran, Abdullah-Al-Zubaer; Pokrajac, David D.; Maidment, Andrew D. A.; Bakic, Predrag R.
2016-03-01
Anthropomorphic software breast phantoms have been utilized for preclinical quantitative validation of breast imaging systems. Efficacy of the simulation-based validation depends on the realism of phantom images. Anatomical measurements of the breast tissue, such as the size and distribution of adipose compartments or the thickness of Cooper's ligaments, are essential for the realistic simulation of breast anatomy. Such measurements are, however, not readily available in the literature. In this study, we assessed the statistics of adipose compartments as visualized in CT images of a total mastectomy specimen. The specimen was preserved in formalin, and imaged using a standard body CT protocol and high X-ray dose. A human operator manually segmented adipose compartments in reconstructed CT images using ITK-SNAP software, and calculated the volume of each compartment. In addition, the time needed for the manual segmentation and the operator's confidence were recorded. The average volume, standard deviation, and the probability distribution of compartment volumes were estimated from 205 segmented adipose compartments. We also estimated the potential correlation between the segmentation time, operator's confidence, and compartment volume. The statistical tests indicated that the estimated compartment volumes do not follow the normal distribution. The compartment volumes were found to be correlated with the segmentation time; no significant correlation was found between compartment volume and operator confidence. The performed study is limited by the mastectomy specimen position. The analysis of compartment volumes will better inform development of more realistic breast anatomy simulation.
Elci, Hakan; Turk, Necdet
2014-01-01
Block volumes are generally estimated by analyzing discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in the Karaburun Peninsula were used to estimate the average block volumes that could be produced from them, using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to give rock block volumes of the same order as the volumetric joint count (Jv) method. Moreover, the dimensions of the 2378 blocks produced between the years 2009 and 2011 in the working quarries have been recorded. Assuming that each block surface is a discontinuity, the mean block volume (Vb), the mean volumetric joint count (Jvb) and the mean block shape factor of the blocks are determined and compared with the mean in situ block volumes (Vin) and volumetric joint count (Jvi) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing rock mass discontinuity spacing measurements. PMID:24696642
Estimation of tephra volumes from sparse and incompletely observed deposit thicknesses
NASA Astrophysics Data System (ADS)
Green, Rebecca M.; Bebbington, Mark S.; Jones, Geoff; Cronin, Shane J.; Turner, Michael B.
2016-04-01
We present a Bayesian statistical approach to estimate volumes for a series of eruptions from an assemblage of sparse proximal and distal tephra (volcanic ash) deposits. Most volume estimates are of widespread tephra deposits from large events using isopach maps constructed from observations at exposed locations. Instead, we incorporate raw thickness measurements, focussing on tephra thickness data from cores extracted from lake sediments and through swamp deposits. This facilitates investigation into the dispersal pattern and volume of tephra from much smaller eruption events. Given the general scarcity of data and the physical phenomena governing tephra thickness attenuation, a hybrid Bayesian-empirical tephra attenuation model is required. Point thickness observations are modeled as a function of the distance and angular direction of each location. The dispersal of tephra from larger well-estimated eruptions are used as leverage for understanding the smaller unknown events, and uncertainty in thickness measurements can be properly accounted for. The model estimates the wind and site-specific effects on the tephra deposits in addition to volumes. Our technique is exemplified on a series of tephra deposits from Mt Taranaki (New Zealand). The resulting estimates provide a comprehensive record suitable for supporting hazard models. Posterior mean volume estimates range from 0.02 to 0.26 km^3. Preliminary examination of the results suggests a size-predictable relationship.
Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism
Katuwal, Gajendra J.; Baum, Stefi A.; Cahill, Nathan D.; Dougherty, Chase C.; Evans, Eli; Evans, David W.; Moore, Gregory J.; Michael, Andrew M.
2016-01-01
Previous studies applying automatic preprocessing methods on Structural Magnetic Resonance Imaging (sMRI) report inconsistent neuroanatomical abnormalities in Autism Spectrum Disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). Brain volumes of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset were estimated from T1-weighted sMRIs using three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except TIV_SPM vs. TIV_FS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences was opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than the ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, sub-cortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the
Vegetation cover and volume estimates in semi-arid rangelands using LiDAR and hyperspectral data
NASA Astrophysics Data System (ADS)
Spaete, L.; Mitchell, J.; Glenn, N. F.; Shrestha, R.; Sankey, T. T.; Murgoitio, J.; Gould, S.; Leedy, T.; Hardegree, S. P.; Boise Center Aerospace Laboratory
2011-12-01
Sagebrush covers 1.1 × 10^6 km^2 of North American rangelands and is an important cover type for many species. Like most vegetation, sagebrush cover and height varies across the landscape. Accurately mapping this variation is important for certain species, such as the greater sage-grouse, where sagebrush percent cover, visual cover and height are important characteristics for habitat selection. Cover and height are also important factors when trying to estimate rangeland biomass, which is an indicator of forage potential, species dominance and hydrologic function. Several studies have investigated the ability of remote sensing to accurately map vegetation cover, height and volume using a variety of remote sensing technologies. However, no known studies have used a combined spectral and spatial approach for integrative mapping of these characteristics. We demonstrate the ability of terrestrial laser scanning (TLS), airborne Light Detection and Ranging (LiDAR), hyperspectral imagery, and Object Based Image Analysis (OBIA) to accurately estimate sagebrush cover, height and biomass metrics for semi-arid rangeland environments.
Improved pressure-volume-temperature method for estimation of cryogenic liquid volume
NASA Astrophysics Data System (ADS)
Seo, Mansu; Jeong, Sangkwon; Jung, Young-suk; Kim, Jakyung; Park, Hana
2012-04-01
One of the most important issues in a liquid-propellant rocket is measuring the amount of remaining liquid propellant under a low-gravity environment during a space mission. This paper presents the results of experiment and analysis of a pressure-volume-temperature (PVT) method, a gauging method for low-gravity environments. The experiment is conducted using a 7.4 l liquid nitrogen tank with various liquid-fill levels. To maximize the accuracy of the PVT method with minimum hardware, the technique of helium injection with a low mass flow rate is applied to maintain a stable temperature profile in the ullage volume. A PVT analysis considering both pressurant and cryogen as a binary mixture is suggested. At high liquid-fill levels of 72-80%, the accuracy of the conventional PVT analysis is within 4.6%. At low fill levels of 27-30%, the gauging error is within 3.4% using the mixture analysis of the PVT method with a specific low mass flow rate of helium injection. It is concluded that the proper mass flow rate of helium injection and the PVT analyses are crucial to enhancing the accuracy of the PVT method across various liquid-fill levels.
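An ideal-gas sketch of the basic PVT gauging step: a known helium mass injected into the ullage fixes the ullage volume from the measured pressure rise, and the liquid volume follows by subtraction. This is a simplified illustration with hypothetical values; the paper's binary-mixture analysis additionally accounts for cryogen vapor in the ullage:

```python
R_HELIUM = 2077.1  # specific gas constant of helium, J/(kg K)

def liquid_volume_pvt(v_tank, m_he, t_ullage, dp):
    """Ideal-gas PVT gauging sketch. Injecting a known helium mass
    m_he (kg) into the ullage at ullage temperature t_ullage (K)
    raises the pressure by dp (Pa). The ullage volume follows from
    V_u = m_he * R_He * T / dp, and the liquid volume is V_tank - V_u.
    Helium dissolution and cryogen vapor are ignored here."""
    v_ullage = m_he * R_HELIUM * t_ullage / dp
    return v_tank - v_ullage
```

The low helium flow rate matters because the ideal-gas step assumes a single, stable ullage temperature; a rapid injection would stratify the ullage and invalidate the single-temperature reading.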
Solid Waste Operations Complex W-113: Project cost estimate. Preliminary design report. Volume IV
1995-01-01
This document contains Volume IV of the Preliminary Design Report for the Solid Waste Operations Complex W-113, comprising the project cost estimate and construction schedule. The estimate was developed from Title I material take-offs, budgetary equipment quotes, and Raytheon historical in-house data. The W-113 project cost estimate and construction schedule were integrated to provide a resource-loaded project network.
Ju, Lili; Tian, Li; Wang, Desheng
2009-01-01
In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection-diffusion-reaction equations defined on surfaces in R^3, which are often represented implicitly as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.
Estimates of the Volume of Snowpack Sublimation in Arizona's Salt River Watershed
NASA Astrophysics Data System (ADS)
Svoma, B. M.
2012-12-01
The liquid equivalent volumes of snowpack sublimation, melt, and snowfall over the Salt River watershed, a major source of water for the Phoenix metropolitan area, will be estimated using the National Operational Hydrologic Remote Sensing Center's Snow Data Assimilation System (SNODAS) for the nine water years on record (i.e., 2004-2012). SNODAS integrates data from satellites, aircraft, and ground stations with downscaled output from numerical weather prediction models and an energy/mass balance snowpack model. The SNODAS dataset contains daily values of sublimation, snow water equivalent, snowfall, and melt, among other variables, at high (<1 km^2) resolution, providing the opportunity to accurately estimate the volumes of snowpack balance variables for regions with complex topography. Snowpack ablation consists of sublimation and melting. Snow particles at sub-freezing temperatures will sublimate rather than melt if surrounded by air that is below the equilibrium water vapor pressure with respect to ice. When sublimation occurs, there is a direct loss of water from the given drainage basin when the vapor is carried away by the prevailing atmospheric flow. Preliminary analyses of water years 2005 (wet El Niño), 2007 (dry El Niño), 2008 (wet La Niña), and 2012 (dry La Niña) suggest that there is a substantial amount of sublimation over the Salt River watershed. From October 1 to April 30, approximately 16 percent of snowfall sublimated during the four years, ranging from approximately 98 million cubic meters (79,884 acre-feet) in water year 2005 to approximately 208 million cubic meters (168,726 acre-feet) in water year 2012. Sublimation is most prevalent at the highest elevations of the watershed, with more than 30 percent of snowfall sublimating at elevations above 2,744 meters above sea level. Of the four years analyzed, the sublimation to snowfall ratio was the highest for the two water years with anomalously high precipitation (i.e., 2005 and 2008). This
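The unit arithmetic behind figures like those above can be sketched directly: converting cubic metres of liquid-equivalent sublimation to acre-feet, using the standard factor 1 acre-foot = 1233.48 m^3.

```python
# Cubic metres to acre-feet, applied to the approximate volumes quoted
# in the abstract (these are round "approximately" figures, so the
# converted values agree only approximately with the quoted acre-feet).

ACRE_FOOT_M3 = 1233.48   # m^3 per acre-foot

def to_acre_feet(volume_m3):
    return volume_m3 / ACRE_FOOT_M3

wy2005_af = to_acre_feet(98e6)    # ~7.9e4 acre-feet (water year 2005)
wy2012_af = to_acre_feet(208e6)   # ~1.69e5 acre-feet (water year 2012)
```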
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
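An illustrative metric in the spirit of the correlation-of-the-derivative (COD) bandwidth described above; this is a sketch, not the authors' exact algorithm. The idea: differentiate the backscattered spectrum (self-normalizing), autocorrelate the derivative, and take the lag of the first local minimum of the normalized correlation as the width measure. Faster spectral modulation (a larger scatterer, by Mie theory) gives a narrower correlation.

```python
# COD-style width sketch on synthetic spectra. The cosine "spectra" and
# all parameters are invented for illustration.
import numpy as np

def cod_width(spectrum):
    d = np.diff(spectrum)
    d = d - d.mean()
    corr = np.correlate(d, d, mode="full")[len(d) - 1:]  # lags >= 0
    corr = corr / corr[0]                                # self-normalized
    for lag in range(1, len(corr) - 1):
        if corr[lag] < corr[lag - 1] and corr[lag] <= corr[lag + 1]:
            return lag                                   # first minimum
    return len(corr) - 1

k = np.linspace(0.0, 1.0, 512)
slow = 1 + 0.5 * np.cos(2 * np.pi * 5 * k)    # weakly modulated spectrum
fast = 1 + 0.5 * np.cos(2 * np.pi * 20 * k)   # strongly modulated spectrum
w_slow, w_fast = cod_width(slow), cod_width(fast)
```

The faster-modulated spectrum yields a smaller first-minimum lag, i.e. a narrower correlation width, which is the monotonic relationship a size metric needs.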
NASA Astrophysics Data System (ADS)
Subramanian, Swetha; Mast, T. Douglas
2015-09-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
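The core ingredient of the unscented Kalman filter used above is the unscented transform: propagate a small, deterministically chosen set of sigma points through a nonlinear map and recombine them to approximate the transformed mean and variance. Below is a generic scalar sketch of that ingredient, not the paper's finite-element bioheat model.

```python
# Scalar unscented transform. With n + kappa = 3 the sigma points also
# match the fourth moment of a Gaussian input, so for f(x) = x^2 the
# transform recovers the exact mean and variance of the output.
import math

def unscented_transform(mean, var, f, kappa=2.0):
    n = 1                                   # state dimension
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    ys = [f(s) for s in sigmas]
    y_mean = w0 * ys[0] + wi * (ys[1] + ys[2])
    y_var = (w0 * (ys[0] - y_mean) ** 2
             + wi * ((ys[1] - y_mean) ** 2 + (ys[2] - y_mean) ** 2))
    return y_mean, y_var

# x ~ (mean 1.0, variance 0.04) through f(x) = x^2:
# E[x^2] = 1.04 and, for Gaussian x, Var[x^2] = 4*mu^2*s^2 + 2*s^4 = 0.1632
m, v = unscented_transform(1.0, 0.04, lambda x: x * x)
```

In the inverse-solver setting, this transform is what lets the filter update tissue-parameter estimates through the nonlinear simulation without computing derivatives.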
Hosangadi, A.; Sinha, N.; Dash, S.M.
1992-01-01
A new Eulerian particulate solver whose numerical formulation is compatible with the numerics in state-of-the-art finite-volume upwind/implicit gas dynamic computer codes is presented. The heat transfer, drag, thermodynamic, and phase-change procedures in this code are derived from earlier, well established data fits and procedures. Performance for numerous flow problems with one- and two-way coupling is quite good. The solutions are nonoscillatory and robust and conserve flux balances very well. 18 refs.
Rapid estimate of solid volume in large tuff cores using a gas pycnometer
Thies, C.; Geddis, A.M.; Guzman, A.G.
1996-09-01
A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant-volume system while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length × 0.11 m diameter) and can measure solid volumes greater than 2.20 cm^3 with less than 1% error.
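The ideal-gas relation behind a two-chamber pycnometer, in miniature and at constant temperature (the instrument above additionally folds the monitored temperature into the calculation). Chamber volumes and pressures are illustrative, not the instrument's.

```python
# Gas at pressure P1 fills the sample chamber's free volume (Vc - Vs),
# then expands into an evacuated reference chamber Vr, settling at P2.
# Boyle's law gives P1*(Vc - Vs) = P2*(Vc - Vs + Vr), which solves for
# the solid volume Vs.

def solid_volume(v_cell, v_ref, p1, p2):
    """Vs = Vc - P2*Vr/(P1 - P2), from the constant-temperature balance."""
    return v_cell - p2 * v_ref / (p1 - p2)

# 3.0 L sample chamber, 1.0 L reference chamber, pressures in kPa
vs = solid_volume(3.0, 1.0, 150.0, 100.0)   # -> 1.0 L of solid
```

Sanity check: with 1.0 L of solid, 2.0 L of gas at 150 kPa expands to 3.0 L at 100 kPa, conserving P*V.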
Acer, N; Bayar, B; Basaloglu, H; Oner, E; Bayar, K; Sankur, S
2008-11-20
The size and shape of the tarsal bones are especially relevant when considering some orthopedic diseases such as clubfoot. For this reason, the measurements of the tarsal bones have been the subject of many studies, none of which has used stereological methods to estimate volume. In the present stereological study, we estimated the volume of the calcaneal bone in normal feet and in dry bones. We used a combination of the Cavalieri principle and computed tomographic scans taken from eight males, and nine dry calcanei, to estimate the volumes of calcaneal bones. The mean volume of the dry calcaneal bones was 49.11 ± 10.7 cm^3 by the point-counting method and 48.22 ± 11.92 cm^3 by the Archimedes principle. A positive correlation was found between anthropometric measurements and the volume of calcaneal bones. The findings of the present study using stereological methods could provide data for the evaluation of normal and pathological volumes of calcaneal bones. PMID:18723333
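The Cavalieri point-counting estimator used in such stereological studies can be sketched in a few lines: on parallel CT sections a distance t apart, overlay a grid in which each point represents an area a_p; the volume estimate is V = t · a_p · (number of grid points hitting the structure). The counts below are invented for illustration.

```python
# Cavalieri / point-counting volume estimate from per-section hit counts.
# Section spacing, grid area per point, and counts are illustrative.

def cavalieri_volume(section_spacing_cm, area_per_point_cm2, hit_counts):
    return section_spacing_cm * area_per_point_cm2 * sum(hit_counts)

# 0.5 cm sections, 0.25 cm^2 represented by each grid point
v_est = cavalieri_volume(0.5, 0.25, [10, 28, 44, 52, 40, 22, 8])  # cm^3
```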
NASA Astrophysics Data System (ADS)
Liu, Yu; Yu, Xiping
2016-09-01
A coupled phase-field and volume-of-fluid method is developed to study the sensitive behavior of water waves during breaking. The THINC model is employed to solve the volume-of-fluid function over the entire domain covered by a relatively coarse grid while the phase-field model based on the Allen-Cahn equation is applied over the fine grid. A special algorithm that takes into account the sharpness of the diffuse interface is introduced to correlate the order parameter obtained on the fine grid and the volume-of-fluid function obtained on the coarse grid. The coupled model is then applied to the study of water waves generated by moving pressures on the free surface. The deformation process of the wave crest during the initial stage of breaking is discussed in detail. It is shown that there is a significant variation of the free nappe developed at the front side of the wave crest as the wave steepness differs. It is of a plunging type at large wave steepness while of a spilling type at small wave steepness. The numerical results also indicate that breaking occurs later and the duration of breaking is shorter for waves of smaller steepness and vice versa. Neglecting the capillary effect leads to wave breaking with a sharper nappe and a more dynamic plunging process. The surface tension also acts to prevent the formation of a free nappe at the front side of the wave crest in some cases.
Reproducibility of isopach data and estimates of dispersal and eruption volumes
NASA Astrophysics Data System (ADS)
Klawonn, M.; Houghton, B. F.; Swanson, D.; Fagents, S. A.; Wessel, P.; Wolfe, C. J.
2012-12-01
Total erupted volume and deposit thinning relationships are key parameters in characterizing explosive eruptions and evaluating the potential risk from a volcano, as well as inputs to volcanic plume models. Volcanologists most commonly estimate these parameters by hand-contouring deposit data, representing these contours in thickness versus square-root-area plots, fitting empirical laws to the thinning relationships, and integrating over the square root of area to arrive at volume estimates. In this study we analyze the extent to which variability in hand-contouring thickness data for pyroclastic fall deposits influences the resulting estimates and investigate the effects of different fitting laws. 96 volcanologists (3% MA students, 19% PhD students, 20% postdocs, 27% professors, and 30% professional geologists) from 11 countries (Australia, Ecuador, France, Germany, Iceland, Italy, Japan, New Zealand, Switzerland, UK, USA) participated in our study and produced hand-contours on identical maps using our unpublished thickness measurements of the Kilauea Iki 1959 fall deposit. We computed volume estimates by (A) integrating over a surface fitted through the contour lines, as well as using the established methods of integrating over the thinning relationships of (B) an exponential fit with one to three segments, (C) a power law fit, and (D) a Weibull function fit. To focus on the differences from the hand-contours of the well constrained deposit and eliminate the effects of extrapolations to great but unmeasured thicknesses near the vent, we removed the volume contribution of the near-vent deposit (defined as the deposit above 3.5 m) from the volume estimates. The remaining volume is approximately 1.76 × 10^6 m^3 (geometric mean over all methods), with maximum and minimum estimates of 2.5 × 10^6 m^3 and 1.1 × 10^6 m^3. Different integration methods of identical isopach maps result in volume estimate differences of up to 50% and, on average, maximum variation between integration
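The single-segment exponential thinning estimate (method B with one segment) has a closed form worth writing out: if thickness decays as T(x) = T0·exp(-k·x) with x = sqrt(isopach area), then integrating T over area gives V = 2·T0/k^2 (the Fierstein-and-Nathenson-style result). T0 and k below are illustrative; k is simply chosen so the sketch reproduces the study's geometric-mean volume.

```python
# V = integral_0^inf T0*exp(-k*x) d(x^2) = 2*T0/k^2, since dA = 2x dx.
import math

def exponential_thinning_volume(t0_m, k_per_m):
    return 2.0 * t0_m / k_per_m ** 2

t0 = 3.5                            # m, cf. the near-vent cutoff above
k = math.sqrt(2.0 * t0 / 1.76e6)    # 1/m, chosen to match ~1.76e6 m^3
v = exponential_thinning_volume(t0, k)
```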
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice.
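A sketch of the two corrections discussed above, with invented geometry and an assumed parabolic (Poiseuille) velocity profile; this is not the authors' exact correction. Use an elliptical rather than circular lumen area, and compensate for the beam plane being offset from the vessel centre: for a parabolic profile, the mean velocity along a chord offset by d from the centre exceeds the true cross-sectional mean by a factor (4/3)·(1 - (d/a)^2).

```python
# Volume flow from a chord-mean velocity, with elliptical-area and
# off-axis corrections. Geometry mirrors the averages quoted above
# (10.2 mm major axis, 8.6% smaller minor axis, beam 1.5 mm off-centre);
# the velocity value is invented.
import math

def corrected_volume_flow(v_chord_mean, a, b, offset):
    """a, b: semi-axes of the elliptical cross-section (m);
    offset: beam distance from the vessel centre along the major axis (m)."""
    area = math.pi * a * b
    chord_factor = (4.0 / 3.0) * (1.0 - (offset / a) ** 2)
    return area * v_chord_mean / chord_factor   # m^3/s

q = corrected_volume_flow(0.3, 5.1e-3, 4.7e-3, 1.5e-3)  # ~1.1 L/min
```

Note the direction of the bias: pretending the beam is centred (offset = 0) uses the larger factor 4/3 and so underestimates flow, consistent with the underestimation reported above.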
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use is motivated by the objectives of obtaining better estimates than filtering alone provides and of eliminating the lag associated with filtering. The estimation technique uses the six-degree-of-freedom equations of motion as the dynamical process, resulting in twelve state vector elements; mass and solid-propellant burn depth are added as "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not merely as options but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
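The lag-removal argument above can be seen in a miniature scalar example: a causal filter trails a drifting signal, while a smoother, which also uses future measurements, largely does not. This is a plain Kalman filter with a Rauch-Tung-Striebel backward pass as a stand-in sketch, not the modified Bryson-Frazier smoother used in the project; all values are illustrative.

```python
# Scalar Kalman filter + RTS smoother on a steadily rising signal.

def kf_rts(zs, q=1e-3, r=0.25):
    """Random-walk model x_k = x_{k-1} + w, measurement z_k = x_k + v."""
    xs, ps = [], []
    x, p = 0.0, 1.0
    for z in zs:
        p += q                    # predict
        g = p / (p + r)           # Kalman gain
        x += g * (z - x)          # measurement update
        p *= 1.0 - g
        xs.append(x)
        ps.append(p)
    sx = xs[:]                    # backward (RTS) smoothing pass
    for i in range(len(zs) - 2, -1, -1):
        c = ps[i] / (ps[i] + q)   # smoother gain
        sx[i] = xs[i] + c * (sx[i + 1] - xs[i])
    return xs, sx

truth = [0.1 * (k + 1) for k in range(50)]   # steadily rising signal
filt, smooth = kf_rts(truth)                 # noiseless measurements
err_f = sum((a - b) ** 2 for a, b in zip(filt, truth))
err_s = sum((a - b) ** 2 for a, b in zip(smooth, truth))
```

Because the random-walk model expects a static state, the causal filter lags the rising truth; the backward pass pulls interior estimates toward later data and shrinks the error.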
New formulae for estimating lava flow volumes at Mt. Etna Volcano, Sicily
NASA Astrophysics Data System (ADS)
Murray, J. B.; Stevens, N. F.
A new data set of Etna lava flows erupted since 1868 has been compiled from eight topographic maps of the volcano published at intervals since then. Volumes of 59 flows or groups of flows were measured from topographic difference maps. Most of these volumes are likely to be considerably more accurate than those published previously. We cut the number of flow volumes down to 25 by selecting those examples for which the volume of an individual eruption could be derived with the highest accuracy. This refined data set was searched for high correlations between flow volume and more directly measurable parameters. Only two parameters showed a correlation coefficient of 70% or greater: planimetric flow area A (70%) and duration of the eruption D (79%). If only short-duration (<18 days) flows were used, flow length cubed, L^3, had a correlation coefficient of 98%. Using combinations of measured parameters, much more significant correlations with volume were found: Dh had a correlation coefficient of 90% (where h is the hydrostatic head of magma above the vent); a second combination, of W and E, 92% (where W is mean width and E is the degree of topographic enclosure); and a combination of the two, 97%. These latter formulae were used to derive volumes of all eruptions back to 1868 to compare with those from the complete data set. Values determined from the formulae were, on average, lower by 16% (Dh), 7% (the W-E combination), and 19% (the combined formula).
Estimated maximal and current brain volume predict cognitive ability in old age.
Royle, Natalie A; Booth, Tom; Valdés Hernández, Maria C; Penke, Lars; Murray, Catherine; Gow, Alan J; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E; Deary, Ian J; Wardlaw, Joanna M
2013-12-01
Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and in measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age, and that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging.
Optimal volume Wegner estimate for random magnetic Laplacians on Z2
NASA Astrophysics Data System (ADS)
Hasler, David; Luckett, Daniel
2013-03-01
We consider a two-dimensional magnetic Schrödinger operator on a square lattice with a spatially stationary random magnetic field. We prove a Wegner estimate with optimal volume dependence. The Wegner estimate holds around the spectral edges, and it implies Hölder continuity of the integrated density of states in this region. The proof is based on the Wegner estimate obtained in Erdős and Hasler ["Wegner estimate for random magnetic Laplacians on Z^2," Ann. Henri Poincaré 12, 1719-1731 (2012)], 10.1007/s00023-012-0177-9.
Acer, Niyazi; Ilıca, Ahmet Turan; Turgut, Ahmet Tuncay; Ozçelik, Ozlem; Yıldırım, Birdal; Turgut, Mehmet
2012-01-01
The pineal gland is a very important neuroendocrine organ with many physiological functions, such as regulating circadian rhythm. Radiologically, the pineal gland volume is clinically important because it is usually difficult to distinguish small pineal tumors via magnetic resonance imaging (MRI). Although many studies have estimated the pineal gland volume using different techniques, to the best of our knowledge, there has so far been no stereological work done on this subject. The objective of the current paper was to determine the pineal gland volume using stereological methods and by the region of interest (ROI) on MRI. In this paper, the pineal gland volumes were calculated in a total of 62 subjects (36 females, 26 males) who were free of any pineal lesions or tumors. The mean ± SD pineal gland volumes of the point-counting, planimetry, and ROI groups were 99.55 ± 51.34, 102.69 ± 40.39, and 104.33 ± 40.45 mm^3, respectively. No significant difference was found among the methods of calculating pineal gland volume (P > 0.05). From these results, it can be concluded that each technique is an unbiased, efficient, and reliable method, ideally suitable for in vivo examination of MRI data for pineal gland volume estimation.
An Approach to the Use of Depth Cameras for Weed Volume Estimation.
Andújar, Dionisio; Dorado, José; Fernández-Quintanilla, César; Ribeiro, Angela
2016-06-25
The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dicots. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them.
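A toy sketch of the dual height/RGB strategy on a labelled point cloud: points above a height threshold are treated as crop (maize), and the remaining low points are split into weed versus soil with an excess-green index on their RGB values. The thresholds, the ExG rule, and the points themselves are illustrative assumptions, not the paper's exact pipeline.

```python
# Height-then-colour classification of (z, r, g, b) points, with rgb
# normalised to [0, 1]. All threshold values are invented.

def classify(points, crop_height_m=0.3, exg_threshold=0.05):
    labels = []
    for z, r, g, b in points:
        if z >= crop_height_m:
            labels.append("crop")
        elif 2 * g - r - b > exg_threshold:   # excess green index
            labels.append("weed")
        else:
            labels.append("soil")
    return labels

cloud = [(0.80, 0.2, 0.6, 0.2),   # tall, green  -> crop
         (0.05, 0.2, 0.5, 0.1),   # low, green   -> weed
         (0.01, 0.4, 0.35, 0.3)]  # low, brown   -> soil
labels = classify(cloud)
```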
Estimation of convective rain volumes utilizing the area-time-integral technique
NASA Technical Reports Server (NTRS)
Johnson, L. Ronald; Smith, Paul L.
1990-01-01
Interest in the possibility of developing useful estimates of convective rainfall with Area-Time Integral (ATI) methods is increasing. The basis of the ATI technique is the observed strong correlation between rainfall volumes and ATI values. This means that rainfall can be estimated by just determining the ATI values, if previous knowledge of the relationship to rain volume is available to calibrate the technique. Examples are provided of the application of the ATI approach to gage, radar, and satellite measurements. For radar data, the degree of transferability in time and among geographical areas is examined. Recent results on transferability of the satellite ATI calculations are presented.
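The calibrate-then-estimate workflow described above can be sketched with synthetic events: fit a proportionality constant between known rain volumes and their area-time integrals, then estimate the volume of a new event from its ATI alone. All numbers are invented; real applications often use more elaborate fits.

```python
# Linear through-the-origin calibration of rain volume against ATI.

def calibrate(atis, volumes):
    """Least-squares slope through the origin for V ~ c * ATI."""
    num = sum(a * v for a, v in zip(atis, volumes))
    den = sum(a * a for a in atis)
    return num / den

# ATI in km^2 h, rain volume in 10^3 m^3 (synthetic calibration events)
c = calibrate([120.0, 300.0, 80.0], [260.0, 650.0, 170.0])
v_new = c * 200.0    # volume estimate for a new event with ATI = 200
```

Transferability, as discussed above, amounts to asking whether a constant c calibrated in one region or season remains valid in another.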
Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N.J.
2014-01-01
Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in the injectable volume to between 0.6 and 2.4 ml, and injection was delayed due to a required syringe filling step. An improved MR-compatible injector system has been developed that measures the pH of injected substrate, uses flow control to reduce dead volume within the injection cannula and can be operated over a larger volume range. The delay time to injection has been minimized by removing the syringe filling step through use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. was 9 μl in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3-4 s post-injection trigger signal and at 9-12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump. PMID:24355621
Estimation of cell volume and biomass of Penicillium chrysogenum using image analysis.
Packer, H L; Keshavarz-Moore, E; Lilly, M D; Thomas, C R
1992-02-20
A methodology for the estimation of biomass for the penicillin fermentation using image analysis is presented. Two regions of hyphae are defined to describe the growth of mycelia during fermentation: (1) the cytoplasmic region, and (2) the degenerated region, including large vacuoles. The volume occupied by each of these regions in a fixed volume of sample is estimated from area measurements using image analysis. Areas are converted to volumes by treating the hyphae as solid cylinders with the hyphal diameter as the cylinder diameter. The volumes of the cytoplasmic and degenerated regions are converted into dry weight estimates using hyphal density values available from the literature. The image analysis technique is able to estimate biomass even in the presence of nondissolved solids at concentrations of up to 30 gL(-1). It is shown to estimate successfully concentrations of mycelia from 0.03 to 38 gL(-1). Although the technique has been developed for the penicillin fermentation, it should be applicable to other (nonpelleted) fungal fermentations.
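The area-to-volume conversion described above can be sketched as follows. The density and dry-matter fraction used here are illustrative placeholders, not the literature values from the paper:

```python
import math

def hyphal_volume_from_area(projected_area_um2, diameter_um):
    # Projected area of a solid cylinder of length L and diameter d is
    # A = L * d, so its volume is V = pi * (d/2)**2 * L = (pi * d / 4) * A.
    return (math.pi * diameter_um / 4.0) * projected_area_um2

def dry_weight_g(volume_um3, density_g_per_cm3=1.1, dry_fraction=0.25):
    # Density and dry-matter fraction are illustrative assumptions,
    # standing in for the literature values used in the paper.
    return volume_um3 * 1e-12 * density_g_per_cm3 * dry_fraction

# Example: 5000 um^2 of cytoplasmic area at a 3 um hyphal diameter
volume = hyphal_volume_from_area(5000.0, 3.0)
weight = dry_weight_g(volume)
```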
NASA Technical Reports Server (NTRS)
Dewberry, B.
2000-01-01
Electrical impedance spectrometry involves measurement of the complex resistance of a load at multiple frequencies. With this information, in the form of impedance magnitude and phase, or resistance and reactance, the basic structure or function of the load can be estimated. The "load" targeted for measurement and estimation in this study consisted of the water-bearing tissues of the human calf. It was proposed and verified that by measuring the electrical impedance of the human calf and fitting these data to a model of fluid compartments, the lumped-model volume of intracellular and extracellular spaces could be estimated. By performing this estimation over time, the volume dynamics during application of stimuli which affect the direction of gravity can be viewed. The resulting data can form a basis for further modeling and verification of cardiovascular and compartmental modeling of fluid reactions to microgravity, as well as countermeasures to the headward shift of fluid during head-down tilt or spaceflight.
Estimating stem volume and biomass of Pinus koraiensis using LiDAR data.
Kwak, Doo-Ahn; Lee, Woo-Kyun; Cho, Hyun-Kook; Lee, Seung-Ho; Son, Yowhan; Kafatos, Menas; Kim, So-Ra
2010-07-01
The objective of this study was to estimate the stem volume and biomass of individual trees using the crown geometric volume (CGV), which was extracted from small-footprint light detection and ranging (LiDAR) data. Attempts were made to analyze the stem volume and biomass of Korean Pine stands (Pinus koraiensis Sieb. et Zucc.) for three classes of tree density: low (240 N/ha), medium (370 N/ha), and high (1,340 N/ha). To delineate individual trees, extended maxima transformation and watershed segmentation image processing methods were applied, as in one of our previous studies. As the next step, the crown base height (CBH) of individual trees was determined from the LiDAR point cloud data using k-means clustering. Stem volume can then be estimated from the LiDAR-derived CGV on the basis of the proportional relationship between CGV and stem volume. As a result, low tree-density plots had the best performance for LiDAR-derived CBH, CGV, and stem volume (R(2) = 0.67, 0.57, and 0.68, respectively) and accuracy was lowest for high tree-density plots (R(2) = 0.48, 0.36, and 0.44, respectively). For medium tree-density plots accuracy was R(2) = 0.51, 0.52, and 0.62, respectively. The LiDAR-derived stem biomass can be predicted from the stem volume using the wood basic density of coniferous trees (0.48 g/cm(3)), and the LiDAR-derived above-ground biomass can then be estimated from the stem volume using the biomass conversion and expansion factor (BCEF, 1.29) proposed by the Korea Forest Research Institute (KFRI). PMID:20182905
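A minimal sketch of the final conversion step, using the two factors quoted in the abstract. How the BCEF combines with basic density is an assumption here: it is applied as an expansion on stem biomass, which may not match the study's exact formulation:

```python
# Factors quoted in the abstract; their combination below is an assumption.
WOOD_BASIC_DENSITY = 0.48   # t/m^3 (= g/cm^3), coniferous wood basic density
BCEF = 1.29                 # biomass conversion and expansion factor (KFRI)

def stem_biomass_t(stem_volume_m3):
    # Stem biomass (t) from stem volume via wood basic density
    return stem_volume_m3 * WOOD_BASIC_DENSITY

def above_ground_biomass_t(stem_volume_m3):
    # Above-ground biomass (t), applying BCEF to the stem biomass
    # (assumed interpretation of the factor's role)
    return stem_biomass_t(stem_volume_m3) * BCEF
```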
Estimating Mixed Broadleaves Forest Stand Volume Using DSM Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In the mixed old-growth broadleaved Hyrcanian forests, it is difficult to estimate stand volume at plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within plots: the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots in a systematic random design were collected in the Tonekabon forest, located in the Hyrcanian zone. A digital surface model (DSM) measures the height values of the first surface on the ground, including terrain features, trees, buildings, etc., which provides a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes varied from 1 to 10 m in 1 m steps. DSMs were checked manually for probable errors. For the pixels corresponding to the ground samples, the standard deviation and range of DSM values were calculated. For modeling, non-linear regression was used. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate input for modeling. Relative bias and RMSE of the estimates were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaved forests, these results are encouraging. One significant problem with this method occurs when the tree canopy cover is completely closed: in this situation, the standard deviation of height is low while stand volume is high. In future studies, applying forest stratification could be investigated.
Seevers, P.M.; Sadowski, F.C.; Lauer, D.T.
1990-01-01
Retrospective satellite image data were evaluated for their ability to demonstrate the influence of center-pivot irrigation development in western Nebraska on spectral change and climate-related factors for the region. Periodic images of an albedo index and a normalized difference vegetation index (NDVI) were generated from calibrated Landsat multispectral scanner (MSS) data and used to monitor spectral changes associated with irrigation development from 1972 through 1986. The albedo index was not useful for monitoring irrigation development. For the NDVI, it was found that proportions of counties in irrigated agriculture, as discriminated by a threshold, were more highly correlated with reported ground estimates of irrigated agriculture than were county mean greenness values. A similar result was achieved when using coarse-resolution Advanced Very High Resolution Radiometer (AVHRR) image data for estimating irrigated agriculture. The NDVI images were used to evaluate a procedure for making areal estimates of actual evapotranspiration (ET) volumes. Estimates of ET volumes for test counties, using reported ground acreages and corresponding standard crop coefficients, were correlated with the estimates of ET volume using crop coefficients scaled to NDVI values and pixel counts of crop areas. These county estimates were made under the assumption that soil water availability was unlimited. For nonirrigated vegetation, this may result in over-estimation of ET volumes. Ground information regarding crop types and acreages is required to derive the NDVI scaling factor. Potential ET, estimated with the Jensen-Haise model, is common to both methods. These results, achieved with both MSS and AVHRR data, show promise for providing climatologically important land surface information for regional and global climate models. © 1990 Kluwer Academic Publishers.
Rueda, Andrea; Acosta, Oscar; Couprie, Michel; Bourgeat, Pierrick; Fripp, Jurgen; Dowson, Nicholas; Romero, Eduardo; Salvado, Olivier
2010-05-15
In magnetic resonance imaging (MRI), accuracy and precision with which brain structures may be quantified are frequently affected by the partial volume (PV) effect. PV is due to the limited spatial resolution of MRI compared to the size of anatomical structures. Accurate classification of mixed voxels and correct estimation of the proportion of each pure tissue (fractional content) may help to increase the precision of cortical thickness estimation in regions where this measure is particularly difficult, such as deep sulci. The contribution of this work is twofold: on the one hand, we propose a new method to label voxels and compute tissue fractional content, integrating a mechanism for detecting sulci with topology preserving operators. On the other hand, we improve the computation of the fractional content of mixed voxels using local estimation of pure tissue intensity means. Accuracy and precision were assessed using simulated and real MR data and comparison with other existing approaches demonstrated the benefits of our method. Significant improvements in gray matter (GM) classification and cortical thickness estimation were brought by the topology correction. The fractional content root mean squared error diminished by 6.3% (p<0.01) on simulated data. The reproducibility error decreased by 8.8% (p<0.001) and the Jaccard similarity measure increased by 3.5% on real data. Furthermore, compared with manually guided expert segmentations, the similarity measure was improved by 12.0% (p<0.001). Thickness estimation with the proposed method showed a higher reproducibility compared with the measure performed after partial volume classification using other methods.
A voxel-based technique to estimate the volume of trees from terrestrial laser scanner data
NASA Astrophysics Data System (ADS)
Bienert, A.; Hess, C.; Maas, H.-G.; von Oheimb, G.
2014-06-01
The precise determination of the volume of standing trees is very important for ecological and economical considerations in forestry. If terrestrial laser scanner data are available, a simple approach to volume determination is to allocate points into a voxel structure and subsequently count the filled voxels. Generally, this method will overestimate the volume. The paper presents an improved algorithm to estimate the wood volume of trees using a voxel-based method which corrects for the overestimation. After voxel space transformation, each voxel which contains points is reduced to the volume of its surrounding bounding box. In a next step, occluded (inner stem) voxels are identified by a neighbourhood analysis sweeping in the X and Y direction of each filled voxel. Finally, the wood volume of the tree is composed of the sum of the bounding box volumes of the outer voxels and the volume of all occluded inner voxels. Scan data sets from several young Norway maple trees (Acer platanoides) were used to analyse the algorithm. For this, the scanned trees as well as their representing point clouds were separated into different components (stem, branches) to allow a meaningful comparison. Two reference measurements were performed for validation: a direct wood volume measurement by placing the tree components into a water tank, and a frustum calculation of small trunk segments by measuring the radii along the trunk. Overall, the results show slightly underestimated volumes (-0.3% for a sample of 13 trees) with an RMSE of 11.6% for the individual tree volume calculated with the new approach.
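The bounding-box correction described above can be sketched as follows. This is a simplified, hypothetical illustration, not the authors' implementation: the sweep that recovers occluded inner-stem voxels is omitted for brevity.

```python
import numpy as np

def voxel_volume_estimate(points, voxel=0.01):
    """Estimate volume from an (N, 3) point cloud: each filled voxel is
    reduced to the axis-aligned bounding box of its points, which corrects
    the overestimation of plain voxel counting."""
    idx = np.floor(points / voxel).astype(int)
    # sort points so that points sharing a voxel index are contiguous
    order = np.lexsort(idx.T)
    idx_s, pts_s = idx[order], points[order]
    # split the sorted points into per-voxel groups
    splits = np.nonzero(np.any(np.diff(idx_s, axis=0) != 0, axis=1))[0] + 1
    total = 0.0
    for cell in np.split(pts_s, splits):
        extent = cell.max(axis=0) - cell.min(axis=0)
        total += float(np.prod(extent))  # zero for degenerate (flat) cells
    return total
```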
Determination of thigh volume in youth with anthropometry and DXA: agreement between estimates.
Coelho-E-Silva, Manuel J; Malina, Robert M; Simões, Filipe; Valente-Dos-Santos, João; Martins, Raul A; Vaz Ronque, Enio R; Petroski, Edio L; Minderico, Claudia; Silva, Analiza M; Baptista, Fátima; Sardinha, Luís B
2013-01-01
This study examined the agreement between estimates of thigh volume (TV) with anthropometry and dual-energy x-ray absorptiometry (DXA) in healthy school children. Participants (n=168, 83 boys and 85 girls) were school children 10.0-13.9 years of age. In addition to body mass, height and sitting height, anthropometric dimensions included those needed to estimate TV using the equation of Jones & Pearson. Total TV was also estimated with DXA. Agreement between protocols was examined using linear least products regression (Deming regressions). Stepwise regression of log-transformed variables identified variables that best predicted TV estimated by DXA. The regression models were then internally validated using the predicted residual sum of squares method. Correlation between estimates of TV was 0.846 (95%CI: 0.796-0.884, Sy·x=0.152 L). It was possible to obtain an anthropometry-based model to improve the prediction of TVs in youth. The total volume by DXA was best predicted by adding body mass and sum of skinfolds to volume estimated with the equation of Jones & Pearson (R=0.972; 95%CI: 0.962-0.979; R(2)=0.945).
Estimating urea volume in amputees on peritoneal dialysis by modified anthropometric formulas.
Tzamaloukas, A H; Murata, G H
1996-01-01
Body composition determines body water content (the fraction body water/body weight). With developing obesity, body weight and body water increase, but body water content decreases. The anthropometric formulas for urea volume (body water) for Kt/V computations in nonamputated peritoneal dialysis subjects reflect this fundamental rule of body composition. However, the use of uncorrected anthropometric formulas in amputees provides body water content estimates inconsistent with the estimates of body composition obtained from nutritional assessment. Corrected estimates of urea volume can be obtained in three steps: (1) The non-amputated weight at the same body composition is computed by dividing the weight at the urea kinetic study (postamputation) by (1-the fractional weight loss from the amputation); (2) body water and body water content at this nonamputated weight are obtained from the appropriate anthropometric formula; (3) at the time of the urea kinetic study, post-amputation, body water is equal to the estimate of body water content obtained from step 2 times the body weight at the urea kinetic study. The corrected estimates of urea volume provide body water content values agreeing with the estimates from nutritional assessment.
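The three-step correction can be sketched as follows. Any anthropometric body-water formula can be plugged in; the Watson formula shown in the example, with fixed age and height, is used for illustration only:

```python
def corrected_urea_volume(postamp_weight_kg, frac_weight_lost, body_water_fn):
    """Sketch of the three-step correction described above. body_water_fn
    is an anthropometric total-body-water formula as a function of weight
    alone, with height/age/sex fixed by the caller."""
    # 1) non-amputated weight at the same body composition
    nonamp_weight = postamp_weight_kg / (1.0 - frac_weight_lost)
    # 2) body water content (L/kg) at that non-amputated weight
    content = body_water_fn(nonamp_weight) / nonamp_weight
    # 3) post-amputation urea volume = content * actual (post-amputation) weight
    return content * postamp_weight_kg

# Illustrative example: Watson formula for a 60-year-old, 170 cm man,
# 10% of body weight lost to amputation, 63 kg post-amputation weight
watson = lambda w: 2.447 - 0.09156 * 60 + 0.1074 * 170 + 0.3362 * w
urea_volume = corrected_urea_volume(63.0, 0.10, watson)   # litres
```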
Radar volume reflectivity estimation using an array of ground-based rainfall drop size detectors
NASA Astrophysics Data System (ADS)
Lane, John; Merceret, Francis; Kasparis, Takis; Roy, D.; Muller, Brad; Jones, W. Linwood
2000-08-01
Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained by these single-point measurements has historically been important in the study of applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that only gravity and terminal velocity due to atmospheric drag within the sampling volume influence hydrometeor dynamics. The sampling volume is characterized by wind velocities, which are input parameters to the 3D-DSD model, composed of vertical and horizontal components. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
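For reference, the single-site reflectivity factor that such Z-R work starts from can be computed directly from a measured DSD. This is a sketch under the Rayleigh-scattering assumption, not the paper's 3D-DSD model:

```python
import math

def reflectivity_dbz(drop_diameters_mm, sample_volume_m3):
    """Radar reflectivity factor from a measured drop size distribution:
    Z = (1/V) * sum(D_i^6) in mm^6 m^-3, reported as dBZ = 10*log10(Z)."""
    z = sum(d ** 6 for d in drop_diameters_mm) / sample_volume_m3
    return 10.0 * math.log10(z)
```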
NASA Astrophysics Data System (ADS)
Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.
2015-12-01
Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies only occupy a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS), the accurate determination of the hotspot's size and temperature is, however, problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
Tao, Rong; Popescu, Elena-Anda; Drake, William B.; Jackson, David N.; Popescu, Mihai
2012-01-01
Previous studies based on fetal magnetocardiographic (fMCG) recordings used simplified volume conductor models to estimate the fetal cardiac vector as an unequivocal measure of the cardiac source strength. However, the effect of simplified volume conductor modeling on the accuracy of the fMCG inverse solution remains largely unknown. Aiming to determine the sensitivity of the source estimators to the details of the volume conductor model, we performed simulations using fetal-maternal anatomical information from ultrasound images obtained in 20 pregnant women in various stages of pregnancy. The magnetic field produced by a cardiac source model was computed using the boundary element method for a piecewise homogeneous volume conductor with three nested compartments (fetal body, amniotic fluid and maternal abdomen) of different electrical conductivities. For late gestation, we also considered the case of a fourth highly insulating layer of vernix caseosa covering the fetus. The errors introduced for simplified volume conductors were assessed by comparing the reconstruction results obtained with realistic versus spherically symmetric models. Our study demonstrates a significant effect of simplified volume conductor modeling, resulting mainly in an underestimation of the cardiac vector magnitude and low goodness-of-fit. These findings are confirmed by the analysis of real fMCG data recorded in mid-gestation. PMID:22442179
Gas hydrate volume estimations on the South Shetland continental margin, Antarctic Peninsula
Jin, Y.K.; Lee, M.W.; Kim, Y.; Nam, S.H.; Kim, K.J.
2003-01-01
Multi-channel seismic data acquired on the South Shetland margin, northern Antarctic Peninsula, show that Bottom Simulating Reflectors (BSRs) are widespread in the area, implying large volumes of gas hydrates. In order to estimate the volume of gas hydrate in the area, interval velocities were determined using a 1-D velocity inversion method and porosities were deduced from their relationship with sub-bottom depth for terrigenous sediments. Because data such as well logs are not available, we made two baseline models for the velocities and porosities of non-gas hydrate-bearing sediments in the area, considering the velocity jump observed at the shallow sub-bottom depth due to joint contributions of gas hydrate and a shallow unconformity. The difference between the results of the two models is not significant. The parameters used to estimate the total volume of gas hydrate in the study area were 145 km of total length of BSRs identified on seismic profiles, 350 m thickness and 15 km width of gas hydrate-bearing sediments, and 6.3% of the average volume gas hydrate concentration (based on the second baseline model). Assuming that gas hydrates exist only where BSRs are observed, the total volume of gas hydrates along the seismic profiles in the area is about 4.8 × 10(10) m3 (7.7 × 10(12) m3 volume of methane at standard temperature and pressure).
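The quoted total can be checked directly from the stated parameters:

```python
# Arithmetic check using the values quoted in the abstract
length_m = 145e3        # total length of BSRs along seismic profiles
thickness_m = 350.0     # thickness of gas hydrate-bearing sediments
width_m = 15e3          # width of gas hydrate-bearing sediments
concentration = 0.063   # average volume concentration of gas hydrate
hydrate_volume_m3 = length_m * thickness_m * width_m * concentration
# ~4.8e10 m^3, matching the quoted total
```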
A comparison of gradient estimation methods for volume rendering on unstructured meshes.
Correa, Carlos D; Hero, Robert; Ma, Kwan-Liu
2011-03-01
This paper presents a study of gradient estimation methods for rendering unstructured-mesh volume data. Gradient estimation is necessary for rendering shaded isosurfaces and specular highlights, which provide important cues for shape and depth. Gradient estimation has been widely studied and deployed for regular-grid volume data to achieve local illumination effects, but much less so for unstructured-mesh data. As a result, most of the unstructured-mesh volume visualizations made so far were unlit. In this paper, we present a comprehensive study of gradient estimation methods for unstructured meshes with respect to their cost and performance. Through a number of benchmarks, we discuss the effects of mesh quality and scalar function complexity on the accuracy of the reconstruction, and their impact on lighting-enabled volume rendering. Based on our study, we also propose two heuristic improvements to the gradient reconstruction process. The first heuristic improves the rendering quality with a hybrid algorithm that combines the results of multiple reconstruction methods, based on the properties of a given mesh. The second heuristic improves the efficiency of its GPU implementation by restricting the computation of the gradient to a fixed-size local neighborhood. PMID:21233515
Cost and price estimate of Brayton and Stirling engines in selected production volumes
NASA Technical Reports Server (NTRS)
Fortgang, H. R.; Mayers, H. F.
1980-01-01
The methods used to determine the production costs and required selling price of Brayton and Stirling engines modified for use in solar power conversion units are presented. Each engine part, component and assembly was examined and evaluated to determine the costs of its material and the method of manufacture based on specific annual production volumes. Cost estimates are presented for both the Stirling and Brayton engines in annual production volumes of 1,000, 25,000, 100,000 and 400,000. At annual production volumes above 50,000 units, the costs of both engines are similar, although the Stirling engine costs are somewhat lower. It is concluded that modifications to both the Brayton and Stirling engine designs could reduce the estimated costs.
Gross-merchantable timber volume estimation using an airborne lidar system
NASA Technical Reports Server (NTRS)
Maclean, G. A.; Krabill, W. B.
1986-01-01
A preliminary study to determine the utility of an airborne laser as a tool for use by forest managers to estimate gross-merchantable timber volume was conducted near the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center, Wallops Flight Facility utilizing the NASA Airborne Oceanographic Lidar (AOL) system. Measured timber volume was regressed against the cross-sectional area of an AOL-generated profile of forest at the same location. The AOL profile area was found to be a very significant variable in the estimation of gross-merchantable timber volume. Significant improvements were obtained when the data were stratified by species. The overall R-squared value obtained was 0.921 with the regression significant at the one percent level.
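The regression described can be sketched as a simple ordinary-least-squares fit of measured volume on profile area. This is illustrative only, not the study's species-stratified model:

```python
import numpy as np

def fit_volume_on_profile_area(profile_area, measured_volume):
    """OLS of measured timber volume on lidar profile cross-sectional
    area. Returns ((intercept, slope), R^2)."""
    A = np.column_stack([np.ones_like(profile_area), profile_area])
    coef, *_ = np.linalg.lstsq(A, measured_volume, rcond=None)
    resid = measured_volume - A @ coef
    ss_tot = np.sum((measured_volume - measured_volume.mean()) ** 2)
    r2 = 1.0 - (resid @ resid) / ss_tot
    return coef, r2
```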
Madureira, Tânia Vieira; Lopes, Célia; Malhão, Fernanda; Rocha, Eduardo
2015-02-01
Accurately assessing changes in the intracellular volumes (or numbers) of peroxisomes within a cell can be a lengthy task, because unbiased estimations can be made only by studies conducted under transmission electron microscopy. Yet, such information is often required, namely for correlations with functional data. The optimization and applicability of a new and fast technical procedure based on catalase immunofluorescence were demonstrated herein by using primary hepatocytes from brown trout (Salmo trutta f. fario), exposed during 96 h to two distinct treatments (0.1% ethanol and 50 µM of 17α-ethynylestradiol). The time and cost efficiency, together with the results obtained by stereological analyses, specifically directed to the volume densities of peroxisomes, and additionally of the nucleus in relation to the hepatocyte, were compared with the well-established 3,3'-diaminobenzidine cytochemistry for electron microscopy. With the immuno technique it was possible to correctly distinguish punctate peroxisomal profiles, allowing the selection of the marked organelles for quantification. By both methodologies, a significant reduction in the volume density of the peroxisome within the hepatocyte was obtained after an estrogenic input. The most interesting point here was that the volume density ratios were highly correlated between the two techniques. Overall, the immunofluorescence protocol for catalase was evidently faster and cheaper, and provided reliable quantitative data that discriminated the compared groups in the same way. After this validation study, we recommend the use of catalase immunofluorescence as the first option for rapid screening of changes in the amount of hepatocytic peroxisomes, using their volume density as an indicator.
Souto, F; Heger, A
2001-02-01
Aqueous homogeneous solution reactors have been proposed for the production of medical isotopes. However, the reactivity effects of fuel solution volume change, due to formation of radiolytic gas bubbles and thermal expansion, have to be mitigated to allow steady-state operation of solution reactors. The results of the free-run experiments analyzed indicate that the proposed model to estimate the void volume due to radiolytic gas bubbles and thermal expansion in solution reactors can accurately describe the observed behavior during the experiments. This void volume due to radiolytic gas bubbles and fuel solution thermal expansion can then be used in the investigation of reactivity effects in fissile solutions. In addition, these experiments confirm that the radiolytic gas bubbles are formed at a higher temperature than the fuel solution temperature. These experiments also indicate that the mole-weighted average diameter of the radiolytic gas bubbles in uranyl fluoride solutions is about 1 µm. Finally, it should be noted that another model, currently under development, would simulate the power behavior during the transient given the initial fuel solution level and density. The model is based on Monte Carlo simulation with the MCNP computer code [Briesmeister, 1997] to obtain the reactor reactivity as a function of the fuel solution density, which, in turn, changes due to thermal expansion and radiolytic gas bubble formation.
Southekal, Sudeepti; McQuaid, Sarah J; Kijewski, Marie Foley; Moore, Stephen C
2012-02-01
A new method of compensating for tissue-fraction and count-spillover effects, which require tissue segmentation only within a small volume surrounding the primary lesion of interest, was evaluated for SPECT imaging. Tissue-activity concentration estimates are obtained by fitting the measured projection data to a statistical model of the segmented tissue projections. Multiple realizations of two simulated human-torso phantoms, each containing 20 spherical 'tumours', 1.6 cm in diameter, with tumour-to-background ratios of 8:1 and 4:1, were simulated. Estimates of tumour- and background-activity concentration values for homogeneous as well as inhomogeneous tissue activities were compared to the standard uptake value (SUV) metrics on the basis of accuracy and precision. For perfectly registered, high-contrast, superficial lesions in a homogeneous background without scatter, the method yielded accurate (<0.4% bias) and precise (<6.1%) recovery of the simulated activity values, significantly outperforming the SUV metrics. Tissue inhomogeneities, greater tumour depth and lower contrast ratios degraded precision (up to 11.7%), but the estimates remained almost unbiased. The method was comparable in accuracy but more precise than a well-established matrix inversion approach, even when errors in tumour size and position were introduced to simulate moderate inaccuracies in segmentation and image registration. Photon scatter in the object did not significantly affect the accuracy or precision of the estimates. PMID:22241591
NASA Astrophysics Data System (ADS)
De Vuyst, Florian
2004-01-01
This exploratory work presents first results of a novel approach for the numerical approximation of solutions of hyperbolic systems of conservation laws. The objective is to define stable and "reasonably" accurate numerical schemes while being free from any upwind process and from any computation of derivatives or mean Jacobian matrices. That means that we only want to perform flux evaluations. This would be useful for "complicated" systems like those of two-phase models where solutions of Riemann problems are hard, if not impossible, to compute. For Riemann or Roe-like solvers, each fluid model needs a particular computation of the Jacobian matrix of the flux, and the hyperbolicity property, which can be conditional for some of these models, means the matrices are not R-diagonalizable everywhere in the admissible state space. In this paper, we rather propose numerical schemes where stability is obtained using convexity considerations. A certain rate of accuracy is also expected. For that, we propose to build numerical hybrid fluxes that are convex combinations of the second-order Lax-Wendroff scheme flux and the first-order modified Lax-Friedrichs scheme flux with an "optimal" combination rate that ensures both minimal numerical dissipation and good accuracy. The resulting scheme is a central-scheme-like method. We will also need and propose a definition of local dissipation by convexity for hyperbolic or elliptic-hyperbolic systems. This convexity argument allows us to overcome the difficulty of nonexistence of classical entropy-flux pairs for certain systems. We emphasize the systematic feature of the method, which can be rapidly implemented or adapted to any kind of system, with general analytical or data-tabulated equations of state. The numerical results presented in the paper are not superior to many existing state-of-the-art numerical methods for conservation laws such as ENO, MUSCL or the central schemes of Tadmor and coworkers. The interest is rather
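For linear advection, the convex flux blend described above can be sketched as follows. A fixed blending weight stands in for the paper's locally "optimal" combination rate, so this is an illustrative simplification:

```python
import numpy as np

def hybrid_step(u, a, dt, dx, theta=0.8):
    """One periodic time step for linear advection u_t + a*u_x = 0,
    blending the Lax-Wendroff flux (accurate) with the modified
    Lax-Friedrichs flux (dissipative). theta in [0, 1] is a fixed
    blending weight here, not the paper's adaptive choice."""
    nu = a * dt / dx
    up = np.roll(u, -1)                               # u_{i+1}
    f_lw = 0.5 * a * (u + up) - 0.5 * a * nu * (up - u)
    f_lf = 0.5 * a * (u + up) - 0.5 * (dx / dt) * (up - u)
    f = theta * f_lw + (1.0 - theta) * f_lf           # flux at i+1/2
    return u - (dt / dx) * (f - np.roll(f, 1))        # conservative update
```

Because the update is in conservation form on a periodic grid, the total of `u` is preserved exactly, and a constant state is left unchanged.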
NASA Astrophysics Data System (ADS)
Kim, K. M.
2016-06-01
Traditional field methods for measuring tree heights are often too costly and time consuming. An alternative remote sensing approach is to measure tree heights from digital stereo photographs, which is more practical for forest managers and less expensive than LiDAR or synthetic aperture radar. This work proposes an estimation of stand height and forest volume (m3/ha) using a normalized digital surface model (nDSM) from high-resolution stereo photography (25 cm resolution) and a forest type map. The study area was located in the Mt. Maehwa model forest in Hong Chun-Gun, South Korea. The forest type map has four attributes by stand: major species, age class, DBH class and crown density class. Overlapping aerial photos were taken in September 2013 and a digital surface model (DSM) was created by photogrammetric methods (aerial triangulation, digital image matching). Then, a digital terrain model (DTM) was created by filtering the DSM, and the DTM was subtracted from the DSM pixel by pixel, resulting in an nDSM which represents object heights (buildings, trees, etc.). Two independent variables from the nDSM were used to estimate forest stand volume: crown density (%) and stand height (m). First, crown density was calculated using a canopy segmentation method considering live crown ratio. Next, stand height was produced by averaging individual tree heights in a stand using Esri's ArcGIS and the USDA Forest Service's FUSION software. Finally, stand volume was estimated and mapped using aerial-photo stand volume equations by species, which have two independent variables, crown density and stand height. South Korea has a historical imagery archive which can show forest change over 40 years of successful forest rehabilitation. For a future study, a forest volume change map (1970s-present) will be produced using this stand volume estimation method and the historical imagery archive.
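The nDSM step and a generic volume equation can be sketched as follows. The linear form and its coefficients are hypothetical stand-ins for the species-specific aerial-photo volume equations used in the study:

```python
import numpy as np

def normalized_dsm(dsm, dtm):
    # Object heights: nDSM = DSM - DTM, clipped at zero
    return np.maximum(dsm - dtm, 0.0)

def stand_volume(ndsm, a, b, c, canopy_threshold_m=2.0):
    """Sketch of a volume equation of the generic form
    V = a + b*crown_density + c*stand_height, with crown density taken as
    the percentage of pixels above a canopy height threshold and stand
    height as the mean canopy height. Coefficients a, b, c are hypothetical."""
    canopy = ndsm > canopy_threshold_m
    crown_density = 100.0 * canopy.mean()                  # percent
    stand_height = ndsm[canopy].mean() if canopy.any() else 0.0
    return a + b * crown_density + c * stand_height
```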
Estimation of single cell volume from 3D confocal images using automatic data processing
NASA Astrophysics Data System (ADS)
Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.
2012-06-01
Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of the single-cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess the cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings, and it is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool to investigate changes in the cell volume during normal as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Larson, Mats G.
2000-01-01
We consider a posteriori error estimates for finite volume and finite element methods on arbitrary meshes subject to prescribed error functionals. Error estimates of this type are useful in a number of computational settings: (1) quantitative prediction of the numerical solution error, (2) adaptive meshing, and (3) load balancing of work on parallel computing architectures. Our analysis recasts the class of Godunov finite volume schemes as a particular form of discontinuous Galerkin method utilizing a broken-space approximation obtained via reconstruction of cell-averaged data. In this general framework, weighted residual error bounds are readily obtained using duality arguments and Galerkin orthogonality. Additional consideration is given to issues such as nonlinearity, efficiency, and the relationship to other existing methods. Numerical examples are given throughout the talk to demonstrate the sharpness of the estimates and efficiency of the techniques. Additional information is contained in the original.
Weight and volume estimates for aluminum-air batteries designed for electric vehicle applications
Cooper, J.F.
1980-01-01
The weights and volumes of reactants, electrolyte, and hardware components are estimated for an aluminum-air battery designed to deliver 40 kW (peak) of power and 70 kWh of energy. Generalized equations are derived which express battery power and energy content as functions of total anode area, aluminum-anode weight, and discharge current density. Equations are also presented which express total battery weight and volume as linear combinations of the variables anode area and anode weight. The sizing and placement of battery components within the engine compartment of typical five-passenger vehicles is briefly discussed.
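The linear sizing relations described above can be sketched generically. The coefficients below are made-up placeholders showing only the functional form (fixed term plus area- and mass-scaled terms), not the report's derived values.

```python
# Illustrative only: the report expresses battery weight and volume as
# linear combinations of anode area A (m^2) and anode mass M (kg).
# All coefficients here are hypothetical placeholders.
def battery_weight(A, M, w0=50.0, cA=8.0, cM=1.6):
    """Total battery weight (kg) = fixed hardware + area and mass terms."""
    return w0 + cA * A + cM * M

def battery_volume(A, M, v0=30.0, vA=6.0, vM=0.4):
    """Total battery volume (litres), same linear form."""
    return v0 + vA * A + vM * M

print(battery_weight(10.0, 100.0))  # 50 + 80 + 160 = 290.0 kg
```

The value of the linear form is that, once the coefficients are fitted, power/energy targets map directly onto anode area and mass, and thence onto pack weight and volume.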
Reljin, Natasa; Reyes, Bersain A; Chon, Ki H
2015-04-27
In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The performance of the estimation, using the BFD and SE methods, was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (the NRMSE was at least a factor of two smaller). The smallest NRMSE, 15.877% ± 9.246% (mean ± standard deviation), was obtained with the BFD and the exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days.
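The NRMSE figure of merit used above is straightforward to compute. A minimal sketch, with made-up tidal-volume data and range normalization (the paper may normalize differently):

```python
import numpy as np

def nrmse(estimated, reference):
    """Normalized root-mean-squared error (%), normalizing the RMSE by
    the range of the reference signal (one common convention)."""
    estimated, reference = np.asarray(estimated), np.asarray(reference)
    rmse = np.sqrt(np.mean((estimated - reference) ** 2))
    return 100.0 * rmse / (reference.max() - reference.min())

# Hypothetical tidal volumes in litres: reference (Respitrace) vs estimate
vol_ref = np.array([0.40, 0.55, 0.70, 0.90, 1.20])
vol_est = np.array([0.42, 0.50, 0.75, 0.95, 1.10])
print(round(nrmse(vol_est, vol_ref), 2))  # 7.48 for this toy data
```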
Piccoli, Antonio
2014-04-01
Both single-frequency bioimpedance and multiple-frequency spectroscopy are equally accurate in measuring total-body water and intracellular fluid. Estimates are consistent at a population level but not at the individual level, because of wide limits of agreement. There is no real 'gold standard' method providing estimates with absolute accuracy (in liters). Bioelectrical impedance vector analysis allows comparison of the actual body impedance with that of the reference population (in Ω/m). Hemodialysis prescription can be optimized with the use of this feedback.
Belzer, D.B.; Serot, D.E.; Kellogg, M.A.
1991-03-01
Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner that allows evaluation of current trends in capacity and consideration of uncertainties in various data inputs and engineering assumptions. This study developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key nonmanufacturing sectors. This volume lays out the general concepts and methods used to develop the emergency operating estimates. The historical analysis of capacity extends from 1974 through 1986. Some nonmanufacturing industries are included: in addition to mining and utilities, key industries in transportation, communication, and services were analyzed. Physical capacity and efficiency of production were measured. 3 refs., 2 figs., 12 tabs. (JF)
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The objective of this contract was to provide definition of alternate propulsion systems for both earth-to-orbit (ETO) and in-space vehicles (upper stages and space transfer vehicles). For such propulsion systems, technical data describing performance, weight, dimensions, etc., was provided, along with programmatic information such as cost, schedule, needed facilities, etc. Advanced technology and advanced development needs were determined and provided. This volume separately presents the various program cost estimates that were generated under three tasks: the F-1A Restart Task, the J-2S Restart Task, and the SSME Upper Stage Use Task. The conclusions, technical results, and the program cost estimates are described in more detail in Volume I - Executive Summary and in individual Final Task Reports.
Baseline estimate of the retained gas volume in Tank 241-C-106
Stewart, C.W.; Chen, G.
1998-06-01
This report presents the results of a study of the retained gas volume in Hanford Tank 241-C-106 (C-106) using the barometric pressure effect method. This estimate is required to establish the baseline conditions for sluicing the waste from C-106 into AY-102, scheduled to begin in the fall of 1998. The barometric pressure effect model is described, and the data reduction and detrending techniques are detailed. Based on the response of the waste level to the larger barometric pressure swings that occurred between October 27, 1997, and March 4, 1998, the best estimate and conservative (99% confidence) retained gas volumes in C-106 are 24 scm (840 scf) and 50 scm (1,770 scf), respectively. This is equivalent to average void fractions of 0.025 and 0.053, respectively.
A Progressive Black Top Hat Transformation Algorithm for Estimating Valley Volumes from DEM Data
NASA Astrophysics Data System (ADS)
Luo, W.; Pingel, T.; Heo, J.; Howard, A. D.
2013-12-01
The amount of valley incision and valley volume are important parameters in geomorphology and hydrology research, because they are related to the amount of erosion (and thus the volume of sediments) and the amount of water needed to create the valley. This is the case not only for terrestrial research but also for planetary research, for example in figuring out how much water was on Mars. With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. However, previous studies typically use a single structuring-element size for extracting the valley feature and a single threshold value for removing noise, with the result that finer features such as tributaries are not extracted and valley volume is underestimated. Inspired by similar algorithms used in LiDAR data analysis to separate above-ground features from bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the structuring-element size is progressively increased to extract valleys of different orders. In addition, a slope-based threshold was introduced to automatically adjust the threshold values for structuring elements of different sizes. Connectivity and shape parameters of the masked regions were used to keep the long linear valleys while removing other, smaller non-connected regions. Preliminary application of the PBTH to the Grand Canyon and two sites on Mars has produced promising results. More testing and fine-tuning is in progress. The ultimate goal of the project is to apply the algorithm to estimate the volume of the valley networks on Mars and the volume of water needed to form the valleys we observe today, and thus infer the nature of the hydrologic cycle on early Mars. The project is funded by NASA's Mars Data Analysis program.
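The basic BTH pass, iterated over increasing structuring-element sizes, can be sketched with SciPy's grey-scale morphology. This illustrates the progressive idea only: the slope-based adaptive threshold and the connectivity/shape filtering described above are reduced here to a fixed per-size threshold.

```python
import numpy as np
from scipy import ndimage

def progressive_bth(dem, sizes, thresholds):
    """Sketch of a progressive black-top-hat: apply the BTH with
    increasing structuring-element sizes so valleys of increasing
    order are captured, keeping the deepest response per pixel."""
    depth = np.zeros_like(dem, dtype=float)
    for size, thr in zip(sizes, thresholds):
        bth = ndimage.black_tophat(dem, size=size)  # dark (low) features
        depth = np.maximum(depth, np.where(bth > thr, bth, 0.0))
    return depth

# Synthetic DEM: a flat plateau with a V-shaped valley cut into it
x = np.arange(64)
dem = np.tile(100.0 - np.maximum(0, 10 - np.abs(x - 32)), (64, 1))
depth = progressive_bth(dem, sizes=[5, 15, 31], thresholds=[0.5, 0.5, 0.5])
volume = depth.sum() * 1.0  # cell area = 1; valley volume in DEM units
print(depth.max())          # deepest extracted incision, 10.0 here
```

Small structuring elements recover only narrow tributaries; the valley is captured in full once the element exceeds the valley width, which is why a progression of sizes outperforms any single size.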
Automatic atlas-based volume estimation of human brain regions from MR images
Andreasen, N.C.; Rajarethinam, R.; Cizadlo, T.; Arndt, S.
1996-01-01
MRI offers many opportunities for noninvasive in vivo measurement of structure-function relationships in the human brain. Although automated methods are now available for whole-brain measurements, an efficient and valid automatic method for volume estimation of subregions such as the frontal or temporal lobes is still needed. We adapted the Talairach atlas to the study of brain subregions. We supplemented the atlas with additional boxes to include the cerebellum. We assigned all the boxes to 1 of 12 regions of interest (ROIs) (frontal, parietal, temporal, and occipital lobes, cerebellum, and subcortical regions on right and left sides of the brain). Using T1-weighted MR scans collected with an SPGR sequence (slice thickness = 1.5 mm), we manually traced these ROIs and produced volume estimates. We then transformed the scans into Talairach space and compared the volumes produced by the two methods ("traced" versus "automatic"). The traced measurements were considered to be the "gold standard" against which the automatic measurements were compared. The automatic method was found to produce measurements that were nearly identical to the traced method. We compared absolute measurements of volume produced by the two methods, as well as the sensitivity and specificity of the automatic method. We also compared the measurements of cerebral blood flow obtained through [¹⁵O]H₂O PET studies in a sample of nine subjects. Absolute measurements of volume produced by the two methods were very similar, and the sensitivity and specificity of the automatic method were found to be high for all regions. The flow values were also found to be very similar by both methods. The automatic atlas-based method for measuring the volume of brain subregions produces results that are similar to manual techniques. 39 refs., 4 figs., 3 tabs.
Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng
2015-01-01
Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, the gyroscope's accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5°, respectively, for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, compared to the reference path. PMID:25961384
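As background to the fusion problem above, a single magnetometer-plus-accelerometer heading fix (a tilt-compensated compass) can be sketched as follows; the paper itself fuses this kind of measurement with gyro data in a quaternion UKF, which is not reproduced here. The axis and sign conventions below are assumptions.

```python
import math

def tilt_compensated_heading(accel, mag):
    """Heading (degrees, clockwise from magnetic north) from one
    accelerometer and one magnetometer sample. Sensor frame assumed:
    x forward, y right, z down; accel measures gravity at rest."""
    ax, ay, az = accel
    # Roll and pitch from the gravity vector
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    mx, my, mz = mag
    # Rotate the magnetometer reading into the horizontal plane
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(yh, xh)) % 360.0

# Level sensor, horizontal field component along +x (north): heading 0
print(tilt_compensated_heading((0.0, 0.0, 9.81), (0.3, 0.0, 0.4)))  # → 0.0
```

This single-shot heading is exactly the measurement that magnetic disturbances corrupt, which is why the paper blends it with gyro integration in a UKF rather than using it alone.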
Marshall, B.D.; Neymark, L.A.; Peterman, Z.E.
2003-01-01
Low-temperature calcite and opal record the past seepage of water into open fractures and lithophysal cavities in the unsaturated zone at Yucca Mountain, Nevada, site of a proposed high-level radioactive waste repository. Systematic measurements of calcite and opal coatings in the Exploratory Studies Facility (ESF) tunnel at the proposed repository horizon are used to estimate the volume of calcite at each site of calcite and/or opal deposition. By estimating the volume of water required to precipitate the measured volumes of calcite in the unsaturated zone, seepage rates of 0.005 to 5 liters/year (l/year) are calculated at the median and 95th percentile of the measured volumes, respectively. These seepage rates are at the low end of the range of seepage rates from recent performance assessment (PA) calculations, confirming the conservative nature of the performance assessment. However, the distribution of the calcite and opal coatings indicates that a much larger fraction of the potential waste packages would be contacted by this seepage than is calculated in the performance assessment.
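The mass-balance reasoning above (calcite volume → water volume → seepage rate) can be illustrated with a back-of-envelope calculation. Calcite's density and molar mass are standard values; the calcite volume, dissolved-calcium concentration, and deposition interval below are illustrative placeholders, not the paper's numbers.

```python
# Back-of-envelope mass balance: how much water must seep through to
# precipitate a measured calcite volume.
RHO_CALCITE = 2.71    # g/cm^3, density of calcite
M_CALCITE = 100.09    # g/mol, molar mass of CaCO3

def seepage_rate_l_per_yr(calcite_cm3, ca_mol_per_l, years):
    """Average seepage rate (litres/year) implied by a calcite deposit,
    assuming all dissolved Ca precipitates from the seeping water."""
    moles = calcite_cm3 * RHO_CALCITE / M_CALCITE
    water_litres = moles / ca_mol_per_l   # water needed to carry that Ca
    return water_litres / years

# e.g. 100 cm^3 of calcite, 1 mmol/L dissolved Ca, deposited over 1 Myr
print(seepage_rate_l_per_yr(100.0, 1e-3, 1e6))  # ≈ 0.0027 litres/year
```

The assumption that all dissolved calcium precipitates makes this a lower bound on the water flux, one reason such estimates sit at the conservative end of the range.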
Volume and Mass Estimation of Three-Phase High Power Transformers for Space Applications
NASA Technical Reports Server (NTRS)
Kimnach, Greg L.
2004-01-01
Spacecraft historically have had sub-1-kWe electrical requirements for GN&C, science, and communications: Galileo at 600 We and Cassini at 900 We, for example. Because most missions have had power requirements of the same order of magnitude, the Power Distribution Systems (PDS) use existing, space-qualified technology and are DC. As science payload and mission duration requirements increase, however, the required electrical power increases. This requires a change from passive energy conversion (solar arrays and batteries) to dynamic conversion (alternator, solar dynamic, etc.), because dynamic conversion has higher thermal and conversion efficiencies, higher power densities, and scales more readily to higher power levels. Furthermore, increased power requirements and physical distribution lengths are best served with high-voltage, multi-phase AC to maintain distribution efficiency and minimize voltage drops. The generated AC voltage must be stepped up (or down) to interface with various subsystems or electrical hardware. Part of the trade-space design for AC distribution systems is volume and mass estimation of high-power transformers. The volume and mass are functions of the power rating, the operating frequency, the ambient temperature and allowable temperature rise, the types and amount of heat transfer available, the core material and shape, the required flux density in the core, the maximum current density, etc. McLyman has tabulated the performance of a number of transformer cores and derived a "cookbook" methodology to determine the volume of transformers, whereas Schwarze derived an empirical method to estimate the mass of single-phase transformers. Based on the work of McLyman and Schwarze, it is the intent herein to derive an empirical solution to the volume and mass estimation of three-phase, laminated EI-core power transformers, having radiated and conducted heat transfer mechanisms available. Estimation of the mounting hardware, connectors
Goovaerts, H G; Faes, T J; de Valk-de Roo, G W; ten Bolscher, M; Netelenbosch, J C; van der Vijgh, W J; Heethaar, R M
1998-11-01
In order to determine body fluid shifts between the intra- and extra-cellular spaces, multifrequency impedance measurement is performed. According to the Cole-Cole extrapolation, lumped values of intra- and extra-cellular conduction can be estimated, commonly expressed as the resistances Ri and Re, respectively. For this purpose the magnitude and phase of the impedance under study are determined at a number of frequencies in the range between 5 kHz and 1 MHz. An approach to determining intra- and extra-cellular conduction on the basis of Bode analysis is presented in this article. On this basis, the ratio between intra- and extra-cellular conduction could be estimated by phase measurement only, midrange in the bandwidth of interest. An important feature of the proposed approach is that the relation between intra- and extra-cellular conduction can be continuously monitored by phase measurement, and no curve fitting whatsoever is required. Based on a two-frequency measurement determining Re at 4 kHz and φmax at 64 kHz, it proved possible to estimate extra-cellular volume (ECV) more accurately than with the estimation based on extrapolation according to the Cole-Cole model in 26 patients. Reference values of ECV were determined by sodium bromide. The results show a correlation of 0.90 with the reference method. The average error of ECV estimation was -3.6% (SD 8.4), whereas the Cole-Cole extrapolation showed an error of 13.2% (SD 9.5).
Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris
2012-01-01
Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652-964) and 704 birds in 2011 (95% CI = 579-837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of the models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037-3,965) and 2,461 birds in 2011 (95% CI = 1,682-3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, to produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers was similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving precision and the resulting population size and trend estimation. The method is also better suited for the steep and uneven terrain of Nihoa
O'Neill, P.J.
1990-10-01
This paper briefly describes the development of tracer gas techniques. These techniques were introduced over 50 years ago and have evolved into a number of distinct methods. These methods are often tailored to a specific application or to obtaining particular information about the flows and volumes of a system. Single-zone techniques are utilized when the structure or zone is relatively well mixed and can be characterized by a single concentration measurement. Areas or rooms within a single-family residence can sometimes be closely approximated as one well-mixed zone. Multizone techniques are required when the building is composed of two or more zones which communicate with one another through interzonal airflows. Commercial office buildings are usually multizone systems. This paper focuses on single and multiple gas tracer techniques. Traditionally, multizone systems have been analyzed by using a different tracer for each zone. These techniques require equipment capable of accurately injecting and detecting each of the tracers, which can be cumbersome in large-order systems. Recently, a number of methods have been proposed which use a single tracer gas to estimate flows and effective volumes in multizone systems. 24 refs., 2 figs., 1 tab.
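For the single-zone case above, the classic tracer-gas decay analysis is a one-liner: after injection stops, a well-mixed zone's tracer concentration decays exponentially at the air-change rate. A textbook sketch, not a method specific to this paper:

```python
import math

def air_change_rate(c0, c_t, hours):
    """Air changes per hour from two concentration readings, using the
    well-mixed decay model C(t) = C0 * exp(-(Q/V) * t)."""
    return math.log(c0 / c_t) / hours

def airflow_m3_per_h(c0, c_t, hours, zone_volume_m3):
    """Airflow Q for a zone of known volume V: Q = (air-change rate) * V."""
    return air_change_rate(c0, c_t, hours) * zone_volume_m3

# 50 ppm decaying to 25 ppm in 1 hour in a 250 m^3 zone
print(round(airflow_m3_per_h(50.0, 25.0, 1.0, 250.0), 1))  # 173.3 m^3/h
```

The multizone methods the paper surveys generalize this scalar balance to a matrix of interzonal flows, which is why they need either one tracer per zone or the more recent single-tracer formulations.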
Mello, Beatriz; Schrago, Carlos G
2014-01-01
Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333
Generating human reliability estimates using expert judgment. Volume 1. Main report
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from the evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
Detection and Volume Estimation of Large Landslides by Using Multi-temporal Remote Sensing Data
NASA Astrophysics Data System (ADS)
Hsieh, Yu-chung; Hou, Chin-Shyong; Chan, Yu-Chang; Hu, Jyr-Ching; Fei, Li-Yuan; Chen, Hung-Jen; Chiu, Cheng-Lung
2014-05-01
Large landslides are frequently triggered by strong earthquakes and heavy rainfalls in the mountainous areas of Taiwan. The heavy rainfall brought by Typhoon Morakot triggered a large number of landslides. The most unfortunate case occurred in the Xiaolin village, which was totally demolished by a catastrophic landslide in less than a minute. Continued and detailed study of the characteristics of large landslides is urgently needed to mitigate loss of lives and properties in the future. Traditionally known techniques cannot effectively extract landslide parameters, such as depth, amount and volume, which are essential in all the phases of landslide assessment. In addition, it is very important to record the changes of landslide deposits after the landslide events as accurately as possible to better understand the landslide erosion process. The acquisition of digital elevation models (DEMs) is considered necessary for achieving accurate, effective and quantitative landslide assessments. A new technique is presented in this study for quickly assessing extensive areas of large landslides. The technique uses DEMs extracted from several remote sensing approaches, including aerial photogrammetry, airborne LiDAR and UAV photogrammetry. We chose a large landslide event that occurred after Typhoon Sinlaku in 2008 at Mount Meiyuan, central Taiwan. We collected and processed six data sets, including aerial photos, airborne LiDAR data and UAV photos, at different times from 2005 to 2013. Our analyses show the landslide volume to be 17.14 × 10⁶ cubic meters and the deposition volume 12.75 × 10⁶ cubic meters, with about 4.38 × 10⁶ cubic meters washed out of the region. The residual deposition ratio of this area was about 74% in 2008, while after a few years it dropped below 50%. We also analyzed riverbed changes and sediment transfer patterns from 2005 to 2013 using multi-temporal remote sensing data with desirable accuracy. The developed
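The volumetric bookkeeping above rests on differencing co-registered multi-temporal DEMs. A minimal sketch with toy arrays (real DEMs would be large rasters, and co-registration error would need handling):

```python
import numpy as np

# Subtract the pre-event DEM from the post-event DEM and integrate
# positive (deposition) and negative (erosion) changes over the cell area.
cell_area = 4.0                              # e.g. 2 m x 2 m cells
dem_pre  = np.array([[100.0, 102.0], [101.0, 103.0]])
dem_post = np.array([[ 95.0, 102.0], [103.0, 101.0]])

dz = dem_post - dem_pre
deposition = dz[dz > 0].sum() * cell_area    # material gained (m^3)
erosion    = -dz[dz < 0].sum() * cell_area   # material lost (m^3)
print(erosion, deposition)                   # 28.0 8.0
```

The difference between eroded and deposited volume (20 m³ here) is the material exported from the mapped area, the same balance that gives the paper its washed-out volume and residual deposition ratio.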
How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates
ERIC Educational Resources Information Center
Otterbach, Steffen; Sousa-Poza, Alfonso
2010-01-01
This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…
Haasl, Ryan J; Payseur, Bret A
2010-12-01
Theoretical work focused on microsatellite variation has produced a number of important results, including the expected distribution of repeat sizes and the expected squared difference in repeat size between two randomly selected samples. However, closed-form expressions for the sampling distribution and frequency spectrum of microsatellite variation have not been identified. Here, we use coalescent simulations of the stepwise mutation model to develop gamma and exponential approximations of the microsatellite allele frequency spectrum, a distribution central to the description of microsatellite variation across the genome. For both approximations, the parameter of biological relevance is the number of alleles at a locus, which we express as a function of θ, the population-scaled mutation rate, based on simulated data. Discovered relationships between θ, the number of alleles, and the frequency spectrum support the development of three new estimators of microsatellite θ. The three estimators exhibit roughly similar mean squared errors (MSEs) and all are biased. However, across a broad range of sample sizes and θ values, the MSEs of these estimators are frequently lower than all other estimators tested. The new estimators are also reasonably robust to mutation that includes step sizes greater than one. Finally, our approximation to the microsatellite allele frequency spectrum provides a null distribution of microsatellite variation. In this context, a preliminary analysis of the effects of demographic change on the frequency spectrum is performed. We suggest that simulations of the microsatellite frequency spectrum under evolutionary scenarios of interest may guide investigators to the use of relevant and sometimes novel summary statistics.
NASA Astrophysics Data System (ADS)
Martínez-Sánchez, J.; Puente, I.; González-Jorge, H.; Riveiro, B.; Arias, P.
2016-06-01
When ground conditions are weak, particularly in free-formed tunnel linings or retaining walls, sprayed concrete (shotcrete) can be applied to the exposed surfaces immediately after excavation. In these situations, shotcrete is normally applied conjointly with rock bolts and mesh, thereby supporting the loose material that causes many of the small ground falls. On the other hand, contractors want to determine the thickness and volume of sprayed concrete for both technical and economic reasons: to guarantee its structural strength, but also to avoid delivering excess material that they will not be paid for. In this paper, we first introduce a terrestrial LiDAR-based method for the automatic detection of rock bolts, as typically used in anchored retaining walls. These ground support elements are segmented based on their geometry and serve as control points for the co-registration of two successive scans, before and after shotcreting. We then compare both point clouds to estimate the thickness of the sprayed concrete and the volume of material expended on the wall. This novel methodology is demonstrated on repeated scan data from a retaining wall in the city of Vigo (Spain), resulting in a rock-bolt detection rate of 91%, which permits detailed thickness information to be obtained and a total concrete volume of 3597 litres to be calculated. These results verify the effectiveness of the developed approach, increasing productivity and improving on previous empirical proposals for real-time thickness estimation.
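The cloud-to-cloud comparison step can be sketched as a nearest-neighbour distance computation between the two registered scans; the flat 1 m x 1 m wall patch and the uniform 5 cm layer are hypothetical, and a real workflow would first co-register the scans via the detected rock bolts as described above:

```python
import numpy as np
from scipy.spatial import cKDTree

def shotcrete_thickness(before, after):
    """Per-point thickness: distance from each post-spray point to its
    nearest neighbour in the bare-wall scan. Assumes the two clouds are
    already co-registered."""
    tree = cKDTree(before)
    d, _ = tree.query(after)
    return d

# toy example: a flat 1 m x 1 m wall patch sprayed with a uniform 5 cm layer
g = np.linspace(0.0, 1.0, 11)
xx, yy = np.meshgrid(g, g)
before = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
after = before + np.array([0.0, 0.0, 0.05])

t = shotcrete_thickness(before, after)
volume_litres = t.mean() * 1.0 * 1.0 * 1000.0  # mean thickness x area, in litres
```

For curved walls the nearest-neighbour distance underestimates thickness along the surface normal, so production tools typically project along local normals instead.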
Astrometric telescope facility. Preliminary systems definition study. Volume 3: Cost estimate
NASA Technical Reports Server (NTRS)
Sobeck, Charlie (Editor)
1987-01-01
The results of the Astrometric Telescope Facility (ATF) Preliminary System Definition Study conducted in the period between March and September 1986 are described. The main body of the report consists primarily of the charts presented at the study final review, which was held at NASA Ames Research Center on July 30 and 31, 1986. The charts have been revised to reflect the results of that review. Explanations for the charts are provided on the adjoining pages where required. Note that charts which have been changed or added since the review are dated 10/1/86; unchanged charts carry the review date 7/30/86. In addition, a narrative summary of the study results is presented, along with two appendices. The first appendix is a copy of the ATF Characteristics and Requirements Document generated as part of the study. The second appendix shows the inputs to the Space Station Mission Requirements Data Base submitted in May 1986. The report is issued in three volumes. Volume 1 contains an executive summary of the ATF mission, strawman design, and study results. Volume 2 contains the detailed study information. Volume 3 has the ATF cost estimate, and will have limited distribution.
Estimating retained gas volumes in the Hanford tanks using waste level measurements
Whitney, P.D.; Chen, G.; Gauglitz, P.A.; Meyer, P.A.; Miller, N.E.
1997-09-01
The Hanford site is home to 177 large, underground nuclear waste storage tanks. Safety and environmental concerns surround these tanks and their contents. One such concern is the propensity for the waste in these tanks to generate and trap flammable gases. This report focuses on understanding and improving the quality of retained gas volume estimates derived from tank waste level measurements. While direct measurements of gas volume are available for a small number of the Hanford tanks, the increasingly wide availability of tank waste level measurements provides an opportunity for less expensive (than direct gas volume measurement) assessment of gas hazard for the Hanford tanks. Retained gas in the tank waste is inferred from level measurements -- either a long-term increase in the tank waste level, or fluctuations in tank waste level with atmospheric pressure changes. This report concentrates on the latter phenomenon. As atmospheric pressure increases, the pressure on the gas in the tank waste increases, resulting in a level decrease (as long as the tank waste is "soft" enough). Tanks with waste levels exhibiting fluctuations inversely correlated with atmospheric pressure fluctuations were catalogued in an earlier study. Additionally, models incorporating ideal-gas law behavior and waste material properties have been proposed. These models explicitly relate the retained gas volume in the tank to the magnitude of the waste level fluctuations, dL/dP. This report describes how these models compare with the tank waste level measurements.
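A minimal sketch of the barometric-fluctuation idea, assuming a purely isothermal ideal-gas response (the report's models also incorporate waste material properties, which this omits); the tank area, pressures and synthetic level series are illustrative:

```python
import numpy as np

def retained_gas_volume(pressure, level, tank_area, mean_gas_pressure):
    """Estimate retained gas volume from waste-level vs atmospheric-pressure
    fluctuations. dL/dP comes from a least-squares fit; the relation
    V = -P * A * dL/dP follows from the ideal gas law (P*V = const at
    constant temperature), so a level drop under rising pressure implies
    trapped compressible gas."""
    slope = np.polyfit(pressure, level, 1)[0]   # dL/dP, m per kPa
    return -mean_gas_pressure * tank_area * slope

# synthetic data: 100 m^3 of gas at 150 kPa in a 400 m^2 tank
A, P0, V0 = 400.0, 150.0, 100.0
P = np.linspace(98.0, 104.0, 25)               # atmospheric pressure, kPa
L = 10.0 - (V0 / (P0 * A)) * (P - P.mean())    # linearized level response
V = retained_gas_volume(P, L, A, P0)           # recovers ~100 m^3
```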
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.
2009-10-05
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
Reinitz, László Z; Bajzik, Gábor; Garamvölgyi, Rita; Petneházy, Örs; Lassó, András; Abonyi-Tóth, Zsolt; Lőrincz, Borbála; Sótonyi, Péter
2015-01-01
Dosages for myelography procedures in dogs are based on a hypothetical proportional relationship between bodyweight and cerebrospinal fluid (CSF) volume. Anecdotal radiographic evidence and recent studies have challenged the existence of such a defined relationship in dogs. The objectives of this prospective cross-sectional study were to describe CSF volumes using magnetic resonance imaging (MRI) in a group of clinically healthy dogs, measure the accuracy of MRI CSF volumes, and compare MRI CSF volumes with dog physical measurements. A sampling perfection with application-optimized contrasts using different flip-angle evolution (SPACE) MRI examination of the central nervous system was carried out on 12 healthy, male mongrel dogs, aged between 3 and 5 years with a bodyweight range of 7.5-35.0 kg. The images were processed with image analysis freeware (3D Slicer) in order to calculate the volume of extracranial CSF. Cylindrical phantoms of known volume were included in scans and used to calculate the accuracy of MRI volume estimates. The accuracy of MRI volume estimates was 99.8%. Extracranial compartment CSF volumes ranged from 20.21 to 44.06 ml. Overall volume of the extracranial CSF increased linearly with bodyweight, but the proportional volume (ml/bodyweight kilograms) of the extracranial CSF was inversely proportional to bodyweight. Relative ratios of volumes in the cervical, thoracic, and lumbosacral regions were constant. Findings indicated that the current standard method of using body weight to calculate dosages of myelographic contrast agents in dogs may need to be revised. PMID:26311617
Intramyocardial capillary blood volume estimated by whole-body CT: validation by micro-CT
NASA Astrophysics Data System (ADS)
Dong, Yue; Beighley, Patricia E.; Eaker, Diane R.; Zamir, Mair; Ritman, Erik L.
2008-03-01
Fast CT has shown that myocardial perfusion (F) is related to myocardial intramuscular blood volume (Bv) as Bv = A*F + B*F^(1/2), where A and B are constant coefficients. The goal of this study was to estimate the range of diameters of the vessels that are represented by the A*F term. Pigs were placed in an Electron Beam CT (EBCT) scanner for a perfusion CT scan sequence over 40 seconds after an IV contrast agent injection. Intramyocardial blood volume (Bv) and flow (F) were calculated in a region of the myocardium perfused by the LAD. Coefficients A and B were estimated over the range F = 1-5 ml/g/min. After the CT scan, the LAD was injected with Microfil® contrast agent, following which the myocardium was scanned by micro-CT at 20 μm, 4 μm and 2.5 μm cubic voxel resolutions. The Bv of the intramyocardial vessels was calculated for diameter ranges d = 0-5, 5-10, 10-15, 15-20 μm, etc. EBCT-derived data were presented so that they could be directly compared with the micro-CT data. The results indicated that the blood in vessels less than 10 μm in lumen diameter occupied 0.27-0.42 of total intravascular blood volume, which is in good agreement with EBCT-based values of 0.28-0.48 (R^2 = 0.96). We conclude that whole-body CT image data obtained during the passage of a bolus of IV contrast agent can provide a measure of the intramyocardial intracapillary blood volume.
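Because the relation Bv = A*F + B*F^(1/2) is linear in the unknowns A and B, they can be estimated by ordinary least squares in the two basis functions F and sqrt(F); the coefficient values and perfusion range below are illustrative, not the study's:

```python
import numpy as np

# Fit Bv = A*F + B*sqrt(F) by linear least squares.
A_true, B_true = 0.02, 0.05                    # illustrative coefficients
F = np.linspace(1.0, 5.0, 20)                  # perfusion, ml/g/min
Bv = A_true * F + B_true * np.sqrt(F)          # noiseless toy data

X = np.column_stack([F, np.sqrt(F)])           # design matrix [F, F^(1/2)]
(A_fit, B_fit), *_ = np.linalg.lstsq(X, Bv, rcond=None)
```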
Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E
2014-03-01
Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r^2 = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
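The pre-dose/1 h/3 h multiple-linear-regression strategy can be sketched as below; the training concentrations and regression coefficients are hypothetical, not values from the review:

```python
import numpy as np

def fit_lss(C, auc):
    """Fit AUC12 = b0 + b1*C(0h) + b2*C(1h) + b3*C(3h) by multiple linear
    regression. C is an (n, 3) array of concentrations; auc would come from
    full concentration-time profiling in a training cohort."""
    X = np.column_stack([np.ones(len(auc)), C])
    b, *_ = np.linalg.lstsq(X, auc, rcond=None)
    return b

def predict_auc(b, c0, c1, c3):
    return b[0] + b[1] * c0 + b[2] * c1 + b[3] * c3

# hypothetical training data (mg/L and mg*h/L); not from the review
rng = np.random.default_rng(0)
C = rng.uniform(1.0, 10.0, size=(30, 3))
true_b = np.array([5.0, 2.0, 1.5, 4.0])
auc = predict_auc(true_b, C[:, 0], C[:, 1], C[:, 2])

b = fit_lss(C, auc)   # recovers the generating coefficients on exact data
```

In practice the fitted equation would then be validated on an independent cohort, reporting bias and imprecision as in the review.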
Acceleration of Tomo-PIV by estimating the initial volume intensity distribution
NASA Astrophysics Data System (ADS)
Worth, N. A.; Nickels, T. B.
2008-11-01
Tomographic particle image velocimetry (Tomo-PIV) is a promising new PIV technique. However, its high computational costs often make time-resolved measurements impractical. In this paper, a new preprocessing method is proposed to estimate the initial volume intensity distribution. This relatively inexpensive “first guess” procedure significantly reduces the computational costs, accelerates solution convergence, and can be used directly to obtain results up to 35 times faster than an iterative reconstruction algorithm (with only a slight accuracy penalty). Reconstruction accuracy is also assessed by examining the errors in recovering velocity fields from artificial data (rather than errors in the particle reconstructions themselves).
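The abstract does not detail the preprocessing method itself; one widely used, inexpensive first guess in Tomo-PIV is the minimum line-of-sight (MLOS) estimate, sketched here on a toy 2-D "volume" viewed by two orthogonal 1-D cameras (whether the paper's method coincides with MLOS is an assumption):

```python
import numpy as np

# MLOS first guess: each voxel is initialised to the minimum of the pixel
# intensities along the rays that view it, so only voxels seen as bright by
# every camera survive.
truth = np.zeros((8, 8))
truth[2, 3] = 1.0          # two "particles"
truth[5, 6] = 1.0

row_cam = truth.max(axis=1)   # idealised projection along rows
col_cam = truth.max(axis=0)   # idealised projection along columns

mlos = np.minimum.outer(row_cam, col_cam)  # voxel = min over its two rays
# Note: mlos also lights up "ghost" intersections such as (2, 6) and (5, 3),
# which is exactly why MLOS is only a first guess that the iterative
# reconstruction must subsequently refine.
```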
Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.
Gingerich, W.H.; Pityer, R.A.; Rach, J.J.
1987-01-01
Total blood volume and relative blood volumes in selected tissues were determined in non-anesthetized, confined rainbow trout by using ⁵¹Cr-labelled trout erythrocytes as a vascular space marker. Mean total blood volume was estimated to be 4.09 ± 0.55 ml/100 g, or about 75% of that estimated with the commonly used plasma space marker Evans blue dye. Relative tissue blood volumes were greatest in highly perfused tissues such as kidney, gills, brain and liver and least in mosaic muscle. Estimates of tissue vascular spaces, made using radiolabelled erythrocytes, were only 25-50% of those based on plasma space markers. The consistently smaller vascular volumes obtained with labelled erythrocytes could be explained by assuming that commonly used plasma space markers diffuse from the vascular compartment.
Smith, S. Jerrod
2013-01-01
From the 1890s through the 1970s the Picher mining district in northeastern Ottawa County, Oklahoma, was the site of mining and processing of lead and zinc ore. When mining ceased in about 1979, as much as 165–300 million tons of mine tailings, locally referred to as “chat,” remained in the Picher mining district. Since 1979, some chat piles have been mined for aggregate materials and have decreased in volume and mass. Currently (2013), the land surface in the Picher mining district is covered by thousands of acres of chat, much of which remains on Indian trust land owned by allottees. The Bureau of Indian Affairs manages these allotted lands and oversees the sale and removal of chat from these properties. To help the Bureau of Indian Affairs better manage the sale and removal of chat, the U.S. Geological Survey, in cooperation with the Bureau of Indian Affairs, estimated the 2005 and 2010 volumes and masses of selected chat piles remaining on allotted lands in the Picher mining district. The U.S. Geological Survey also estimated the changes in volume and mass of these chat piles for the period 2005 through 2010. The 2005 and 2010 chat-pile volume and mass estimates were computed for 34 selected chat piles on 16 properties in the study area. All computations of volume and mass were performed on individual chat piles and on groups of chat piles in the same property. The Sooner property had the greatest estimated volume (4.644 million cubic yards) and mass (5.253 ± 0.473 million tons) of chat in 2010. Five of the selected properties (Sooner, Western, Lawyers, Skelton, and St. Joe) contained estimated chat volumes exceeding 1 million cubic yards and estimated chat masses exceeding 1 million tons in 2010. Four of the selected properties (Lucky Bill Humbah, Ta Mee Heh, Bird Dog, and St. Louis No. 6) contained estimated chat volumes of less than 0.1 million cubic yards and estimated chat masses of less than 0.1 million tons in 2010. The total volume of all
Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo
2016-01-01
Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255
Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.
Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L
2015-03-01
Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained using a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated if the two techniques produced similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique.
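The Bland-Altman agreement analysis used above can be sketched as follows; the paired volumes are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two
    measurement techniques, e.g. 2-D vs 3-D placental volume estimates."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired volumes (ml)
v2d = np.array([61.0, 55.0, 72.0, 48.0, 66.0])
v3d = np.array([58.0, 52.0, 65.0, 47.0, 60.0])
bias, lo, hi = bland_altman(v2d, v3d)   # positive bias: 2-D reads larger
```

A systematic positive bias like this is consistent with the study's finding that the 2-D technique yields larger estimates than the 3-D method.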
Estimating Volume, Biomass, and Carbon in Hedmark County, Norway Using a Profiling LiDAR
NASA Technical Reports Server (NTRS)
Nelson, Ross; Naesset, Erik; Gobakken, T.; Gregoire, T.; Stahl, G.
2009-01-01
A profiling airborne LiDAR is used to estimate the forest resources of Hedmark County, Norway, a 27390 square kilometer area in southeastern Norway on the Swedish border. One hundred five profiling flight lines totaling 9166 km were flown east-west over the entire county. The lines, spaced 3 km apart north-south, duplicate the systematic pattern of the Norwegian Forest Inventory (NFI) ground plot arrangement, enabling the profiler to transit 1290 circular, 250 square meter fixed-area NFI ground plots while collecting the systematic LiDAR sample. Seven hundred sixty-three of the 1290 plots were overflown within 17.8 m of plot center. Laser measurements of canopy height and crown density are extracted along fixed-length, 17.8 m segments closest to the center of the ground plot and related to basal area, timber volume and above- and belowground dry biomass. Linear, nonstratified equations that estimate ground-measured total aboveground dry biomass report an R^2 = 0.63, with a regression RMSE = 35.2 t/ha. Nonstratified model results for the other biomass components, volume, and basal area are similar, with R^2 values for all models ranging from 0.58 (belowground biomass, RMSE = 8.6 t/ha) to 0.63. Consistently, the most useful single profiling LiDAR variable is quadratic mean canopy height, h̄_qa. Two-variable models typically include h̄_qa or mean canopy height, h̄_a, with a canopy density or a canopy height standard deviation measure. Stratification by productivity class did not improve the nonstratified models, nor did stratification by pine/spruce/hardwood. County-wide profiling LiDAR estimates are reported, by land cover type, and compared to NFI estimates.
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick
1987-01-01
The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar-estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using regression analysis to generate the equation relating the ATI to rain volume; this ATI versus rain volume relation is then employed to compute rain volume. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
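A minimal sketch of the ATI computation and its regression-based conversion to rain volume; the cluster areas, time step, and the power-law coefficients are placeholders, not the study's fitted values:

```python
import numpy as np

def area_time_integral(areas_km2, dt_hours):
    """ATI: cloud/echo area coverage integrated over the storm lifetime
    (uniform time steps assumed), in km^2 * h."""
    return np.sum(areas_km2) * dt_hours

def rain_volume(ati, a=3.7e4, b=1.0):
    """Rain volume (m^3) from a regression relation V = a * ATI^b;
    a and b are placeholder coefficients, not the study's."""
    return a * ati ** b

# hypothetical cluster areas at successive half-hour satellite scans
areas = np.array([50.0, 180.0, 420.0, 300.0, 90.0])   # km^2
ati = area_time_integral(areas, dt_hours=0.5)          # 520 km^2 h
volume = rain_volume(ati)
```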
Improved estimates for the role of grey matter volume and GABA in bistable perception.
Sandberg, Kristian; Blicher, Jakob Udby; Del Pin, Simon Hviid; Andersen, Lau Møller; Rees, Geraint; Kanai, Ryota
2016-10-01
For more than a century, ambiguous stimuli have been studied scientifically because they provide a method for studying the internal mechanisms of the brain while the external stimulus remains unchanged. In recent years, several studies have reported correlations between perceptual dynamics during bistable perception and particular brain characteristics such as the grey matter volume of areas in the superior parietal lobule (SPL) and the relative GABA concentration in the occipital lobe. Here, we attempt to replicate previous results using similar paradigms to those used in the studies first reporting the correlations. Using the original findings as priors for Bayesian analyses, we found strong support for the correlation between structure-from-motion percept duration and anterior SPL grey matter volume. Correlations between percept duration and other parietal areas as well as occipital GABA, however, were not directly replicated or appeared less strong than previous studies suggested. Inspection of the posterior distributions (current "best guess" based on new data given old data as prior) revealed that several original findings may reflect true relationships although no direct evidence was found in support of them in the current sample. Additionally, we found that multiple regression models based on grey matter volume at 2-3 parietal locations (but not including GABA) were the best predictors of percept duration, explaining approximately 35% of the inter-individual variance. Taken together, our results provide new estimates of correlation strengths, generally increasing confidence in the role of the aSPL while decreasing confidence in some of the other relationships. PMID:27639213
Estimating Wood Volume for Pinus Brutia Trees in Forest Stands from QUICKBIRD-2 Imagery
NASA Astrophysics Data System (ADS)
Patias, Petros; Stournara, Panagiota
2016-06-01
Knowledge of forest parameters, such as wood volume, is required for sustainable forest management. Collecting such information in the field is laborious and sometimes not feasible in inaccessible areas. In this study, tree wood volume is estimated using remote sensing techniques, which can facilitate the extraction of relevant information. The study area is the University Forest of Taxiarchis, which is located in central Chalkidiki, Northern Greece and covers an area of 58 km². The tree species under study is the conifer evergreen species P. brutia (Calabrian pine). Three field plots of 10 m radius were used. VHR Quickbird-2 images are used in combination with an allometric relationship connecting the tree crown with the diameter at breast height (Dbh), and a volume table developed for Greece. The overall methodology is based on individual tree crown delineation, using (a) the marker-controlled watershed segmentation approach and (b) the GEographic Object-Based Image Analysis (GEOBIA) approach. The aim of the first approach is to extract separate segments, each of them including a single tree and possibly lower vegetation, shadows, etc. The aim of the second approach is to detect and remove the "noisy" background. In the application of the first approach, the Blue, Green, Red, Infrared and PCA-1 bands are tested separately. In the application of the second approach, NDVI and image brightness thresholds are utilized. The achieved results are evaluated against field plot data. The observed differences are between -5% and +10%.
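The crown-to-Dbh-to-volume chain can be sketched as below; the allometric coefficients and the form-factor volume function are hypothetical placeholders standing in for the study's fitted crown-Dbh relationship and the Greek volume table:

```python
import numpy as np

def dbh_from_crown(crown_d_m, a=4.5, b=1.1):
    """Dbh (cm) from delineated crown diameter (m), assuming a power-law
    allometry Dbh = a * crown^b (form and coefficients are illustrative)."""
    return a * crown_d_m ** b

def stem_volume(dbh_cm, height_m, form_factor=0.45):
    """Stem volume (m^3) ~ form factor x basal area x height; a generic
    stand-in for a species-specific volume table."""
    basal_area = np.pi * (dbh_cm / 200.0) ** 2   # m^2, Dbh cm -> radius m
    return form_factor * basal_area * height_m

crowns = np.array([3.0, 4.2, 5.1])               # crown diameters, m
heights = np.array([9.0, 12.0, 14.0])            # tree heights, m
dbh = dbh_from_crown(crowns)
vol = stem_volume(dbh, heights)                  # per-tree volume, m^3
```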
Estimation of Residual Peritoneal Volume Using Technetium-99m Sulfur Colloid Scintigraphy.
Katopodis, Konstantinos P; Fotopoulos, Andrew D; Balafa, Olga C; Tsiouris, Spyridon Th; Triandou, Eleni G; Al-Bokharhli, Jichad B; Kitsos, Athanasios C; Dounousi, Evagelia C; Siamopoulos, Konstantinos C
2015-01-01
Residual peritoneal volume (RPV) may contribute to the development of ultrafiltration failure in patients with normal transcapillary ultrafiltration. The aim of this study was to estimate the RPV using intraperitoneal technetium-99m Sulfur Colloid (Tc). Twenty patients on peritoneal dialysis were studied. RPV was estimated by: 1) intraperitoneal instillation of Tc (RPV-Tc) and 2) classic Twardowski calculations using endogenous solutes, such as urea (RPV-u), creatinine (RPV-cr), and albumin (RPV-alb). Each method's reproducibility was assessed in a subgroup of patients in two consecutive measurements 48 h apart. Both methods displayed reproducibility between days 1 and 2 (r = 0.93, p = 0.001 for RPV-Tc and r = 0.90, p = 0.001 for RPV-alb). We found a statistically significant difference between RPV-Tc and RPV-cr measurements (347.3 ± 116.7 vs. 450.0 ± 67.8 ml; p = 0.001) and RPV-u (515.5 ± 49.4 ml; p < 0.001), but not with RPV-alb (400.1 ± 88.2 ml; p = 0.308). A good correlation was observed only between RPV-Tc and RPV-alb (p < 0.001). The Tc method can estimate the RPV as efficiently as the high-molecular-weight endogenous solute measurement method. It can also provide an imaging estimate of the intraperitoneal distribution of RPV.
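A generic indicator-dilution calculation underlying residual-volume estimates of this kind; this is a simplified mass-balance sketch with toy numbers, not the exact Twardowski formulation (which handles sampling details):

```python
def residual_volume(v_in_ml, c_res, c_mix):
    """Indicator-dilution estimate of residual peritoneal volume.
    Mass balance for a solute present only in the residual fluid:
    V_r * C_res = (V_r + V_in) * C_mix, solved for V_r."""
    return v_in_ml * c_mix / (c_res - c_mix)

# toy numbers: 2000 ml of fresh dialysate instilled dilutes a residual
# solute from 10 to 1.8 concentration units
v_r = residual_volume(2000.0, 10.0, 1.8)   # residual volume, ml
```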
Wille, Marie-Luise; Langton, Christian M
2016-02-01
The acceptance of broadband ultrasound attenuation (BUA) for the assessment of osteoporosis suffers from a limited understanding of both ultrasound wave propagation through cancellous bone and its exact dependence upon the material and structural properties. It has recently been proposed that ultrasound wave propagation in cancellous bone may be described by a concept of parallel sonic rays; the transit time of each ray defined by the proportion of bone and marrow propagated. A Transit Time Spectrum (TTS) describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface aperture of the receive ultrasound transducer. The aim of this study was to test the hypothesis that the solid volume fraction (SVF) of simplified bone:marrow replica models may be reliably estimated from the corresponding ultrasound transit time spectrum. Transit time spectra were derived via digital deconvolution of the experimentally measured input and output ultrasonic signals, and compared to predicted TTS based on the parallel sonic ray concept, demonstrating agreement in both position and amplitude of spectral peaks. Solid volume fraction was calculated from the TTS; agreement between true (geometric calculation) with predicted (computer simulation) and experimentally-derived values were R(2)=99.9% and R(2)=97.3% respectively. It is therefore envisaged that ultrasound transit time spectroscopy (UTTS) offers the potential to reliably estimate bone mineral density and hence the established T-score parameter for clinical osteoporosis assessment. PMID:26455950
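The parallel sonic ray model described above can be inverted per ray to recover solid volume fraction: a ray through thickness d with bone fraction f has transit time t = d*(f/v_bone + (1-f)/v_marrow). The bone and marrow velocities below are typical literature values (assumed, not the study's), and the two-peak spectrum is a toy example:

```python
import numpy as np

V_BONE, V_MARROW = 3500.0, 1480.0   # assumed sound speeds, m/s

def svf_from_tts(times_s, weights, d_m):
    """Solid volume fraction from a transit time spectrum: invert the
    two-phase mixing rule per ray, then average over the spectrum."""
    slowness = times_s / d_m                          # per-ray slowness, s/m
    f = (slowness - 1.0 / V_MARROW) / (1.0 / V_BONE - 1.0 / V_MARROW)
    return np.average(f, weights=weights)

# toy spectrum for a 10 mm sample: one pure-marrow ray population and one
# 50:50 bone:marrow population, equally weighted
d = 0.010
t_marrow = d / V_MARROW
t_half = d * (0.5 / V_BONE + 0.5 / V_MARROW)
svf = svf_from_tts(np.array([t_marrow, t_half]),
                   weights=np.array([1.0, 1.0]), d_m=d)   # -> 0.25
```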
Limitations of Stroke Volume Estimation by Non-Invasive Blood Pressure Monitoring in Hypergravity
2015-01-01
Background Altitude and gravity changes during aeromedical evacuations induce exacerbated cardiovascular responses in unstable patients. Non-invasive cardiac output monitoring is difficult to perform in this environment with limited access to the patient. We evaluated the feasibility and accuracy of stroke volume estimation by finger photoplethysmography (SVp) in hypergravity. Methods Finger arterial blood pressure (ABP) waveforms were recorded continuously in ten healthy subjects before, during and after exposure to +Gz accelerations in a human centrifuge. The protocol consisted of 2-min and 8-min exposures up to +4 Gz. SVp was computed from ABP using the Liljestrand, systolic area, and Windkessel algorithms, and compared with reference values measured by echocardiography (SVe) before and after the centrifuge runs. Results The ABP signal could be used in 83.3% of cases. After calibration with echocardiography, SVp changes did not differ from SVe and values were linearly correlated (p<0.001). The three algorithms gave comparable SVp. Reproducibility between SVp and SVe was best with the systolic area algorithm (limits of agreement −20.5 and +38.3 ml). Conclusions Non-invasive ABP photoplethysmographic monitoring is an interesting technique for estimating relative stroke volume changes in moderate and sustained hypergravity. This method may aid physicians in aeronautic patient monitoring. PMID:25798613
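Of the three algorithms compared above, the Liljestrand estimator is the simplest: relative stroke volume is taken proportional to pulse pressure divided by the sum of systolic and diastolic pressure, with the constant fixed by calibration against a reference (echocardiography in the study). The beat pressures and calibration value below are illustrative:

```python
import numpy as np

def liljestrand_sv(p_sys, p_dia, k=1.0):
    """Liljestrand-Zander stroke volume estimate: SV ~ k*(Ps-Pd)/(Ps+Pd)."""
    return k * (p_sys - p_dia) / (p_sys + p_dia)

# per-beat systolic/diastolic pressures extracted from the ABP waveform
sys_mmHg = np.array([120.0, 118.0, 130.0])
dia_mmHg = np.array([80.0, 78.0, 85.0])

sv_rel = liljestrand_sv(sys_mmHg, dia_mmHg)   # uncalibrated, relative units

# calibrate k so the first beat matches an echo stroke volume of 70 ml
k = 70.0 / sv_rel[0]
sv_ml = liljestrand_sv(sys_mmHg, dia_mmHg, k=k)
```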
Volume estimation of tonsil phantoms using an oral camera with 3D imaging.
Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh
2016-04-01
Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) can provide 3D anatomical descriptions, this type of technology is not readily available in a clinical setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, and these provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where the current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can increase airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and, from that, estimate the airway obstruction percentage and the volume of the tonsils in 3D-printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667
A novel optical method for estimating the near-wall volume fraction in granular flows
NASA Astrophysics Data System (ADS)
Sarno, Luca; Nicolina Papa, Maria; Carleo, Luigi; Tai, Yih-Chin
2016-04-01
Geophysical phenomena such as debris flows, pyroclastic flows and rock avalanches involve the rapid flow of granular mixtures. The dynamics of these flows is still far from fully understood, owing to their complexity compared with clear water or other monophasic fluids. In this regard, physical models at laboratory scale are important tools for investigating the still unclear properties of granular flows and their constitutive laws under simplified experimental conditions. Besides velocity and shear rate, the volume fraction is also strongly interlinked with the rheology of granular materials, yet a reliable estimation of this quantity is not easy to obtain with non-invasive techniques. In this work a novel, cost-effective optical method for estimating the near-wall volume fraction is presented and then applied to a laboratory study of steady-state granular flows. A preliminary numerical investigation, through Monte Carlo generations of grain distributions under controlled illumination conditions, made it possible to find the stochastic relationship between the near-wall volume fraction, c3D, and a measurable quantity (the two-dimensional volume fraction), c2D, obtainable through an appropriate binarization of gray-scale images captured by a camera placed in front of the transparent boundary. Such a relation is well described by c3D = a exp(b c2D), with parameters depending only on the angle of incidence of light, ζ. An experimental validation of the proposed approach is carried out on dispersions of white plastic grains immersed in various ambient fluids. The mixture, confined in a box with a transparent window, is illuminated by a flicker-free LED lamp, placed so as to form a given ζ with the measuring surface, and is photographed by a camera placed in front of the same window. The predicted exponential law is found to be in sound agreement with experiments over a wide range of ζ (10° < ζ < 45°). The technique is, then, applied to steady-state dry
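The binarization-plus-exponential mapping can be condensed into a short sketch. The coefficients a and b below are placeholders: in the actual method they depend on the light incidence angle ζ and come from the Monte Carlo calibration; the threshold and the assumption that grains are the bright pixels are likewise illustrative.

```python
import numpy as np

def near_wall_volume_fraction(gray_img, threshold, a, b):
    """Estimate the near-wall (3-D) volume fraction from a gray-scale image
    of grains seen through a transparent wall, using the paper's relation
    c3D = a * exp(b * c2D).  a and b would come from the Monte Carlo
    calibration for the light incidence angle zeta; the values used below
    are purely illustrative."""
    binary = gray_img > threshold       # binarize: grain pixels = True
    c2d = binary.mean()                 # two-dimensional volume fraction
    return a * np.exp(b * c2d)

# Synthetic example: an image in which ~40% of pixels exceed the threshold.
rng = np.random.default_rng(0)
img = rng.random((240, 320))
c3d = near_wall_volume_fraction(img, threshold=0.6, a=0.05, b=4.0)
```

The exponential form means small errors in the binarization threshold propagate multiplicatively into c3D, which is why the calibration against known packings matters.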
Herzog, Mark; Ackerman, Josh; Eagles-Smith, Collin A.; Hartman, Christopher
2016-01-01
In egg contaminant studies, it is necessary to calculate egg contaminant concentrations on a fresh wet weight basis and this requires accurate estimates of egg density and egg volume. We show that the inclusion or exclusion of the eggshell can influence egg contaminant concentrations, and we provide estimates of egg density (both with and without the eggshell) and egg-shape coefficients (used to estimate egg volume from egg morphometrics) for American avocet (Recurvirostra americana), black-necked stilt (Himantopus mexicanus), and Forster’s tern (Sterna forsteri). Egg densities (g/cm3) estimated for whole eggs (1.056 ± 0.003) were higher than egg densities estimated for egg contents (1.024 ± 0.001), and were 1.059 ± 0.001 and 1.025 ± 0.001 for avocets, 1.056 ± 0.001 and 1.023 ± 0.001 for stilts, and 1.053 ± 0.002 and 1.025 ± 0.002 for terns. The egg-shape coefficients for egg volume (Kv) and egg mass (Kw) also differed depending on whether the eggshell was included (Kv = 0.491 ± 0.001; Kw = 0.518 ± 0.001) or excluded (Kv = 0.493 ± 0.001; Kw = 0.505 ± 0.001), and varied among species. Although egg contaminant concentrations are rarely meant to include the eggshell, we show that the typical inclusion of the eggshell in egg density and egg volume estimates results in egg contaminant concentrations being underestimated by 6–13%. Our results demonstrate that the inclusion of the eggshell significantly influences estimates of egg density, egg volume, and fresh egg mass, which leads to egg contaminant concentrations that are biased low. We suggest that egg contaminant concentrations be calculated on a fresh wet weight basis using only internal egg-content densities, volumes, and masses appropriate for the species. For the three waterbirds in our study, these corrected coefficients are 1.024 ± 0.001 for egg density, 0.493 ± 0.001 for Kv, and 0.505 ± 0.001 for Kw.
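The abstract's eggshell-excluded coefficients can be applied directly with the standard egg-shape relation V = Kv · L · B². A minimal sketch follows; the egg dimensions and contaminant burden are invented for illustration, while the density (1.024 g/cm³) and Kv (0.493) are the corrected coefficients reported above.

```python
def egg_content_volume_cm3(length_cm, breadth_cm, kv=0.493):
    """Egg-content volume from morphometrics via the standard egg-shape
    relation V = Kv * L * B^2.  kv = 0.493 is the eggshell-excluded
    coefficient reported for the three waterbird species."""
    return kv * length_cm * breadth_cm ** 2

def fresh_wet_weight_conc(burden_ug, length_cm, breadth_cm,
                          density=1.024, kv=0.493):
    """Contaminant concentration (ug per g fresh wet weight) using the
    eggshell-excluded density (1.024 g/cm^3) and volume coefficient."""
    volume = egg_content_volume_cm3(length_cm, breadth_cm, kv)
    fresh_mass_g = density * volume
    return burden_ug / fresh_mass_g

# Illustrative avocet-sized egg: 5.0 cm x 3.5 cm, 1.2 ug total burden.
v = egg_content_volume_cm3(5.0, 3.5)      # ~30.2 cm^3
c = fresh_wet_weight_conc(1.2, 5.0, 3.5)  # ug/g fresh wet weight
```

Using the whole-egg coefficients instead (Kv = 0.491, density 1.056) would inflate the estimated fresh mass, which is exactly the 6-13% underestimation of concentration the authors describe.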
Elliott, John G.; Flynn, Jennifer L.; Bossong, Clifford R.; Char, Stephen J.
2011-01-01
The subwatersheds with the greatest potential postwildfire and postprecipitation hazards are those with both high probabilities of debris-flow occurrence and large estimated volumes of debris-flow material. The high probabilities of postwildfire debris flows, the associated large estimated debris-flow volumes, and the densely populated areas along the creeks and near the outlets of the primary watersheds indicate that Indiana, Pennsylvania, and Spruce Creeks are associated with a relatively high combined debris-flow hazard.
Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda
2014-09-01
Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising from voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded plausible perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion
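The linear-regression flavor of partial volume correction that the abstract builds on can be sketched in 2-D: within a sliding kernel, the ASL difference signal is regressed on the local tissue fractions. This is a generic sketch of the regression idea, not the paper's modified algorithm; CSF handling, conditioning safeguards and the QUASAR-specific segmentation are all omitted.

```python
import numpy as np

def lr_partial_volume_correction(dM, p_gm, p_wm, kernel=5):
    """Linear-regression partial volume correction sketch: within a sliding
    kernel the ASL difference signal dM is modeled as
        dM_i = m_gm * P_gm,i + m_wm * P_wm,i
    and solved by least squares for the GM-specific magnetization.
    All arrays are 2-D; kernel is the (odd) side length.  A toy only:
    real implementations handle CSF, ill-conditioning and edges."""
    h = kernel // 2
    m_gm = np.zeros_like(dM)
    ny, nx = dM.shape
    for y in range(ny):
        for x in range(nx):
            ys = slice(max(0, y - h), y + h + 1)
            xs = slice(max(0, x - h), x + h + 1)
            A = np.column_stack([p_gm[ys, xs].ravel(), p_wm[ys, xs].ravel()])
            b = dM[ys, xs].ravel()
            coef, *_ = np.linalg.lstsq(A, b, rcond=None)
            m_gm[y, x] = coef[0]
    return m_gm

# Synthetic check: true GM magnetization 2.0, WM 0.5, smooth fractions.
ny, nx = 16, 16
yy = np.linspace(0.2, 0.8, ny)[:, None] * np.ones((1, nx))
p_gm, p_wm = yy, 1.0 - yy
dM = 2.0 * p_gm + 0.5 * p_wm
m_gm = lr_partial_volume_correction(dM, p_gm, p_wm)
```

On this noise-free phantom the regression recovers the GM value exactly; the practical difficulty the paper addresses is obtaining reliable tissue fractions without a separate high-resolution segmentation.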
Park, Ki Nam; Kang, Kyung Yoon; Hong, Hyun Sook; Jeong, Han-Shin; Lee, Seung Won
2015-11-01
The clinical and prognostic value of tumor volume in various solid tumors has been investigated. However, there have been few studies on the clinical impact of tumor volume in papillary thyroid carcinoma (PTC). This study was performed to investigate the predictive value of estimated tumor volume measured by ultrasonography for occult central neck metastasis (OCNM) of PTC. A total of 264 patients with clinically node-negative PTC on ultrasonography and computed tomography who underwent total thyroidectomy in conjunction with at least ipsilateral prophylactic central neck dissection were enrolled in this study. Tumor volume was derived with the formula used to calculate ellipsoids from two orthogonal scans during 2-D ultrasonography at initial aspiration biopsy. We retrospectively evaluated demographic characteristics, pre-operative ultrasonographic features (tumor size, volume and multifocality) and pathologic results. The OCNM rate was 35.6%; estimated tumor volume was used to predict OCNM (p = 0.035). At 0.385 mL, sensitivity and specificity were 51.1% and 66.5%, and the area under the curve for OCNM detection was 0.610. In multivariate analysis, tumor volume, but not size, was an independent predictive factor for OCNM (odds ratio = 1.83, p = 0.029). The other factors were extrathyroidal extension (odds ratio = 2.39, p = 0.004) and male gender (odds ratio = 3.90, p < 0.001). The estimated tumor volume of PTC measured by ultrasonography could be a pre-operative predictor of OCNM.
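Tumor volume from two orthogonal ultrasound scans is conventionally computed with the ellipsoid formula; the sketch below assumes that convention (the abstract does not spell out the exact variant) and uses the study's 0.385 mL cut-off only as a point of comparison.

```python
import math

def ellipsoid_volume_ml(d1_cm, d2_cm, d3_cm):
    """Tumor volume from three orthogonal diameters using the standard
    ellipsoid formula V = (pi/6) * d1 * d2 * d3, the usual way two
    orthogonal 2-D ultrasound scans are combined.  1 cm^3 = 1 mL."""
    return math.pi / 6.0 * d1_cm * d2_cm * d3_cm

# A nodule of 0.9 x 0.9 x 1.0 cm is ~0.42 mL, just above the study's
# 0.385 mL threshold for predicting occult central neck metastasis.
v = ellipsoid_volume_ml(0.9, 0.9, 1.0)
```

This also illustrates why volume can outperform a single diameter as a predictor: volume grows with the product of all three dimensions, so modest differences in shape produce large differences in volume.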
Chan, Yi-Hsin; Tsai, Wei-Chung; Shen, Changyu; Han, Seongwook; Chen, Lan S.; Lin, Shien-Fong; Chen, Peng-Sheng
2015-01-01
Background We recently reported that subcutaneous nerve activity (SCNA) can be used to estimate sympathetic tone. Objectives To test the hypothesis that left thoracic SCNA is more accurate than heart rate variability (HRV) in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction (MI). Methods We used an implanted radiotransmitter to study left stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA), and thoracic SCNA in 9 dogs at baseline and up to 8 weeks after MI. HRV was determined by time-domain, frequency-domain and non-linear analyses. Results The correlation coefficients between integrated SGNA and SCNA averaged 0.74 (95% confidence interval (CI), 0.41–1.06) at baseline and 0.82 (95% CI, 0.63–1.01) after MI (P<.05 for both). The absolute values of these correlation coefficients were significantly larger than those between SGNA and HRV indices based on time-domain, frequency-domain and non-linear analyses, both at baseline (P<.05 for all) and after MI (P<.05 for all). There was a clear increment of SGNA and SCNA at 2, 4, 6 and 8 weeks after MI, whereas HRV parameters showed no significant changes. Significant circadian variations were noted in SCNA, SGNA and all HRV parameters both at baseline and after MI. Atrial tachycardia (AT) episodes were invariably preceded by increases in SCNA and SGNA, which grew progressively from 120 s through 90, 60 and 30 s before AT onset. No such changes in HRV parameters were observed before AT onset. Conclusion SCNA is more accurate than HRV in estimating cardiac sympathetic tone in ambulatory dogs with MI. PMID:25778433
Water volume estimates of the Greenland Perennial Firn Aquifer from in situ measurements
NASA Astrophysics Data System (ADS)
Koenig, L.; Miege, C.; Forster, R. R.; Brucker, L.
2013-12-01
Improving our understanding of the complex Greenland hydrologic system is necessary for assessing change across the Greenland Ice Sheet and its contribution to sea level rise (SLR). A new component of the Greenland hydrologic system, a Perennial Firn Aquifer (PFA), was recently discovered in April 2011. The PFA represents a large store of liquid water within the Greenland Ice Sheet, with an area of 70,000 ± 10,000 km2 simulated by the RACMO2/GR regional climate model, which closely follows airborne radar-derived mapping (Forster et al., in press). The average top surface depth of the PFA as detected by radar is 23 m. In April 2013, our team drilled through the PFA for the first time to gain an understanding of the firn structure constraining the PFA, to estimate the water volume within the PFA, and to measure PFA temperatures and densities. At our drill site in Southeast Greenland (~100 km northwest of Kulusuk), water fills or partially fills the available firn pore space from depths of ~12 to 37 m. The temperature within the PFA depths is constant at 0.1 ± 0.1 °C, while the 12 m of seasonally dry firn above the PFA has a temperature profile dominated by surface temperature forcing. Near the bottom of the PFA, water completely fills the available pore space as the firn is compressed to ice, entrapping water-filled bubbles (as opposed to air-filled bubbles), which then start to refreeze. A maximum PFA density is reached where the water filling the pore space, which increases density, begins refreezing back into ice at a lower density. We define this depth as the pore water refreeze depth and use it as the bottom of the PFA to calculate volume. It is certain, however, that a small amount of water exists below this depth, which we do not account for. The density profile obtained from the ACT11B firn core, the closest seasonally dry firn core, is compared to both gravitational densities and high-resolution densities derived from a neutron density probe at the PFA site. The
MCNP ESTIMATE OF THE SAMPLED VOLUME IN A NON-DESTRUCTIVE IN SITU SOIL CARBON ANALYSIS.
WIELOPOLSKI, L.; DIOSZEGI, I.; MITRA, S.
2004-05-03
Global warming, promoted by anthropogenic CO2 emission into the atmosphere, is partially mitigated by photosynthesis in terrestrial ecosystems, which act as atmospheric CO2 scrubbers and sequester carbon in soil. Switching from till to no-till soil management practices in agriculture further augments this process. Carbon sequestration is also advanced by putting forward a carbon "credit" system whereby credits can be traded between CO2 producers and sequesterers. Implementation of carbon "credit" trading will be further promoted by the recent development of a non-destructive in situ carbon monitoring system based on inelastic neutron scattering (INS). Volumes and depth distributions defined by the 0.1, 1.0, 10, 50, and 90 percent neutron isofluxes, from a point source located at either 5 or 30 cm above the surface, were estimated using Monte Carlo calculations.
Scatter to volume registration for model-free respiratory motion estimation from dynamic MRIs.
Miao, S; Wang, Z J; Pan, L; Butler, J; Moran, G; Liao, R
2016-09-01
Respiratory motion is one major complicating factor in many image acquisition applications and image-guided interventions. Existing respiratory motion estimation and compensation methods typically rely on breathing motion models learned from certain training data, and therefore may not be able to effectively handle intra-subject and/or inter-subject variations of respiratory motion. In this paper, we propose a respiratory motion compensation framework that directly recovers motion fields from sparsely spaced and efficiently acquired dynamic 2-D MRIs, without using a learned respiratory motion model. We present a scatter-to-volume deformable registration algorithm that registers dynamic 2-D MRIs with a static 3-D MRI to recover dense deformation fields. Practical considerations and approximations are provided to solve the scatter-to-volume registration problem efficiently. The performance of the proposed method was investigated on both synthetic and real MRI datasets, and the results showed significant improvements over state-of-the-art respiratory motion modeling methods. We also demonstrated a potential application of the proposed method to MRI-based motion-corrected PET imaging using hybrid PET/MRI.
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Loubère, Raphaël
2016-08-01
In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (Ns = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
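The detection step can be illustrated with a toy 1-D version: flag any cell whose candidate value escapes the local min/max bounds of the previous solution (plus a small relaxation) or contains floating point errors. The real method operates on unstructured 2-D/3-D meshes with vertex neighbors and full DG polynomials; everything below is a simplified sketch over cell averages.

```python
import numpy as np

def detect_troubled_cells(u_old, u_cand, rel_tol=1e-3):
    """A posteriori (MOOD-style) detection sketch in 1-D: a candidate
    solution is rejected wherever it leaves the relaxed local [min, max]
    bounds of the previous time step's neighborhood, or contains
    floating point errors (NaN/Inf).  Returns a boolean mask."""
    n = len(u_old)
    troubled = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = min(u_old[max(0, i - 1):i + 2])   # local min over neighbors
        hi = max(u_old[max(0, i - 1):i + 2])   # local max over neighbors
        # small relaxation so smooth extrema are not flagged spuriously
        delta = rel_tol * max(abs(hi), abs(lo), 1.0) + (hi - lo) * rel_tol
        ok = np.isfinite(u_cand[i]) and (lo - delta <= u_cand[i] <= hi + delta)
        troubled[i] = not ok
    return troubled

# Candidate with a spurious overshoot at cell 3 and a NaN at cell 7:
u_old = np.array([1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0])
u_cand = np.array([1.0, 1.0, 0.9, 1.4, 0.2, 0.0, 0.0, np.nan])
mask = detect_troubled_cells(u_old, u_cand)
```

In the full scheme, exactly the cells flagged by such a mask would have their solution recomputed with the more robust sub-cell TVD finite volume update.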
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
The Improved Price Estimation Guidelines (IPEG4) program provides a comparatively simple, yet relatively accurate, estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume, on an industry-wide or process-wide basis.
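The IPEG idea (scale each annual cost category by an overhead/return coefficient, sum, and divide by production volume) can be sketched as follows. The coefficients below are purely illustrative placeholders, not IPEG4's calibrated values.

```python
def unit_price_estimate(equipment, space, labor, materials, utilities,
                        annual_volume,
                        c_eqpt=0.5, c_space=100.0, c_labor=2.1,
                        c_mats=1.2, c_util=1.2):
    """Price-per-unit estimate in the spirit of IPEG: each cost category
    is scaled by a coefficient capturing overheads and return on
    investment, summed into an annualized cost, and divided by production
    volume.  Coefficients here are illustrative, not IPEG4's values.
    equipment: capital cost ($); space: floor area (ft^2);
    labor/materials/utilities: annual direct costs ($/yr);
    annual_volume: units produced per year."""
    annual_cost = (c_eqpt * equipment + c_space * space + c_labor * labor
                   + c_mats * materials + c_util * utilities)
    return annual_cost / annual_volume

# Hypothetical process: $1M equipment, 2000 ft^2, $300k labor,
# $400k materials, $50k utilities, 100k units per year.
price = unit_price_estimate(equipment=1_000_000, space=2_000,
                            labor=300_000, materials=400_000,
                            utilities=50_000, annual_volume=100_000)
```

The structure makes the sensitivity analysis trivial: doubling any single cost category shifts the unit price by that category's coefficient-weighted share.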
Estimating the global volume of deeply recycled continental crust at continental collision zones
NASA Astrophysics Data System (ADS)
Scholl, D. W.; Huene, R. V.
2006-12-01
CRUSTAL RECYCLING AT OCEAN MARGINS: Large volumes of rock and sediment are missing from the submerged forearcs of ocean margin subduction zones (OMSZs). This observation means that (1) oceanic sediment is transported beneath the margin either to crustally underplate the coastal region or to reach mantle depths, and that (2) the crust of the forearc is vertically thinned and horizontally truncated, the removed material being transported toward the mantle. Transport of rock and sediment debris occurs in the subduction channel that separates the upper and lower plates. At OMSZs the solid-volume flux of recycling crustal material is estimated to be globally ~2.5 km3/yr (i.e., 2.5 Armstrong units, or AU). The corresponding rate of forearc truncation (migration of the trench axis toward a fixed reference on the continent) is a sluggish 2-3 km/Myr (about 1/50th the orthogonal convergence rate). Nonetheless, during the past 2.5 Gyr (i.e., since the beginning of the Proterozoic) a volume of continental material roughly equal to the existing volume (~7 billion cubic km) has been recycled to the mantle at OMSZs. The amount of crust that has been destroyed is so large that recycling must have been a major factor creating the mapped rock pattern and age-fabric of continental crust. RECYCLING AT CONTINENT/ARC COLLISIONS: The rate at which arc magmatism adds juvenile crust to OMSZs has commonly been estimated globally at ~1 AU. But new geophysical and dating information from the Aleutian and IBM arcs implies that the addition rate is at least ~5 AU (equivalent to ~125 km3/Myr/km of arc). If Armstrong's posit is correct that since the early Archean a balance has existed between additions and losses of crust, then a recycling sink for an additional 2-3 AU of continental material must exist. As the exposure of exhumed masses of high P/T blueschist bodies documents that subcrustal streaming of continental material occurs at OMSZs, so does the occurrence of exhumed masses of UHP
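The abstract's back-of-envelope balance (a flux of 2.5 AU sustained since the start of the Proterozoic roughly reproducing the existing crustal volume) checks out directly:

```python
# Back-of-envelope check of the recycling arithmetic quoted above.
recycling_rate_km3_per_yr = 2.5      # ~2.5 AU at ocean margin subduction zones
duration_yr = 2.5e9                  # since the beginning of the Proterozoic
recycled_km3 = recycling_rate_km3_per_yr * duration_yr   # 6.25e9 km^3
existing_crust_km3 = 7e9             # quoted existing continental volume
ratio = recycled_km3 / existing_crust_km3   # ~0.9, i.e. "roughly equal"
```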
Arima, K.; Sugimura, Y.; Hioki, T.; Yamashita, A.; Kawamura, J.
1997-01-01
Although different histological grading systems of prostatic cancer refer to well-described characteristics, results are hard to reproduce. The aim of this study was to obtain morphometric data that would enable objective and reproducible grading of prostatic cancers by stereological estimation of mean nuclear volume (MNV). The clinical records and tissue specimens from 100 patients who were newly diagnosed as having prostatic cancer from 1973 to 1990 and who were followed up for 5 years or longer were retrospectively examined. We analysed the relationship between MNV and clinical stage, Gleason score and histological grading according to the World Health Organization (WHO) classification. To evaluate prognostic predictors, a multivariate analysis of factors associated with cause-specific survival was performed. We found a good correlation between the MNV and clinical stage and between the MNV and histological grading. There was no correlation between MNVs and Gleason scores. Multivariate analysis revealed that the MNV was the only predictor of survival time (coefficient 0.005; P < 0.0001; hazard ratio 1.005). We consider that the MNV is an excellent predictor of the prognosis in patients with prostatic cancer. Moreover, stereological estimation of MNV is a simple, quick, inexpensive and reliable morphometric procedure that enables the quantitative analysis of the histological and biological character of prostatic cancer. PMID:9231924
Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz
2009-09-01
The widely used procedure of evaluating cup orientation following total hip arthroplasty from a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from CT data. Although encouraging results have been reported, their extensive use in clinical routine is still limited. This may be explained by their requirement for a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer for proprietary reasons, and by their requirement for either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform. PMID:19328585
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques, and to date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme has the capability of accurately extracting the concentration of glucose from a complex biological medium.
NASA Astrophysics Data System (ADS)
Kaur, Jasmeet; Nandy, D. K.; Arora, Bindiya; Sahoo, B. K.
2015-01-01
Accurate knowledge of the interaction potentials between alkali-metal atoms and alkaline-earth-metal ions is very useful in studies of cold atom physics. Here we carry out systematic theoretical studies of the long-range interactions of the Li, Na, K, and Rb alkali-metal atoms with the Ca+, Ba+, Sr+, and Ra+ alkaline-earth ions, largely motivated by their importance in a number of applications. These interactions are expressed as a power series in the inverse of the internuclear separation R. Both the dispersion and induction components of these interactions are determined accurately from the algebraic coefficients corresponding to each power combination in the series. Ultimately, these coefficients are expressed in terms of the electric multipole polarizabilities of the above-mentioned systems, which are calculated using matrix elements obtained from a relativistic coupled-cluster method, with core contributions to these quantities from the random-phase approximation. We also compare our estimated polarizabilities with other available theoretical and experimental results to verify the accuracy of our calculations. In addition, we evaluate the lifetimes of the first two low-lying states of the ions using the above matrix elements. Graphical representations of the dispersion coefficients versus R are given for Rb with all the alkaline-earth ions.
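The power-series form of the ion-atom potential can be made concrete. In atomic units the leading induction term for a singly charged ion is C4 = αd/2, with αd the atom's static dipole polarizability (for Li, approximately 164 a.u.); the C6 value below is an arbitrary placeholder, not one of the paper's computed coefficients.

```python
def long_range_potential(R, c4, c6, c8=0.0):
    """Long-range ion-atom interaction as a truncated power series in 1/R,
    V(R) = -C4/R^4 - C6/R^6 - C8/R^8 (atomic units).  The leading
    induction term has C4 = alpha_d * q^2 / 2 for an atom of static
    dipole polarizability alpha_d and an ion of charge q."""
    return -c4 / R**4 - c6 / R**6 - c8 / R**8

alpha_li = 164.0            # a.u., approximate static polarizability of Li
c4 = alpha_li / 2.0         # q = 1 for a singly charged alkaline-earth ion
v = long_range_potential(20.0, c4, c6=1.5e4)   # potential at R = 20 a.u.
```

At large R the C4/R^4 induction term dominates, which is why ion-atom potentials are much longer-ranged than the C6/R^6 van der Waals interaction between two neutral atoms.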
NASA Astrophysics Data System (ADS)
Yu, Ting-To
2013-04-01
For hazard mitigation and emergency response it is important to estimate the volume of a landslide within a short period of time, but the traditional method takes much longer than desired. Owing to weather limits, traffic accessibility and legal regulations, it can take months to complete the necessary procedures before field work can even begin. Remote sensing imagery can be acquired whenever visibility allows, often only a few days after the event, but traditional photogrammetry requires a stereo pair of images to produce the post-event DEM for calculating the change of volume, and such data may take weeks or even months to gather; LiDAR or ground GPS measurement can take even longer at much higher cost. In this study we use a single post-event satellite image and a pre-event DTM, altering the DTM with a genetic algorithm (GA) and comparing the similarity between the two. The GA's best guess removes or adds height at each location, and the modified DTM is converted into a shaded-relief view for comparison with the satellite image; the search stops once a similarity threshold is met. The entire task takes only a few hours. The computed accuracy is around 70% when compared with a high-resolution LiDAR survey of a landslide in southern Taiwan, and with extra GCPs the accuracy improves to 85%, still within a few hours of receiving the satellite image. The data for this demonstration case are a 5 m DTM from 2005, a 2 m resolution FormoSat optical image from 2009 and 5 m LiDAR from 2010. The GA and image-similarity code was developed in Matlab on a Windows PC.
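The guess-render-compare loop can be illustrated with a toy sketch. The `render` and `similarity` functions below are deliberately simplistic stand-ins (the real work uses shaded-relief rendering of the full DTM and a richer image-similarity measure), and the grids are invented:

```python
# Toy sketch of the paper's idea: perturb a pre-event DTM with a genetic-
# algorithm-style search until its rendering matches a post-event image.
import random

def render(dtm):
    # Stand-in for shaded relief: east-west height gradient along each row.
    return [[row[i + 1] - row[i] for i in range(len(row) - 1)] for row in dtm]

def similarity(img_a, img_b):
    # Negative sum of absolute pixel differences (higher is better).
    return -sum(abs(a - b)
                for ra, rb in zip(img_a, img_b) for a, b in zip(ra, rb))

def mutate(dtm, scale=1.0):
    # Candidate solution: random height change at every cell.
    return [[h + random.uniform(-scale, scale) for h in row] for row in dtm]

def evolve(dtm, target_img, generations=50, pop=10):
    best, best_score = dtm, similarity(render(dtm), target_img)
    for _ in range(generations):
        for cand in [mutate(best) for _ in range(pop)]:
            s = similarity(render(cand), target_img)
            if s > best_score:
                best, best_score = cand, s
    return best

def volume_change(pre, post, cell_area=25.0):   # 5 m cells -> 25 m^2 each
    return sum((b - a) * cell_area
               for ra, rb in zip(pre, post) for a, b in zip(ra, rb))

target = render([[0.0, 1.0, 2.5]])
guess = evolve([[0.0, 0.0, 0.0]], target)
```

Once the evolved DTM is accepted, subtracting it from the pre-event DTM cell by cell gives the volume change.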
NASA Astrophysics Data System (ADS)
Takagi, Hideo D.; Swaddle, Thomas W.
1996-01-01
The outer-sphere contribution to the volume of activation of homogeneous electron exchange reactions is estimated for selected solvents on the basis of the mean spherical approximation (MSA), and the calculated values are compared with those estimated by the Stranks-Hush-Marcus (SHM) theory and with activation volumes obtained experimentally for the electron exchange reaction between tris(hexafluoroacetylacetonato)ruthenium(III) and -(II) in acetone, acetonitrile, methanol and chloroform. The MSA treatment, which recognizes the molecular nature of the solvent, does not improve significantly upon the dielectric-continuum SHM theory, which represents the experimental data adequately for the more polar solvents.
Estimation of the possible flood discharge and volume of stormwater for designing water storage.
Kirzhner, Felix; Kadmon, Avri
2011-01-01
The shortage of good-quality water resources is an important issue in arid and semiarid zones. Stormwater-harvesting systems capable of delivering good-quality water for non-potable uses, while taking into account environmental and health requirements, must be developed. For this reason, the availability of water resources of marginal quality, like stormwater, can make a significant contribution to the water supply. Current stormwater management practices around the world require control systems that monitor the quality and quantity of the water, together with the development of stormwater basins to store increased runoff volumes; public health and safety must also be taken into account. Urban and suburban development, with its buildings, roads and innumerable related activities, turns rain and snow into unwitting agents of damage to waterways: this urban and suburban runoff, legally known as stormwater, is one of the most significant sources of water pollution in the world. Based on factors such as water quality, runoff flow rate and speed, and the topography involved, stormwater can be directed into basins, purification plants, or the sea. Accurate floodplain maps are the key to better floodplain management. The aim of this work is to use geographic information systems (GIS) to monitor and control the effect of stormwater. The graphic and mapping capabilities of GIS provide strong tools for conveying information and forecasts of different stormwater flow and buildup scenarios. Analyses of hydrologic processes, rainfall simulations, and spatial patterns of water resources were performed with GIS; that is, water flow was introduced into the GIS on the basis of an integrated data set. Two cases in Israel were analyzed: the Hula Project (the Jordan River floods over the peat soil area) and the Kishon River floodplains as they existed in the Yizrael Valley.
Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera.
Reyes, Bersain A; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H
2016-03-18
A smartphone-based tidal volume (V(T)) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference V(T) measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and V(T) to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of -0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis.
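The calibration step described above amounts to fitting an ordinary least-squares line that maps chest-signal amplitude to tidal volume, then judging the fit with a normalized RMSE. A minimal sketch with made-up amplitude/volume pairs (not the study's data), where the NRMSE is normalized by the reference range as one common convention:

```python
# Sketch of linear calibration of smartphone amplitudes to tidal volume.
# The amplitude/volume pairs are invented for illustration.

def fit_linear(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    a = num / den
    return a, my - a * mx

def nrmse(pred, ref):
    """RMSE normalized by the reference range, in percent."""
    rmse = (sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)) ** 0.5
    return 100.0 * rmse / (max(ref) - min(ref))

# Calibration breaths: signal amplitude (a.u.) vs. reference volume (L).
amp = [0.8, 1.2, 1.9, 2.6, 3.1]
vol = [0.5, 0.9, 1.5, 2.0, 2.4]
a, b = fit_linear(amp, vol)
estimated = [a * x + b for x in amp]
error_pct = nrmse(estimated, vol)
```

With an incentive spirometer, the reference volumes come from the device's graduated markings rather than a clinical spirometer, but the fitting step is identical.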
Butlin, Mark; Qasem, Ahmad; Avolio, Alberto P
2012-01-01
There is increasing interest in non-invasive estimation of central aortic waveform parameters in the clinical setting. However, controversy has arisen around radial-tonometry-based systems because they require a trained operator and can be difficult to use, especially in the clinical environment. A recently developed device utilizes a novel algorithm for brachial-cuff-based assessment of aortic pressure values and waveform (SphygmoCor XCEL, AtCor Medical). The cuff was inflated to 10 mmHg below an individual's diastolic blood pressure and the brachial volume displacement waveform recorded. The aortic waveform was derived by applying proprietary digital signal processing and a transfer function to the recorded waveform, and was also estimated using a validated technique (radial-tonometry-based assessment, SphygmoCor, AtCor Medical). Measurements were taken in triplicate with each device in 30 people (17 female) aged 22 to 79 years. An average for each device was calculated for each individual, and the results from the two devices were compared using regression and Bland-Altman analysis. A high correlation was found between the devices for measures of aortic systolic (R(2)=0.99) and diastolic (R(2)=0.98) pressure. Augmentation index and subendocardial viability ratio both had a between-device R(2) value of 0.82. The difference between devices for measured aortic systolic pressure was 0.5±1.8 mmHg, and for augmentation index, 1.8±7.0%. The brachial-cuff-based approach, with an individualized sub-diastolic cuff pressure, provides an operator-independent method of assessing not only systolic pressure but also aortic waveform features, comparable to existing validated tonometry-based methods.
A framework of whole heart extracellular volume fraction estimation for low dose cardiac CT images
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Summers, Ronald M.; Nacif, Marcelo Souto; Liu, Songtao; Bluemke, David A.; Yao, Jianhua
2012-02-01
Cardiac magnetic resonance imaging (CMRI) has been well validated and allows quantification of myocardial fibrosis in comparison to overall mass of the myocardium. Unfortunately, CMRI is relatively expensive and is contraindicated in patients with intracardiac devices. Cardiac CT (CCT) is widely available and has been validated for detection of scar and myocardial stress/rest perfusion. In this paper, we sought to evaluate the potential of low dose CCT for the measurement of myocardial whole heart extracellular volume (ECV) fraction. A novel framework was proposed for CCT whole heart ECV estimation, which consists of three main steps. First, a shape constrained graph cut (GC) method was proposed for myocardium and blood pool segmentation for post-contrast image. Second, the symmetric Demons deformable registrations method was applied to register pre-contrast to post-contrast images. Finally, the whole heart ECV value was computed. The proposed method was tested on 7 clinical low dose CCT datasets with pre-contrast and post-contrast images. The preliminary results demonstrated the feasibility and efficiency of the proposed method.
Predicting traffic volumes and estimating the effects of shocks in massive transportation systems.
Silva, Ricardo; Kang, Soong Moon; Airoldi, Edoardo M
2015-05-01
Public transportation systems are an essential component of major cities. The widespread use of smart cards for automated fare collection in these systems offers a unique opportunity to understand passenger behavior at a massive scale. In this study, we use network-wide data obtained from smart cards in the London transport system to predict future traffic volumes, and to estimate the effects of disruptions due to unplanned closures of stations or lines. Disruptions, or shocks, force passengers to make different decisions concerning which stations to enter or exit. We describe how these changes in passenger behavior lead to possible overcrowding and model how stations will be affected by given disruptions. This information can then be used to mitigate the effects of these shocks because transport authorities may prepare in advance alternative solutions such as additional buses near the most affected stations. We describe statistical methods that leverage the large amount of smart-card data collected under the natural state of the system, where no shocks take place, as variables that are indicative of behavior under disruptions. We find that features extracted from the natural regime data can be successfully exploited to describe different disruption regimes, and that our framework can be used as a general tool for any similar complex transportation system.
1995-09-01
The Solid Waste Retrieval Facility--Phase 1 (Project W113) will provide the infrastructure and the facility required to retrieve from Trench 04, Burial ground 4C, contact handled (CH) drums and boxes at a rate that supports all retrieved TRU waste batching, treatment, storage, and disposal plans. This includes (1) operations related equipment and facilities, viz., a weather enclosure for the trench, retrieval equipment, weighing, venting, obtaining gas samples, overpacking, NDE, NDA, shipment of waste and (2) operations support related facilities, viz., a general office building, a retrieval staff change facility, and infrastructure upgrades such as supply and routing of water, sewer, electrical power, fire protection, roads, and telecommunication. Title I design for the operations related equipment and facilities was performed by Raytheon/BNFL, and that for the operations support related facilities including infrastructure upgrade was performed by KEH. These two scopes were combined into an integrated W113 Title II scope that was performed by Raytheon/BNFL. This volume represents the total estimated costs for the W113 facility. Operating Contractor Management costs have been incorporated as received from WHC. The W113 Facility TEC is $19.7 million. This includes an overall project contingency of 14.4% and escalation of 17.4%. A January 2001 construction contract procurement start date is assumed.
Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions
NASA Astrophysics Data System (ADS)
White, Randall; McCausland, Wendy
2016-01-01
We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity including: swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and large non-double couple component to focal mechanisms. Most importantly we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity from: Log10 V = 0.77 Log ΣMoment - 5.32, with volume, V, in cubic meters and seismic moment in Newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
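The volume-moment scaling quoted above can be written directly as a function (V in cubic meters, cumulative seismic moment in newton-meters):

```python
# The paper's empirical scaling: Log10 V = 0.77 * Log10(sum of moments) - 5.32.
import math

def intruded_volume_m3(cumulative_moment_nm):
    """Estimated intruded magma volume from cumulative VT seismic moment."""
    return 10 ** (0.77 * math.log10(cumulative_moment_nm) - 5.32)

# Example: a VT swarm totalling 1e15 N*m of cumulative seismic moment.
v = intruded_volume_m3(1e15)    # 10**(0.77*15 - 5.32) = 10**6.23 m^3
```

Because the cumulative moment is dominated by the few largest events, a rough sum over those events is enough to drive this estimate.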
Voxel-Based Approach for Estimating Urban Tree Volume from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Vonderach, C.; Voegtle, T.; Adler, P.
2012-07-01
The importance of single trees and the determination of related parameters has been recognized in recent years, e.g. for forest inventories or management. For urban areas an increasing interest in the data acquisition of trees can be observed concerning aspects like urban climate, CO2 balance, and environmental protection. Urban trees differ significantly from natural systems with regard to the site conditions (e.g. technogenic soils, contaminants, lower groundwater level, regular disturbance), climate (increased temperature, reduced humidity) and species composition and arrangement (habitus and health status) and therefore allometric relations cannot be transferred from natural sites to urban areas. To overcome this problem an extended approach was developed for a fast and non-destructive extraction of branch volume, DBH (diameter at breast height) and height of single trees from point clouds of terrestrial laser scanning (TLS). For data acquisition, the trees were scanned with highest scan resolution from several (up to five) positions located around the tree. The resulting point clouds (20 to 60 million points) are analysed with an algorithm based on voxel (volume elements) structure, leading to an appropriate data reduction. In a first step, two kinds of noise reduction are carried out: the elimination of isolated voxels as well as voxels with marginal point density. To obtain correct volume estimates, the voxels inside the stem and branches (interior voxels) where voxels contain no laser points must be regarded. For this filling process, an easy and robust approach was developed based on a layer-wise (horizontal layers of the voxel structure) intersection of four orthogonal viewing directions. However, this procedure also generates several erroneous "phantom" voxels, which have to be eliminated. For this purpose the previous approach was extended by a special region growing algorithm. In a final step the volume is determined layer-wise based on the extracted
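The voxel bookkeeping at the heart of the approach can be sketched compactly: bin points into cubic voxels, drop sparse "noise" voxels, and sum the volume of the voxels that remain. The interior-filling and phantom-voxel-removal steps from the paper are omitted here, and the point coordinates are invented:

```python
# Minimal sketch of voxel-based volume estimation from a TLS point cloud.
# Interior filling and "phantom" voxel removal (see text) are not shown.
from collections import Counter

def voxelize(points, size):
    """Map 3-D points to integer voxel indices for voxels of edge `size`."""
    return Counter((int(x // size), int(y // size), int(z // size))
                   for x, y, z in points)

def voxel_volume(points, size=0.1, min_points=2):
    """Total volume (m^3) of voxels holding at least `min_points` points,
    i.e. after dropping isolated/marginal-density voxels."""
    occupied = [v for v, n in voxelize(points, size).items() if n >= min_points]
    return len(occupied) * size ** 3

pts = [(0.01, 0.02, 0.03), (0.05, 0.05, 0.05), (0.95, 0.95, 0.95)]
vol = voxel_volume(pts, size=0.1, min_points=2)   # only the first voxel survives
```

In the paper the same layer-wise voxel structure also supports DBH and height extraction; here only the volume sum is shown.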
NASA Technical Reports Server (NTRS)
McCurry, J. B.
1995-01-01
The purpose of the TA-2 contract was to provide advanced launch vehicle concept definition and analysis to assist NASA in the identification of future launch vehicle requirements. Contracted analysis activities included vehicle sizing and performance analysis, subsystem concept definition, propulsion subsystem definition (foreign and domestic), ground operations and facilities analysis, and life cycle cost estimation. The basic period of performance of the TA-2 contract was from May 1992 through May 1993. No-cost extensions were exercised on the contract from June 1993 through July 1995. This document is part of the final report for the TA-2 contract. The final report consists of three volumes: Volume 1 is the Executive Summary, Volume 2 is Technical Results, and Volume 3 is Program Cost Estimates. The document-at-hand, Volume 3, provides a work breakdown structure dictionary, user's guide for the parametric life cycle cost estimation tool, and final report developed by ECON, Inc., under subcontract to Lockheed Martin on TA-2 for the analysis of heavy lift launch vehicle concepts.
NASA Astrophysics Data System (ADS)
Schmidt, L. S.; Karlsson, N. B.; Hvidberg, C. S.
2016-09-01
High-resolution images of the martian surface have revealed numerous deposits with complex patterns consistent with the flow of ice. Here we applied ice-flow models and inverse methods to estimate the ice thickness and volume of these deposits.
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
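The LKB pieces named above can be sketched as follows: a generalized EUD with volume parameter n, and NTCP as a normal CDF in (gEUD - TD50)/(m*TD50). The linear-quadratic fractionation correction is omitted, and the DVH and parameter values are illustrative, not the study's fitted values:

```python
# Sketch of the LKB NTCP calculation. Parameter and DVH values are
# illustrative only; they are not the fitted values from the study.
import math

def geud(doses, volumes, n):
    """Generalized EUD; doses in Gy, volumes as fractions summing to 1.
    Small n (a = 1/n large) models serial, maximum-dose-driven organs."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

def ntcp_lkb(eud, td50, m):
    """LKB NTCP: standard normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Toy cord DVH: 5% of the structure at 20 Gy, 95% at 2 Gy.
eud = geud([20.0, 2.0], [0.05, 0.95], n=0.05)
p = ntcp_lkb(eud, td50=66.5, m=0.175)
```

Raising n toward 1 (more parallel behavior) pulls the gEUD toward the mean dose, which is the direction of the adjustment the study found necessary.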
Bogner, Simon; Rüde, Ulrich; Harting, Jens
2016-04-01
The free surface lattice Boltzmann method (FSLBM) is a combination of the hydrodynamic lattice Boltzmann method with a volume-of-fluid (VOF) interface capturing technique for the simulation of incompressible free surface flows. Capillary effects are modeled by extracting the curvature of the interface from the VOF indicator function and imposing a pressure jump at the free boundary. However, obtaining accurate curvature estimates from a VOF description can introduce significant errors. This article reports numerical results for three different surface tension models in standard test cases and compares the corresponding errors in the velocity field (spurious currents). Furthermore, the FSLBM is shown to be well suited to simulating wetting effects at solid boundaries. To this end, a new method is developed to represent wetting boundary conditions in a least-squares curvature reconstruction technique. The main limitations of the current FSLBM are analyzed and are found to be caused by its simplified advection scheme; possible improvements are suggested. PMID:27176423
Kidoh, Masafumi; Utsunomiya, Daisuke; Oda, Seitaro; Funama, Yoshinori; Yuki, Hideaki; Nakaura, Takeshi; Kai, Noriyuki; Nozaki, Takeshi; Yamashita, Yasuyuki
2015-12-01
Size-specific dose estimate (SSDE) takes into account the patient size but remains to be fully validated for adult coronary computed tomography angiography (CCTA). We investigated the appropriateness of SSDE for accurate estimation of patient dose by comparing the SSDE and the volume CT dose index (CTDIvol) in adult CCTA. This prospective study received institutional review board approval, and informed consent was obtained from each patient. We enrolled 37 adults who underwent CCTA with a 320-row CT. High-sensitivity metal oxide semiconductor field effect transistor dosimeters were placed on the anterior chest. CTDIvol reported by the scanner based on a 32-cm phantom was recorded. We measured chest diameter to convert CTDIvol to SSDE. Using linear regression, we then correlated SSDE with the mean measured skin dose. We also performed linear regression analyses between the skin dose/CTDIvol and the body mass index (BMI), and the skin dose/SSDE and BMI. There was a strong linear correlation (r = 0.93, P < 0.001) between SSDE (mean 37 ± 22 mGy) and mean skin dose (mean 17.7 ± 10 mGy). There was a moderate negative correlation between the skin dose/CTDIvol and BMI (r = 0.45, P < 0.01). The skin dose/SSDE was not affected by BMI (r = 0.06, P > 0.76). SSDE yields a more accurate estimation of the radiation dose without estimation errors attributable to the body size of adult patients undergoing CCTA. PMID:26440660
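The conversion from CTDIvol to SSDE follows the AAPM Report 204 pattern: a conversion factor that decays exponentially with the patient's effective diameter multiplies the phantom-based CTDIvol. The sketch below uses the report's published 32-cm-phantom fit constants for illustration; the study's own dosimeter measurements are not reproduced:

```python
# Sketch of the SSDE conversion (AAPM Report 204 style) for a 32-cm
# reference phantom. Fit constants are the published 32-cm-phantom values.
import math

def effective_diameter(ap_cm, lat_cm):
    """Geometric mean of anteroposterior and lateral diameters, in cm."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdivol_mgy, eff_diam_cm):
    """SSDE = f * CTDIvol, where f decays with effective diameter."""
    f = 3.704369 * math.exp(-0.03671937 * eff_diam_cm)
    return f * ctdivol_mgy

# Example: CTDIvol of 10 mGy for a patient with 22 x 28 cm chest diameters.
dose = ssde(10.0, effective_diameter(22.0, 28.0))
```

Because f is larger for smaller patients, SSDE removes the systematic body-size error that makes raw CTDIvol over- or underestimate the dose, which is exactly the BMI dependence the study reports for skin dose/CTDIvol.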
[Estimation of VOC emission from forests in China based on the volume of tree species].
Zhang, Gang-feng; Xie, Shao-dong
2009-10-15
Applying volume data for dominant tree species from statistics on the national forest resources, volatile organic compound (VOC) emissions of each main tree species in China were estimated based on the light-temperature model put forward by Guenther. China's VOC emission inventory for forests was established, and the space-time and age-class distributions of VOC emission were analyzed. The results show that the total VOC emissions from forests in China are 8565.76 Gg, of which isoprene accounts for 5689.38 Gg (66.42%), monoterpenes for 1343.95 Gg (15.69%), and other VOCs for 1532.43 Gg (17.89%). VOC emissions vary significantly by species. Quercus is the main emitting genus, contributing 45.22% of the total, followed by Picea and Pinus massoniana with 6.34% and 5.22%, respectively. Southwest and Northeast China are the major emission regions. Specifically, Yunnan, Sichuan, Heilongjiang, Jilin and Shaanxi are the top five provinces producing the most VOC emissions from forests, contributing 15.09%, 12.58%, 10.35%, 7.49% and 7.37%, respectively; together these five provinces account for more than half (52.88%) of the national emissions. VOC emissions also show remarkable seasonal variation: emissions in summer are the largest, accounting for 56.66% of the annual total. Forests of different ages contribute differently to emissions; half-mature forests play a key role, contributing 38.84% of the total emission from forests. PMID:19968092
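The Guenther-style light-temperature model referenced above corrects a standard-condition emission factor ε by a light term CL(PAR) and a temperature term CT(T). A sketch using the standard coefficient values from Guenther's formulation (the inventory's species-specific ε values and foliar-biomass scaling are not reproduced here):

```python
# Sketch of the Guenther light-temperature correction for isoprene:
# E = epsilon * CL * CT, with standard coefficients (alpha, cL1, CT1, CT2,
# Ts = 303 K, Tm = 314 K). Species emission factors epsilon are not shown.
import math

R = 8.314   # gas constant, J mol^-1 K^-1

def cl(par, alpha=0.0027, cl1=1.066):
    """Light correction; par is PAR flux in umol m^-2 s^-1."""
    return alpha * cl1 * par / math.sqrt(1.0 + alpha ** 2 * par ** 2)

def ct(t, ts=303.0, tm=314.0, ct1=95000.0, ct2=230000.0):
    """Temperature correction; t in kelvin."""
    num = math.exp(ct1 * (t - ts) / (R * ts * t))
    den = 1.0 + math.exp(ct2 * (t - tm) / (R * ts * t))
    return num / den

def isoprene_emission(eps, par, t):
    """Emission at ambient (par, t) from standard-condition factor eps."""
    return eps * cl(par) * ct(t)
```

Summing such per-species, per-condition terms over biomass and time is what builds the seasonal and provincial totals reported in the abstract.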
Richter, Wolfgang F; Grimm, Hans Peter; Theil, Frank-Peter
2011-10-01
The volume of distribution at steady state (Vss) of therapeutic proteins is usually assessed by non-compartmental or compartmental pharmacokinetic (PK) analysis wherein errors may arise due to the elimination of therapeutic proteins from peripheral tissues that are not in rapid equilibrium with the sampling compartment (usually blood). Here we explored another potential source of error in the estimation of Vss that is linked to the heterogeneity of therapeutic proteins which may consist of components (e.g. glycosylation variants) with different elimination rates. PK simulations were performed with such hypothetical binary protein mixtures where elimination was assumed to be exclusively from the central compartment. The simulations demonstrated that binary mixtures containing a rapid-elimination component can give rise to pronounced bi-phasic concentration-time profiles. Apparent Vss observed with both non-compartmental and 2-compartmental PK analysis, increased with increasing fraction as well as with increasing elimination rate k10 of the rapid-elimination component. Simulation results were complemented by PK analysis of an in vivo study in cynomolgus monkeys with different lots of lenercept, a tumor necrosis factor receptor-immunoglobulin G1 fusion protein, with different heterogeneities. The comparative Vss data for the three lenercept lots with different amounts of rapidly cleared components were consistent with the outcome of our simulations. Both lots with a higher fraction of rapidly cleared components had a statistically significant higher Vss as compared to the reference lot. Overall our study demonstrates that Vss of a therapeutic protein may be overestimated in proteins with differently eliminated components.
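For a binary mixture of two one-compartment components that share the same true volume but have different elimination rates, the apparent non-compartmental Vss = Dose * AUMC / AUC^2 can be written in closed form, which makes the overestimation easy to see. The parameter values below are arbitrary illustrations, not the lenercept data:

```python
# Sketch of the Vss-overestimation effect for a binary protein mixture.
# Each component is mono-exponential with concentration f*C0*exp(-k*t);
# AUC = C0*(f/k1 + (1-f)/k2) and AUMC = C0*(f/k1^2 + (1-f)/k2^2) follow
# analytically, so no numerical integration is needed.

def apparent_vss(dose, v_true, frac_fast, k_fast, k_slow):
    """Non-compartmental Vss = dose * AUMC / AUC^2 for the mixture."""
    c0 = dose / v_true                # both components share the true volume
    f = frac_fast
    auc = c0 * (f / k_fast + (1 - f) / k_slow)
    aumc = c0 * (f / k_fast ** 2 + (1 - f) / k_slow ** 2)
    return dose * aumc / auc ** 2

# 20% rapidly eliminated fraction (k = 1 /h) vs. slow bulk (k = 0.05 /h).
v = apparent_vss(dose=1.0, v_true=3.0, frac_fast=0.2, k_fast=1.0, k_slow=0.05)
# v is about 3.66, above the true 3.0 L
```

With frac_fast = 0 the formula collapses to the true volume, and the apparent Vss grows with both the fraction and the rate of the fast component, mirroring the simulation trends in the abstract.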
Lear, J.L.; Feyerabend, A.; Gregory, C.
1989-08-01
Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques.
NASA Astrophysics Data System (ADS)
Olivan Bescos, Javier; Slob, Marian; Sluzewski, Menno; van Rooij, Willem J.; Slump, Cornelis H.
2003-05-01
A cerebral aneurysm is a persistent localized dilatation of the wall of a cerebral vessel. One of the techniques applied to treat cerebral aneurysms is Guglielmi detachable coil (GDC) embolization. The goal of this technique is to embolize the aneurysm with a mesh of platinum coils to reduce the risk of aneurysm rupture. However, due to the blood pressure it is possible that the platinum wire is deformed; in this case, re-embolization of the aneurysm is necessary. The aim of this project is to develop a computer program to estimate the volume of cerebral aneurysms from archived laser hard copies of biplane digital subtraction angiography (DSA) images. Our goal is to determine the influence of the packing percentage, i.e., the ratio between the volume of the aneurysm and the volume of the coil mesh, on the stability of the coil mesh over time. The method we apply to estimate the volume of the cerebral aneurysms is based on the generation of a 3-D geometrical model of the aneurysm from two biplane DSA images. This 3-D model can be seen as a stack of 2-D ellipses, and the volume of the aneurysm is obtained by numerical integration of this stack. The program was validated using balloons filled with contrast agent. The availability of 3-D data for some of the aneurysms enabled us to compare the results of this method with techniques based on 3-D data.
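The stack-of-ellipses integration can be sketched in a few lines. The sphere used for the sanity check below is synthetic test data (not the study's balloon validation), and the function name is illustrative:

```python
import math

def aneurysm_volume(semi_axes_mm, dz_mm):
    """Volume (mm^3) of a stack of elliptical cross-sections, as a simple
    numerical integration: sum of pi * a * b * dz over slices. The semi-axes
    a, b of each slice come from the two orthogonal biplane projections."""
    return sum(math.pi * a * b * dz_mm for a, b in semi_axes_mm)

# Sanity check against a sphere of radius 5 mm sampled in thin slices:
r, n = 5.0, 1000
dz = 2 * r / n
slices = []
for i in range(n):
    z = -r + (i + 0.5) * dz
    s = math.sqrt(max(r**2 - z**2, 0.0))  # circular slice: a = b = s
    slices.append((s, s))
vol = aneurysm_volume(slices, dz)
print(vol, 4 / 3 * math.pi * r**3)  # midpoint sum approaches 523.6 mm^3
```

The packing percentage then follows as coil-mesh volume divided by this estimated aneurysm volume.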
Hulse, R.A.
1991-08-01
Planning for storage or disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) requires characterization of that waste to estimate volumes, radionuclide activities, and waste forms. Data from existing literature, disposal records, and original research were used to estimate the characteristics and project volumes and radionuclide activities to the year 2035. GTCC LLW is categorized as: nuclear utilities waste, sealed sources waste, DOE-held potential GTCC LLW; and, other generator waste. It has been determined that the largest volume of those wastes, approximately 57%, is generated by nuclear power plants. The Other Generator waste category contributes approximately 10% of the total GTCC LLW volume projected to the year 2035. Waste held by the Department of Energy, which is potential GTCC LLW, accounts for nearly 33% of all waste projected to the year 2035; however, no disposal determination has been made for that waste. Sealed sources are less than 0.2% of the total projected volume of GTCC LLW.
NASA Astrophysics Data System (ADS)
Cook, Geoffrey W.; Wolff, John A.; Self, Stephen
2016-02-01
The 1.60 Ma caldera-forming eruption of the Otowi Member of the Bandelier Tuff produced Plinian and coignimbrite fall deposits, outflow and intracaldera ignimbrite, all of it deposited on land. We present a detailed approach to estimating and reconstructing the original volume of the eroded, partly buried large ignimbrite and distal ash-fall deposits. Dense rock equivalent (DRE) volume estimates for the eruption are 89 + 33/-10 km3 of outflow ignimbrite and 144 ± 72 km3 of intracaldera ignimbrite. Also, there was at least 65 km3 (DRE) of Plinian fall when extrapolated distally, and 107 + 40/-12 km3 of coignimbrite ash was "lost" from the outflow sheet to form an unknown proportion of the distal ash fall. The minimum total volume is 216 km3 and the maximum is 550 km3; hence, the eruption overlaps the low end of the super-eruption spectrum (VEI ˜8.0). Despite an abundance of geological data for the Otowi Member, the errors attached to these estimates do not allow us to constrain the proportions of intracaldera (IC), outflow (O), and distal ash (A) to better than a factor of three. We advocate caution in applying the IC/O/A = 1:1:1 relation of Mason et al. (2004) to scaling up mapped volumes of imperfectly preserved caldera-forming ignimbrites.
2012-01-01
Background Although the measurement site at L4–L5 for visceral adipose tissue (VAT) has been commonly accepted, some researchers suggest that additional upper sites (i.e., L1–L2 and L2–L3) are useful for estimating VAT volume. Therefore, determining the optimum measurement site remains challenging and has become important in determining VAT volume. We investigated the influence of a single-slice measurement site on the prediction of VAT volume and changes in VAT volume in obese Japanese men. Methods Twenty-four men, aged 30–65 years with a mean BMI of 30 kg/m2, were included in a 12-week weight loss program. We obtained continuous T1-weighted abdominal magnetic resonance images from T9 to S1 with a 1.5-T system to measure the VAT area. These VAT areas were then summed to determine VAT volume before and after the program. Results Single-slice images at 3–11 cm above L4–L5 had significant and high correlations with VAT volume at baseline (r = 0.94–0.97). The single-slice image with the highest correlation coefficient with respect to VAT volume was located 5 cm above L4–L5 (r = 0.97). The highest correlation coefficient between the individual changes in VAT area and changes in VAT volume was located 6 cm above L4–L5 (r = 0.90). Conclusions Individual measurement sites differ in their ability to estimate VAT volume and changes in VAT volume in obese Japanese men. The best zone, located 5–6 cm above L4–L5, may be a better predictor of VAT volume than the L4–L5 image in terms of both baseline values and changes with weight loss. PMID:22698384
NASA Astrophysics Data System (ADS)
Li, Qin; Gavrielides, Marios A.; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas
2015-03-01
This work aimed to compare two different types of volume estimation methods (a model-based and a segmentation-based method) in terms of identifying factors affecting measurement uncertainty. Twenty-nine synthetic nodules with varying size, radiodensity, and shape were placed in an anthropomorphic thoracic phantom and scanned with a 16-detector-row CT scanner. Ten repeat scans were acquired using three exposures and two slice collimations, and were reconstructed with varying slice thicknesses. Nodule volumes were estimated from the reconstructed data using a matched-filter and a segmentation approach. Log-transformed volumes were used to obtain measurement error, with truth obtained through micro-CT. ANOVA and multiple linear regression were applied to the measurement error to identify significant factors affecting volume estimation for each method. Root mean square of measurement errors (RMSE) for meaningful subgroups, repeatability coefficients (RC) for different imaging protocols, and reproducibility coefficients (RDC) for thin and thick collimation conditions were evaluated. Results showed that for both methods, nodule size, shape and slice thickness were significant factors. Collimation was significant for the matched-filter method. RMSEs for matched-filter measurements were in general smaller than for segmentation. To achieve RMSE on the order of 15% or less for {5, 8, 9, 10 mm} nodules, the corresponding maximum allowable slice thicknesses were {3, 5, 5, 5 mm} for the matched-filter and {0.8, 3, 3, 3 mm} for the segmentation method. RCs showed similar patterns for both methods, increasing with slice thickness. For 8-10 mm nodules, the measurements were highly repeatable provided the slice thickness was ≤3 mm, regardless of method and across varying acquisition conditions. RDCs were lower for thin collimation than thick collimation protocols. While the RDC of matched-filter volume estimation results was always lower than segmentation results, for 8-10 mm nodules with thin
Not Available
1994-09-01
The Department of Energy's (DOE's) planning for the disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) requires characterization of the waste. This report estimates volumes, radionuclide activities, and waste forms of GTCC LLW to the year 2035. It groups the waste into four categories, representative of the type of generator or holder of the waste: Nuclear Utilities, Sealed Sources, DOE-Held, and Other Generator. GTCC LLW includes activated metals (activation hardware from reactor operation and decommissioning), process wastes (i.e., resins, filters, etc.), sealed sources, and other wastes routinely generated by users of radioactive material. Estimates reflect the possible effect that packaging and concentration averaging may have on the total volume of GTCC LLW. Possible GTCC mixed LLW is also addressed. Nuclear utilities will probably generate the largest future volume of GTCC LLW with 65--83% of the total volume. The other generators will generate 17--23% of the waste volume, while GTCC sealed sources are expected to contribute 1--12%. A legal review of DOE's obligations indicates that the current DOE-Held wastes described in this report will not require management as GTCC LLW because of the contractual circumstances under which they were accepted for storage. This report concludes that the volume of GTCC LLW should not pose a significant management problem from a scientific or technical standpoint. The projected volume is small enough to indicate that a dedicated GTCC LLW disposal facility may not be justified. Instead, co-disposal with other waste types is being considered as an option.
Friedrich, Jan O; Beyene, Joseph; Adhikari, Neill KJ
2009-01-01
statistically significant. Conclusion We have shown that alternative reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit and the availability of alternative agents, the use of rosiglitazone may greatly decline prior to more definitive safety data being generated. PMID:19134216
NASA Technical Reports Server (NTRS)
1990-01-01
Cost estimates for phase C/D of the laser atmospheric wind sounder (LAWS) program are presented. This information provides a framework for cost, budget, and program planning estimates for LAWS. Volume 3 is divided into three sections. Section 1 details the approach taken to produce the cost figures, including the assumptions regarding the schedule for phase C/D and the methodology and rationale for costing the various work breakdown structure (WBS) elements. Section 2 shows a breakdown of the cost by WBS element, with the cost divided into non-recurring and recurring expenditures. Note that throughout this volume the cost is given in 1990 dollars, with bottom-line totals also expressed in 1988 dollars ($1 in 1988 = $0.931 in 1990). Section 3 shows a breakdown of the cost by year. The WBS and WBS dictionary are included as an attachment to this report.
Blanc, Rémi; Baylou, Pierre; Germain, Christian; Da Costa, Jean-Pierre
2010-06-01
We propose an image-based framework to evaluate the uncertainty in the estimation of the volume fraction of specific microstructures based on the observation of a single section. These microstructures consist of cubes organized on a cubic mesh, such as monocrystalline nickel base superalloys. The framework is twofold: a model-based stereological analysis allows relating two-dimensional image observations to three-dimensional microstructure features, and a spatial statistical analysis allows computing approximate confidence bounds while assessing the representativeness of the image. The reliability of the method is assessed on synthetic models. Volume fraction estimation variances and approximate confidence intervals are computed on real superalloy images in the context of material characterization. PMID:20350338
Båth, Magnus; Svalkvist, Angelica; Söderman, Christina
2014-10-15
Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.
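A minimal sketch of how such a field-size correction might be applied; the function name, per-projection DAP value, and projection count below are illustrative assumptions, not the paper's actual computation:

```python
FIELD_SIZE_CORRECTION = 0.935  # correction factor determined in the paper

def estimate_tomo_dap(scout_derived_dap_per_projection, n_projections):
    """Estimated total DAP for a tomosynthesis sweep, assuming the scout
    DICOM data determine the per-projection exposure and the 0.935 factor
    absorbs the field-size change with projection angle (illustrative)."""
    return scout_derived_dap_per_projection * n_projections * FIELD_SIZE_CORRECTION

# Hypothetical numbers: 0.02 Gy*cm^2 per projection over 60 projections.
print(estimate_tomo_dap(0.02, 60))  # 1.122 Gy*cm^2
```

The study reports that estimates of this kind agreed with the registered DAP to within about 1%.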
Constantin, Julian Gelman; Schneider, Matthias; Corti, Horacio R
2016-06-01
The glass transition temperature of trehalose, sucrose, glucose, and fructose aqueous solutions has been predicted as a function of the water content by using the free volume/percolation model (FVPM). This model only requires the molar volume of water in the liquid and supercooled regimes, the molar volumes of the hypothetical pure liquid sugars at temperatures below their pure glass transition temperatures, and the molar volumes of the mixtures at the glass transition temperature. The model is simplified by assuming that the excess thermal expansion coefficient is negligible for saccharide-water mixtures, and this ideal FVPM becomes identical to the Gordon-Taylor model. It was found that the behavior of the water molar volume in trehalose-water mixtures at low temperatures can be obtained by assuming that the FVPM holds for this mixture. The temperature dependence of the water molar volume in the supercooled region of interest seems to be compatible with the recent hypothesis of the existence of two structures of liquid water, with high-density liquid water being the state of water in the sugar solutions. The idealized FVPM describes the measured glass transition temperature of sucrose, glucose, and fructose aqueous solutions with much better accuracy than both the Gordon-Taylor model, based on an empirical kGT constant dependent on the saccharide glass transition temperature, and the Couchman-Karasz model, using experimental heat capacity changes of the components at the glass transition temperature. Thus, FVPM seems to be an excellent tool to predict the glass transition temperature of other aqueous saccharide and polyol solutions by resorting to volumetric information that is easily available. PMID:27176640
IUS/TUG orbital operations and mission support study. Volume 5: Cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
The costing approach, methodology, and rationale utilized for generating cost data for composite IUS and space tug orbital operations are discussed. Summary cost estimates are given along with cost data initially derived for the IUS program and space tug program individually, and cost estimates for each work breakdown structure element.
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
NASA Technical Reports Server (NTRS)
Chin, M. M.; Goad, C. C.; Martin, T. V.
1972-01-01
A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.
A New, Effective and Low-Cost Three-Dimensional Approach for the Estimation of Upper-Limb Volume
Buffa, Roberto; Mereu, Elena; Lussu, Paolo; Succa, Valeria; Pisanu, Tonino; Buffa, Franco; Marini, Elisabetta
2015-01-01
The aim of this research was to validate a new procedure (SkanLab) for the three-dimensional estimation of total arm volume. SkanLab is based on a single structured-light Kinect sensor (Microsoft, Redmond, WA, USA) and on Skanect (Occipital, San Francisco, CA, USA) and MeshLab (Visual Computing Lab, Pisa, Italy) software. The volume of twelve plastic cylinders was measured using geometry, as the reference, water displacement and SkanLab techniques (two raters and repetitions). The right total arm volume of thirty adults was measured by water displacement (reference) and SkanLab (two raters and repetitions). The bias and limits of agreement (LOA) between techniques were determined using the Bland–Altman method. Intra- and inter-rater reliability was assessed using the intraclass correlation coefficient (ICC) and the standard error of measurement. The bias of SkanLab in measuring the cylinders' volume was −21.9 mL (−5.7%) (LOA: −62.0 to 18.2 mL; −18.1% to 6.7%) and in measuring the arms' volume was −9.9 mL (−0.6%) (LOA: −49.6 to 29.8 mL; −2.6% to 1.4%). SkanLab's intra- and inter-rater reliabilities were very high (ICC >0.99). In conclusion, SkanLab is a fast, safe and low-cost method for assessing total arm volume, with high levels of accuracy and reliability. SkanLab represents a promising tool in clinical applications. PMID:26016917
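The Bland-Altman bias and limits of agreement used above can be computed as follows; the five volume pairs are made-up illustrative data, not the study's measurements:

```python
import statistics

def bland_altman(ref, test):
    """Bias and 95% limits of agreement between two methods (Bland-Altman).
    Returns (bias, loa_low, loa_high) with bias = mean(test - ref)."""
    diffs = [t - r for r, t in zip(ref, test)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical arm volumes (mL): reference water displacement vs scan-based.
water = [2510.0, 2650.0, 2400.0, 2805.0, 2580.0]
scan = [2498.0, 2640.0, 2395.0, 2790.0, 2575.0]
bias, lo, hi = bland_altman(water, scan)
print(bias, lo, hi)
```

A small negative bias with narrow limits, as in the study's arm data, indicates the scan-based method slightly underestimates volume but agrees closely with the reference.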
NASA Astrophysics Data System (ADS)
Torres, Judith; Colonia, Daniel; Haeberli, Wilfried; Giráldez, Claudia; Frey, Holger; Huggel, Christian
2014-05-01
The glaciers in the tropical Andes of Peru have been melting at an unprecedented rate in recent years, and generally since the end of the Little Ice Age, a cold period that lasted from the 16th to the 19th century. Knowledge of glacier thicknesses and volumes is necessary for evaluating possible future scenarios of glacier shrinkage and of water supply to the Andean populations under conditions of continued warming. Glacier volumes for 19 mountain ranges in Peru were calculated using two ice-thickness modeling methods: an area-related (volume-area scaling) approach with different parameterizations and a slope-dependent approach. Both methods allow rapid treatment of regional data obtained from satellite imagery and a Digital Elevation Model, integrated into a Geographic Information System. In addition, glacier outlines were obtained from the glacier inventory compiled by the Unit of Glaciology and Water Resources (UGRH) - National Water Authority (ANA), which used satellite imagery (ASTER, SPOT and LISS III from 2003 to 2010) and topographic information acquired from the cartography of the National Geographical Institute (IGN). The volume-area scaling approach yielded a total glacier volume of 35.00 km3, while the slope-dependent approach, with a mean modeled thickness of approximately 30 m, yielded a total volume of 34.39 km3. Estimated results also show a loss of ~42% of the total ice surface and ~38% of glacier volume in both methods, relative to the first Glacier Inventory of Peru (from aerial photographs, 1962-1970) performed by HIDRANDINA SA. The results also indicate that volume estimations are subject to large uncertainties. Field measurements of glacier thickness are scarce and locally restricted due to rugged topography, high altitude and heavy crevassing of glaciers. Possibilities of calibrating and validating the applied model approaches are therefore limited. New possibilities nevertheless come into play with slope-dependent approaches, which lead beyond area-related average
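A hedged sketch of the two approaches: the volume-area coefficients shown are commonly used defaults from the scaling literature, not this study's own parameterizations, and the ~1150 km2 area in the usage line is an assumed figure chosen only to illustrate the thickness-times-area arithmetic:

```python
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    """Volume-area scaling V = c * A**gamma. The default coefficients are
    widely used literature values, quoted here for illustration only."""
    return c * area_km2 ** gamma

def slope_volume_km3(area_km2, mean_thickness_m):
    """The slope-dependent approach reduces, once a mean thickness has been
    modeled (~30 m in the abstract), to V = area * mean thickness."""
    return area_km2 * mean_thickness_m / 1000.0

# An assumed total glacierized area of ~1150 km2 with ~30 m mean thickness
# gives a volume of the same order as the abstract's 34.39 km3 total.
print(slope_volume_km3(1150.0, 30.0))
```

In practice the scaling relation is applied glacier by glacier and summed, since the exponent gamma > 1 makes the result sensitive to how area is partitioned.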
A knowledge-based approach to arterial stiffness estimation using the digital volume pulse.
Jang, Dae-Geun; Farooq, Umar; Park, Seung-Hun; Goh, Choong-Won; Hahn, Minsoo
2012-08-01
We have developed a knowledge-based approach for arterial stiffness estimation. The proposed approach reliably estimates arterial stiffness based on the analysis of the age- and heart-rate-normalized reflected wave arrival time, and it reduces cost, space, technical expertise, specialized equipment, and complexity, while increasing usability, compared to recently researched noninvasive arterial stiffness estimators. The proposed method consists of two main stages: pulse feature extraction and linear regression analysis. The approach extracts the pulse features and establishes a linear prediction equation. Evaluated against pulse wave velocity (PWV)-based arterial stiffness estimators, the proposed method yielded error rates of 8.36% for men and 9.52% for women. With such low error rates and increased benefits, the proposed approach could be usefully applied as a low-cost and effective solution for ubiquitous and home healthcare environments.
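The two-stage method (feature extraction, then a linear prediction equation) can be sketched with an ordinary least-squares fit; all data and coefficients below are illustrative, not the paper's:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x. The paper's prediction
    equation is linear in the normalized reflected-wave arrival time;
    the coefficients it found are not reproduced here."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical normalized arrival times vs reference PWV-based stiffness:
t_norm = [0.18, 0.21, 0.25, 0.30, 0.33]
stiff = [9.8, 9.1, 8.2, 7.1, 6.5]
a, b = fit_linear(t_norm, stiff)
pred = a + b * 0.27  # predicted stiffness for a new subject
print(a, b, pred)
```

The negative slope reflects the physiological expectation that stiffer arteries return the reflected wave earlier.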
NASA Astrophysics Data System (ADS)
Levy, J. S.; Head, J. W.; Fassett, C. I.; Fountain, A. G.
2010-03-01
The morphological properties of two martian depressions suggest ice-cauldron formation. We conduct volumetric and calorimetric estimates showing that up to a cubic km of ice may have been removed in these depressions (melted and/or vaporized).
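A back-of-the-envelope version of such a calorimetric estimate, using standard ice properties and an assumed initial temperature (the abstract does not give these values):

```python
# Energy needed to warm and melt a given volume of ice. Constants are
# standard textbook values; the -20 C starting temperature is an assumption.
RHO_ICE = 917.0      # kg/m3, density of ice
C_ICE = 2100.0       # J/(kg K), specific heat of ice
L_FUSION = 3.34e5    # J/kg, latent heat of fusion

def melt_energy_joules(volume_m3, delta_t_k=20.0):
    """Sensible heat to warm the ice by delta_t_k plus latent heat to melt it."""
    mass = RHO_ICE * volume_m3
    return mass * (C_ICE * delta_t_k + L_FUSION)

# Melting one cubic kilometer (1e9 m3) of ice:
print(f"{melt_energy_joules(1e9):.2e}")
```

The result, a few times 10^17 J, is the scale of heat input a subglacial volcanic source would need to supply to form a cauldron of the size inferred.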
A method for estimating both the solubility parameters and molar volumes of liquids
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1974-01-01
An indirect method of estimating the solubility parameter of high-molecular-weight polymers is developed. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
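Group-additivity estimation in this style can be sketched as below. The two group constants are commonly tabulated Fedors-type values quoted for illustration; a real application would draw on the full published table:

```python
import math

# (delta_e [J/mol], delta_v [cm3/mol]) per functional group; illustrative
# entries in the Fedors style, covering only simple alkyl groups.
GROUPS = {"CH3": (4710.0, 33.5), "CH2": (4940.0, 16.1)}

def solubility_parameter(group_counts):
    """delta = sqrt(sum(n_i * delta_e_i) / sum(n_i * delta_v_i)) in MPa**0.5.
    The denominator sum is itself the estimated molar volume (cm3/mol)."""
    e = sum(n * GROUPS[g][0] for g, n in group_counts.items())
    v = sum(n * GROUPS[g][1] for g, n in group_counts.items())
    return math.sqrt(e / v), v

delta, molar_vol = solubility_parameter({"CH3": 2, "CH2": 4})  # n-hexane
print(delta, molar_vol)
```

For n-hexane this yields a solubility parameter near 14.9 MPa^0.5 and a molar volume near 131 cm3/mol, both close to the experimental values, which is the appeal of the group-additivity route: one calculation delivers both quantities from the structural formula alone.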
NASA Astrophysics Data System (ADS)
Lund, Patricia E.; Naessens, Lauren C.; Seaman, Catherine A.; Reyes, Denise A.; Ritman, Erik L.
2000-04-01
Average myocardial perfusion is remarkably consistent throughout the heart wall under resting conditions and the velocity of blood flow is fairly reproducible from artery to artery. Based on these observations, and the fact that flow through an artery is the product of arterial cross-sectional area and blood flow velocity, we would expect the volume of myocardium perfused to be proportional to the cross-sectional area of the coronary artery perfusing that volume of myocardium. This relationship has been confirmed by others in pigs, dogs and humans. To test the body size-dependence of this relationship we used the hearts from rats, 3 through 25 weeks of age. The coronary arteries were infused with radiopaque microfil polymer and the hearts scanned in a micro-CT scanner. Using these 3D images we measured the volume of myocardium and the arterial cross-sectional area of the artery that perfused that volume of myocardium. The average constant of proportionality was found to be 0.15 +/- 0.08 cm3/mm2. Our data showed no statistically different estimates of the constant of proportionality in the rat hearts of different ages nor between the left and right coronary arteries. This constant is smaller than that observed in large animals and humans, but this difference is consistent with the body mass-dependence on metabolic rate.
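The reported proportionality can be applied directly; the 0.5 mm2 lumen area in the example is an arbitrary illustrative value, and the function name is not from the study:

```python
K_RAT = 0.15  # cm3 of myocardium per mm2 of arterial lumen (rat value above)

def perfused_myocardium_cm3(lumen_area_mm2, k=K_RAT):
    """Myocardial volume perfused by an artery of the given cross-sectional
    area, using the linear proportionality reported in the abstract."""
    return k * lumen_area_mm2

print(perfused_myocardium_cm3(0.5))  # 0.5 mm2 lumen -> 0.075 cm3
```

The same linear form holds in larger species, only with a larger constant, consistent with the metabolic-rate scaling the authors note.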
NASA Astrophysics Data System (ADS)
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
Cell volume changes are often associated with important physiological and pathological processes in the cell. These changes may be the means by which the cell interacts with its surrounding. Astroglial cells change their volume and shape under several circumstances that affect the central nervous system. Following an incidence of brain damage, such as a stroke or a traumatic brain injury, one of the first events seen is swelling of the astroglial cells. In order to study this and other similar phenomena, it is desirable to develop technical instrumentation and analysis methods capable of detecting and characterizing dynamic cell shape changes in a quantitative and robust way. We have developed a technique to monitor and to quantify the spatial and temporal volume changes in a single cell in primary culture. The technique is based on two- and three-dimensional fluorescence imaging. The temporal information is obtained from a sequence of microscope images, which are analyzed in real time. The spatial data is collected in a sequence of images from the microscope, which is automatically focused up and down through the specimen. The analysis of spatial data is performed off-line and consists of photobleaching compensation, focus restoration, filtering, segmentation and spatial volume estimation.
Representative volume element to estimate buckling behavior of graphene/polymer nanocomposite
2012-01-01
The aim of the research article is to develop a representative volume element using finite elements to study the buckling stability of graphene/polymer nanocomposites. Research work exploring the full potential of graphene as filler for nanocomposites is limited in part due to the complex processes associated with the mixing of graphene in polymer. To overcome some of these issues, a multiscale modeling technique has been proposed in this numerical work. Graphene was herein modeled in the atomistic scale, whereas the polymer deformation was analyzed as a continuum. Separate representative volume element models were developed for investigating buckling in neat polymer and graphene/polymer nanocomposites. Significant improvements in buckling strength were observed under applied compressive loading when compared with the buckling stability of neat polymer. PMID:22994951
Estimation of spilled hydrocarbon volume--the state-of-the-art.
Saleemi, M; Al-Suwaiyan, M S; Aiban, S A; Ishaq, A M; Al-Malacks, M H; Hussain, M
2004-09-01
With increasing environmental awareness and recognition of the need for environmental protection, the study of soil and groundwater contamination and its remediation has become the focus of numerous researchers. Intentional and unintentional release of hydrocarbon into soil and the subsurface poses a great threat to the biosphere. Quantification of the spilled volume is of primary importance for carrying out remediation work and is considered a first step in the remediation hierarchy. Different investigators have approached the problem from many viewpoints, and the resulting achievements are so extensive and scattered that it seems essential to inventory the completed works. This paper presents a systematic study of the available experimental and theoretical works. A complete picture of the present status of the problem is also provided. Issues that remain unresolved or obscure to current-day investigators are pointed out to facilitate future research directions and more comprehensive analyses of the quantification of spilled hydrocarbon volumes.
Xie, Wen-Jia; Wu, Xiao; Xue, Ren-Liang; Lin, Xiang-Ying; Kidd, Elizabeth A.; Yan, Shu-Mei; Zhang, Yao-Hong; Zhai, Tian-Tian; Lu, Jia-Yang; Wu, Li-Li; Zhang, Hao; Huang, Hai-Hua; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
Purpose: To more accurately define clinical target volume for cervical cancer radiation treatment planning by evaluating tumor microscopic extension toward the uterus body (METU) in International Federation of Gynecology and Obstetrics stage Ib-IIa squamous cell carcinoma of the cervix (SCCC). Patients and Methods: In this multicenter study, surgical resection specimens from 318 cases of stage Ib-IIa SCCC that underwent radical hysterectomy were included. Patients who had undergone preoperative chemotherapy, radiation, or both were excluded from this study. Microscopic extension of primary tumor toward the uterus body was measured. The association between other pathologic factors and METU was analyzed. Results: Microscopic extension toward the uterus body was not common, with only 12.3% of patients (39 of 318) demonstrating METU. The mean (±SD) distance of METU was 0.32 ± 1.079 mm (range, 0-10 mm). Lymphovascular space invasion was associated with METU distance and occurrence rate. A margin of 5 mm added to gross tumor would adequately cover 99.4% and 99% of the METU in the whole group and in patients with lymphovascular space invasion, respectively. Conclusion: According to our analysis of 318 SCCC specimens for METU, using a 5-mm gross tumor volume to clinical target volume margin in the direction of the uterus should be adequate for International Federation of Gynecology and Obstetrics stage Ib-IIa SCCC. Considering the discrepancy between imaging and pathologic methods in determining gross tumor volume extent, we recommend a safer 10-mm margin in the uterine direction as the standard for clinical practice when using MRI for contouring tumor volume.
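The margin-adequacy question in this study reduces to asking what fraction of measured microscopic-extension distances a candidate margin covers. The distances below are invented for illustration; the study's actual finding is that a 5-mm margin covers roughly 99% of cases.

```python
# Fraction of cases whose microscopic extension toward the uterus (METU)
# falls within a candidate CTV margin. The cohort below is hypothetical.
def covered_fraction(metu_mm, margin_mm):
    return sum(d <= margin_mm for d in metu_mm) / len(metu_mm)

# 28 of 32 hypothetical patients show no METU; four show 2-10 mm.
metu = [0.0] * 28 + [2.0, 4.5, 6.0, 10.0]
print(covered_fraction(metu, 5.0))  # -> 0.9375
```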
NASA Astrophysics Data System (ADS)
Rebello, N. Sanjay
2012-02-01
Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on that exam. Students received extra credit worth 1% of the exam points for estimating their own score within 2% of the actual score, and another 1% of extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.
Tug fleet and ground operations schedules and controls. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
Cost data for the tug DDT&E and operations phases are presented. Option 6 is the recommended option selected from seven options considered and was used as the basis for ground processing estimates. Option 6 provides for processing the tug in a factory-clean environment in the low bay area of the VAB with subsequent cleaning to visibly clean. The basis and results of the trade study to select the Option 6 processing plan are included. Cost estimating methodology, a work breakdown structure, and a dictionary of WBS definitions are also provided.
Glacier Volume Change Estimation Using Time Series of Improved Aster Dems
NASA Astrophysics Data System (ADS)
Girod, Luc; Nuth, Christopher; Kääb, Andreas
2016-06-01
Volume change data is critical to the understanding of glacier response to climate change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system embarked on the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available today, the potential of ASTER data lies in its long, consistent time series, which is unrivaled but not fully exploited for change analysis due to limitations in data accuracy and precision. Here, we developed an improved method for ASTER DEM generation and implemented it in the open-source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficients (RPC) model and on the detection and correction of cross-track sensor jitter. ASTER data are strongly affected by attitude jitter, mainly of approximately 4 km and 30 km wavelength, and improving the generation of ASTER DEMs requires removal of this effect. Our sensor modeling does not require ground control points and thus potentially allows for the automatic processing of large data volumes. As a proof of concept, we chose a set of glaciers with reference DEMs available to assess the quality of our measurements. We used time series of ASTER scenes from which we extracted DEMs with a ground sampling distance of 15 m. Our method directly measures and accounts for the cross-track component of jitter, so the resulting DEMs are not contaminated by this process. Since the along-track component of jitter has the same direction as the stereo parallaxes, the two cannot be separated, and the extracted elevations are thus contaminated by along-track jitter. Initial tests reveal no clear relation between the cross-track and along-track components so that the latter seems not to be
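The volume-change step that follows DEM generation is simple differencing: sum the elevation differences between two co-registered DEMs and multiply by the pixel area. The 15 m ground sampling distance matches ASTER; the two DEMs below are synthetic stand-ins.

```python
import numpy as np

# Glacier volume change by DEM differencing (synthetic example).
gsd = 15.0                          # m per pixel (ASTER ground sampling distance)
dem_t0 = np.full((4, 4), 1200.0)    # earlier glacier surface, elevations in m
dem_t1 = dem_t0 - 2.0               # later surface: 2 m of uniform thinning
dh = dem_t1 - dem_t0                # elevation change per pixel
dv = dh.sum() * gsd ** 2            # m^3; negative means volume loss
print(dv)  # -> -7200.0
```

In practice the differenced pixels would be masked to the glacier outline and corrected for co-registration biases before summing.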
White matter atlas of the human spinal cord with estimation of partial volume effect.
Lévy, S; Benhamou, M; Naaman, C; Rainville, P; Callot, V; Cohen-Adad, J
2015-10-01
Template-based analysis has proven to be an efficient, objective and reproducible way of extracting relevant information from multi-parametric MRI data. Using common atlases, it is possible to quantify MRI metrics within specific regions without the need for manual segmentation. This method is therefore free from user bias and amenable to group studies. While template-based analysis is a common procedure for the brain, there is currently no atlas of the white matter (WM) spinal pathways. The goals of this study were: (i) to create an atlas of the white matter tracts compatible with the MNI-Poly-AMU template and (ii) to propose methods to quantify metrics within the atlas that account for partial volume effect. The WM atlas was generated by: (i) digitizing an existing WM atlas from a well-known source (Gray's Anatomy), (ii) registering this atlas to the MNI-Poly-AMU template at the corresponding slice (C4 vertebral level), (iii) propagating the atlas throughout all slices of the template (C1 to T6) using regularized diffeomorphic transformations and (iv) computing partial volume values for each voxel and each tract. Several approaches were implemented and validated to quantify metrics within the atlas, including weighted-average and Gaussian mixture models. Proof-of-concept application was done in five subjects for quantifying magnetization transfer ratio (MTR) in each tract of the atlas. The resulting WM atlas showed consistent topological organization and smooth transitions along the rostro-caudal axis. The median MTR across tracts was 26.2. Significant differences were detected across tracts, vertebral levels and subjects, but not across laterality (right-left). Among the different tested approaches to extract metrics, the maximum a posteriori approach showed the highest performance with respect to noise, inter-tract variability, tract size and partial volume effect. This new WM atlas of the human spinal cord overcomes the biases associated with manual delineation and partial
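The weighted-average estimator mentioned in the abstract can be sketched directly: each voxel's metric value is weighted by that voxel's partial-volume fraction for the tract. The MTR values and fractions below are illustrative, not atlas data.

```python
import numpy as np

# Partial-volume-weighted average of a metric (e.g. MTR) within one tract.
mtr = np.array([25.0, 27.0, 30.0])   # metric value in three voxels
pv = np.array([1.0, 0.5, 0.1])       # tract partial-volume fraction per voxel
weighted_mtr = float(np.sum(mtr * pv) / np.sum(pv))
print(round(weighted_mtr, 4))  # -> 25.9375
```

Voxels only partially inside the tract thus contribute proportionally less, which is the point of modeling partial volume rather than using a binary mask.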
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. A user oriented description of the program input requirements, program output, deck setup, and operating instructions is presented.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
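The defining feature of a multiplicative error model is that the observed value is the true value scaled by a random factor, so the absolute error grows with the signal while the relative error stays constant. A minimal simulation, with an invented 1% noise level:

```python
import numpy as np

# Toy multiplicative error model: observed = true * (1 + e), e ~ N(0, 0.01^2).
# The 1% noise level and the elevation range are illustrative only.
rng = np.random.default_rng(42)
true_h = np.linspace(100.0, 1000.0, 50)               # true elevations (m)
obs = true_h * (1.0 + 0.01 * rng.standard_normal(50))

rel_err = (obs - true_h) / true_h   # spread roughly constant (~1%)
abs_err = obs - true_h              # spread grows with the true value
print(rel_err.std() < 0.02, abs_err.std() > rel_err.std())
```

Treating such data as if the errors were additive (constant absolute spread) mis-weights the large values, which is the pitfall the paper examines for LiDAR-based DEMs.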
VanTeeffelen, Jurgen W G E; Brands, Judith; Janssen, Ben J A; Vink, Hans
2013-05-01
The endothelial glycocalyx forms a hyaluronan-containing interface between the flowing blood and the endothelium throughout the body. By comparing the systemic distribution of a small glycocalyx-accessible tracer vs. a large circulating plasma tracer, the size-selective barrier properties of the glycocalyx have recently been utilized to estimate whole body glycocalyx volumes in humans and animals, but a comprehensive validation of this approach has so far been lacking. In the present study, we compared, in anesthetized, ventilated C57Bl/6 mice, the whole body distribution of small (40 kDa) dextrans (Texas Red labeled; Dex40) vs. that of intermediate (70 kDa) and large (500 kDa) dextrans (both FITC labeled; Dex70 and Dex500, respectively) using tracer dilution and vs. that of circulating plasma, as derived from the dilution of fluorescein-labeled red blood cells and large-vessel hematocrit. The contribution of the glycocalyx was evaluated by intravenous infusion of a bolus of the enzyme hyaluronidase. In saline-treated control mice, distribution volume (in ml) differed between tracers (P < 0.05; ANOVA) in the following order: Dex40 (0.97 ± 0.04) > Dex70 (0.90 ± 0.04) > Dex500 (0.81 ± 0.10) > plasma (0.71 ± 0.02), resulting in an inaccessible vascular volume, i.e., compared with the distribution volume of Dex40, of 0.03 ± 0.01, 0.15 ± 0.04, and 0.31 ± 0.05 ml for Dex70, Dex500, and plasma, respectively. In hyaluronidase-treated mice, Dex70 and Dex40 volumes were not different from each other, and inaccessible vascular volumes for Dex500 (0.03 ± 0.03) and plasma (0.14 ± 0.05) were smaller (P < 0.05) than those in control animals. Clearance of Dex70 and Dex500 from the circulation was enhanced (P < 0.05) in hyaluronidase-treated vs. control mice. These results indicate that the glycocalyx contributes to size-dependent differences in whole body vascular distribution of plasma solutes in mice. Whole body vascular volume measurements based on the
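The arithmetic behind the comparison is tracer dilution: a distribution volume is injected dose divided by equilibrium concentration, and the glycocalyx-inaccessible volume is the small-tracer volume minus the plasma (large-tracer) volume. The group means from the abstract are reused here for illustration only; note that this simple difference of group means (0.26 ml) is close to, but not identical with, the paper's per-animal mean of 0.31 ml.

```python
# Tracer-dilution arithmetic (illustrative; dose/concentration values invented).
def distribution_volume_ml(dose_ug, conc_ug_per_ml):
    """Distribution volume from injected dose and equilibrium concentration."""
    return dose_ug / conc_ug_per_ml

v_dex40, v_plasma = 0.97, 0.71          # ml, control-mouse means from abstract
inaccessible = v_dex40 - v_plasma       # volume the large tracer cannot enter
print(round(inaccessible, 2))  # -> 0.26
```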
NASA Astrophysics Data System (ADS)
Pandey, Apoorva; Venkataraman, Chandra
2014-12-01
Urbanization and rising household incomes in India have led to growing transport demand, particularly during 1990-2010. Emissions from transportation have been implicated in air quality and climate effects. In this work, emissions of particulate matter (PM2.5, or the mass concentration of particles smaller than 2.5 μm in diameter), black carbon (BC) and organic carbon (OC) were estimated for the transport sector in India, using detailed technology divisions and regionally measured emission factors. Modes of transport addressed in this work include road transport, railways, shipping and aviation, but exclude off-road equipment like diesel machinery and tractors. For road transport, a vehicle fleet model was used, with parameters derived from vehicle sales, registration data, and surveyed age profiles. The fraction of extremely high emitting vehicles, or superemitters, which is highly uncertain, was assumed to be 20%. Annual vehicle utilization estimates were based on regional surveys and user population. For railways, shipping and aviation, a top-down approach was applied, using nationally reported fuel consumption. Fuel use and emissions from on-road vehicles were disaggregated at the state level, with separate estimates for 30 cities in India. The on-road fleet was dominated by two-wheelers, followed by four- and three-wheelers, with new vehicles comprising the majority of the fleet for each vehicle type. A total of 276 (-156, 270) Gg/y PM2.5, 144 (-99, 207) Gg/y BC, and 95 (-64, 130) Gg/y OC emissions were estimated, with over 97% contribution from on-road transport. The largest emitters were identified as heavy duty diesel vehicles for PM2.5 and BC, but two-stroke vehicles and superemitters for OC. Old vehicles (pre-2005) contributed a disproportionately large share (∼70%) of emissions relative to their share of the vehicle fleet (∼45%). Emission estimates were sensitive to the assumed superemitter fraction. Improvement of emission estimates requires on-road emission factor measurements
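A bottom-up fleet inventory of this kind multiplies, per vehicle class, the number of vehicles, the annual distance driven, and an emission factor, then sums over classes. The fleet sizes, utilizations, and emission factors below are invented placeholders, not the paper's inventory values.

```python
# Bottom-up emission inventory sketch (all numbers hypothetical).
fleet = {  # class: (vehicles, km per vehicle per year, mg PM2.5 per km)
    "two_wheeler": (10_000_000, 6_000, 50),
    "hdv_diesel": (500_000, 60_000, 900),
}
# mg -> Gg: 1 Gg = 1e9 g = 1e12 mg
emissions_gg = {k: n * km * ef / 1e12 for k, (n, km, ef) in fleet.items()}
total_gg = sum(emissions_gg.values())
print(emissions_gg["hdv_diesel"], total_gg)  # -> 27.0 30.0
```

Even with a small fraction of the fleet, the high-emission-factor diesel class dominates the total, mirroring the paper's finding for PM2.5 and BC.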
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
NASA Technical Reports Server (NTRS)
Martin, T. V.; Mullins, N. E.
1972-01-01
The operating and set-up procedures for the multi-satellite, multi-arc GEODYN Orbit Determination Program are described, and all system output is analyzed. The GEODYN Program is the nucleus of the entire GEODYN system. It is a definitive orbit and geodetic parameter estimation program capable of simultaneously processing observations from multiple arcs of multiple satellites. GEODYN has two modes of operation: (1) the data reduction mode and (2) the orbit generation mode.
Xu, Ming; Lei, Zhipeng; Yang, James
2015-01-01
N95 filtering facepiece respirator (FFR) dead space is an important factor for respirator design. The dead space refers to the cavity between the internal surface of the FFR and the wearer's facial surface. This article presents a novel method to estimate the dead space volume of FFRs and experimental validation. In this study, six FFRs and five headforms (small, medium, large, long/narrow, and short/wide) are used for various FFR and headform combinations. Microsoft Kinect Sensors (Microsoft Corporation, Redmond, WA) are used to scan the headforms without respirators and then scan the headforms with the FFRs donned. The FFR dead space is formed through geometric modeling software, and finally the volume is obtained through LS-DYNA (Livermore Software Technology Corporation, Livermore, CA). In the experimental validation, water is used to measure the dead space. The simulation and experimental dead space volumes are 107.5-167.5 mL and 98.4-165.7 mL, respectively. Linear regression analysis is conducted to correlate the results from Kinect and water, and R(2) = 0.85. PMID:25800663
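The validation step reported here is an ordinary least-squares fit between the two volume measurements plus the coefficient of determination. The paired volumes below are invented; only the reported range (roughly 98-168 mL) guided their scale.

```python
import numpy as np

# Correlating simulated (Kinect-based) vs. water-measured volumes (toy data).
kinect = np.array([107.5, 120.0, 135.0, 150.0, 160.0, 167.5])  # mL
water = np.array([98.4, 116.0, 131.0, 147.0, 158.5, 165.7])    # mL

slope, intercept = np.polyfit(kinect, water, 1)  # ordinary least squares line
pred = slope * kinect + intercept
r2 = 1.0 - np.sum((water - pred) ** 2) / np.sum((water - water.mean()) ** 2)
print(r2 > 0.95)  # near-linear toy data gives a high R^2
```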
NASA Astrophysics Data System (ADS)
Gusyev, Maksym; Yamazaki, Yusuke; Morgenstern, Uwe; Stewart, Mike; Kashiwaya, Kazuhisa; Hirai, Yasuyuki; Kuribayashi, Daisuke; Sawano, Hisaya
2015-04-01
The goal of this study is to estimate subsurface water transit times and volumes in headwater catchments of Hokkaido, Japan, using the New Zealand high-accuracy tritium analysis technique. Transit time provides insights into the subsurface water storage and therefore provides a robust and quick approach to quantifying the subsurface groundwater volume. Our method is based on tritium measurements in river water. Tritium is a component of meteoric water, decays with a half-life of 12.32 years, and is inert in the subsurface after the water enters the groundwater system. Therefore, tritium is ideally suited for characterization of the catchment's responses and can provide information on mean water transit times up to 200 years. Only in recent years has it become possible to use tritium for dating of stream and river water, due to the fading impact of the bomb-tritium from thermo-nuclear weapons testing, and due to improved measurement accuracy for the extremely low natural tritium concentrations. Transit time of the water discharge is one of the most crucial parameters for understanding the response of catchments and estimating subsurface water volume. While many tritium transit time studies have been conducted in New Zealand, only a limited number of tritium studies have been conducted in Japan. In addition, the meteorological, orographic and geological conditions of Hokkaido Island are similar to those in parts of New Zealand, allowing for comparison between these regions. In 2014, three field trips were conducted in Hokkaido in June, July and October to sample river water at river gauging stations operated by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). These stations have altitudes between 36 m and 860 m MSL and drainage areas between 45 and 377 km2. Each sampled point is located upstream of MLIT dams, with hourly measurements of precipitation and river water levels enabling us to distinguish between the snow melt and baseflow contributions
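The decay relation underlying tritium dating is simple: with a half-life of 12.32 years, concentration falls as C = C0 · 2^(−t/12.32). Real transit-time estimation convolves the historical tritium input with a transit-time distribution; the piston-flow age below is only the simplest limiting case, and the concentrations are illustrative.

```python
import math

T_HALF = 12.32  # tritium half-life in years

def piston_flow_age(c_obs, c_input):
    """Years since recharge, assuming piston flow and pure radioactive decay."""
    return T_HALF / math.log(2.0) * math.log(c_input / c_obs)

# A sample at half its input concentration is one half-life old.
print(round(piston_flow_age(1.0, 2.0), 2))  # -> 12.32
```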
A robust method to estimate the intracranial volume across MRI field strengths (1.5T and 3T).
Keihaninejad, Shiva; Heckemann, Rolf A; Fagiolo, Gianlorenzo; Symms, Mark R; Hajnal, Joseph V; Hammers, Alexander
2010-05-01
As population-based studies may obtain images from scanners with different field strengths, a method to normalize regional brain volumes according to intracranial volume (ICV) independent of field strength is needed. We found systematic differences in ICV estimation, tested in a cohort of healthy subjects (n=5) that had been imaged using 1.5T and 3T scanners, and confirmed in two independent cohorts. This was related to systematic differences in the intensity of cerebrospinal fluid (CSF), with higher intensities for CSF located in the ventricles compared with CSF in the cisterns, at 3T versus 1.5T, which could not be removed with three different applied bias correction algorithms. We developed a method based on tissue probability maps in MNI (Montreal Neurological Institute) space and reverse normalization (reverse brain mask, RBM) and validated it against manual ICV measurements. We also compared it with alternative automated ICV estimation methods based on Statistical Parametric Mapping (SPM5) and Brain Extraction Tool (FSL). The proposed RBM method was equivalent to manual ICV normalization with a high intraclass correlation coefficient (ICC=0.99) and reliable across different field strengths. RBM achieved the best combination of precision and reliability in a group of healthy subjects, a group of patients with Alzheimer's disease (AD) and mild cognitive impairment (MCI) and can be used as a common normalization framework.
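The normalization this method supports is the standard ratio: a regional brain volume is expressed relative to intracranial volume so head size is factored out of group comparisons. The volumes below are hypothetical.

```python
# ICV normalization sketch: region volume as a percentage of ICV.
# The hippocampal and ICV values are illustrative, not study data.
def percent_of_icv(region_ml, icv_ml):
    return 100.0 * region_ml / icv_ml

hippocampus_ml, icv_ml = 3.5, 1450.0
print(round(percent_of_icv(hippocampus_ml, icv_ml), 3))  # -> 0.241
```

The paper's point is that this ratio is only meaningful across scanners if the ICV estimate itself (the denominator) is field-strength independent.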
NASA Astrophysics Data System (ADS)
Valori, Gherardo; Pariat, Etienne; Anfinogentov, Sergey; Chen, Feng; Georgoulis, Manolis K.; Guo, Yang; Liu, Yang; Moraitis, Kostas; Thalmann, Julia K.; Yang, Shangbin
2016-10-01
Magnetic helicity is a conserved quantity of ideal magneto-hydrodynamics characterized by an inverse turbulent cascade. Accordingly, it is often invoked as one of the basic physical quantities driving the generation and structuring of magnetic fields in a variety of astrophysical and laboratory plasmas. We provide here the first systematic comparison of six existing methods for the estimation of the helicity of magnetic fields known in a finite volume. All such methods are reviewed, benchmarked, and compared with each other, and specifically tested for accuracy and sensitivity to errors. To that purpose, we consider four groups of numerical tests, ranging from solutions of the three-dimensional force-free equilibrium to magneto-hydrodynamical numerical simulations. Almost all methods are found to produce the same value of magnetic helicity within a few percent in all tests. In the more solar-relevant and realistic of the tests employed here, the simulation of an eruptive flux rope, the spread in the computed values obtained by all but one method is only 3%, indicating the reliability and mutual consistency of such methods in appropriate parameter ranges. However, methods show differences in the sensitivity to numerical resolution and to errors in the solenoidal property of the input fields. In addition to finite volume methods, we also briefly discuss a method that estimates helicity from the field lines' twist, and one that exploits the field's value at one boundary and a coronal minimal connectivity instead of a pre-defined three-dimensional magnetic-field solution.
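The quantity all six methods target is H = ∫ A·B dV evaluated over the finite volume. A minimal quadrature check, using an analytic linear force-free field for which a vector potential is known in closed form (real methods must instead compute A from B under a gauge choice):

```python
import numpy as np

# Riemann-sum helicity H = integral of A . B over the unit cube.
# B = (cos z, -sin z, 0) satisfies curl B = B, so A = B is a valid
# vector potential and A . B = 1 everywhere; hence H should be 1.
n = 32
dx = 1.0 / n
z = (np.arange(n) + 0.5) * dx              # cell-centred z samples on [0, 1]
Z = np.broadcast_to(z, (n, n, n))          # z varies along the last axis

Bx, By, Bz = np.cos(Z), -np.sin(Z), np.zeros_like(Z)
Ax, Ay, Az = Bx, By, Bz                    # A = B for this particular field

H = np.sum(Ax * Bx + Ay * By + Az * Bz) * dx ** 3
print(round(H, 6))  # -> 1.0
```

The paper's harder questions, gauge dependence and sensitivity to non-solenoidal errors in B, enter precisely in the step this sketch skips: constructing A from a measured B.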
Kjelstrom, L.C.; Berenbrock, C.
1996-12-31
Estimates of the 100-year peak flows and flow volumes that could enter the INEL area from the Big Lost River and Birch Creek are needed as input data for models that will be used to delineate the extent of the 100-year flood plain at the INEL. This report provides those estimates and describes the methods, procedures, and assumptions used to derive them.
Noninvasive measurement of human ascending colon volume.
Badley, A D; Camilleri, M; O'Connor, M K
1993-06-01
The capacitance and motor functions of the colon are important determinants of its overall function. A simple, noninvasive method to quantify regional colonic volume is required for further physiologic and pharmacologic studies. Our aim was to determine whether measurements of human ascending colon (AC) volume using two-dimensional (2-D) images are as accurate as estimates using three-dimensional (3-D) images. Five healthy male volunteers each ingested a methacrylate-coated capsule containing 99Tcm-labelled Amberlite pellets. Two- and 3-D images were obtained using a gamma camera with single photon emission computed tomography (SPECT) capability. Ascending colon volume was estimated by a variable region of interest (VROI) program and by full-width half-maximum (FWHM) analysis, and the results were compared to the volume estimates by SPECT. Full-width half-maximum analysis yielded volume estimates that were not significantly different from SPECT (slope = 1.093; t = 0.51; P > 0.5), whereas VROI estimates were significantly different from volume measurements by SPECT and, hence, considered less accurate (slope = 0.438; t = 4.93; P < 0.02). Thus, the less expensive and more easily available planar imaging technique with analysis by FWHM estimates AC volume as accurately as SPECT.
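Full-width half-maximum analysis takes the width of a count profile at half its peak as the object's extent. A sketch on a synthetic Gaussian profile, whose FWHM is known analytically to be 2·sqrt(2 ln 2)·sigma ≈ 2.355·sigma:

```python
import numpy as np

# FWHM of a synthetic 1-D count profile (Gaussian, sigma = 10 mm).
x = np.linspace(-50.0, 50.0, 2001)       # position along the profile, mm
sigma = 10.0
profile = np.exp(-x ** 2 / (2.0 * sigma ** 2))

above = x[profile >= 0.5 * profile.max()]  # samples at or above half-maximum
fwhm = above[-1] - above[0]
print(round(fwhm, 1))  # ~2.355 * sigma, limited by the 0.05 mm sampling
```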
Orbital Spacecraft Consumables Resupply System (OSCRS). Volume 3: Program Cost Estimate
NASA Technical Reports Server (NTRS)
Perry, D. L.
1986-01-01
A cost analysis for the design, development, qualification, and production of the monopropellant and bipropellant Orbital Spacecraft Consumable Resupply System (OSCRS) tankers, their associated avionics located in the Orbiter payload bay, and the unique ground support equipment (GSE) and airborne support equipment (ASE) required to support operations is presented. Monopropellant resupply for the Gamma Ray Observatory (GRO) in calendar year 1991 is the first defined resupply mission with bipropellant resupply missions expected in the early to mid 1990's. The monopropellant program estimate also includes contractor costs associated with operations support through the first GRO resupply mission.
Budget estimates: Fiscal year 1994. Volume 3: Research and program management
NASA Technical Reports Server (NTRS)
1994-01-01
The research and program management (R&PM) appropriation provides the salaries, other personnel and related costs, and travel support for NASA's civil service workforce. This FY 1994 budget funds costs associated with 23,623 full-time equivalent (FTE) work years. Budget estimates are provided for all NASA centers by categories such as space station and new technology investments, space flight programs, space science, life and microgravity sciences, advanced concepts and technology, center management and operations support, launch services, mission to planet earth, tracking and data programs, aeronautical research and technology, and safety, reliability, and quality assurance.
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. The installed performance estimate includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. The use of two data base files to represent the engine and the inlet/nozzle/aftbody performance characteristics is discussed. The existing library of performance characteristics for inlets and nozzle/aftbodies and an example of the 1000 series of engine data tables is presented.
Role of cardiac CTA in estimating left ventricular volumes and ejection fraction
Singh, Robin Man; Singh, Balkrishna Man; Mehta, Jawahar Lal
2014-01-01
Left ventricular ejection fraction (LVEF) is an important predictor of cardiac outcome and helps in making important diagnostic and therapeutic decisions such as the treatment of different types of congestive heart failure or implantation of devices like cardiac resynchronization therapy-defibrillator. LVEF can be measured by various techniques such as transthoracic echocardiography, contrast ventriculography, radionuclide techniques, cardiac magnetic resonance imaging and cardiac computed tomographic angiography (CTA). The development of cardiac CTA using multi-detector row CT (MDCT) has seen a very rapid improvement in the technology for identifying coronary artery stenosis and coronary artery disease in the last decade. During the acquisition, processing and analysis of data to study coronary anatomy, MDCT provides a unique opportunity to measure left ventricular volumes and LVEF simultaneously with the same data set without the need for additional contrast or radiation exposure. The development of semi-automated and automated software to measure LVEF has now added uniformity, efficiency and reproducibility of practical value in clinical practice rather than just being a research tool. This article will address the feasibility, the accuracy and the limitations of MDCT in measuring LVEF. PMID:25276310
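Whichever modality supplies the volumes, the LVEF arithmetic itself is the same volumetric ratio; a minimal sketch (the example volumes are hypothetical, not from the article):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV) and
    end-systolic (ESV) volumes, the quantity MDCT derives once the
    ventricular cavity is segmented on each phase of the cardiac cycle."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    stroke_volume = edv_ml - esv_ml
    return 100.0 * stroke_volume / edv_ml

# Illustrative values: EDV 120 ml, ESV 50 ml -> EF about 58%
print(round(ejection_fraction(120, 50), 1))
```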
Withers, R T; Borkent, M; Ball, C T
1990-10-01
The aim of this study was to use the measured residual volume (RV) of male athletes (n = 207) as a criterion and assess the error in their RV, body density (BD) and relative body fat (%BF) associated with using RVs predicted from regression equations, RVs estimated from vital capacity (VC) and an assumed constant RV of 1300 ml. The ventilated residual volume (RV) was determined both before and after the underwater weighing by helium dilution with the subject immersed to neck level. The mean of the absolute differences (|d|) and SEE between the 2 RV trials were 66 and 89 ml, respectively. These increased to values ranging 195-747 and 259-308 ml, respectively, when the means of the 2 RV trials for each subject were compared with the RVs predicted via regression equations, estimated from the VC and assumed to be a constant of 1300 ml. A similar trend emerged with variation of only the RV in the BD formula for each subject. The 2 RV trials resulted in a |d| and SEE of .00109 (.5% BF) and .00145 g.cm-3 (.6% BF), respectively, but these increased to values ranging .00306 (1.3% BF)-.01207 (5.1% BF) and .00394 (1.7% BF)-.00441 g.cm-3 (1.9% BF), respectively, for predicted, estimated and assumed constant RVs. In all cases the lowest |d| and SEE were associated with the RVs predicted by a multiple regression equation (R = .616; SEE = 259 ml) which was generated on our sample. (ABSTRACT TRUNCATED AT 250 WORDS)
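The error propagation the study quantifies can be illustrated with the standard hydrodensitometry relations (not spelled out in the abstract): body density from underwater weighing with an RV correction, and the Siri two-compartment equation for %BF. The subject values below are hypothetical:

```python
def body_density(mass_air_kg, mass_water_kg, water_density_kg_per_l,
                 residual_volume_l, gi_gas_l=0.1):
    """Body density (kg/l) from hydrostatic weighing. Body volume is the
    displaced water volume minus lung residual volume and an assumed
    gastrointestinal gas volume (conventionally 0.1 l)."""
    displaced_l = (mass_air_kg - mass_water_kg) / water_density_kg_per_l
    body_volume_l = displaced_l - residual_volume_l - gi_gas_l
    return mass_air_kg / body_volume_l

def siri_percent_fat(density_kg_per_l):
    """Siri two-compartment equation for percent body fat."""
    return 495.0 / density_kg_per_l - 450.0

# Hypothetical subject: 75 kg in air, 3.5 kg underwater, water at 0.9965 kg/l.
# A 300 ml error in RV shifts the %BF estimate by roughly 2 percentage points.
for rv in (1.3, 1.6):
    d = body_density(75.0, 3.5, 0.9965, rv)
    print(f"RV {rv:.1f} l -> density {d:.4f} kg/l, fat {siri_percent_fat(d):.1f}%")
```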
NASA Technical Reports Server (NTRS)
Gates, W. R.
1983-01-01
Estimated future energy cost savings associated with the development of cost-competitive solar thermal technologies (STT) are discussed. Analysis is restricted to STT in electric applications for 16 high-insolation/high-energy-price states. The fuel price scenarios and three 1990 STT system costs are considered, reflecting uncertainty over future fuel prices and STT cost projections. STT R&D is found to be unacceptably risky for private industry in the absence of federal support. Energy cost savings were projected to range from $0 to $10 billion (1990 values in 1981 dollars), depending on the system cost and fuel price scenario. Normal R&D investment risks are accentuated because the Organization of Petroleum Exporting Countries (OPEC) cartel can artificially manipulate oil prices and undercut growth of alternative energy sources. Federal participation in STT R&D to help capture the potential benefits of developing cost-competitive STT was found to be in the national interest.
Sidle, John E.; Wamalwa, Emmanuel S.; Okumu, Thomas O.; Bryant, Kendall L.; Goulet, Joseph L.; Maisto, Stephen A.; Braithwaite, R. Scott; Justice, Amy C.
2010-01-01
Traditional homemade brew is believed to represent the highest proportion of alcohol use in sub-Saharan Africa. In Eldoret, Kenya, two types of brew are common: chang’aa, a distilled spirit, and busaa, a maize beer. Local residents refer to the amount of brew consumed by the amount of money spent, suggesting a culturally relevant estimation method. The purposes of this study were to analyze the ethanol content of chang’aa and busaa, and to compare two methods of alcohol estimation: use by cost and use by volume, the latter being the current international standard. Laboratory results showed mean ethanol content was 34% (SD = 14%) for chang’aa and 4% (SD = 1%) for busaa. Standard drink unit equivalents for chang’aa and busaa, respectively, were 2 and 1.3 (US) and 3.5 and 2.3 (Great Britain). Using a computational approach, both methods demonstrated comparable results. We conclude that cost estimation of alcohol content is more culturally relevant and does not differ in accuracy from the international standard. PMID:19015972
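The "use by volume" arm of the comparison reduces to a standard conversion: serving volume × ethanol fraction × ethanol density (≈0.789 g/ml), divided by the jurisdiction's standard-drink mass (14 g in the US, 8 g in Great Britain). A sketch using the study's measured ethanol contents but a purely illustrative serving volume:

```python
ETHANOL_DENSITY_G_PER_ML = 0.789

STANDARD_DRINK_G = {"US": 14.0, "UK": 8.0}  # grams of ethanol per standard unit

def standard_drinks(volume_ml, ethanol_fraction, country="US"):
    """Standard drink units in a serving, from serving volume and ethanol
    content (the 'use by volume' method the study compares with cost recall)."""
    grams = volume_ml * ethanol_fraction * ETHANOL_DENSITY_G_PER_ML
    return grams / STANDARD_DRINK_G[country]

# Mean measured ethanol content for chang'aa was 34%; the 104 ml serving
# below is purely illustrative (the study's actual serving sizes are not
# given in the abstract).
print(round(standard_drinks(104, 0.34, "US"), 1))  # ~2 US drinks
print(round(standard_drinks(104, 0.34, "UK"), 1))  # ~3.5 UK units
```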
Estimating the Cold War mortgage: The 1995 baseline environmental management report. Volume 1
1995-03-01
This is the first annual report on the activities and potentials costs required to address the waste, contamination, and surplus nuclear facilities that are the responsibility of the Department of Energy's Environmental Management program. The Department's Office of Environmental Management, established in 1989, manages one of the largest environmental programs in the world--with more than 130 sites and facilities in over 30 States and territories. The primary focus of the program is to reduce health and safety risks from radioactive waste and contamination resulting from the production, development, and testing of nuclear weapons. The program also is responsible for the environmental legacy from, and ongoing waste management for, nuclear energy research and development, and basic science research. In an attempt to better oversee this effort, Congress required the Secretary of Energy to submit a Baseline Environmental Management Report with annual updates. The 1995 Baseline Environmental Management Report provides life-cycle cost estimates, tentative schedules, and projected activities necessary to complete the Environmental Management program.
Forest Volume and Biomass estimation from SAR/LIDAR/Optical Fusion in Chile
NASA Astrophysics Data System (ADS)
Kellndorfer, J. M.; Walker, W. S.; Goetz, S. J.; Cormier, T.; Kirsch, K.; Gonzalez, S.; Rombach, M.
2009-12-01
The paper reports on research to investigate ALOS/PALSAR L-band radar and optical time series data in conjunction with airborne lidar datasets to develop advanced data fusion algorithms for biomass and ecosystem structure measurements in support of the NASA DESDynI mission. The research is based on the acquisition of ALOS/PALSAR time series data beginning in 2007 and the timely confluence of these acquisitions with other highly relevant remote sensing and ground reference data sets in forested areas in Chile. Through collaboration with Digimapas Chile, the project has access to 75,000 km2 of 1-meter resolution full-waveform small footprint lidar (SFPL) data and 0.5 m resolution digital orthophoto imagery covering the commercial forests of Arauco, one of the largest cellulose producers in Latin America. Field inventory data from Arauco are used to test terrain and environmental influences on biomass estimation from empirical regression tree based data fusion approaches. The SAR data acquisitions available from PALSAR during the project time frame will span a five year period from 2007 to 2011, allowing investigations into how L-band time series data, similar to that expected from the DESDynI SAR (backscatter and interferometric coherence), can be used to build (1) the DESDynI biomass map product to be produced at the end of the “designed mission life” (i.e., 3 and/or 5/5+ years) and (2) annual maps of aboveground biomass change.
NASA Astrophysics Data System (ADS)
Sule, Abdul Rahaman
This study aims to develop models predicting the depth, extent and volume of flooding in the Hadejia-Nguru wetlands of Nigeria from satellite sensor images. In subregions representative of the entire wetlands, over 1,500 rigorously coordinated and geocoded depths were observed, simultaneously and near-simultaneously, with the Landsat-5 satellite overpass of 2 September 1990. Over 1,000 of the measured depths were reduced to water levels of 2 September 1990 and accurately calibrated to corresponding pixels on the Landsat-TM image of the same date. Depth-radiance power-curve relationships were established using regression analysis based partly on relationships from ground radiometry. An operational model for flood prediction was successfully developed. Area, volume and average depth of flooding predicted from Landsat-TM satellite sensor data were, respectively, 1186 km2, 560.92 million m3 and 0.66 m on 26 September 1986; and 910 km2, 430.79 million m3 and 0.66 m on 2 September 1990. Mean water depths were predicted in open waters from the Landsat-TM imagery with a confidence interval of 9-20% at depths of 0.10-6.20 m, and in inundated vegetation to 12-14% at 0.25-0.75 m depths (and 30-50% at depths less than 0.25 m or greater than 0.75 m). When applied to NOAA-11 AVHRR satellite sensor data of 24 August 1990, the developed depth-radiance equations overestimated flood extent by 6% (52 km2) and underestimated volume by 18% (78.8 million m3). Simulated Meteosat satellite data overestimated flood extent by over 6% (55 km2) and underestimated volume by 37% (160.2 million m3). Using imagery from 1986, 1987, 1990 and 1991, the frequency of flooding was found to vary spatially in about 50% of the wetlands every 4 or 6 years, with only about 5.6% (242.1 km2) and 2.2% (94.5 km2) of the region flooded 3 times and 4 times, respectively. This means that conventional techniques alone cannot be used to adequately monitor flooding in the wetlands. Operational problems encountered with using
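A depth-radiance power curve of the kind described, D = a·L^b, is conventionally fitted by linear least squares on its log-log form; a sketch with synthetic data (the coefficients here are hypothetical, not those of the study):

```python
import numpy as np

def fit_power_curve(radiance, depth):
    """Fit depth = a * radiance**b by least squares on the log-log form
    ln(depth) = ln(a) + b*ln(radiance), the usual way a depth-radiance
    power curve is regressed."""
    b, ln_a = np.polyfit(np.log(radiance), np.log(depth), 1)
    return np.exp(ln_a), b

rng = np.random.default_rng(0)
radiance = rng.uniform(10.0, 120.0, size=200)   # arbitrary sensor units
true_a, true_b = 8.0, -0.9                      # hypothetical coefficients
depth = true_a * radiance**true_b * np.exp(rng.normal(0.0, 0.05, 200))

a, b = fit_power_curve(radiance, depth)
print(f"a = {a:.2f}, b = {b:.3f}")   # close to the generating values
```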
NASA Astrophysics Data System (ADS)
Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.
2016-01-01
Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.
Gamble, C.R.
1989-01-01
A dimensionless hydrograph developed for a variety of basin conditions in Georgia was tested for its applicability to streams in East and West Tennessee by comparing it to a similar dimensionless hydrograph developed for streams in East and West Tennessee. Hydrographs of observed discharge at 83 streams in East Tennessee and 38 in West Tennessee were used in the study. Statistical analyses were performed by comparing simulated (or computed) hydrographs, derived by application of the Georgia dimensionless hydrograph, and dimensionless hydrographs developed from Tennessee data, with the observed hydrographs at 50 and 75% of their peak-flow widths. Results of the tests indicate that the Georgia dimensionless hydrograph is virtually the same as the one developed for streams in East Tennessee, but that it is different from the dimensionless hydrograph developed for streams in West Tennessee. Because of the extensive testing of the Georgia dimensionless hydrograph, it was determined to be applicable for East Tennessee, whereas the dimensionless hydrograph developed from data on streams in West Tennessee was determined to be applicable in West Tennessee. As part of the dimensionless hydrograph development, an average lagtime in hours for each study basin, and the volume in inches of flood runoff for each flood event were computed. By use of multiple-regression analysis, equations were developed that relate basin lagtime to drainage area size, basin length, and percent impervious area. Similarly, flood volumes were related to drainage area size, peak discharge, and basin lagtime. These equations, along with the appropriate dimensionless hydrograph, can be used to estimate a typical (average) flood hydrograph and volume for recurrence intervals up to 100 years at any ungaged site draining less than 50 sq mi in East and West Tennessee. (USGS)
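Applying a dimensionless hydrograph works by rescaling its ordinates with a site's regression-estimated peak discharge and basin lagtime, then integrating for runoff volume; a sketch with hypothetical ordinates (the actual Georgia and Tennessee ordinates are tabulated in the report):

```python
import numpy as np

# Hypothetical dimensionless hydrograph ordinates (t/lagtime, Q/Qpeak);
# illustrative shape only, not the published Georgia or Tennessee values.
T_RATIO = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5, 3.0])
Q_RATIO = np.array([0.0, 0.12, 0.45, 0.85, 1.0, 0.55, 0.25, 0.08, 0.0])

def flood_hydrograph(peak_cfs, lagtime_hr):
    """Scale the dimensionless hydrograph to a site using its estimated
    peak discharge (cfs) and basin lagtime (hours)."""
    return T_RATIO * lagtime_hr, Q_RATIO * peak_cfs

t_hr, q_cfs = flood_hydrograph(peak_cfs=1200.0, lagtime_hr=6.0)

# Runoff volume (cubic feet) by trapezoidal integration over time in seconds
t_s = t_hr * 3600.0
volume_cf = float(np.sum(0.5 * (q_cfs[1:] + q_cfs[:-1]) * np.diff(t_s)))
print(f"peak {q_cfs.max():.0f} cfs, volume {volume_cf:.3e} cubic feet")
```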
NASA Astrophysics Data System (ADS)
He, Bin; Frey, Eric C.
2010-06-01
Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
Hogrel, Jean-Yves; Barnouin, Yoann; Azzabou, Noura; Butler-Browne, Gillian; Voit, Thomas; Moraux, Amélie; Leroux, Gaëlle; Behin, Anthony; McPhee, Jamie S; Carlier, Pierre G
2015-06-01
Muscle mass is particularly relevant to follow during aging, owing to its link with physical performance and autonomy. The objectives of this work were to assess muscle volume (MV) and intramuscular fat (IMF) for all the muscles of the thigh in a large population of young and elderly healthy individuals using magnetic resonance imaging (MRI), to test the effect of gender and age on MV and IMF, and to determine the best representative slice for the estimation of MV and IMF. The study enrolled 105 healthy young (range 20-30 years) and older (range 70-80 years) subjects. MRI scans were acquired along the femur length using a three-dimensional three-point Dixon proton density-weighted gradient echo sequence. MV and IMF were estimated from all the slices. The effects of age and gender on MV and IMF were assessed. Predictive equations for MV and IMF were established using a single slice at various femur levels for each muscle in order to reduce the analysis process. MV was decreased with aging in both genders, particularly in the quadriceps femoris. IMF was largely increased with aging in men and, to a lesser extent, in women. Percentages of MV decrease and IMF increase with aging varied according to the muscle. Predictive equations to predict MV and IMF from single slices are provided and were validated. This study is the first to provide muscle volume and intramuscular fat infiltration in all the muscles of the thigh in a large population of young and elderly healthy subjects. PMID:26040416
Wong, Angelita Pui-Yee; Pipitone, Jon; Park, Min Tae M; Dickie, Erin W; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Chakravarty, M Mallar; Pausova, Zdenka; Paus, Tomáš
2014-07-01
The pituitary gland is a key structure in the hypothalamic-pituitary-gonadal (HPG) axis--it plays an important role in sexual maturation during puberty. Despite its small size, its volume can be quantified using magnetic resonance imaging (MRI). Here, we study a cohort of 962 typically developing adolescents from the Saguenay Youth Study and estimate pituitary volumes using a newly developed multi-atlas segmentation method known as the MAGeT Brain algorithm. We found that age and puberty stage (controlled for age) each predicts adjusted pituitary volumes (controlled for total brain volume) in both males and females. Controlling for the effects of age and puberty stage, total testosterone and estradiol levels also predict adjusted pituitary volumes in males and pre-menarche females, respectively. These findings demonstrate that the pituitary gland grows during adolescence, and its volume relates to circulating plasma-levels of sex steroids in both males and females.
NASA Astrophysics Data System (ADS)
Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.
2015-01-01
The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods
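The geometrical bound described can be sketched with two standard relations not given in the abstract: the seismic moment of a circular (penny-shaped) crack, M0 = (16/7)·Δσ·r³, and the Hanks-Kanamori moment magnitude, Mw = (2/3)(log10 M0 − 9.1) for M0 in N·m. The stress drop below is an assumed typical value, not one from the study:

```python
import math

def max_magnitude_penny_fault(radius_m, stress_drop_pa):
    """Moment magnitude of full rupture of a circular (penny-shaped) crack
    of the given radius, via the Eshelby crack moment
    M0 = (16/7) * stress_drop * r**3 and the Hanks-Kanamori scale."""
    m0 = (16.0 / 7.0) * stress_drop_pa * radius_m**3
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative: a 2 km radius seismicity cluster, assumed 3 MPa stress drop
print(round(max_magnitude_penny_fault(2000.0, 3e6), 2))
```

With these illustrative parameters the bound happens to land near magnitude 5; the study's actual cluster geometries and inferred stress drops may differ.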
Maeder, M T; Muenzer, T; Rickli, H; Brunner-La Rocca, H P; Myers, J; Ammann, P
2008-08-01
Maximal exercise capacity expressed as metabolic equivalents (METs) is rarely directly measured (measured METs; mMETs) but estimated from maximal workload (estimated METs; eMETs). We assessed the accuracy of predicting mMETs by eMETs in asymptomatic subjects. Thirty-four healthy volunteers without cardiovascular risk factors (controls) and 90 patients with at least one risk factor underwent cardiopulmonary exercise testing using individualized treadmill ramp protocols. The equation of the American College of Sports Medicine (ACSM) was employed to calculate eMETs. Despite a close correlation between eMETs and mMETs (patients: r = 0.82, controls: r = 0.88; p < 0.001 for both), eMETs were higher than mMETs in both patients [11.7 (8.9-13.4) vs. 8.2 (7.0-10.6) METs; p < 0.001] and controls [17.0 (16.2-18.2) vs. 15.6 (14.2-17.0) METs; p < 0.001]. The absolute [2.5 (1.6-3.7) vs. 1.3 (0.9-2.1) METs; p < 0.001] and the relative [28 (19-47) vs. 9 (6-14) %; p < 0.001] difference between eMETs and mMETs was higher in patients. In patients, ratio limits of agreement of 1.33 (×/÷ 1.40) between eMETs and mMETs were obtained, whereas the ratio limits of agreement were 1.09 (×/÷ 1.13) in controls. The ACSM equation is thus associated with significant overestimation of mMETs even in young and fit subjects, and the overestimation is markedly more pronounced in older and less fit subjects with cardiovascular risk factors.
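The estimation step can be sketched with the commonly cited ACSM running equation, VO2 (ml·kg⁻¹·min⁻¹) = 0.2·S + 0.9·S·G + 3.5 (S in m·min⁻¹, G a fractional grade), divided by 3.5 ml·kg⁻¹·min⁻¹ per MET; whether this exact form was the one used in the study is an assumption here:

```python
def acsm_running_mets(speed_m_per_min: float, grade_fraction: float) -> float:
    """Estimated METs from treadmill workload using the ACSM running
    equation: VO2 (ml/kg/min) = 0.2*S + 0.9*S*G + 3.5, then VO2 / 3.5.
    Note the study found such workload-based estimates overshoot
    directly measured METs."""
    vo2 = 0.2 * speed_m_per_min + 0.9 * speed_m_per_min * grade_fraction + 3.5
    return vo2 / 3.5

# Illustrative workload: 10 km/h (about 166.7 m/min) at a 5% grade
print(round(acsm_running_mets(166.7, 0.05), 1))
```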
Hayward, R.K.
2011-01-01
The Mars Global Digital Dune Database (MGD3) now extends from 90°N to 65°S. The recently released north polar portion (MC-1) of MGD3 adds ~844,000 km2 of moderate- to large-size dark dunes to the previously released equatorial portion (MC-2 to MC-29) of the database. The database, available in GIS and tabular format in USGS Open-File Reports, makes it possible to examine global dune distribution patterns and to compare dunes with other global data sets (e.g. atmospheric models). MGD3 can also be used by researchers to identify areas suitable for more focused studies. The utility of MGD3 is demonstrated through three example applications. First, the uneven geographic distribution of the dunes is described. Second, dune-derived wind direction and its role as ground truth for atmospheric models is reviewed. Comparisons between dune-derived winds and global and mesoscale atmospheric models suggest that local topography may have an important influence on dune-forming winds. Third, the methods used here to estimate north polar dune volume are presented, and these methods and estimates (1130 km3 to 3250 km3) are compared with those of previous researchers (1158 km3 to 15,000 km3). In the near future, MGD3 will be extended to include the south polar region. © 2011 by John Wiley and Sons, Ltd.
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
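For a one-tissue compartment model the Logan identity ∫C_T/C_T = V_T·(∫C_p/C_T) − 1/k2 holds exactly, which makes a synthetic check of a graphical implementation straightforward; the input function and rate constants below are invented for illustration and are unrelated to the paper's data or its proposed model:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral, same length as y (starts at 0)."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

# Synthetic one-tissue-compartment tracer: dC_T/dt = K1*Cp - k2*C_T
t = np.linspace(0.0, 90.0, 9001)                 # minutes
cp = np.exp(-0.1 * t)                            # plasma input (arbitrary units)
K1, k2 = 0.5, 0.25                               # true V_T = K1/k2 = 2.0
ct = K1 / (k2 - 0.1) * (np.exp(-0.1 * t) - np.exp(-k2 * t))  # analytic C_T

# Logan plot: y = int(C_T)/C_T vs x = int(Cp)/C_T; slope -> V_T
mask = t > 30.0                                  # fit the late, linear portion
x = cumtrapz(cp, t)[mask] / ct[mask]
y = cumtrapz(ct, t)[mask] / ct[mask]
vt, intercept = np.polyfit(x, y, 1)
print(f"Logan V_T = {vt:.3f} (true 2.0), intercept = {intercept:.2f}")
```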
NASA Astrophysics Data System (ADS)
Ai, Yu-hua; Zhou, Huai-chun
2005-02-01
For visualizing non-uniform absorbing, emitting, non-scattering, axisymmetric sooting flames, conventional two-color emission methods are no longer suitable, so a three-color emission method for the simultaneous estimation of temperature and soot volume fraction distributions in these flames is studied in this paper. The spectral radiation intensities at the red, green, and blue wavelengths, which may be derived from color flame images, are simulated for the inverse analysis. The simultaneous estimation is then carried out from the spectral radiation intensities by using a Newton-type iteration algorithm and the least-squares method. In this method, a factor is used to balance the wide variation of spectral radiation intensities due to both the wide ranges of temperature and wavelength of the flame radiation. The results indicate that the three-color method is suited to the reconstruction of flame structures with single or double peaks when the difference between peak and valley is small. For a double-peaked flame structure with a larger peak-valley difference, a reasonable result can be obtained only when the mean square deviations of the measurement data are small, for example, not more than 0.01.
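As background, the two-color step that the three-color method generalizes has a closed form under the Wien approximation when emissivity is assumed gray, the very assumption that fails for non-uniform sooting flames and motivates the iterative multi-wavelength scheme. A sketch with illustrative wavelengths and temperature:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(wavelength_m, temperature_k, emissivity=1.0):
    """Spectral intensity under the Wien approximation (the first radiation
    constant C1 is omitted since it cancels in the two-color ratio)."""
    return emissivity * wavelength_m**-5 * math.exp(
        -C2 / (wavelength_m * temperature_k))

def two_color_temperature(i1, i2, lam1, lam2):
    """Ratio pyrometry with a gray-emissivity assumption:
    T = C2*(1/lam2 - 1/lam1) / ln(i1*lam1**5 / (i2*lam2**5))."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log(
        i1 * lam1**5 / (i2 * lam2**5))

# Round trip at an illustrative soot temperature of 1800 K
lam_g, lam_r = 0.50e-6, 0.65e-6                 # green and red wavelengths, m
i_g = wien_intensity(lam_g, 1800.0)
i_r = wien_intensity(lam_r, 1800.0)
print(round(two_color_temperature(i_g, i_r, lam_g, lam_r), 1))
```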
Tang, Robert Y.; McDonald, Nancy; Laamanen, Curtis; LeClair, Robert J.
2014-11-01
Purpose: To develop a method to estimate the mean fractional volume of fat (ν̄_fat) within a region of interest (ROI) of a tissue sample for wide-angle x-ray scatter (WAXS) applications. A scatter signal from the ROI was obtained and use of ν̄_fat in a WAXS fat subtraction model provided a way to estimate the differential linear scattering coefficient μ_s of the remaining fatless tissue. Methods: The efficacy of the method was tested using animal tissue from a local butcher shop. Formalin fixed samples, 5 mm in diameter and 4 mm thick, were prepared. The two main tissue types were fat and meat (fibrous). Pure as well as composite samples consisting of a mixture of the two tissue types were analyzed. For the latter samples, ν_fat for the tissue columns of interest were extracted from corresponding pixels in CCD digital x-ray images using a calibration curve. The means ν̄_fat were then calculated for use in a WAXS fat subtraction model. For the WAXS measurements, the samples were interrogated with a 2.7 mm diameter 50 kV beam and the 6° scattered photons were detected with a CdTe detector subtending a solid angle of 7.75 × 10⁻⁵ sr. Using the scatter spectrum, an estimate of the incident spectrum, and a scatter model, μ_s was determined for the tissue in the ROI. For the composite samples, a WAXS fat subtraction model was used to estimate the μ_s of the fibrous tissue in the ROI. This signal was compared to μ_s of fibrous tissue obtained using a pure fibrous sample. Results: For chicken and beef composites, ν̄_fat = 0.33 ± 0.05 and 0.32 ± 0.05, respectively. The subtractions of these fat components from the WAXS composite signals provided estimates of μ_s for chicken and beef fibrous tissue. The differences between the estimates and μ_s of fibrous tissue obtained with a pure sample were calculated as a function of the momentum transfer x. A t-test showed that the mean of the
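The fat-subtraction step is, in essence, a volume-weighted linear mixture model, μ_comp = ν̄_fat·μ_fat + (1 − ν̄_fat)·μ_fibrous, inverted for the fibrous term. A sketch with synthetic component signals (the mixture form is an assumption consistent with the abstract, not quoted from the paper):

```python
import numpy as np

def subtract_fat(mu_s_composite, mu_s_fat, mean_fat_fraction):
    """Recover the fibrous-tissue scattering coefficient from a composite
    WAXS signal, assuming a volume-weighted linear mixture:
    mu_comp = v_fat*mu_fat + (1 - v_fat)*mu_fibrous."""
    if not 0.0 <= mean_fat_fraction < 1.0:
        raise ValueError("fat fraction must lie in [0, 1)")
    return (mu_s_composite
            - mean_fat_fraction * mu_s_fat) / (1.0 - mean_fat_fraction)

# Synthetic check: build a composite from known components, recover one
x = np.linspace(0.5, 3.0, 6)                 # momentum transfer grid (made up)
mu_fat = 1.0 + 0.2 * x                       # invented component signals
mu_fib = 2.0 - 0.3 * x
v_fat = 0.33                                 # cf. the 0.33 +/- 0.05 measured
mu_comp = v_fat * mu_fat + (1 - v_fat) * mu_fib
recovered = subtract_fat(mu_comp, mu_fat, v_fat)
print(np.allclose(recovered, mu_fib))        # True
```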
NASA Astrophysics Data System (ADS)
Worthington, Paul F.
2010-05-01
Reservoirs that contain dispersed clay minerals traditionally have been evaluated petrophysically using either the effective or the total porosity system. The major weakness of the former is its reliance on "shale" volume fraction ( Vsh) as a clay-mineral indicator in the determination of effective porosity from well logs. Downhole clay-mineral indicators have usually delivered overestimates of fractional clay-mineral volume ( Vcm) because they use as a reference nearby shale beds that are often assumed to comprise clay minerals exclusively, whereas those beds also include quartzitic silts and other detritus. For this reason, effective porosity is often underestimated significantly, and this shortfall transmits to computed hydrocarbons in place and thence to estimates of ultimate recovery. The problem is overcome here by using, as proxy groundtruths, core porosities that have been upscaled to match the spatial resolutions of porosity logs. Matrix and fluid properties are established over clean intervals in the usual way. Log-derived values of Vsh are tuned so that, on average, the resulting log-derived porosities match the corresponding core porosities over an evaluation interval. In this way, Vsh is rendered fit for purpose as an indicator of clay-mineral content Vcm for purposes of evaluating effective porosity. The method is conditioned to deliver a value of effective porosity that shows overall agreement with core porosity to within the limits of uncertainty of the laboratory measurements. This is achieved through function-, reservoir- and tool-specific Vsh reduction factors that can be applied to downhole estimates of clay-mineral content over uncored intervals of similar reservoir character. As expected, the reduction factors can also vary for different measurement conditions. The reduction factors lie in the range of 0.29-0.80, which means that in its raw form, log-derived Vsh can overestimate the clay-mineral content by more than a factor of three. This
Withers, R T; Ball, C T
1988-02-01
The body density (BD), and hence the relative body fat (%BF), was measured for 182 female athletes. The residual volume (RV) was determined both before and after the underwater weighing by a multiple-breath helium dilution technique with the subject immersed to neck level. The mean absolute difference (|d̄|) and SEE between the two RV trials were 63 and 75 ml, respectively. These increased to values ranging from 144 to 685 ml and from 187 to 252 ml, respectively, when the mean of the two RV trials for each subject was compared with RVs predicted via regression equations, estimated from the vital capacity (VC), or assumed to be a constant 1000 ml. A similar trend resulted from variation of only the RV in the BD formula for each subject. The two RV trials resulted in a |d̄| and SEE of 0.00121 (0.5% BF) and 0.00141 g.cm-3 (0.6% BF), respectively, but these increased to values ranging from 0.00283 (1.3% BF) to 0.01291 (5.7% BF) and from 0.00362 (1.6% BF) to 0.00527 g.cm-3 (2.5% BF), respectively, for predicted, estimated, and assumed-constant RVs. In all cases, the lowest |d̄| and SEE were associated with the RVs predicted by a multiple regression equation (R = 0.725; SEE = 187 ml) that was generated on our sample, while the largest |d̄| values were registered by the other regression equations. These data emphasize that the use of predicted, estimated, and constant RVs results in substantial errors in BD and %BF compared with those obtained when the RV is measured.
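The RV error propagation described above runs through the standard hydrostatic-weighing density equation, BD = Ma / ((Ma − Mw)/Dw − RV − GI). A minimal sketch follows; the abstract does not give the constants actually used, so the water density, the 0.1 L gastrointestinal-gas allowance, and the example masses below are conventional or hypothetical values:

```python
def body_density(mass_air_kg, mass_water_kg, water_density=0.9965,
                 residual_volume_l=1.2, gi_gas_l=0.1):
    """Hydrostatic body density: BD = Ma / ((Ma - Mw)/Dw - RV - GI).

    RV is the measured (or predicted) residual lung volume whose error
    propagation the study quantifies; GI gas is the customary 0.1 L
    allowance. Water density is an assumed value for warm pool water.
    """
    body_volume_l = (mass_air_kg - mass_water_kg) / water_density \
        - residual_volume_l - gi_gas_l
    return mass_air_kg / body_volume_l  # kg/L is numerically g/cm^3


def siri_percent_fat(bd):
    """Siri two-compartment conversion from body density to %BF."""
    return 495.0 / bd - 450.0
```

Varying `residual_volume_l` while holding the weighing masses fixed reproduces the kind of sensitivity analysis the study performs.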
Jacob J. Jacobson; Erin Searcy; Md. S. Roni; Sandra D. Eksioglu
2014-06-01
This article analyzes rail transportation costs of products that have physical properties similar to densified biomass and biofuel. The results of this cost analysis are useful for understanding the relationship between, and quantifying the impact of, a number of factors on rail transportation costs of densified biomass and biofuel. These results will help evaluate the economic feasibility of high-volume, long-haul transportation of biomass and biofuel. High-volume, long-haul rail transportation of biomass is a viable option for biofuel plants and for coal plants considering biomass co-firing. Using rail minimizes both costs and the greenhouse gas (GHG) emissions due to transportation. Increasing bioenergy production would consequently lower GHG emissions by displacing fossil fuels. To estimate rail transportation costs we use the carload waybill data for grain- and liquid-type commodities for 2009 and 2011, provided by the Department of Transportation's Surface Transportation Board. We used regression analysis to quantify the relationship between variable transportation unit cost ($/ton) and car type, shipment size, rail movement type, commodity type, etc. The results indicate that: (a) transportation costs for liquid are $2.26/ton–$5.45/ton higher than for grain-type commodities; (b) transportation costs in 2011 were $1.68/ton–$5.59/ton higher than in 2009; (c) transportation costs for single-car shipments are $3.6/ton–$6.68/ton higher than for multiple-car shipments of grains; (d) transportation costs for multiple-car shipments are $8.9/ton–$17.15/ton higher than for unit-train shipments of grains.
Star, Hazha; Thevissen, Patrick; Jacobs, Reinhilde; Fieuws, Steffen; Solheim, Tore; Willems, Guy
2011-01-01
Secondary dentine is responsible for a decrease in the volume of the dental pulp cavity with aging. The aim of this study was to evaluate a human dental age estimation method based on the ratio between the volume of the pulp and the volume of its corresponding tooth, calculated on clinically taken cone beam computed tomography (CBCT) images of monoradicular teeth. On the 3D images of 111 clinically obtained CBCT scans (Scanora(®) 3D dental cone beam unit) of 57 female and 54 male patients ranging in age between 10 and 65 years, the pulp-tooth volume ratio of 64 incisors, 32 canines, and 15 premolars was calculated with Simplant(®) Pro software. A linear regression model was fit with age as the dependent variable and ratio as the predictor, allowing for interactions with gender or tooth type. The obtained pulp-tooth volume ratios were most strongly related to age for incisors.
Mayhew, T M
1989-01-01
A stereological method for estimating the mean volumes of particles independent of assumptions about their shapes is illustrated using neurons in the ventral horn of rat cervical spinal cord. Male rats of 20 and 120 days post partum were killed by intracardiac perfusion with formaldehyde/glutaraldehyde solutions. Cervical enlargements were removed, trimmed and embedded in resin. Randomised sections of ventral horn were photographed in a systematic pattern and used to estimate the volume-weighted mean volumes of neuronal perikarya and their nuclei. Volumes were estimated from point-sampled intercepts using rulers to classify intercept lengths. Classifying motoneuron perikarya was extremely reproducible, group means (coefficients of variation) at 120 days post partum being 25,190 µm³ (23%) and 24,250 µm³ (25%) in two separate trials. Classifying all neurons at the same age gave values of 20,520 µm³ (22%) and 22,490 µm³ (28%) in two trials. The mean perikaryal volumes of motoneurons at 20 and 120 days of age were not significantly different, but nuclear volumes increased from 1,580 µm³ (16%) at 20 days to 2,660 µm³ (28%) at 120 days. These results illustrate the value of the method for obtaining unbiased and efficient estimates of the sizes of irregular perikarya and their nuclei. The benefit is that sizes can be estimated without biases due to simplifying assumptions about perikaryal/nuclear shape or nucleolar location. The influence of section thickness (even of thick paraffin sections) on the estimates is also negligible. PMID:2808127
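The point-sampled-intercept estimator underlying this method can be sketched as follows. This is a minimal, unbinned version of the standard estimator v_V = (π/3)·mean(l₀³); the paper classifies intercept lengths with rulers (binning), which this sketch omits:

```python
import math


def volume_weighted_mean_volume(intercepts_um):
    """Volume-weighted mean particle volume from point-sampled
    intercept lengths l0, via v_V = (pi/3) * mean(l0**3).

    Each intercept is the length (in µm) of the chord through a
    sampling point in an isotropic random direction; no assumption
    about particle shape is needed.
    """
    cubes = [l ** 3 for l in intercepts_um]
    return math.pi / 3.0 * sum(cubes) / len(cubes)
```

Because the lengths are cubed, large particles dominate the estimate, which is exactly what "volume-weighted" means here.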
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2013 CFR
2013-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2011 CFR
2011-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2010 CFR
2010-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
NASA Astrophysics Data System (ADS)
Trofymow, J. A.; Coops, N.; Hayhurst, D.
2012-12-01
Following forest harvest, residues left on site and at roadsides are often disposed of to reduce fire risk and free planting space. In coastal British Columbia, burn piles are the main method of disposal, particularly for accumulations from log processing. Quantification of residue wood in piles is required for smoke emission estimates, C budget calculations, billable waste assessment, harvest efficiency monitoring, and determination of bioenergy potentials. A second-growth Douglas-fir-dominated site (DF1949) on eastern Vancouver Island, the subject of C flux and budget studies since 1998, was clearcut in winter 2011; residues were piled in spring and burned in fall. Prior to harvest, the site was divided into 4 blocks to account for harvest plans and ecosite conditions. Total harvested wood volume was scaled for each block. Residue pile wood volume was determined by a standard Waste and Residue Survey (WRS) using field estimates of pile base area and plot density (wood volume / 0.005 ha plot) on 2 piles per block; by a smoke-emissions geometric method with pile volumes estimated as ellipsoidal paraboloids and packing ratios (wood volume / pile volume) for 2 piles per block; and by five other GIS methods using pile volumes and areas from LiDAR and orthophotography flown in August 2011, a LiDAR-derived digital elevation model (DEM) from 2008, and total scaled wood volumes of 8 sample piles disassembled in November 2011. A weak but significant negative relationship was found between pile packing ratio and pile volume. Block-level avoidable+unavoidable residue pile wood volumes from the WRS method (20.0 m3 ha-1, SE 2.8) were 30%-50% of those from the geometric (69.0 m3 ha-1, SE 18.0) or five GIS/LiDAR (48.0 to 65.7 m3 ha-1) methods. Block volumes using the 2008 LiDAR DEM (unshifted 48.0 m3 ha-1, SE 3.9; shifted 53.6 m3 ha-1, SE 4.2) to account for pre-existing humps or hollows beneath piles were not different from those using the 2011 LiDAR DEM (50.3 m3 ha-1, SE 4.0). The block volume ratio
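The geometric method mentioned above can be sketched as follows. The π/8·w·l·h ellipsoidal-paraboloid formula is a common convention in smoke-emission pile guides and is an assumption here, since the abstract does not spell it out:

```python
import math


def pile_wood_volume(width_m, length_m, height_m, packing_ratio):
    """Wood volume in a slash pile via the geometric method.

    Treat the pile as an ellipsoidal paraboloid of footprint
    width x length and the given height (gross volume
    V = pi/8 * w * l * h, an assumed convention), then multiply by
    the measured packing ratio (wood volume / pile volume).
    """
    gross_pile_volume = math.pi / 8.0 * width_m * length_m * height_m
    return gross_pile_volume * packing_ratio
```

The negative packing-ratio/volume relationship reported above means `packing_ratio` should not be treated as a constant across pile sizes.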
Hinaman, Kurt
2005-01-01
The Powder River Basin in Wyoming and Montana is an important source of energy resources for the United States. Coalbed methane gas is contained in Tertiary and upper Cretaceous hydrogeologic units in the Powder River Basin. This gas is released when water pressure in coalbeds is lowered, usually by pumping ground water. Issues related to disposal and uses of by-product water from coalbed methane production have developed, in part, due to uncertainties in hydrologic properties. One hydrologic property of primary interest is the amount of water contained in Tertiary and upper Cretaceous hydrogeologic units in the Powder River Basin. The U.S. Geological Survey, in cooperation with the Bureau of Land Management, conducted a study to describe the hydrogeologic framework and to estimate ground-water volumes in different facies of Tertiary and upper Cretaceous hydrogeologic units in the Powder River Basin in Wyoming. A geographic information system was used to compile and utilize hydrogeologic maps, to describe the hydrogeologic framework, and to estimate the volume of ground water in Tertiary and upper Cretaceous hydrogeologic units in the Powder River structural basin in Wyoming. Maps of the altitudes of potentiometric surfaces, altitudes of the tops and bottoms of hydrogeologic units, thicknesses of hydrogeologic units, percent sand of hydrogeologic units, and outcrop boundaries for the following hydrogeologic units were used: Tongue River-Wasatch aquifer, Lebo confining unit, Tullock aquifer, Upper Hell Creek confining unit, and the Fox Hills-Lower Hell Creek aquifer. Literature porosity values of 30 percent for sand and 35 percent for non-sand facies were used to calculate the volume of total ground water in each hydrogeologic unit. Literature specific yield values of 26 percent for sand and 10 percent for non-sand facies, and literature specific storage values of 0.0001 ft-1 (per foot) for sand facies and 0.00001 ft-1 for non-sand facies, were used to calculate a
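The porosity-weighted volume calculation can be sketched as below. The porosities (0.30 for sand, 0.35 for non-sand facies) are the literature values quoted in the abstract; the area, thickness, and sand-fraction inputs are hypothetical stand-ins for the GIS-derived maps:

```python
def total_groundwater_volume(area, thickness, sand_fraction,
                             porosity_sand=0.30, porosity_nonsand=0.35):
    """Total ground water in a hydrogeologic unit: bulk saturated
    volume (area * thickness) times porosity, with porosity averaged
    between sand and non-sand facies by the unit's percent sand.

    Units are whatever area * thickness is expressed in (e.g. ft^3).
    """
    bulk_volume = area * thickness
    mean_porosity = (sand_fraction * porosity_sand
                     + (1.0 - sand_fraction) * porosity_nonsand)
    return bulk_volume * mean_porosity
```

The same weighted-average pattern applies to the specific yield and specific storage calculations, with the corresponding literature values substituted.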
OLI/ESP Modeling Of The Semi-Integrated Pilot Plant For Estimate Of Campaigns I-IV Simulant Volumes
CARL, BARNES
2004-08-01
Four SIPP campaigns have been planned to investigate the effect of recycle streams on the RPP-WTP pretreatment process, such as the filter flux rate and other areas of interest. This document describes OLI/ESP modeling work done in support of the planning and operation of the SIPP. An existing OLI/ESP steady-state model was expanded to represent the pretreatment system through to the TLP evaporator for the LAW train and the washed sludge for the HLW train. The model was used to investigate alternative operating scenarios, determine the optimum volumetric waste feed ratio of AP-101 to AY-102, and, for each campaign, estimate the simulant and input recycle volumes corresponding to the target glass production rates of 6 MT/day HLW glass and 80 MT/day LAW glass, scaled to the target of 140 L of Campaign I washed sludge. It was designed to quickly achieve steady state, and simulation results indicate this was accomplished by Campaign IV. The alternative operating scenarios modeled differed only in the point at which the AP-101 and AY-102 waste feed streams were introduced to the process. The results showed no difference in the production rate between the scenarios. Therefore, for these specific waste feeds the process should be operated to maximize the energy efficiency and minimize scaling in the evaporator by feeding the AY-102 waste feed to the ultra-filtration feed prep tank, bypassing the waste feed evaporator.
Richards, Joseph M.; Green, W. Reed
2013-01-01
Millwood Lake, in southwestern Arkansas, was constructed and is operated by the U.S. Army Corps of Engineers (USACE) for flood-risk reduction, water supply, and recreation. The lake was completed in 1966 and it is likely that with time sedimentation has resulted in the reduction of storage capacity of the lake. The loss of storage capacity can cause less water to be available for water supply, and lessens the ability of the lake to mitigate flooding. Excessive sediment accumulation also can cause a reduction in aquatic habitat in some areas of the lake. Although many lakes operated by the USACE have periodic bathymetric and sediment surveys, none have been completed for Millwood Lake. In March 2013, the U.S. Geological Survey (USGS), in cooperation with the USACE, surveyed the bathymetry of Millwood Lake to prepare an updated bathymetric map and area/capacity table. The USGS also collected sediment thickness data in June 2013 to estimate the volume of sediment accumulated in the lake.
NASA Astrophysics Data System (ADS)
Shamsalsadati, Sharmin; Weiss, Chester J.
2012-09-01
From a theoretical perspective, perfect Green's function recovery in diffusive systems is based on cross-correlation of time-series measured at distinct locations arising from background fluctuations from an infinite set of uncorrelated sources, either naturally occurring or engineered. Clearly such a situation is impossible in practice, and a relevant question to ask, then, is how does an imperfect set of noise sources affect the quality of the resulting empirical Green's function (EGF)? We narrow down this broad question by exploring the effect of source location and make no distinction between whether the noise sources are natural or man made. Following the theory of EGF recovery, the only requirement is that the sources are uncorrelated and endowed with the same (or nearly so) frequency spectrum and amplitude. As such, our intuition suggests that noise sources proximal to the observation points are likely to contribute more to the Green's function estimate than distal ones. However, in what manner and over what spatial extent our intuition is less clear. Thus, in this short note we specifically ask the question, 'Where are the noise sources that contribute most to the Green's function estimate in heterogeneous, lossy systems?' We call such a region the volume of relevance (VoR). Our analysis builds upon recent work on 1-D homogeneous systems by examining the effect of heterogeneity, dimensionality and receiver location in both one and two dimensions. Following the strategy of previous work in the field, the analysis is conducted out of mathematical convenience in the frequency domain although we stress that the sources need not be monochromatic. We find that for receivers located symmetrically across an interface between regions of contrasting diffusivity, the VoR rapidly shifts from one side of the interface to the other, and back again, as receiver separation increases. For the case where the receiver pair is located on the interface itself, the shifting is
Layec, Gwenael; Venturelli, Massimo; Jeong, Eun-Kee; Richardson, Russell S
2014-05-01
The assessment of muscle volume, and of changes over time, has significant clinical and research-related implications. Methods to assess muscle volume vary from simple and inexpensive to complex and expensive. Therefore this study sought to examine the validity of muscle volume estimated simply by anthropometry compared with the more complex proton magnetic resonance imaging (1H-MRI) across a wide spectrum of individuals, including those with a spinal cord injury (SCI), a group recognized to exhibit significant muscle atrophy. Accordingly, muscle volume of the thigh and lower leg of eight subjects with a SCI and eight able-bodied subjects (controls) was determined by anthropometry and 1H-MRI. With either method, muscle volumes were significantly lower in the SCI group than in the controls (P < 0.05), and, using pooled data from both groups, anthropometric measurements of muscle volume were strongly correlated with the values assessed by 1H-MRI in both the thigh (r2 = 0.89; P < 0.05) and lower leg (r2 = 0.98; P < 0.05). However, the anthropometric approach systematically overestimated muscle volume compared with 1H-MRI in both the thigh (mean bias = 2,407 cm3) and the lower leg (mean bias = 170 cm3). Thus, with an appropriate correction for this systematic overestimation, muscle volume estimated from anthropometric measurements is a valid approach and provides acceptable accuracy across a spectrum of adults, from normal muscle mass to SCI with severe muscle atrophy. In practical terms, this study provides the formulas that add validity to the already simple and inexpensive anthropometric approach to assessing muscle volume in clinical and research settings.
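A minimal sketch of correcting for the reported systematic overestimation, using only the mean biases quoted in the abstract; the paper's actual correction formulas may be more sophisticated than a constant offset:

```python
# Mean biases reported in the abstract (anthropometry minus 1H-MRI), in cm^3.
MEAN_BIAS_CM3 = {"thigh": 2407.0, "lower_leg": 170.0}


def corrected_muscle_volume(anthropometric_cm3, segment):
    """Subtract the reported mean bias from an anthropometric
    muscle-volume estimate for the given segment.

    A constant-offset simplification of the study's correction;
    segment must be "thigh" or "lower_leg".
    """
    return anthropometric_cm3 - MEAN_BIAS_CM3[segment]
```

Because the correlation with 1H-MRI was strong (r2 of 0.89 and 0.98), a fixed-offset or regression-based correction like this can recover acceptable accuracy from the cheap anthropometric measurement.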
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
Huizinga, Richard J.
2014-01-01
The rainfall-runoff pairs from the storm-specific GUH analysis were further analyzed against various basin and rainfall characteristics to develop equations to estimate the peak streamflow and flood volume based on a quantity of rainfall on the basin.
NASA Astrophysics Data System (ADS)
Feliciano, E. A.; Wdowinski, S.; Potts, M. D.
2012-12-01
Mangrove forests are being threatened by accelerated climate change, sea level rise and coastal projects. Carbon/above-ground biomass (AGB) losses due to natural or human intervention can affect global warming. Thus, it is important to monitor AGB fluctuations in mangrove forests such as those inhabiting the Everglades National Park (ENP). Tree volume and tree wood specific density are two important measurements for the estimation of AGB (mass = volume * density). Wood specific density is acquired in the laboratory by analyzing stem cores collected in the field. Tree volume, however, is a challenging measurement because trees resemble tapered surfaces. The majority of published studies estimate tree volume and biomass using allometric equations, which describe the size, shape, volume or AGB of a given population of trees. However, these equations can be extremely general and might not give a representative value of volume or AGB for a specific tree species. In order to obtain precise biomass estimates, other methodologies for tree volume estimation are needed. To overcome this problem, we use a state-of-the-art remote sensing tool known as ground-based LiDAR, also known as a Terrestrial Laser Scanner (TLS), which can be used to precisely measure vegetation structure and tree volume from its 3-D point cloud. We surveyed three mangrove communities (Rhizophora mangle, Laguncularia racemosa and Avicennia germinans) at three different sites along Shark River Slough (SRS), which is the primary source of water to the ENP. Our sites included small-, intermediate- and tall-size mangroves. Our ground measurements included both traditional forestry surveys and TLS surveys, allowing comparison of tree attributes (tree height and diameter at breast height, DBH). These attributes are used as input to allometric equations for the estimation of tree volume and AGB. A total of 25 scans were collected in 2011 with a Leica ScanStation C10 TLS. The 3-D point cloud acquired from the TLS data revealed that
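The mass = volume * density relationship quoted above is trivial to encode; only the function name and units are choices made here:

```python
def above_ground_biomass_kg(tree_volume_m3, wood_density_kg_m3):
    """Above-ground biomass via mass = volume * density, as stated in
    the abstract. Tree volume would come from the TLS point cloud (or
    an allometric equation); wood specific density from laboratory
    analysis of stem cores."""
    return tree_volume_m3 * wood_density_kg_m3
```

The study's point is that improving the volume term (TLS versus generic allometry) directly improves the biomass estimate, since density is measured independently.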
Jagannathan, N; Neelakantan, P; Thiruvengadam, C; Ramani, P; Premkumar, P; Natesan, A; Herald, J S; Luder, H U
2011-07-01
The present study assessed the suitability of the pulp/tooth volume ratio of mandibular canines for age prediction in an Indian population. Volumetric reconstruction from computed tomography scans of mandibular canines from 140 individuals (aged 10-70 years) was used to measure pulp and tooth volumes. Age calculated using a formula reported earlier for a Belgian sample resulted in errors > 10 years in almost 86% of the study population. The regression equation obtained for the Indian population, Age = 57.18 + (-413.41 x pulp/tooth volume ratio), was applied to an independent control group (n = 48), and this resulted in a mean absolute error of 8.54 years, which was significantly (p < 0.05) lower than that derived with the Belgian formula. The pulp/tooth volume ratio is a useful indicator of age, although correlations may vary in different populations and hence population-specific formulae should be applied for the estimates. PMID:21841263
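The reported Indian-population regression can be applied directly, for example:

```python
def estimate_age_from_ratio(pulp_tooth_ratio):
    """Age estimate (years) from the pulp/tooth volume ratio of a
    mandibular canine, using the Indian-population regression
    reported in the abstract: Age = 57.18 + (-413.41 * ratio).

    Smaller pulp cavities (lower ratios) imply older age, since
    secondary dentine deposition shrinks the pulp over time.
    """
    return 57.18 - 413.41 * pulp_tooth_ratio
```

As the abstract stresses, this formula is population-specific: applying the Belgian-sample formula to this population produced errors exceeding 10 years in most cases.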
Ertl-Wagner, Birgit B; Blume, Jeffrey D; Peck, Donald; Udupa, Jayaram K; Herman, Benjamin; Levering, Anthony; Schmalfuss, Ilona M
2009-03-01
Reliable assessment of tumor growth in malignant glioma poses a common problem both clinically and when studying novel therapeutic agents. We aimed to evaluate two software systems in their ability to estimate volume change of tumor and/or edema on magnetic resonance (MR) images of malignant gliomas. Twenty patients with malignant glioma were included from different sites. Serial post-operative MR images were assessed with two software systems representative of the two fundamental segmentation methods, single-image fuzzy analysis (3DVIEWNIX-TV) and multi-spectral-image analysis (Eigentool), and with a manual method, by 16 independent readers (eight MR-certified technologists, four neuroradiology fellows, four neuroradiologists). Enhancing tumor volume and tumor volume plus edema were assessed independently by each reader. Intraclass correlation coefficients (ICCs), variance components, and prediction intervals were estimated. There were no significant differences in the average tumor volume change over time between the software systems (p > 0.05). Both software systems were much more reliable and yielded smaller prediction intervals than manual measurements. No significant differences were observed between the volume changes determined by fellows/neuroradiologists or technologists. Semi-automated software systems are reliable tools to serve as outcome parameters in clinical studies and as the basis for therapeutic decision-making for malignant gliomas, whereas manual measurements are less reliable and should not be the basis for clinical or research outcome studies. PMID:18925402
Bielicka-Daszkiewicz, Katarzyna; Voelkel, Adam; Rusińska-Roszak, Danuta; Zarzycki, Paweł K
2013-03-01
SPE is a very popular technique, commonly used for the prepurification, concentration, and isolation of different organic compounds from variable matrices. In this work, an optimization of the SPE process was carried out. The breakthrough volume of solid sorbents based on octadecylsilane was determined, and three methods were compared: (1) a calculation method, in which the breakthrough volume was calculated using the retention factor k determined by micro-TLC; and, from frontal analysis, (2) the breakthrough volume determined as the volume of the whole elution peak, and (3) the breakthrough volume determined as the center of gravity of the peak. For the calculation method, the k values of key estrogens and progestogens were derived from the micro-TLC experiment reported previously. By combining these three methods, we can pinpoint the start of elution, the maximum concentration of analyte in the eluate, and the whole eluent volume necessary to achieve an appropriate selectivity and high extraction recovery. The proposed calculation method allows estimation of the beginning of the steroid peak, when the analyte first appears in the eluate flowing from the sorbent. Such an observation advances the SPE optimization protocol described before, which was based on the correlation between raw k(SPE) and k(micro-TLC) data.
Hugues, B; Pietri, C; Andre, M
1985-12-01
Two titration methods for the quantification of viruses present in the environment are compared: plaque counting, and determination of the most probable number with a large number of inocula at each dilution. Titration of virus suspensions and of sewage samples showed that, for a given volume of inoculum, in most cases there was no statistically significant difference between the virus titres given by the two methods. The precision of the results was the same for the two methods. When the volume of inoculum used at each dilution differed from one method to another, the width of the confidence interval increased as the volume of inoculum decreased.
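For context, the plaque-counting side of this comparison computes a titre as plaques over (dilution x inoculum volume). This generic formula is standard virology rather than the paper's specific protocol:

```python
def titre_pfu_per_ml(plaque_count, dilution_factor, inoculum_volume_ml):
    """Virus titre (plaque-forming units per ml) from a plaque assay:
    concentration = plaques / (dilution_factor * inoculum_volume).

    dilution_factor is the fraction of the original sample present in
    the inoculum (e.g. 1e-3 for a 10^-3 dilution).
    """
    return plaque_count / (dilution_factor * inoculum_volume_ml)
```

The MPN alternative instead fits the pattern of positive/negative inocula across dilutions to a Poisson model, which is why its confidence interval widens as the inoculum volume shrinks.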
NASA Technical Reports Server (NTRS)
Ripple, William J.; Wang, S.; Isaacson, Dennis L.; Paine, D. P.
1995-01-01
Digital Landsat Thematic Mapper (TM) and Satellite Probatoire d'Observation de la Terre (SPOT) High Resolution Visible (HRV) images of coniferous forest canopies were compared in their relationship to forest wood volume using correlation and regression analyses. Significant inverse relationships were found between softwood volume and the spectral bands from both sensors (P less than 0.01). The highest correlations were between the log of softwood volume and the near-infrared bands (HRV band 3, r = -0.89; TM band 4, r = -0.83).
Bryan, J.L.; Wildhaber, M.L.; Papoulias, D.M.; DeLonay, A.J.; Tillitt, D.E.; Annis, M.L.
2007-01-01
Most species of sturgeon are declining in the Mississippi River Basin of North America including pallid (Scaphirhynchus albus F. and R.) and shovelnose sturgeons (S. platorynchus R.). Understanding the reproductive cycle of sturgeon in the Mississippi River Basin is important in evaluating the status and viability of sturgeon populations. We used non-invasive, non-lethal methods for examining internal reproductive organs of shovelnose and pallid sturgeon. We used an ultrasound to measure egg diameter, fecundity, and gonad volume; endoscope was used to visually examine the gonad. We found the ultrasound to accurately measure the gonad volume, but it underestimated egg diameter by 52%. After correcting for the measurement error, the ultrasound accurately measured the gonad volume but it was higher than the true gonad volume for stages I and II. The ultrasound underestimated the fecundity of shovelnose sturgeon by 5%. The ultrasound fecundity was lower than the true fecundity for stage III and during August. Using the endoscope, we viewed seven different egg color categories. Using a model selection procedure, the presence of four egg categories correctly predicted the reproductive stage ± one reproductive stage of shovelnose sturgeon 95% of the time. For pallid sturgeon, the ultrasound overestimated the density of eggs by 49% and the endoscope was able to view eggs in 50% of the pallid sturgeon. Individually, the ultrasound and endoscope can be used to assess certain reproductive characteristics in sturgeon. The use of both methods at the same time can be complementary depending on the parameter measured. These methods can be used to track gonad characteristics, including measuring Gonadosomatic Index in individuals and/or populations through time, which can be very useful when associating gonad characteristics with environmental spawning triggers or with repeated examinations of individual fish throughout the reproductive cycle.
Not Available
1989-01-01
This volume is one in a series of manuals prepared for EPA to assist its Remedial Project Managers in assessing the air contaminant pathway and developing input data for risk assessment. The manual provides guidance on developing baseline-emission estimates (BEEs) from hazardous waste sites. BEEs are defined as emission rates estimated for a site in its undisturbed state. Specifically, the manual is intended to: Present a protocol for selecting the appropriate level of effort to characterize baseline air emissions; Assist site managers in designing an approach for BEEs; Describe useful technologies for developing site-specific BEEs; Help site managers select the appropriate technologies for generating site-specific BEEs.
NASA Technical Reports Server (NTRS)
Roddy, D. J.; Shoemaker, E. M.; Anderson, R. R.
1993-01-01
A research program on the Manson impact structure has substantially improved our knowledge of the detailed features of this eroded crater. As part of our structural studies, we have derived a value of 21 km for the diameter of the final transient cavity formed during crater excavation. With this information, we can estimate the energy of formation of the Manson crater and the possible size of the impacting asteroid or comet. In addition, we have estimated the near- and far-field ejecta volumes and masses.
Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.
2011-01-01
During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for the drainage basins occupied by these four creeks. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration rainfall; and topographic and soil-property characteristics of the drainage basins. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for the 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for the 25-year-recurrence, 1-hour-duration rainfall.
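Empirical postwildfire debris-flow probability models of this kind are typically logistic regressions on basin and rainfall predictors. A minimal sketch of the logistic form follows; the predictor set and all coefficients are illustrative assumptions for this sketch, not the fitted values used in the USGS models:

```python
import math

def debris_flow_probability(x):
    """Logistic link: P = e^x / (1 + e^x), mapping a linear predictor to (0, 1)."""
    return math.exp(x) / (1.0 + math.exp(x))

def linear_predictor(burn_frac, rain_intensity_mm_h, clay_frac,
                     coeffs=(-3.0, 2.5, 0.05, 1.0)):
    """Linear combination of basin predictors (burned-area fraction,
    rainfall intensity, soil clay fraction). The coefficients here are
    purely illustrative, not statistically fitted values."""
    b0, b1, b2, b3 = coeffs
    return b0 + b1 * burn_frac + b2 * rain_intensity_mm_h + b3 * clay_frac

# A heavily burned basin under an intense 1-hour storm (illustrative inputs).
p = debris_flow_probability(linear_predictor(0.8, 25.0, 0.2))
```

A higher-recurrence (more intense) design storm raises the linear predictor and hence the probability, mirroring the 5-year versus 25-year contrast reported above.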
Sedlacik, Jan; Reichenbach, Jürgen R
2010-04-01
The blood oxygenation level dependent signal of cerebral tissue can be theoretically derived using a network model formed by randomly oriented, infinitely long cylinders. The validation of this model by phantom and in vivo experiments is still an object of research. A network phantom was constructed of solid polypropylene strings immersed in silicone oil, which essentially eliminated the effect of spin diffusion. The volume fraction and magnetic properties of the string network were predetermined by independent methods. Ten healthy volunteers were measured for in vivo demonstration. The gradient echo sampled spin echo signal was evaluated with the cylinder network model. We found a strong interdependency between the two network-characterizing parameters, deoxygenated blood volume and oxygen extraction fraction: different sets of deoxygenated blood volume/oxygen extraction fraction values were able to describe the measured signal equally well. However, by setting one parameter constant to a predetermined value, reasonable estimates of the other parameter were obtained. The same behavior was found for the in vivo demonstration. The signal theory of the cylinder network was validated by a well-characterized phantom. However, the interdependency found between deoxygenated blood volume and oxygen extraction fraction requires an independent estimation of one variable to determine reliable values of the other parameter. PMID:20373392
NASA Astrophysics Data System (ADS)
Del Gobbo, Costanza; Colucci, Renato R.; Forte, Emanuele; Triglav Čekada, Michaela; Zorn, Matija
2016-08-01
It is well known that small glaciers of the mid-latitudes, especially those located at low altitude, respond rapidly to climate changes on both local and global scales. For this reason their monitoring, as well as evaluation of their extent and volume, is essential. We present a ground penetrating radar (GPR) dataset acquired on September 23 and 24, 2013 on the Triglav glacier to identify layers with different characteristics (snow, firn, ice, debris) within the glacier and to define the extent and volume of the actual ice. By integrating and interpolating the whole GPR dataset in 3D, we estimate that at the time of data acquisition the ice area was 3800 m² and the ice volume 7400 m³. The average ice thickness was 1.95 m, while the maximum thickness was slightly more than 5 m. We compare these results with a previous GPR survey acquired in 2000, and also present a critical review of the historical data to identify the general trend and forecast a possible evolution. Between 2000 and 2013, we observed substantial changes in the internal distribution of the different units (snow, firn, ice), and the ice volume shrank from about 35,000 m³ to about 7400 m³. Such a result can be achieved only with repeated GPR surveys, which make it possible not only to assess the volume occupied by a glacial body, but also to image its internal structure and the actual ice volume. In fact, applying one of the widely used empirical volume-area relations to infer the geometrical parameters of the glacier would substantially underestimate the ice loss.
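Once the GPR picks are interpolated onto a regular grid, the 3D integration step reduces to summing thickness over cells. A minimal sketch with a synthetic thickness grid (grid size, cell spacing and values are stand-ins, not the Triglav data):

```python
import numpy as np

cell = 2.0                       # grid spacing in metres (assumed)
thickness = np.zeros((60, 60))   # interpolated ice-thickness map in m (synthetic)
thickness[10:50, 10:50] = 1.95   # uniform patch standing in for the real picks

ice_mask = thickness > 0
area_m2 = ice_mask.sum() * cell**2     # plan-view ice area
volume_m3 = thickness.sum() * cell**2  # thickness integrated over all cells
mean_thickness_m = volume_m3 / area_m2
```

With the real interpolated grid, the same two sums yield the reported 3800 m² area and 7400 m³ volume, and their ratio gives the 1.95 m average thickness.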
NASA Astrophysics Data System (ADS)
Egorova, Tatiana; Gatsonis, Nikolaos A.; Demetriou, Michael A.
2013-11-01
In this work the release of gas into the atmosphere by a moving aerial source is simulated and estimated using a sensing aerial vehicle (SAV). The process is modeled with the atmospheric advection-diffusion equation, which is solved by the finite volume method (FVM); advective fluxes are constrained using a total variation diminishing (TVD) approach. The estimator provides on-line estimates of the concentration field and the proximity of the source, and the guidance of the SAV is dictated by the performance of the estimator. To further improve the estimation algorithm from a computational perspective, the grid is adapted dynamically through local refinement and coarsening. The adaptation algorithm uses the current sensor position as the center of refinement, with areas further away from the SAV covered by a coarse grid. This leads to a time-varying estimator state matrix whose variation depends on the SAV motion. Advantages of the adaptive FVM-TVD implementation are illustrated by examples of estimator performance for different source trajectories.
Bigum, Lene Hyldgaard; Ulriksen, Peter Sommer; Omar, Omar Salah
2016-10-01
This study describes and evaluates the use of non-contrast enhanced computerized tomography (NCCT) before and after extracorporeal shockwave lithotripsy (SWL). Computer-measured stone volume was used as an exact measurement of treatment response. 81 patients received SWL of kidney stones at Herlev Hospital between April 2013 and January 2014, and follow-up was possible in 77 (95 %) patients. NCCT was used before and after treatment, and treatment response was expressed as a reduction of the stone volume. Stone characteristics such as stone volume, HU, SSD and localization were measured by a radiologist using a vendor non-specific computer program. Complications, patient characteristics and additional treatment were registered. On average, 5858 shocks were given per patient. The follow-up NCCT was performed 24 days after treatment. It was possible to calculate the stone volume in 88 % of the patients; in the remaining 12 % this was not possible due to stone disintegration. The stone-free rate was 22 %. The average relative reduction in stone burden was 62 %. Only 8 % of the patients were radiological non-responders. Steinstrasse was observed in 13 (17 %) patients, and 28 (36 %) patients had additional treatment performed. The irradiation dose per NCCT was 2.6 mSv. Stone volume could be calculated in most patients. The relative reduction in stone burden after treatment was 62 %. The stone volume was redundant when evaluating stone-free patients, but in cases of partial response it gave an exact quantification to be used in the further management and follow-up of the patients. PMID:26914829
NASA Astrophysics Data System (ADS)
Garson, Christopher D.; Li, Bing; Hossack, John A.
2007-03-01
Active contours have been used in a wide variety of image processing applications due to their ability to effectively distinguish image boundaries with limited user input. In this paper, we consider 3D gradient vector field (GVF) active surfaces and their application to determining the volume of the mouse heart left ventricle. The accuracy and efficacy of a 3D active surface are strongly dependent upon the selection of several parameters, corresponding to the tension and rigidity of the active surface and the weight of the GVF; however, selection of these parameters is often subjective and iterative. We observe that the volume of the cardiac muscle is, to a good approximation, conserved through the cardiac cycle. Therefore, we propose using the degree of conservation of heart muscle volume as a metric for assessing the optimality of a particular set of active surface parameters. A synthetic dataset consisting of nested ellipsoids of known volume was constructed: the outer ellipsoid contracted over time to imitate a heart cycle, and the inner ellipsoid compensated to maintain constant volume. The segmentation algorithm was also investigated in vivo using B-mode data sets obtained by scanning the hearts of three separate mice. Active surfaces were initialized using a broad range of values for each of the parameters under consideration. Conservation of volume was a useful predictor of the efficacy of the model for the range of values tested for the GVF weighting parameter, though it was less effective at predicting the efficacy of the active surface tension and rigidity parameters.
Vegetation cover and volume estimates in semi-arid rangelands using LiDAR and hyperspectral data
Technology Transfer Automated Retrieval System (TEKTRAN)
Sagebrush covers 1.1 × 10⁶ km² of North American rangelands and is an important cover type for many species. Like most vegetation, sagebrush cover and height varies across the landscape. Accurately mapping this variation is important for certain species, such as the greater sage-grouse, where sagebr...
Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data
NASA Technical Reports Server (NTRS)
Vanderesch, A. H.
1972-01-01
Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects on the baseline motor are initially considered. From the baseline, sufficient data is obtained to provide cost estimates of alternate approaches.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
... published in the Federal Register on August 6, 2010 (75 FR 47490), on the use of an estimated trade demand... Register on August 6, 2010 (75 FR 47490), on the establishment of an estimated trade demand figure to..., 2010 (75 FR 47490), is hereby withdrawn. List of Subjects in 7 CFR Part 989 Grapes,...
Boreman, J.; Barnthouse, L.W.; Vaughn, D.S.; Goodyear, C.P.; Christensen, S.W.; Kumar, K.D.; Kirk, B.L.; Van Winkle, W.
1982-01-01
This volume is concerned with the estimation of the direct (or annual) entrainment impact of power plants on populations of striped bass, white perch, Alosa spp. (blueback herring and alewife), American shad, Atlantic tomcod, and bay anchovy in the Hudson River estuary. Entrainment impact results from the killing of fish eggs, larvae, and young juveniles that are contained in the cooling water cycled through a power plant. An Empirical Transport Model (ETM) is presented as the means of estimating a conditional entrainment mortality rate (defined as the fraction of a year class which would be killed due to entrainment in the absence of any other source of mortality). Most of this volume is concerned with the estimation of several parameters required by the ETM: physical input parameters (e.g., power-plant withdrawal flow rates); the longitudinal distribution of ichthyoplankton in time and space; the duration of susceptibility of the vulnerable organisms; the W-factors, which express the ratios of densities of organisms in power plant intakes to densities of organisms in the river; and the entrainment mortality factors (f-factors), which express the probability that an organism will be killed if it is entrained. Once these values are obtained, the ETM is used to estimate entrainment impact for both historical and projected conditions.
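Schematically, the ETM combines, for each region/time cell, the withdrawn flow fraction, the W-factor and the f-factor into a per-capita kill probability and aggregates over the cohort's distribution. The single-exposure structure and all numbers below are illustrative assumptions for this sketch; the actual ETM integrates repeated exposures over the full spatial and temporal distribution of the ichthyoplankton:

```python
def conditional_mortality(cells):
    """cells: (cohort_fraction, flow_fraction, W, f) per region/time cell.
    W: intake/river density ratio; f: probability of death if entrained.
    Returns the fraction of the year class killed by entrainment alone."""
    surviving = 0.0
    for cohort_frac, flow_frac, W, f in cells:
        kill = flow_frac * W * f            # per-capita kill probability here
        surviving += cohort_frac * (1.0 - kill)
    return 1.0 - surviving

# Two illustrative cells: half the cohort near each of two hypothetical plants.
cells = [
    (0.5, 0.02, 0.8, 0.6),
    (0.5, 0.01, 1.2, 0.4),
]
m = conditional_mortality(cells)
```

Because the rate is conditional (entrainment in the absence of other mortality), it can be combined multiplicatively with other mortality sources in subsequent impact projections.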
NASA Astrophysics Data System (ADS)
Caricchi, Luca; Simpson, Guy; Schaltegger, Urs
2016-04-01
Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages, assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at a temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of a population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages, to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
NASA Astrophysics Data System (ADS)
Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.
2012-12-01
Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high resolution optical satellite images, such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lacked any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically-corrected data from ASTER and MODIS as input to the water-reflectance model. Three physical parameters are needed: bed albedo, water attenuation coefficient and reflectance of optically-deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the
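The Bouguer-Lambert-Beer inversion at the core of such water-reflectance models, together with a toy Monte Carlo perturbation of one input parameter, can be sketched as follows (all parameter values are illustrative assumptions, not the calibrated ones):

```python
import math
import random

def lake_depth(R_pix, A_d, R_inf, g):
    """Invert the Bouguer-Lambert-Beer relation for water depth:
        z = (ln(A_d - R_inf) - ln(R_pix - R_inf)) / g
    R_pix: observed reflectance, A_d: lake-bed albedo,
    R_inf: optically-deep-water reflectance, g: attenuation coefficient (1/m)."""
    return (math.log(A_d - R_inf) - math.log(R_pix - R_inf)) / g

# A pixel at the bed albedo reads as zero depth.
assert lake_depth(R_pix=0.55, A_d=0.55, R_inf=0.05, g=0.8) == 0.0

z = lake_depth(R_pix=0.30, A_d=0.55, R_inf=0.05, g=0.8)

# Toy Monte Carlo: propagate uncertainty in bed albedo into depth.
random.seed(0)
depths = [lake_depth(0.30, 0.55 + random.gauss(0.0, 0.02), 0.05, 0.8)
          for _ in range(200)]
```

Replacing the single perturbed input with draws over all three physical parameters gives the per-pixel depth (and hence volume) uncertainty described above.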
NASA Technical Reports Server (NTRS)
Rotto, Susan L.; Tanaka, Kenneth L.
1992-01-01
The Chryse Planitia region of Mars includes several outflow channels that debouched into a single basin. Here we evaluate possible volumes and areal extents of standing bodies of water that collected in the northern lowland plains, based on evidence provided by topography, fluvial relations, and channel chronology and geomorphology.
Lu, Zhiming; Fielding, E.; Patrick, M.R.; Trautwein, C.M.
2003-01-01
Interferometric synthetic aperture radar (InSAR) techniques are used to calculate the volume of extrusion at Okmok volcano, Alaska, by constructing precise digital elevation models (DEMs) that represent volcano topography before and after the 1997 eruption. The posteruption DEM is generated using airborne topographic synthetic aperture radar (TOPSAR) data, where a three-dimensional affine transformation is used to account for the misalignments between different DEM patches. The preeruption DEM is produced using repeat-pass European Remote Sensing satellite data; multiple interferograms are combined to reduce errors due to atmospheric variations, and deformation rates are estimated independently and removed from the interferograms used for DEM generation. The extrusive flow volume associated with the 1997 eruption of Okmok volcano is 0.154 ± 0.025 km³. The thickest portion is approximately 50 m, although field measurements of the flow margin's height do not exceed 20 m. The in situ measurements at lava edges are not representative of the total thickness, and precise DEM data are absolutely essential to calculate eruption volume based on lava thickness estimations. This study is an example that demonstrates how InSAR will play a significant role in studying volcanoes in remote areas.
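The volume step itself is DEM differencing: subtract the preeruption surface from the posteruption surface and integrate positive elevation change over the grid. A minimal sketch with synthetic DEMs (grid, cell size and flow geometry are stand-ins, not the Okmok data):

```python
import numpy as np

cell = 10.0                      # DEM cell size in metres (assumed)
pre = np.zeros((100, 100))       # preeruption elevations (synthetic)
post = pre.copy()
post[40:60, 40:60] += 30.0       # a 200 m x 200 m flow patch, 30 m thick

dh = post - pre                  # elevation change = new lava thickness
volume_m3 = dh[dh > 0].sum() * cell**2   # integrate thickness over cells
volume_km3 = volume_m3 / 1e9
```

This is why precise, co-registered DEMs matter: any systematic bias in `dh` is multiplied by the full flow area when summed into a volume.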
Urbanowicz, J.H.; Shaaban, M.J.; Cohen, N.H.; Cahalan, M.K.; Botvinick, E.H.; Chatterjee, K.; Schiller, N.B.; Dae, M.W.; Matthay, M.A. )
1990-04-01
Transesophageal echocardiography (TEE) has become a commonly used monitor of left ventricular (LV) function and filling during cardiac surgery. Its use is based on the assumption that changes in the LV short-axis internal dimension (ID) reflect changes in LV volume. To study the ability of TEE to estimate LV volume and ejection immediately following CABG, 10 patients were studied using blood pool scintigraphy, TEE, and thermodilution cardiac output (CO). A single TEE short-axis cross-sectional image of the LV at the midpapillary muscle level was used for area analysis. Between 1 and 5 h postoperatively, simultaneous data sets (scintigraphy, TEE, and CO) were obtained three to five times in each patient. End-diastolic (EDa) and end-systolic (ESa) areas were measured by light pen. Ejection fraction area (EFa) was calculated as EFa = (EDa - ESa)/EDa. When EFa was compared with EF by scintigraphy, correlation was good (r = 0.82, SEE = 0.07). EDa was taken as an indicator of LV volume and compared with LVEDVI, which was derived from EF by scintigraphy and CO. Correlation between EDa and LVEDVI was fair (r = 0.74, SEE = 3.75). The authors conclude that immediately following CABG, a single cross-sectional TEE image provides a reasonable estimate of EF but not of LVEDVI.
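The fractional area change follows directly from the two planimetered areas. A minimal sketch of the stated formula (the areas below are illustrative, not patient data):

```python
def ejection_fraction_area(ed_area, es_area):
    """Fractional area change from one short-axis TEE cross-section:
       EFa = (EDa - ESa) / EDa."""
    return (ed_area - es_area) / ed_area

# Illustrative end-diastolic and end-systolic areas in cm^2.
efa = ejection_fraction_area(ed_area=20.0, es_area=9.0)
```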
Decostre, P.L.; Salmon, Y. )
1990-10-01
An original approach to background subtraction is presented for 99mTc-DTPA separate glomerular filtration rate (SGFR) estimation in man. The method is based on the properties of the peripheral organ distribution volume (PODV) in mammillary systems. These PODV properties allow easy separation of the components of the renogram, i.e., interstitial fluid, plasma and renal activities. The proposed algorithm takes advantage of the linear time dependence of the kidney distribution volume, during the renal uptake phase, to correct for the plasma residual activity, which always remains after classical background correction. Theoretically, the ratio between kidney uptake and SGFR should be identical for both left and right kidneys, even for very asymmetrical kidney functions. This is best verified when the proposed plasma residual activity correction is applied.
Sasaki, Tomohiko; Kondo, Osamu
2016-09-01
Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years, while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age indicator, which, because of tooth durability, is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit to the observations in the Jomon period sample. Without adjustment for the known rate of pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate falls within the range of modern hunter-gatherers, with a significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15 than the result without adjustment. Considering the ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, the actual life expectancy at age 15 may have been as high as ∼35.3 years.
NASA Astrophysics Data System (ADS)
Lemieux, Louis
2001-07-01
A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was developed specifically in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM) and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): -1.0%; intra-cranial (ICV): +0.1%; CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.
NASA Technical Reports Server (NTRS)
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
Stevens, Michael R.; Bossong, Clifford R.; Litke, David W.; Viger, Roland J.; Rupert, Michael G.; Char, Stephen J.
2008-01-01
Debris flows pose substantial threats to life, property, infrastructure, and water resources. Post-wildfire debris flows may be of catastrophic proportions compared to debris flows occurring in unburned areas. During 2006, the U.S. Geological Survey (USGS), in cooperation with the Northern Colorado Water Conservancy District, initiated a pre-wildfire study to determine the potential for post-wildfire debris flows in the Three Lakes watershed, Grand County, Colorado. The objective was to estimate the probability of post-wildfire debris flows and to estimate the approximate volumes of debris flows from 109 subbasins in the Three Lakes watershed in order to provide the Northern Colorado Water Conservancy District with a relative measure of which subbasins might constitute the most serious debris flow hazards. This report describes the results of the study and provides estimated probabilities of debris-flow occurrence and the estimated volumes of debris flow that could be produced in 109 subbasins of the watershed under an assumed moderate- to high-burn severity of all forested areas. The estimates are needed because the Three Lakes watershed includes communities and substantial water-resources and water-supply infrastructure that are important to residents both east and west of the Continental Divide. Using information provided in this report, land and water-supply managers can consider where to concentrate pre-wildfire planning, pre-wildfire preparedness, and pre-wildfire mitigation in advance of wildfires. Also, in the event of a large wildfire, this information will help managers identify the watersheds with the greatest post-wildfire debris-flow hazards.
NASA Astrophysics Data System (ADS)
Dietze, Michael; Mohadjer, Solmaz; Turowski, Jens; Ehlers, Todd; Hovius, Niels
2016-04-01
Rockfall activity in steep alpine landscapes is often difficult to survey due to its infrequent nature, and classic approaches are limited in temporal and spatial resolution. In contrast, seismic monitoring provides access to catchment-wide analysis of activity patterns in rockfall-dominated environments. The deglaciated U-shaped Lauterbrunnen Valley in the Bernese Oberland, Switzerland, is a perfect example of such a landscape. It was instrumented with up to six broadband seismometers and repeatedly surveyed by terrestrial LiDAR to provide independent validation data. During August-October 2014 and April-June 2015, from 23 (LiDAR) to more than a hundred (seismic) events were detected, with volumes ranging from < 0.01 to 5.80 cubic metres as detected by LiDAR. The evolution of individual events (i.e., precursor activity, detachment, falling phase, impact, talus cone activity) can be quantified in terms of location and duration. For events that consist of single detachments rather than a series of releases, volume scaling relationships are possible. Seismic monitoring is well-suited not only for studying the rockfall process itself, but also for understanding, in a comprehensive way, the geomorphic framework and boundary conditions that control such processes. Taken together, the combined LiDAR and seismic monitoring approach provides high-fidelity spatial and temporal resolution of individual events.
Hammond, P.E.; Korosec, M.A.
1983-12-01
Data collected over the last three years as part of a continuing study of the Quaternary volcanic rocks of the southern Cascade Mountains are presented. Whole-rock chemical analyses, selected trace element geochemistry, volume approximations, specific gravity determinations, and locations are provided for most of the 103 samples collected, and 21 radiometric age dates are included. In addition, partial information, including names and flow volumes, is presented for 98 additional samples collected for related studies. The study area extends from the Columbia River north to the Cowlitz River and Goat Rocks Wilderness area, and from the Klickitat River west to the Puget-Willamette Trough. The volcanic rocks are all younger than 3 million years and consist primarily of tholeiitic and high-alumina basalts and basaltic andesites erupted from numerous shield volcanoes and cinder cones. A few analyses of more silicic rocks, including hornblende and/or pyroxene andesites and dacites characteristic of the stratovolcanoes of the region, are also presented; however, systematic sampling of the stratovolcanoes in the study area, Mount Adams and Mount St. Helens, was not conducted. A map of the areal extent of the Quaternary volcanic units and sample locations is included; it is based on the 1:125,000 reconnaissance geologic map of the southern Cascade Range by Hammond (1980).