Logan, Corina J; Palmstrom, Christin R
2015-01-01
There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858
Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne
2016-08-01
Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow was collected. The results suggest that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variation in milk composition on standardised 24 h estimates for milk volume and components before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency for the herd (NZ Herd Test Standard 8100:2015). PMID:27600967
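The protocol comparison above hinges on the concordance correlation coefficient between shortened-protocol and Gold Standard 24 h estimates. A minimal sketch of Lin's CCC, using invented per-cow yields (the figures below are not from the study):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two sequences
    of paired measurements (population moments). CCC = 1 only when the
    pairs agree exactly, penalizing both scatter and systematic shift."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical per-cow milk yields (kg/24 h): gold-standard 48 h
# protocol vs a shortened two-sample protocol.
gold = [18.2, 22.5, 15.9, 24.1, 19.8]
short = [18.0, 22.9, 15.5, 24.6, 19.4]
print(round(lins_ccc(gold, short), 3))
```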
Direct volume estimation without segmentation
NASA Astrophysics Data System (ADS)
Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.
2015-03-01
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including those of the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step, performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible, while automatic segmentation remains challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that avoid segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the additional segmentation step and can naturally handle various volume estimation tasks. Moreover, they are extremely flexible, applicable to volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac disease to be conducted in a more efficient and reliable way.
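As a caricature of direct (segmentation-free) estimation, one can regress volume directly on a global image statistic. Real direct methods use rich learned features and non-linear regressors; the single-feature linear model and all numbers below are purely illustrative:

```python
def fit_direct_estimator(features, volumes):
    """Ordinary least squares for volume ~ a * feature + b.
    A one-feature caricature of direct (segmentation-free) volume
    estimation: no contours are ever drawn, the regressor maps an
    image statistic straight to a volume."""
    n = len(features)
    mf = sum(features) / n
    mv = sum(volumes) / n
    var = sum((f - mf) ** 2 for f in features)
    cov = sum((f - mf) * (v - mv) for f, v in zip(features, volumes))
    a = cov / var
    b = mv - a * mf
    return a, b

# Hypothetical training pairs: (summed blood-pool intensity, LV volume in ml)
feats = [1.1e5, 1.4e5, 0.9e5, 1.8e5]
vols = [110.0, 140.0, 90.0, 180.0]
a, b = fit_direct_estimator(feats, vols)
print(a * 1.2e5 + b)  # predicted volume for an unseen image
```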
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
Method for estimating critical molar volume of materials is faster and simpler than previous procedures. Formula sums no more than 18 different contributions from components of chemical structure of material, and is as accurate (within 3 percent) as older more complicated models. Method should expedite many thermodynamic design calculations.
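A group-contribution sum of this kind can be sketched as follows. The increment values and the example decomposition are placeholders for illustration, not Fedors' published table:

```python
# Illustrative group-contribution estimate of critical molar volume:
# sum one tabulated increment per structural group present in the
# molecule. The increments below are placeholders, NOT Fedors' values.
GROUP_INCREMENTS_CM3_PER_MOL = {
    "CH3": 55.0,
    "CH2": 44.0,
    "OH": 26.0,
}

def critical_volume(groups):
    """groups: dict of structural group name -> count in the molecule.
    Returns the estimated critical molar volume in cm^3/mol."""
    return sum(GROUP_INCREMENTS_CM3_PER_MOL[g] * n for g, n in groups.items())

# A hypothetical n-propanol-like decomposition: CH3 + 2 CH2 + OH
print(critical_volume({"CH3": 1, "CH2": 2, "OH": 1}))
```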
Organ volume estimation using SPECT
Zaidi, H.
1996-06-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual window method was used for scatter subtraction. The author used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of (1) fixed thresholding, (2) automatic thresholding, (3) attenuation, (4) scatter, and (5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are performed. The relative error is within 7% for the GLH method combined with attenuation and scatter corrections.
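Fixed-threshold voxel counting, the first of the segmentation strategies listed, can be sketched as below on a toy 2D slice. The half-maximum cutoff and voxel size are illustrative; in practice the threshold is calibrated per system (cf. the GLH method):

```python
def volume_by_threshold(image, voxel_volume_ml, threshold_fraction=0.5):
    """Count voxels whose intensity exceeds a fixed fraction of the
    image maximum, then convert the count to a volume. A minimal
    sketch of fixed-threshold SPECT volumetry."""
    flat = [v for row in image for v in row]
    cutoff = threshold_fraction * max(flat)
    count = sum(1 for v in flat if v > cutoff)
    return count * voxel_volume_ml

# Toy 2D "slice": 3 voxels above half-maximum, 0.1 ml per voxel
slice_ = [[0.0, 0.2, 0.9],
          [0.1, 0.8, 1.0],
          [0.0, 0.3, 0.4]]
print(volume_by_threshold(slice_, 0.1))
```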
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
SURFACE VOLUME ESTIMATES FOR INFILTRATION PARAMETER ESTIMATION
Technology Transfer Automated Retrieval System (TEKTRAN)
Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. These calculations are often performed by estimating upstream depth with a normal depth formula. That assumption can result in significant volume estimation errors when upstream flow d...
31 CFR 205.24 - How are accurate estimates maintained?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
Micromagnetometer calibration for accurate orientation estimation.
Zhang, Zhi-Qiang; Yang, Guang-Zhong
2015-02-01
Micromagnetometers, together with inertial sensors, are widely used for attitude estimation in a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors into a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that can best fit the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and a less than 3° root-mean-square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
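The first calibration step, fitting the sensor readings to recover the combined bias, can be sketched as a linear least-squares problem. The sketch below simplifies the ellipsoid to a sphere (ignoring scale and non-orthogonality), so it illustrates the idea rather than reproducing the paper's method:

```python
import numpy as np

def fit_combined_bias(readings):
    """Estimate the magnetometer's combined bias (offset + hard iron)
    by least-squares sphere fitting. For a point p on a sphere with
    centre c and radius r: |p|^2 = 2 c.p + (r^2 - |c|^2), which is
    linear in (c, r^2 - |c|^2). A simplification of the paper's
    ellipsoid fit: scale and non-orthogonality errors are ignored."""
    P = np.asarray(readings, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # the fitted centre = combined bias

# Synthetic noise-free readings: unit directions scaled to a field
# strength of 0.5 gauss, offset by a known bias (all values invented).
bias = np.array([0.20, -0.10, 0.30])
dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0],
                 [0, 0, 1], [0, 0, -1], [1, 1, 1], [-1, 1, -1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
readings = bias + 0.5 * dirs
print(fit_combined_bias(readings))
```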
Age estimation from canine volumes.
De Angelis, Danilo; Gaudio, Daniel; Guercini, Nicola; Cipriani, Filippo; Gibelli, Daniele; Caputi, Sergio; Cattaneo, Cristina
2015-08-01
Techniques for the estimation of biological age are constantly evolving and find daily application in the forensic radiology field, whether in estimating the chronological age of a corpse in order to reconstruct the biological profile, or that of a living subject, for example in cases of immigration of people without identity papers from a civil registry. The deposition of secondary dentine in teeth and the consequent decrease in pulp chamber size are well-known aging phenomena, and they have been applied to the forensic context through the development of age estimation procedures such as the Kvaal-Solheim and Cameriere methods. The present study considers canine pulp chamber volume relative to whole tooth volume, with the aim of proposing new regression formulae for age estimation, using 91 cone beam computed tomography scans and freeware open-source software in order to permit affordable, reproducible volume calculation. PMID:25698302
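A regression of the shape proposed in the study, age as a linear function of the pulp-to-tooth volume ratio, can be sketched as below. The coefficients come out of the fit; the training data are invented for illustration and are not the paper's:

```python
def fit_age_model(ratios, ages):
    """Least-squares fit of age = b0 + b1 * (pulp volume / tooth volume).
    Same shape as the regression formulae proposed in the study; the
    data fed to it below are invented, not the paper's sample."""
    n = len(ratios)
    mr, ma = sum(ratios) / n, sum(ages) / n
    b1 = (sum((r - mr) * (a - ma) for r, a in zip(ratios, ages))
          / sum((r - mr) ** 2 for r in ratios))
    b0 = ma - b1 * mr
    return b0, b1

# Hypothetical training data: pulp/tooth volume ratio shrinks with age
ratios = [0.090, 0.075, 0.060, 0.045, 0.030]
ages = [20.0, 30.0, 40.0, 50.0, 60.0]
b0, b1 = fit_age_model(ratios, ages)
print(b0 + b1 * 0.050)  # predicted age for a new canine
```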
Modeling of landslide volume estimation
NASA Astrophysics Data System (ADS)
Amirahmadi, Abolghasem; Pourhashemi, Sima; Karami, Mokhtar; Akbari, Elahe
2016-06-01
Mass displacement of materials such as landslides is considered a problematic phenomenon in the Baqi Basin, located on the southern slopes of Binaloud, Iran, since it destroys agricultural lands and pastures and also increases deposits at the basin exit. Therefore, it is necessary to identify areas which are sensitive to landslides and estimate the volumes involved. In the present study, in order to estimate landslide volume, information about the depth and area of slides was collected; then, after checking the regression assumptions, a power regression model was fitted and compared with 17 models suggested for various regions in different countries. The results showed that the mass values estimated from the suggested model were consistent with the observed data (P < 0.001, R = 0.692) and with some of the existing relations, which indicates the efficiency of the suggested model. Also, relations derived from small-area landslides were more suitable for use in the Baqi Basin than those derived from large-area landslides. According to the suggested relation, the average depth of landslides in the Baqi Basin was estimated at 3.314 m, which was close to the observed value of 4.609 m.
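Landslide area-volume relations of this kind are usually power laws, V = alpha * A^beta, fitted by linear regression in log-log space. A sketch with synthetic slides (the exponent and coefficient below are invented, not the Baqi Basin fit):

```python
import math

def fit_power_model(areas, volumes):
    """Fit V = alpha * A**beta by ordinary least squares on
    (log A, log V), the usual form of landslide area-volume
    scaling relations."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = math.exp(my - beta * mx)
    return alpha, beta

# Synthetic slides following V = 0.05 * A**1.3 exactly (illustrative)
areas = [100.0, 500.0, 1000.0, 5000.0]
vols = [0.05 * a ** 1.3 for a in areas]
alpha, beta = fit_power_model(areas, vols)
print(alpha, beta)
```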
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Sonographic pleural fluid volume estimation in cats.
Shimali, Jerry; Cripps, Peter J; Newitt, Anna L M
2010-02-01
The aims of this study were to evaluate whether a recently published method used to objectively monitor pleural fluid volume in dogs could be successfully employed in cats, and secondly to assess its accuracy. Eleven feline cadavers were selected. Using the trans-sternal view employed in dogs, linear measurements from the pleural surface of the midline of the sternebra at the centre of the heart to the furthest ventro-lateral point of both right and left lung edges were recorded. Isotonic saline was injected under ultrasound guidance into both right and left pleural spaces and the measurements were repeated at standard increments until a total volume of 400 ml was reached. The mean measurement increased linearly with the cube root of fluid volume for all cats individually. An equation was produced to predict the volume of fluid from the mean linear measurement for all cats combined: Volume = [-3.75 + 2.41(mean)]^3 (P < 0.001), but variability in the slope of the curve for individual cats limited the accuracy of the combined equation. Equations were derived to predict the constant and slope of the curve for individual cats using the thoracic measurements made, but the residual diagnostic graphs demonstrated considerable variability. As in dogs, good correlation was found between the ultrasonographic measurement and fluid volume within individual cats. An accurate equation to predict absolute pleural fluid volume was not identified, and further analysis with reference to thoracic measurements did not increase accuracy. In conclusion, this study does provide a method of estimating absolute pleural fluid volume in cats, which may be clinically useful for pleural fluid volume monitoring, but this is yet to be validated in live cats. PMID:19744872
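The combined predictive equation above can be evaluated and inverted directly. The measurement unit is assumed to be cm here (the abstract does not state it), and, as the authors caution, between-cat variability limits the accuracy of this pooled model:

```python
def pleural_volume_ml(mean_measurement):
    """Combined predictive equation reported in the study:
    Volume = [-3.75 + 2.41 * mean]^3, where `mean` is the mean of the
    right and left trans-sternal linear measurements (unit assumed cm)."""
    return (-3.75 + 2.41 * mean_measurement) ** 3

def mean_measurement_for(volume_ml):
    """Invert the model: mean = (V^(1/3) + 3.75) / 2.41."""
    return (volume_ml ** (1.0 / 3.0) + 3.75) / 2.41

print(pleural_volume_ml(4.0))
print(mean_measurement_for(200.0))
```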
Estimation of feline renal volume using computed tomography and ultrasound.
Tyson, Reid; Logsdon, Stacy A; Werre, Stephen R; Daniel, Gregory B
2013-01-01
Renal volume estimation is an important parameter for clinical evaluation of kidneys and research applications. A time efficient, repeatable, and accurate method for volume estimation is required. The purpose of this study was to describe the accuracy of ultrasound and computed tomography (CT) for estimating feline renal volume. Standardized ultrasound and CT scans were acquired for kidneys of 12 cadaver cats, in situ. Ultrasound and CT multiplanar reconstructions were used to record renal length measurements that were then used to calculate volume using the prolate ellipsoid formula for volume estimation. In addition, CT studies were reconstructed at 1 mm, 5 mm, and 1 cm, and transferred to a workstation where the renal volume was calculated using the voxel count method (hand drawn regions of interest). The reference standard kidney volume was then determined ex vivo using water displacement with the Archimedes' principle. Ultrasound measurement of renal length accounted for approximately 87% of the variability in renal volume for the study population. The prolate ellipsoid formula exhibited proportional bias and underestimated renal volume by a median of 18.9%. Computed tomography volume estimates using the voxel count method with hand-traced regions of interest provided the most accurate results, with increasing accuracy for smaller voxel sizes in grossly normal kidneys (-10.1 to 0.6%). Findings from this study supported the use of CT and the voxel count method for estimating feline renal volume in future clinical and research studies. PMID:23278991
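The prolate ellipsoid formula referenced above is V = (pi/6) * L * W * H. A one-line sketch (dimensions assumed in cm, giving ml); recall that the study found this formula underestimated feline renal volume by a median of 18.9%, which is why the voxel count method was preferred:

```python
import math

def prolate_ellipsoid_volume(length, width, height):
    """Prolate ellipsoid formula used for organ volumetry:
    V = pi/6 * L * W * H (cm dimensions give volume in ml).
    Reduces to the sphere volume when all three axes are equal."""
    return math.pi / 6.0 * length * width * height

# A kidney-sized example (hypothetical dimensions, cm)
print(prolate_ellipsoid_volume(3.8, 2.4, 2.2))
```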
Accurate parameter estimation for unbalanced three-phase system.
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation than the complex least mean square (CLMS) and augmented CLMS algorithms. PMID:25162056
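The first step above, mapping the three phase waveforms onto a pair of orthogonal signals, is the Clarke (αβ) transformation. A sketch in its amplitude-invariant form; for a balanced system the pair reduces to (cos θ, sin θ), so amplitude and instantaneous phase fall out directly:

```python
import math

def alpha_beta(va, vb, vc):
    """Amplitude-invariant Clarke (alpha-beta) transformation: maps
    three-phase waveforms onto a pair of orthogonal signals, the
    pre-processing step ahead of the NLS parameter estimator."""
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / math.sqrt(3.0)
    return v_alpha, v_beta

# Balanced system sampled at phase angle theta
theta = 0.7
va = math.cos(theta)
vb = math.cos(theta - 2.0 * math.pi / 3.0)
vc = math.cos(theta + 2.0 * math.pi / 3.0)
a, b = alpha_beta(va, vb, vc)
print(math.hypot(a, b), math.atan2(b, a))  # amplitude, phase
```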
Accurate pose estimation using single marker single camera calibration system
NASA Astrophysics Data System (ADS)
Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal
2013-03-01
Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making them offline methods. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated rotation and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of the image sequence, compared to cases where this knowledge is not used. This removes the need for multiple markers and an offline estimation system to calculate camera pose in an AR application.
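The error-propagation machinery behind the SUT is the unscented transform: push a small set of sigma points through the nonlinearity and recombine them into an output mean and variance. A scalar sketch (the Scaled Unscented Transform additionally rescales the sigma points, which is omitted here):

```python
import math

def unscented_transform_1d(mean, var, f, lam=2.0):
    """Propagate a scalar mean/variance through a nonlinearity f using
    sigma points. Plain (unscaled) unscented transform; the SUT used
    in the paper adds sigma-point scaling, omitted for brevity."""
    n = 1
    spread = math.sqrt((n + lam) * var)
    points = [mean, mean + spread, mean - spread]
    weights = [lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)]
    ys = [f(x) for x in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# Sanity check: a linear map is propagated exactly
m, v = unscented_transform_1d(2.0, 0.25, lambda x: 3.0 * x + 1.0)
print(m, v)
```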
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
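Link correlation here means that nearby links tend to succeed or fail on the same broadcasts. A simple stand-in metric is the phi (Pearson) correlation between two binary packet-reception traces; LACE's actual estimator additionally blends long-term and short-term link behaviour, which this sketch does not attempt:

```python
def link_correlation(trace_a, trace_b):
    """Phi (Pearson) correlation between two binary packet-reception
    traces (1 = received, 0 = lost). Illustrative stand-in for a link
    correlation metric, not the LACE estimator itself."""
    n = len(trace_a)
    pa = sum(trace_a) / n
    pb = sum(trace_b) / n
    cov = sum(x * y for x, y in zip(trace_a, trace_b)) / n - pa * pb
    denom = (pa * (1 - pa) * pb * (1 - pb)) ** 0.5
    return cov / denom

# Two links that tend to lose the same broadcasts (invented traces)
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 0, 1, 0, 0]
print(link_correlation(a, b))
```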
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Accurate and robust estimation of camera parameters using RANSAC
NASA Astrophysics Data System (ADS)
Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He
2013-03-01
Camera calibration plays an important role in the field of machine vision applications. The popular calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results due to inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect unreliable feature points and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulated and real experiments have been carried out to evaluate the performance of the proposed method, and the results show that it is robust under large-noise conditions and quite effective at improving calibration accuracy compared with the original approach.
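The outlier-rejection principle above is classic RANSAC: repeatedly fit a model to a minimal random sample and keep the model with the most inliers. A generic sketch on a 2D line-fitting problem (this is the principle, not the paper's calibration implementation):

```python
import random

def ransac_line(points, iterations=200, inlier_tol=0.05, seed=7):
    """Minimal RANSAC: fit y = m*x + c to random 2-point samples and
    keep the hypothesis with the most inliers. The same principle is
    used to reject badly localized calibration feature points."""
    rng = random.Random(seed)
    best_model, best_count = None, -1
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot define a slope
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        count = sum(1 for x, y in points if abs(y - (m * x + c)) < inlier_tol)
        if count > best_count:
            best_model, best_count = (m, c), count
    return best_model, best_count

# 10 points exactly on y = 2x + 1 plus 3 gross outliers
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)]
pts += [(1.0, 9.0), (4.0, -3.0), (7.0, 30.0)]
model, count = ransac_line(pts)
print(model, count)
```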
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites, as well as information from the GEOS-5 Data Assimilation System, to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present-day O3 radiative forcing produced by models.
Robust ODF smoothing for accurate estimation of fiber orientation.
Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter
2010-01-01
Q-ball imaging was presented as a model-free, linear and multimodal diffusion-sensitive approach to reconstructing the diffusion orientation distribution function (ODF) from diffusion-weighted MRI data. ODFs are widely used to estimate fiber orientations; however, a smoothness constraint is needed to achieve a balance between angular resolution and noise stability in ODF reconstruction. Different regularization methods have been proposed for this purpose, but they are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test can be used to define a globally appropriate regularization parameter, no single value is suitable for all regions of interest. This may result in oversmoothing and potentially in neglecting an existing fiber population. In this paper, we propose to include an interpolation step prior to the spherical harmonic decomposition. This interpolation step, based on Delaunay triangulation, provides a reliable, robust and accurate smoothing approach. The method is easy to implement and does not require other numerical methods to define the required parameters. Also, the fiber orientations estimated using this approach are more accurate compared to other common approaches. PMID:21096202
Accurate estimators of correlation functions in Fourier space
NASA Astrophysics Data System (ADS)
Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.
2016-08-01
Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy of estimated Fourier-space statistics, and are typically ameliorated by simultaneously increasing the grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
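The baseline mass-assignment scheme discussed above, Cloud-In-Cell, shares each particle linearly between its two nearest grid cells (in 1D). A sketch on a periodic box; higher-order kernels and interlaced grids, as in the paper, further suppress the aliasing of the resulting density field:

```python
import math

def cic_deposit(positions, box_size, n_cells):
    """Cloud-In-Cell mass assignment in 1D: each unit-mass particle is
    shared linearly between its two nearest grid cells (periodic box,
    cell centres at (i + 0.5) * dx). Total mass is conserved exactly."""
    grid = [0.0] * n_cells
    dx = box_size / n_cells
    for x in positions:
        s = x / dx - 0.5          # position in cell units, relative to centres
        i = math.floor(s)
        f = s - i                 # fractional offset toward the right cell
        grid[i % n_cells] += 1.0 - f
        grid[(i + 1) % n_cells] += f
    return grid

grid = cic_deposit([0.1, 2.5, 7.9], box_size=8.0, n_cells=8)
print(grid, sum(grid))
```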
Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion
Yadav, Nagesh; Bleakley, Chris
2014-01-01
Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
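The distortion-detection rule described in the abstract, flagging samples whose magnetic field strength or dip angle (relative to the gravity vector) deviates from an undistorted reference, can be sketched as follows. This is a minimal illustration; the reference values and tolerances here are assumptions, not the paper's calibrated parameters:

```python
import numpy as np

def dip_angle_deg(mag, grav):
    # dip angle: angle between the magnetic field vector and the gravity vector
    c = np.dot(mag, grav) / (np.linalg.norm(mag) * np.linalg.norm(grav))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def is_distorted(mag, grav, ref_strength, ref_dip_deg,
                 strength_tol=0.10, dip_tol_deg=5.0):
    # flag the sample when either the field strength or the dip angle
    # deviates from the undistorted reference by more than a tolerance
    # (tolerance values are illustrative assumptions)
    strength_dev = abs(np.linalg.norm(mag) - ref_strength) / ref_strength
    dip_dev = abs(dip_angle_deg(mag, grav) - ref_dip_deg)
    return bool(strength_dev > strength_tol or dip_dev > dip_tol_deg)
```

In the paper, samples flagged this way are down-weighted relative to gyroscope dead reckoning inside the particle filter.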
Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua
2012-01-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m^-1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm^3 voxel volume with isotropic resolution; 13.5 mm^3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ = 15.3 m^-1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
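The core idea, fitting a tract to a 2nd-order polynomial and reading curvature off the fit, can be illustrated in a few lines. This is a hedged sketch: the study fits 3D tracts, while this example uses a planar tract for brevity:

```python
import numpy as np

def tract_curvature(x, y, order=2):
    # fit the (noisy) fiber tract to a low-order polynomial, then evaluate
    # kappa = |y''| / (1 + y'^2)^(3/2) at the middle of the tract
    p = np.poly1d(np.polyfit(x, y, order))
    dp, ddp = p.deriv(1), p.deriv(2)
    x0 = x[len(x) // 2]
    return abs(ddp(x0)) / (1.0 + dp(x0) ** 2) ** 1.5
```

Applied to a short arc of a circle of radius 10 (true κ = 0.1 m^-1 in matching units), the fitted estimate recovers the known curvature closely.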
Methodology for generating waste volume estimates
Miller, J.Q.; Hale, T.; Miller, D.
1991-09-01
This document describes the methodology that will be used to calculate waste volume estimates for site characterization and remedial design/remedial action activities at each of the DOE Field Office, Oak Ridge (DOE-OR) facilities. This standardized methodology is designed to ensure consistency in waste estimating across the various sites and organizations that are involved in environmental restoration activities. The criteria and assumptions that are provided for generating these waste estimates will be implemented across all DOE-OR facilities and are subject to change based on comments received and actual waste volumes measured during future sampling and remediation activities. 7 figs., 8 tabs.
CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY
Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.
2003-02-27
The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume
Photogrammetry and Laser Imagery Tests for Tank Waste Volume Estimates: Summary Report
Field, Jim G.
2013-03-27
Feasibility tests were conducted using photogrammetry and laser technologies to estimate the volume of waste in a tank. These technologies were compared with video Camera/CAD Modeling System (CCMS) estimates; the current method used for post-retrieval waste volume estimates. This report summarizes test results and presents recommendations for further development and deployment of technologies to provide more accurate and faster waste volume estimates in support of tank retrieval and closure.
Be the Volume: A Classroom Activity to Visualize Volume Estimation
ERIC Educational Resources Information Center
Mikhaylov, Jessica
2011-01-01
A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs, with which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
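The two decision functions compared in the study, drawing a sample from the posterior versus taking its maximum, can be sketched over a discrete posterior. This is an illustrative sketch, not the authors' modeling code:

```python
import numpy as np

def respond_sample(values, posterior, rng):
    # probability matching: draw the response from the posterior itself
    return rng.choice(values, p=posterior)

def respond_max(values, posterior):
    # maximizing: respond with the posterior mode
    return values[int(np.argmax(posterior))]
```

Participants' responses fell between these two extremes: more variable than pure maximizing, but more peaked than pure posterior sampling.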
Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.
2016-03-01
Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of needle extraction followed by these two steps. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
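The triangulation step, recovering a 3D point from corresponding image points in two projection views, is commonly implemented with linear (DLT) triangulation. The following is a generic sketch of that standard technique, not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    # linear (DLT) triangulation: each view contributes two rows of a
    # homogeneous system A X = 0; the solution is the right null vector
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # last right-singular vector
    return X[:3] / X[3]           # dehomogenize
```

With exact correspondences from two known camera matrices, the reconstructed point matches the ground truth to numerical precision.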
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Technology Transfer Automated Retrieval System (TEKTRAN)
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...
How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?
NASA Astrophysics Data System (ADS)
Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.
2002-12-01
The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at 8 km interval. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km windows gave the best compromise between spatial resolution and variance. Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in
Accurate feature detection and estimation using nonlinear and multiresolution analysis
NASA Astrophysics Data System (ADS)
Rudin, Leonid; Osher, Stanley
1994-11-01
A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part, on this work.
Accurate tempo estimation based on harmonic + noise decomposition
NASA Astrophysics Data System (ADS)
Alonso, Miguel; Richard, Gael; David, Bertrand
2006-12-01
We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming stage searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as the tempo. Our proposal is validated using a manually annotated test database containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
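The periodicity-estimation step can be illustrated with a simple autocorrelation-based tempo picker over an accent (onset) envelope. This is a toy sketch of the general idea; the paper's salience computation and dynamic programming stage are considerably more elaborate:

```python
import numpy as np

def estimate_tempo(onset_env, fs, bpm_min=60.0, bpm_max=180.0):
    # autocorrelate the accent signal and pick the most salient lag
    # within a plausible tempo range; tempo (BPM) = 60 * fs / lag
    n = len(onset_env)
    ac = np.correlate(onset_env, onset_env, mode="full")[n - 1:]
    lag_min = int(round(60.0 * fs / bpm_max))
    lag_max = int(round(60.0 * fs / bpm_min))
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    return 60.0 * fs / lag
```

For a synthetic accent train with one impulse every 0.5 s (sampled at 100 Hz), the estimator returns 120 BPM.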
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
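The regressions of bioavailability on bioaccessibility reported above are ordinary least-squares fits; a generic sketch of that step (illustrative only, not the study's analysis code or data):

```python
import numpy as np

def rba_regression(bioaccessibility, bioavailability):
    # ordinary least-squares line: RBA = slope * bioaccessibility + intercept;
    # a positive slope indicates the in vitro test tracks in vivo RBA
    slope, intercept = np.polyfit(bioaccessibility, bioavailability, 1)
    return slope, intercept
```

The study judged each in vitro test by the slope and coefficient of determination of such a regression.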
Can student health professionals accurately estimate alcohol content in commonly occurring drinks?
Sinclair, Julia; Searle, Emma
2016-01-01
Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344
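The arithmetic the students were asked to perform follows the UK unit convention, under which one unit is 10 ml of pure ethanol. A sketch of the calculation (not part of the study itself):

```python
def alcohol_units(volume_ml, abv_percent):
    # UK definition: one unit = 10 ml of pure ethanol, so
    # units = volume (ml) * ABV (%) / 1000
    return volume_ml * abv_percent / 1000.0
```

For example, a 568 ml pint at 5.2% ABV contains just under 3 units, which is the kind of estimate the surveyed students frequently got wrong.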
Bioaccessibility tests accurately estimate bioavailability of lead to quail.
Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S
2016-09-01
Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
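The Archimedes relation at the heart of the method, a sphere (or spheroid) occupying two-thirds of its circumscribing cylinder, can be sketched as follows. This is an illustrative reduction under the assumption of a spheroid viewed in profile; in the paper the cross-sectional area comes from the 2D image and the 'unellipticity' coefficient is estimated, whereas here it is simply a free parameter:

```python
import math

def biovolume(silhouette_area, width, unellipticity=1.0):
    # Archimedes: a spheroid fills 2/3 of its circumscribing cylinder,
    # so V ~ (2/3) * silhouette area * width perpendicular to the long axis;
    # 'unellipticity' is a shape-correction coefficient (assumed = 1 here)
    return (2.0 / 3.0) * silhouette_area * width * unellipticity
```

The formula is exact for spheres and prolate spheroids: a sphere of diameter 2 (silhouette area π, width 2) yields 4π/3, the true sphere volume.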
Hatt, Mathieu; Cheze le Rest, Catherine; Descourt, Patrice; Dekker, Andre; De Ruysscher, Dirk; Oellers, Michel; Lambin, Philippe; Pradier, Olivier; Visvikis, Dimitris
2010-05-01
Purpose: Accurate contouring of positron emission tomography (PET) functional volumes is now considered crucial in image-guided radiotherapy and other oncology applications because the use of functional imaging allows for biological target definition. In addition, the definition of variable uptake regions within the tumor itself may facilitate dose painting for dosimetry optimization. Methods and Materials: Current state-of-the-art algorithms for functional volume segmentation use adaptive thresholding. We developed an approach called fuzzy locally adaptive Bayesian (FLAB), validated on homogeneous objects, and then improved it by allowing the use of up to three tumor classes for the delineation of inhomogeneous tumors (3-FLAB). Simulated and real tumors with histology data containing homogeneous and heterogeneous activity distributions were used to assess the algorithm's accuracy. Results: The new 3-FLAB algorithm is able to extract the overall tumor from the background tissues and delineate variable uptake regions within the tumors, with higher accuracy and robustness compared with adaptive threshold (T_bckg) and fuzzy C-means (FCM). 3-FLAB performed with a mean classification error of less than 9% ± 8% on the simulated tumors, whereas the binary-only implementation led to errors of 15% ± 11%. T_bckg and FCM led to mean errors of 20% ± 12% and 17% ± 14%, respectively. 3-FLAB also led to more robust estimation of the maximum diameters of tumors with histology measurements, with <6% standard deviation, whereas binary FLAB, T_bckg and FCM led to 10%, 12%, and 13%, respectively. Conclusion: These encouraging results warrant further investigation in future studies that will investigate the impact of 3-FLAB in radiotherapy treatment planning, diagnosis, and therapy response evaluation.
Ferguson, R.B.; Baldwin, V.C.
1995-09-01
Estimating tree and stand volume in mature plantations is time consuming, involving much manpower and equipment; however, several sampling and volume-prediction techniques are available. This study showed that a well-constructed volume-equation method yields estimates comparable to those of the often more time-consuming height-accumulation method, even though the latter should be more accurate for any individual tree. Plot volumes were estimated by both methods in a remeasurement of trees in a 40-plot, planted slash pine thinning study. The mean percentage difference in total volume, inside bark, between the two methods ranged from 1 to 2.5 percent across all the plots; differences outside bark ranged from 7 to 10 percent. The results were similar when the effects of site, plot mean values, or tree-by-tree comparisons were incorporated.
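A volume-equation method of the kind compared here typically uses a combined-variable equation of the form V = b0 + b1·D²·H, with D the diameter at breast height and H the total height. The sketch below uses placeholder coefficients, since the study's fitted slash pine coefficients are not given in the abstract:

```python
def tree_volume(dbh, height, b0=0.0, b1=0.002):
    # generic combined-variable volume equation: V = b0 + b1 * D^2 * H
    # (b0, b1 are illustrative placeholders, not fitted slash pine values)
    return b0 + b1 * dbh ** 2 * height

def plot_volume(trees, b0=0.0, b1=0.002):
    # sum the per-tree equation estimates over a plot's (dbh, height) list
    return sum(tree_volume(d, h, b0, b1) for d, h in trees)
```

In practice the coefficients are fitted to felled-tree or height-accumulation measurements for the species and region of interest.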
A new geometric-based model to accurately estimate arm and leg inertial estimates.
Wicke, Jason; Dumas, Geneviève A
2014-06-01
Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
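The slice-stacking model can be sketched for the simple case of purely elliptical (non-adjoining) sections, with the two axes of each slice taken from the frontal and sagittal photographs. This illustrates the geometry only; the study additionally handles sectioned ellipses and applies a sex-specific, non-uniform density function:

```python
import math

def segment_volume(frontal_widths, sagittal_widths, slice_thickness):
    # model the limb segment as a stack of elliptical slices whose two
    # diameters come from the frontal and sagittal photographs;
    # each slice contributes pi * a * b * thickness (a, b = semi-axes)
    volume = 0.0
    for wf, ws in zip(frontal_widths, sagittal_widths):
        volume += math.pi * (wf / 2.0) * (ws / 2.0) * slice_thickness
    return volume
```

As a sanity check, equal widths in both views reproduce a circular cylinder's volume exactly.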
A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Hsu, Andrew T.
1989-01-01
A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The derivation is based on upwind split formulas, specifically the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.
Automatic contrast phase estimation in CT volumes.
Sofka, Michal; Wu, Dijia; Sühling, Michael; Liu, David; Tietjen, Christian; Soza, Grzegorz; Zhou, S Kevin
2011-01-01
We propose an automatic algorithm for phase labeling that relies on the intensity changes in anatomical regions due to contrast agent propagation. The regions (specified by the aorta, vena cava, liver, and kidneys) are first detected by a robust learning-based discriminative algorithm. The intensities inside each region are then used in multi-class LogitBoost classifiers to independently estimate the contrast phase. Each classifier forms a node in a decision tree which is used to obtain the final phase label. Combining independent classifications from multiple regions in a tree is advantageous when one of the region detectors fails or when the phase training example database is imbalanced. We show on a dataset of 1016 volumes that the system correctly classifies the native phase in 96.2% of the cases, the hepatic dominant phase in 92.2%, the hepatic venous phase in 96.7%, and the equilibrium phase in 86.4%, in 7 seconds on average. PMID:22003696
Acoustic source inversion to estimate volume flux from volcanic explosions
NASA Astrophysics Data System (ADS)
Kim, Keehoon; Fee, David; Yokoo, Akihiko; Lees, Jonathan M.
2015-07-01
We present an acoustic waveform inversion technique for infrasound data to estimate volume fluxes from volcanic eruptions. Previous inversion techniques have been limited by the use of a 1-D Green's function in a free space or half space, which depends only on the source-receiver distance and neglects volcanic topography. Our method exploits full 3-D Green's functions computed by a numerical method that takes into account realistic topographic scattering. We apply this method to vulcanian eruptions at Sakurajima Volcano, Japan. Our inversion results produce excellent waveform fits to field observations and demonstrate that full 3-D Green's functions are necessary for accurate volume flux inversion. Conventional inversions without consideration of topographic propagation effects may lead to large errors in the source parameter estimate. The presented inversion technique will substantially improve the accuracy of eruption source parameter estimation (cf. mass eruption rate) during volcanic eruptions and provide critical constraints for volcanic eruption dynamics and ash dispersal forecasting for aviation safety. Application of this approach to chemical and nuclear explosions will also provide valuable source information (e.g., the amount of energy released) previously unavailable.
Using GIS to Estimate Lake Volume from Limited Data
Estimates of lake volume are necessary for estimating residence time or modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of ...
Consistent and Accurate Finite Volume Methods for Coupled Flow and Geomechanics
NASA Astrophysics Data System (ADS)
Nordbotten, J. M.
2014-12-01
We introduce a new class of cell-centered finite volume methods for elasticity and poro-elasticity. Compared to lowest-order finite element discretizations, the new discretization has no additional degrees of freedom, yet gives more accurate stress and flow fields. This finite volume discretization furthermore has the advantage that the mechanical discretization is fully compatible (in terms of grid and variables) with the standard cell-centered finite volume discretizations that prevail in commercial simulation of multi-phase flows in porous media. Theoretical analysis proves the convergence of the method. We give results showing that so-called numerical locking is avoided for a large class of structured and unstructured grids. The results are valid in both two and three spatial dimensions. The talk concludes with applications to problems coupling multi-phase flow, transport and deformation, together with fractured porous media.
Discrete state model and accurate estimation of loop entropy of RNA secondary structures.
Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie
2008-03-28
Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method accounts for the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study on the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length and is only marginally affected by the asymmetric sizes of the two loops. This finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
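The Jacobson-Stockmayer extrapolation discussed in the abstract above models long-loop entropy as logarithmic in loop length. A minimal sketch, using the commonly cited coefficient 1.75 but an illustrative (hypothetical) reference-loop entropy, not the paper's parameters:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def js_loop_entropy(n, n0=9, dS0=-0.01, c=1.75):
    """Jacobson-Stockmayer-style extrapolation of loop entropy.

    dS0 is a hypothetical entropy (kcal/(mol*K)) for a reference loop of
    length n0; the logarithmic term captures the growing entropic cost of
    closing longer loops. c = 1.75 is the widely used extrapolation exponent.
    """
    return dS0 - c * R * math.log(n / n0)
```

The abstract's point is that this logarithmic form fits long hairpin loops well but misfits bulge, internal and multibranch loops, which motivated the authors' empirical formulae.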
Kronenberg, M.W.; Parrish, M.D.; Jenkins, D.W. Jr.; Sandler, M.P.; Friesinger, G.C.
1985-11-01
Estimation of left ventricular end-systolic pressure-volume relations depends on the accurate measurement of small changes in ventricular volume. To study the accuracy of radionuclide ventriculography, paired radionuclide and contrast ventriculograms were obtained in seven dogs during a control period and when blood pressure was increased in increments of 30 mm Hg by phenylephrine infusion. The heart rate was held constant by atropine infusion. The correlation between radionuclide and contrast ventriculography was excellent. The systolic pressure-volume relations were linear for both radionuclide and contrast ventriculography. The mean slope for radionuclide ventriculography was lower than the mean slope for contrast ventriculography; however, the slopes correlated well. The radionuclide-contrast volume relation was compared using background subtraction, attenuation correction, neither of these, or both. By each method, radionuclide ventriculography was valid for measuring small changes in left ventricular volume and for defining end-systolic pressure-volume relations.
Cook, Andrea J.; Elmore, Joann G.; Zhu, Weiwei; Jackson, Sara L.; Carney, Patricia A.; Flowers, Chris; Onega, Tracy; Geller, Berta; Rosenberg, Robert D.; Miglioretti, Diana L.
2013-01-01
Objective To determine if U.S. radiologists accurately estimate their own interpretive performance of screening mammography and how they compare their performance to their peers'. Materials and Methods 174 radiologists from six Breast Cancer Surveillance Consortium (BCSC) registries completed a mailed survey between 2005 and 2006. Radiologists' estimated and actual recall, false positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography were compared. Radiologists' ratings of their performance as lower, similar, or higher than their peers' were compared to their actual performance. Associations with radiologist characteristics were estimated using weighted generalized linear models. The study was approved by the institutional review boards of the participating sites, informed consent was obtained from radiologists, and procedures were HIPAA compliant. Results While most radiologists accurately estimated their cancer detection and recall rates (74% and 78% of radiologists), fewer accurately estimated their false positive rate and PPV2 (19% and 26%). Radiologists reported having similar (43%) or lower (31%) recall rates and similar (52%) or lower (33%) false positive rates compared to their peers, and similar (72%) or higher (23%) cancer detection rates and similar (72%) or higher (38%) PPV2. Estimation accuracy did not differ by radiologist characteristics, except that radiologists who interpret ≤1,000 mammograms annually were less accurate at estimating their recall rates. Conclusion Radiologists perceive their performance to be better than it actually is and at least as good as their peers'. Radiologists have particular difficulty estimating their false positive rates and PPV2. PMID:22915414
Comparison Between Multicopter Uav and Total Station for Estimating Stockpile Volumes
NASA Astrophysics Data System (ADS)
Arango, C.; Morales, C. A.
2015-08-01
Currently, UAVs (unmanned aerial vehicles) have become an alternative for different engineering applications, especially in surveying. One of these applications is the calculation of volumes of stockpiled material, but there are questions about its accuracy and efficiency. The purpose of this article is to compare traditional surveying methods for estimating total volumes, using data obtained by total stations, against data obtained by a multicopter UAV. To answer these questions, we obtained data from the same location and compared the results. It was found that there was a 2.88% difference between the volume calculated with the total station data and the actual volume, and a -0.67% difference between the volume calculated with the UAV data and the actual volume, leading to the conclusion that the volume estimated with UAV data is more accurate.
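The accuracy comparison above reduces to a signed percent difference of each estimate against the known stockpile volume. A small helper, with hypothetical volumes (the abstract does not report the raw figures):

```python
def percent_difference(estimated, actual):
    """Signed percent difference of an estimated volume relative to the
    actual volume; positive means overestimation, negative underestimation."""
    return 100.0 * (estimated - actual) / actual

# Hypothetical example: an estimate of 102.88 m^3 against an actual
# 100 m^3 gives a percent difference of about +2.88%.
```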
Second-order accurate finite volume method for well-driven flows
NASA Astrophysics Data System (ADS)
Dotlić, M.; Vidović, D.; Pokorni, B.; Pušić, M.; Dimkić, M.
2016-02-01
We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman model. Coupling this correction with a non-linear second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still inconsistent. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
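The logarithmic near-well flux correction described above is in the spirit of the Peaceman well model, which links block pressure to well flux through an equivalent radius. A sketch under assumed isotropic, square-cell conditions; all parameter values are illustrative, and this is the classical model rather than the authors' second-order scheme:

```python
import math

def peaceman_well_flux(k, thickness, dx, rw, p_block, p_well, mu=1e-3):
    """Peaceman-type well model: volumetric flux between a well and its cell.

    r_eq ~ 0.2*dx is Peaceman's equivalent radius for an isotropic square
    cell; the logarithm captures the near-well pressure singularity that
    plain linear schemes miss. Units: k in m^2, lengths in m, pressures
    in Pa, viscosity mu in Pa*s; all example values are illustrative.
    """
    r_eq = 0.2 * dx
    well_index = 2.0 * math.pi * k * thickness / math.log(r_eq / rw)
    return well_index * (p_block - p_well) / mu
```

The flux is linear in the pressure drawdown, so doubling (p_block - p_well) doubles the computed well flux.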
Estimating Volume of Martian Valleys Using Axelsson Algorithm
NASA Astrophysics Data System (ADS)
Jung, J. H.; Kim, C. J.; Heo, J.; Luo, W.
2012-03-01
A progressive TIN densification algorithm is adapted to estimate the volume of Martian valley networks (VN) based on MOLA point data. This method can be used to estimate the global water inventory associated with VN.
Accurate method to study static volume-pressure relationships in small fetal and neonatal animals.
Suen, H C; Losty, P D; Donahoe, P K; Schnitzer, J J
1994-08-01
We designed an accurate method to study respiratory static volume-pressure relationships in small fetal and neonatal animals on the basis of Archimedes' principle. Our method eliminates the error caused by the compressibility of air (Boyle's law) and is sensitive to a volume change of as little as 1 microliters. Fetal and neonatal rats during the period of rapid lung development from day 19.5 of gestation (term = day 22) to day 3.5 postnatum were studied. The absolute lung volume at a transrespiratory pressure of 30-40 cmH2O increased 28-fold from 0.036 +/- 0.006 (SE) to 0.994 +/- 0.042 ml, the volume per gram of lung increased 14-fold from 0.39 +/- 0.07 to 5.59 +/- 0.66 ml/g, compliance increased 12-fold from 2.3 +/- 0.4 to 27.3 +/- 2.7 microliters/cmH2O, and specific compliance increased 6-fold from 24.9 +/- 4.5 to 152.3 +/- 22.8 microliters.cmH2O-1.g lung-1. This technique, which allowed us to compare changes during late gestation and the early neonatal period in small rodents, can be used to monitor and evaluate pulmonary functional changes after in utero pharmacological therapies in experimentally induced abnormalities such as pulmonary hypoplasia, surfactant deficiency, and congenital diaphragmatic hernia. PMID:8002489
Browning, Sharon R.; Browning, Brian L.
2015-01-01
Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
Estimating the volume of Alpine glacial lakes
NASA Astrophysics Data System (ADS)
Cook, S. J.; Quincey, D. J.
2015-09-01
Supraglacial, moraine-dammed and ice-dammed lakes represent a potential glacial lake outburst flood (GLOF) threat to downstream communities in many mountain regions. This has motivated the development of empirical relationships to predict lake volume given a measurement of lake surface area obtained from satellite imagery. Such relationships are based on the notion that lake depth, area and volume scale predictably. We critically evaluate the performance of these existing empirical relationships by examining a global database of measured glacial lake depths, areas and volumes. Results show that lake area and depth are not always well correlated (r2 = 0.38), and that although lake volume and area are well correlated (r2 = 0.91), there are distinct outliers in the dataset. These outliers represent situations where it may not be appropriate to apply existing empirical relationships to predict lake volume, and include growing supraglacial lakes, glaciers that recede into basins with complex overdeepened morphologies or that have been deepened by intense erosion, and lakes formed where glaciers advance across and block a main trunk valley. We use the compiled dataset to develop a conceptual model of how the volumes of supraglacial ponds and lakes, moraine-dammed lakes and ice-dammed lakes should be expected to evolve with increasing area. Although a large amount of bathymetric data exist for moraine-dammed and ice-dammed lakes, we suggest that further measurements of growing supraglacial ponds and lakes are needed to better understand their development.
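Empirical lake area-volume relationships of the kind evaluated above are typically power laws, V = c·A^b, fitted in log-log space. A sketch with synthetic data; the coefficients here are illustrative, not the paper's:

```python
import math

def fit_power_law(areas, volumes):
    """Least-squares fit of V = c * A**b in log-log space.

    Returns (c, b). Illustrates the empirical scaling approach the
    abstract evaluates; real studies fit measured bathymetric data.
    """
    xs = [math.log(a) for a in areas]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - b * mx)
    return c, b
```

The abstract's caution applies here: a high r² for such a fit can mask outlier lakes (e.g. growing supraglacial lakes) for which the scaling assumption fails.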
Accurately measuring volume of soil samples using low cost Kinect 3D scanner
NASA Astrophysics Data System (ADS)
van der Sterre, Boy-Santhos; Hut, Rolf; van de Giesen, Nick
2013-04-01
The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture from samples is determining the volume of the sample. Currently this is mostly done by the very time consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.
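The before/after scan differencing can be sketched as a gridded-surface (DEM) difference, assuming the two point clouds have been rasterized to a common grid (the study itself works from Kinect point clouds; grid names and values here are illustrative):

```python
def volume_from_dems(before, after, cell_area):
    """Estimate the excavated sample volume from two gridded surface scans.

    `before` and `after` are equal-shaped 2D lists of surface heights (m)
    on a common grid; the removed volume is the summed height drop per
    cell times the cell footprint `cell_area` (m^2).
    """
    vol = 0.0
    for row_before, row_after in zip(before, after):
        for h_before, h_after in zip(row_before, row_after):
            vol += (h_before - h_after) * cell_area
    return vol
```

Scanning a form of known volume, as the authors do, amounts to checking this difference against the known value.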
Accurately measuring volume of soil samples using low cost Kinect 3D scanner
NASA Astrophysics Data System (ADS)
van der Sterre, B.; Hut, R.; Van De Giesen, N.
2012-12-01
The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture from samples is determining the volume of the sample. Currently this is mostly done by the very time consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.
NASA Astrophysics Data System (ADS)
Vizireanu, D. N.; Halunga, S. V.
2012-04-01
A simple, fast and accurate amplitude estimation algorithm of sinusoidal signals for DSP based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed when the sampled signal is affected by white Gaussian noise and when the samples are quantized on a given number of bits.
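The paper's specific eight-sample, two-step formula is not reproduced in the abstract. As a generic point of comparison, a least-squares amplitude estimate for a sinusoid of known frequency projects the samples onto quadrature references (this is a standard estimator, not the authors' algorithm):

```python
import math

def amplitude_lsq(samples, freq, fs):
    """Least-squares amplitude of a sinusoid of known frequency.

    Projects `samples` onto sin/cos references at `freq` (Hz) with sample
    rate `fs` (Hz). Exact for noise-free data spanning an integer number
    of periods; a generic sketch, not the paper's eight-sample formula.
    """
    n = len(samples)
    s = sum(x * math.sin(2 * math.pi * freq * k / fs)
            for k, x in enumerate(samples))
    c = sum(x * math.cos(2 * math.pi * freq * k / fs)
            for k, x in enumerate(samples))
    return 2.0 * math.hypot(s, c) / n
```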
Estimating the volume of Alpine glacial lakes
NASA Astrophysics Data System (ADS)
Cook, S. J.; Quincey, D. J.
2015-12-01
Supraglacial, moraine-dammed and ice-dammed lakes represent a potential glacial lake outburst flood (GLOF) threat to downstream communities in many mountain regions. This has motivated the development of empirical relationships to predict lake volume given a measurement of lake surface area obtained from satellite imagery. Such relationships are based on the notion that lake depth, area and volume scale predictably. We critically evaluate the performance of these existing empirical relationships by examining a global database of glacial lake depths, areas and volumes. Results show that lake area and depth are not always well correlated (r2 = 0.38) and that although lake volume and area are well correlated (r2 = 0.91), and indeed are auto-correlated, there are distinct outliers in the data set. These outliers represent situations where it may not be appropriate to apply existing empirical relationships to predict lake volume and include growing supraglacial lakes, glaciers that recede into basins with complex overdeepened morphologies or that have been deepened by intense erosion and lakes formed where glaciers advance across and block a main trunk valley. We use the compiled data set to develop a conceptual model of how the volumes of supraglacial ponds and lakes, moraine-dammed lakes and ice-dammed lakes should be expected to evolve with increasing area. Although a large amount of bathymetric data exist for moraine-dammed and ice-dammed lakes, we suggest that further measurements of growing supraglacial ponds and lakes are needed to better understand their development.
Using Photogrammetry to Estimate Tank Waste Volumes from Video
Field, Jim G.
2013-03-27
Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos, and the results of using photogrammetry to estimate the volume of waste piles in the CCMS test video.
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Hu, Weigang
2016-01-01
Purpose The 4DCT-delineated internal target volume (ITV) is applied to determine tumor motion and is used as the planning target in treatment planning for lung cancer stereotactic body radiotherapy (SBRT). This work studies the accuracy of using the ITV to predict the real target dose in lung cancer SBRT. Materials and methods For both phantom and patient cases, the ITV and gross tumor volumes (GTVs) were contoured on the maximum intensity projection (MIP) CT and the ten CT phases, respectively. A SBRT plan was designed using the ITV as the planning target on the average projection (AVG) CT. This plan was copied to each CT phase and the dose distribution was recalculated. The GTV_4D dose was acquired by accumulating the GTV doses over all ten phases and was regarded as the real target dose. To analyze the ITV dose error, the ITV dose was compared to the real target dose using the endpoints D99, D95 and D1 (doses received by 99%, 95% and 1% of the target volume) and the dose coverage endpoint V100 (relative volume receiving at least the prescription dose). Results The phantom study shows that the ITV underestimates the real target dose by 9.47%∼19.8% in D99 and 4.43%∼15.99% in D95, and underestimates the dose coverage by 5% in V100. The patient cases show that the ITV underestimates the real target dose and dose coverage by 3.8%∼10.7% in D99, 4.7%∼7.2% in D95, and 3.96%∼6.59% in V100 for moving targets. Conclusions Caution should be taken: the ITV is not accurate enough to predict the real target dose in lung cancer SBRT with large tumor motions. Restricting the target motion or reducing the target dose heterogeneity could reduce the ITV dose underestimation effect in lung SBRT. PMID:26968812
Release of magmatic water on Mars: Estimated timing and volumes
NASA Technical Reports Server (NTRS)
Greeley, R.
1987-01-01
By estimating the total amount of water released by volcanic processes on Mars, the abundance of H2O at 10 m was estimated. This value was based on mapping volcanic units, estimating thicknesses and volumes, and using a 10 wt. percent value H2O from terrestrial analogs. By combining such estimates with crater count ages, it is also possible to estimate the timing of water release through Martian history.
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions with DCP. The method computes gap fraction from a single unsaturated raw DCP image that is corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies at REVs from 0 to -5. The method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method yields accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
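At its core, gap fraction from a classified canopy image is the share of sky (bright) pixels. A minimal sketch of the thresholding step only; the paper's method additionally corrects the raw image for within-canopy scattering and reconstructs the sky image, which this does not capture:

```python
def gap_fraction(pixels, threshold):
    """Gap fraction of a canopy image: the share of pixels brighter
    than `threshold`.

    `pixels` is a flat list of grayscale values; sky (gap) pixels are
    assumed to be the bright ones. Choosing `threshold` objectively is
    exactly the difficulty the abstract's method addresses.
    """
    gaps = sum(1 for p in pixels if p > threshold)
    return gaps / len(pixels)
```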
Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.
Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro
2016-01-12
The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy. PMID:26605696
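The nearest-neighbor entropy estimation the authors build on is typified by the Kozachenko-Leonenko estimator. A sketch for plain d-dimensional points; the paper's actual contribution, handling the joint rotation-translation space with its non-Euclidean metric, is not captured here:

```python
import math

def kl_entropy(points):
    """Kozachenko-Leonenko nearest-neighbor entropy estimate, in nats.

    `points` is a list of d-dimensional tuples. Each point's distance to
    its nearest neighbor sets a local density scale; the sum of their
    logs, plus unit-ball-volume and bias-correction terms, estimates the
    differential entropy. Assumes all points are distinct.
    """
    n = len(points)
    d = len(points[0])
    euler_gamma = 0.5772156649015329
    # log volume of the d-dimensional unit ball
    log_cd = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1)
    s = 0.0
    for i, p in enumerate(points):
        r = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        s += math.log(r)
    return d * s / n + log_cd + math.log(n - 1) + euler_gamma
```

A useful sanity check is the scaling law H(aX) = H(X) + d·ln a, which the estimator reproduces exactly because all nearest-neighbor distances scale by a.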
Reproducibility study for volume estimation in MRI of the brain using the Eigenimage algorithm
NASA Astrophysics Data System (ADS)
Windham, Joe P.; Peck, Donald J.; Soltanian-Zadeh, Hamid
1995-05-01
Accurate and reproducible volume calculations are essential for diagnosis and treatment evaluation in many medical situations. Current techniques employ planimetric methods that are very time consuming if reliable results are to be obtained. The reproducibility and accuracy of these methods depend on the user and the complexity of the volume being measured. We have reported on an algorithm for volume calculation that uses the Eigenimage filter to segment a desired feature from surrounding, interfering features. The pixel intensities of the resulting image preserve information pertaining to partial volume averaging effects in each voxel, thus providing an accurate volume calculation. Also, the amount of time required is significantly reduced compared to planimetric methods, and the reproducibility is less user dependent and is independent of the volume shape. In simulations and phantom studies the errors in accuracy and reproducibility of this method were less than 2%. The purpose of this study was to determine the reproducibility of the method for volume calculations of the human brain. Ten volunteers were imaged and the volumes of white matter, gray matter, and CSF were estimated. The time required to calculate the volume for all three tissues was approximately one minute per slice. The inter- and intra-observer reproducibility errors were less than 5% on average for all volumes calculated. These results were determined to be dependent on the proper selection of the ROIs used to define the tissue signature vectors and on the non-uniformity of the MRI system.
Estimating Residual Solids Volume In Underground Storage Tanks
Clark, Jason L.; Worthy, S. Jason; Martin, Bruce A.; Tihey, John R.
2014-01-08
The Savannah River Site liquid waste system consists of multiple facilities to safely receive and store legacy radioactive waste, treat it, and permanently dispose of it. The large underground storage tanks and associated equipment, known as the 'tank farms', include a complex interconnected transfer system of underground transfer pipelines and ancillary equipment to direct the flow of waste. The waste in the tanks is present in three forms: supernatant, sludge, and salt. The supernatant is a multi-component aqueous mixture, while sludge is a gel-like substance which consists of insoluble solids and entrapped supernatant. The waste from these tanks is retrieved and treated as sludge or salt. The high-level (radioactive) fraction of the waste is vitrified into a glass waste form, while the low-level waste is immobilized in a cementitious grout waste form called saltstone. Once the waste is retrieved and processed, the tanks are closed by removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations, and severing/sealing external penetrations. The comprehensive liquid waste disposition system, currently managed by Savannah River Remediation, consists of (1) safe storage and retrieval of the waste as it is prepared for permanent disposition; (2) definition of the waste processing techniques utilized to separate the high-level waste fraction from the low-level waste fraction; (3) disposition of LLW in saltstone; (4) disposition of the HLW in glass; and (5) closure of the facilities, including tanks. This paper focuses on determining the effectiveness of waste removal campaigns through monitoring the volume of residual solids in the waste tanks. Volume estimates of the residual solids are performed by creating a map of the residual solids on the waste tank bottom using video and still digital images. The map is then used to calculate the volume of solids remaining in the waste tank. The ability to
Simplified Volume-Area-Depth Method for Estimating Water Storage of Isolated Prairie Wetlands
NASA Astrophysics Data System (ADS)
Minke, A. G.; Westbrook, C. J.; van der Kamp, G.
2009-05-01
There are millions of wetlands in shallow depressions on the North American prairies, but the quantity of water stored in these depressions remains poorly understood. Hayashi and van der Kamp (2000) used the relationship between volume (V), area (A), and depth (h) to develop an equation for estimating wetland storage. We tested the robustness of their full and simplified V-A-h methods for accurately estimating volume across the range of wetland shapes occurring in the Prairie Pothole Region. These results were contrasted with two commonly implemented V-A regression equations to determine which method estimates volume most accurately. We used detailed topographic data for 27 wetlands in the Smith Creek and St. Denis watersheds, Saskatchewan, that ranged in surface area and basin shape. The full V-A-h method accurately estimated storage (errors <3%) across wetlands of various shapes, and is therefore suitable for calculating water storage in the variety of wetland surface shapes found in the prairies. Both V-A equations performed poorly, underestimating volume by averages of 15% and 50%, respectively. Analysis of the simplified V-A-h method showed that volume errors of <10% can be achieved if the basin and shape coefficients are derived properly. This involves measuring depth and area twice, with sufficient time between measurements that the natural fluctuations in water storage are reflected. Practically, wetland area and depth should be measured in spring, following snowmelt when water levels are near the peak, and again in late summer before water depths drop below 10 cm. These guidelines for applying the simplified V-A-h method allow accurate volume estimation when detailed topographic data are not available. Since the V-A equations were outperformed by the full and simplified V-A-h methods, we conclude that wetland depth and basin morphology should be considered when estimating volume. This will improve storage estimations of natural and human
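The simplified V-A-h method in the abstract above can be sketched in a few lines. This is a minimal illustration assuming the Hayashi and van der Kamp (2000) power-law profile A(h) = s·h^(2/p); the coefficients s and p are recovered from two hypothetical (area, depth) surveys, e.g. one in spring and one in late summer:

```python
import math

def fit_vah(a1, h1, a2, h2):
    # Two surveys (area, depth) determine the coefficients of
    # the assumed profile A(h) = s * h**(2/p).
    p = 2.0 * math.log(h1 / h2) / math.log(a1 / a2)
    s = a1 / h1 ** (2.0 / p)
    return s, p

def wetland_volume(h, s, p):
    # V(h) = integral of A(h') dh' from 0 to h
    #      = s * h**(1 + 2/p) / (1 + 2/p)
    q = 1.0 + 2.0 / p
    return s * h ** q / q

# Hypothetical surveys: 4800 m2 at 0.6 m depth, 1900 m2 at 0.25 m depth.
s, p = fit_vah(4800.0, 0.6, 1900.0, 0.25)
storage = wetland_volume(0.6, s, p)  # m3 stored at 0.6 m depth
```

With the two measurements in hand, `fit_vah` recovers the basin coefficients once, and `wetland_volume` then gives storage at any subsequently observed depth without detailed topographic data.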
NASA Astrophysics Data System (ADS)
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of dental implant-supported prostheses is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate, and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implant model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Budget estimates fiscal year 1995: Volume 10
Not Available
1994-02-01
This report contains the Nuclear Regulatory Commission (NRC) fiscal year 1995 budget justification to Congress. The budget provides estimates for salaries and expenses and for the Office of the Inspector General for fiscal year 1995. The NRC FY 1995 budget request is $546,497,000, an increase of $11,497,000 above the proposed level for FY 1994. The request provides for 3,218 FTEs, a decrease of 75 FTEs below the FY 1994 proposed level.
Estimating the Volumes of Solid Figures with Curved Surfaces.
ERIC Educational Resources Information Center
Cohen, Donald
1991-01-01
Several examples of solid figures that calculus students can use to exercise their skills at estimating volume are presented. Although these figures are bounded by surfaces that are portions of regular cylinders, it is interesting to note that their volumes can be expressed as rational numbers. (JJK)
Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.
Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide
2003-03-15
Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks. PMID:12680675
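As a rough illustration of the per-tree allometric step described above, carbon stocks can be summed over every treetop height detected by the lidar. The power-law form and its coefficients below are placeholders for illustration, not the study's fitted species-specific allometry:

```python
def tree_carbon(height_m, a=0.8, b=2.2):
    # Illustrative power-law allometry C = a * H**b (kg C per tree).
    # a and b are hypothetical; real applications fit species-specific
    # height-to-carbon relationships to field data.
    return a * height_m ** b

def stand_carbon(heights_m):
    # Total stand carbon: sum per-tree carbon over every detected treetop.
    return sum(tree_carbon(h) for h in heights_m)

total_kg_c = stand_carbon([14.2, 17.8, 16.1, 19.5])
```

Periodic lidar surveys would then track changes in `stand_carbon` over time, as the abstract suggests for monitoring forest carbon stocks.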
Hwang, Beomsoo; Jeon, Doyoung
2015-01-01
In exoskeletal robots, quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, whose measurement contains the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
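The extraction of active muscular torque from a joint torque sensor reading can be sketched for a single-link (1-DOF) limb model, where the measured torque is the sum of the limb's passive dynamics and the muscular effort. The inertia, mass, and center-of-mass parameters below are illustrative, not EXOwheel's identified user-specific values:

```python
import math

def muscular_torque(tau_measured, q, qd, qdd,
                    inertia=0.35, mass=4.0, com_len=0.22, g=9.81):
    # Inverse dynamics of a single rotating link (angle q from vertical):
    #   tau_dyn = I*qdd + m*g*lc*sin(q)
    # (a 1-DOF link has no Coriolis coupling term; qd is kept in the
    # signature for symmetry with multi-DOF formulations).
    tau_dyn = inertia * qdd + mass * g * com_len * math.sin(q)
    # Active muscular effort = sensor reading minus passive limb dynamics.
    return tau_measured - tau_dyn
```

Identifying the user-specific parameters (here `inertia`, `mass`, `com_len`) from recorded motions is the calibration step the paper's method addresses.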
Quantifying Uncertainties in Tephra Thickness and Volume Estimates
NASA Astrophysics Data System (ADS)
Engwell, S. L.; Aspinall, W.; Sparks, R. S.
2013-12-01
Characterization of explosive volcanic eruptive processes from interpretations of deposits is key to assessing long-term volcanic hazards and risks, particularly for large explosive eruptions, which occur relatively infrequently, and for eruptions whose deposits, particularly distal deposits, are transient in the geological record. While eruption size, determined by measurement and interpretation of tephra fall deposits, is of particular importance, uncertainties for such measurements and volume estimates are rarely presented. In this study, we quantify the main sources of variance in determining tephra volume from thickness measurements and isopachs, in terms of the number and spatial distribution of such measurements, using the Fogo A deposit, São Miguel, Azores as an example. Analysis of the Fogo A fall deposits shows that measurement uncertainties are approximately 9% of measured thickness, while uncertainty associated with natural deposit variability ranges between 10% and 40% of average thickness, with an average variation of 30%. Correlations between measurement uncertainties and natural deposit variability are complex and depend on a unit's thickness, its position within a succession, distance from source, and local topography. The degree to which thickness measurement errors affect volume uncertainty depends on the number of measurements in a given dataset and their associated individual uncertainties. For Fogo A, the consequent uncertainty in volume associated with thickness measurement uncertainty is 1.3%, equivalent to a volume uncertainty of 0.02 km3 on the 1.5 km3 deposit. Uncertainties also arise in producing isopach maps: the spatial relationships between source location and different deposit thicknesses are described by contours subjectively drawn, generally by eye, to encompass measurements of a given thickness. Recent advances in volume estimation techniques involve the application of mathematical models directly to tephra thickness data. Here, uncertainties in tephra volumes
Frenning, Göran
2015-01-01
When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
• A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
• A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
• The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost. PMID:26150975
Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D
2016-06-15
The receiver operating characteristic (ROC) curve is a popular technique with applications such as investigating the accuracy of a biomarker in delineating between disease and non-disease groups. A common measure of the accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks only at the area within a certain range of specificities (i.e., true negative rates), and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with a plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement for both a single-biomarker test and the comparison of two correlated biomarkers because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether inference is based on the AUC or the pAUC, we can reach different conclusions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26790540
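A plug-in version of the pAUC point estimate itself is straightforward to compute from the empirical ROC curve. This is a generic trapezoidal sketch restricted to a false-positive-rate range, not the authors' U-statistic formulation or their variance estimator:

```python
def partial_auc(controls, cases, max_fpr):
    """Trapezoidal area under the empirical ROC curve for FPR in [0, max_fpr]."""
    thresholds = sorted(set(controls) | set(cases), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        fpr = sum(x >= t for x in controls) / len(controls)
        tpr = sum(y >= t for y in cases) / len(cases)
        pts.append((fpr, tpr))
    pts.append((1.0, 1.0))
    area = 0.0
    for (f0, t0), (f1, t1) in zip(pts, pts[1:]):
        if f1 <= max_fpr:                      # segment fully inside range
            area += (f1 - f0) * (t0 + t1) / 2.0
        elif f0 < max_fpr:                     # segment straddles max_fpr
            t_cut = t0 + (t1 - t0) * (max_fpr - f0) / (f1 - f0)
            area += (max_fpr - f0) * (t0 + t_cut) / 2.0
    return area
```

Restricting `max_fpr` (one minus the minimum specificity of interest) focuses the measure on the clinically relevant high-specificity region, as the abstract describes.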
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images and define a fitness function to measure relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples using simulated, experimental, and patient data collected with the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system that utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.
Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar
NASA Astrophysics Data System (ADS)
Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru
Orbit estimation of space debris, objects of no inherent value orbiting the earth, is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect smaller debris, coherent integration is effective in improving the SNR (signal-to-noise ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris, exploiting the characteristics of the evaluation function. Experiments show the proposed algorithm improves SNR by 8.32 dB and enables orbital parameters to be estimated accurately enough to allow re-tracking with a single radar.
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground-truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing assessment of the often non
Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness
2015-01-01
Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that have undergone refractive surgery such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error, which can cause poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1] or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and for a LASIK postoperative patient agreed well with their visual outcomes after cataract surgery.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator that takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and
NASA Technical Reports Server (NTRS)
Schlosser, Herbert; Ferrante, John
1989-01-01
An accurate analytic expression for the nonlinear change of the volume of a solid as a function of applied pressure is of great interest in high-pressure experimentation. It is found that a two-parameter analytic expression fits the experimental volume-change data to within a few percent over the entire experimentally attainable pressure range. Results are presented for 24 different materials including metals, ceramic semiconductors, polymers, and ionic and rare-gas solids.
Regularization Based Iterative Point Match Weighting for Accurate Rigid Transformation Estimation.
Liu, Yonghuai; De Dominicis, Luigi; Wei, Baogang; Chen, Liang; Martin, Ralph R
2015-09-01
Feature extraction and matching (FEM) for 3D shapes finds numerous applications in computer graphics and vision for object modeling, retrieval, morphing, and recognition. However, unavoidable incorrect matches lead to inaccurate estimation of the transformation relating different datasets. Inspired by AdaBoost, this paper proposes a novel iterative re-weighting method to tackle the challenging problem of evaluating point matches established by typical FEM methods. Weights are used to indicate the degree of belief that each point match is correct. Our method has three key steps: (i) estimation of the underlying transformation using weighted least squares, (ii) penalty parameter estimation via minimization of the weighted variance of the matching errors, and (iii) weight re-estimation taking into account both matching errors and information learnt in previous iterations. A comparative study, based on real shapes captured by two laser scanners, shows that the proposed method outperforms four other state-of-the-art methods in terms of evaluating point matches between overlapping shapes established by two typical FEM methods, resulting in more accurate estimates of the underlying transformation. This improved transformation can be used to better initialize the iterative closest point algorithm and its variants, making 3D shape registration more likely to succeed. PMID:26357287
Cancilla, John C; Díaz-Rodríguez, Pablo; Matute, Gemma; Torrecilla, José S
2015-02-14
The estimation of the density and refractive index of ternary mixtures comprising the ionic liquid (IL) 1-butyl-3-methylimidazolium tetrafluoroborate, 2-propanol, and water at a fixed temperature of 298.15 K has been attempted through artificial neural networks. The obtained results indicate that the selection of this mathematical approach was a well-suited option. The mean prediction errors obtained, after simulating with a dataset never involved in the training process of the model, were 0.050% and 0.227% for refractive index and density estimation, respectively. These accurate results, which have been attained only using the composition of the dissolutions (mass fractions), imply that, most likely, ternary mixtures similar to the one analyzed, can be easily evaluated utilizing this algorithmic tool. In addition, different chemical processes involving ILs can be monitored precisely, and furthermore, the purity of the compounds in the studied mixtures can be indirectly assessed thanks to the high accuracy of the model. PMID:25583241
Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti
2016-01-01
The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397
Estimating carbon stocks based on forest volume-age relationship
NASA Astrophysics Data System (ADS)
Hangnan, Y.; Lee, W.; Son, Y.; Kwak, D.; Nam, K.; Moonil, K.; Taesung, K.
2012-12-01
This research attempted to estimate the potential change of forest carbon stocks between 2010 and 2110 in South Korea, using the forest cover map and National Forest Inventory (NFI) data. Allometric functions (logistic regression models) of the volume-age relationship were developed to estimate carbon stock change over the coming 100 years for Pinus densiflora, Pinus koraiensis, Pinus rigida, Larix kaempferi, and Quercus spp. The current forest volume was estimated with the developed regression model and the 4th forest cover map. The future volume was predicted by the developed volume-age models with n years added to the current age. As a result, we found that, if the current forest remains unchanged, the total forest volume would increase from 126.89 m^3/ha to 246.61 m^3/ha and the carbon stocks would increase from 90.55 Mg C ha^(-1) to 174.62 Mg C ha^(-1) over 100 years. The carbon stocks would increase by approximately 0.84 Mg C ha^(-1) yr^(-1), which is high compared with the -0.10 to 0.28 Mg C ha^(-1) yr^(-1) reported for other northern countries (Canada, Russia, China) in previous studies. This can be attributed to the fact that mixed forest and bamboo forest were not considered in this study. Moreover, the estimate is also influenced by the fact that the change of carbon stocks was estimated without considering mortality, thinning, or changes in tree species.
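The volume-age projection described above can be sketched with a logistic growth curve plus a nominal volume-to-carbon conversion. All coefficients below are illustrative placeholders, not the fitted NFI values or the study's species-specific conversion factors:

```python
import math

def stand_volume(age, vmax=250.0, b=8.0, k=0.05):
    # Logistic volume-age model: V(t) = vmax / (1 + b * exp(-k * t)).
    # vmax (asymptotic volume, m3/ha), b and k are hypothetical.
    return vmax / (1.0 + b * math.exp(-k * age))

def carbon_stock(volume_m3_ha, factor=0.71):
    # Nominal stem-volume-to-carbon conversion (Mg C / ha); real studies
    # use species-specific wood density, biomass expansion factors,
    # and carbon fraction.
    return factor * volume_m3_ha

# Projected 100-year change for a stand currently 40 years old:
delta = carbon_stock(stand_volume(140)) - carbon_stock(stand_volume(40))
```

Applying this per species and per stand, then aggregating over the forest cover map, reproduces the structure (though not the numbers) of the projection in the abstract.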
Estimating forest biomass and volume using airborne laser data
NASA Technical Reports Server (NTRS)
Nelson, Ross; Krabill, William; Tonelli, John
1988-01-01
An airborne pulsed laser system was used to obtain canopy height data over a southern pine forest in Georgia in order to predict ground-measured forest biomass and timber volume. Although biomass and volume estimates obtained from the laser data were variable when compared with the corresponding ground measurements site by site, the present models are found to predict mean total tree volume within 2.6 percent of the ground value, and mean biomass within 2.0 percent. The results indicate that species stratification did not consistently improve regression relationships for four southern pine species.
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 in the presence of liquid loading was much higher than that of S0. A0 was less sensitive than S0 to the presence of coatings on the surface.
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared with the VO2 measured during work and rest. Varied levels of HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). The HR thermal component explained 74% of the VO2 prediction error variance. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
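The correction described above amounts to subtracting the thermal component from measured heart rate before converting HR to VO2. A minimal sketch, assuming a simple two-point linear HR-VO2 calibration from the step test (the function names and all numbers are illustrative, not Vogt et al.'s exact procedure):

```python
def estimate_vo2(hr_work, delta_hr_thermal, hr_rest, vo2_rest, hr_step, vo2_step):
    """Estimate work VO2 (L/min) from HR after removing the thermal component."""
    # Two-point linear HR -> VO2 calibration from rest and step-test data
    slope = (vo2_step - vo2_rest) / (hr_step - hr_rest)
    # Subtract the thermal component (deltaHRT) before applying the line
    hr_corrected = hr_work - delta_hr_thermal
    return vo2_rest + slope * (hr_corrected - hr_rest)

# A 20 bpm thermal component inflates the raw-HR estimate substantially
vo2_raw = estimate_vo2(140, 0, 70, 0.3, 120, 1.5)         # 1.98 L/min
vo2_corrected = estimate_vo2(140, 20, 70, 0.3, 120, 1.5)  # 1.50 L/min
```

Ignoring the thermal component here overestimates VO2 by about 32%, the same order as the 30% average overestimation reported above.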
Granata, Daniele; Carnevale, Vincenzo
2016-01-01
The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
Hybridization modeling of oligonucleotide SNP arrays for accurate DNA copy number estimation
Wan, Lin; Sun, Kelian; Ding, Qi; Cui, Yuehua; Li, Ming; Wen, Yalu; Elston, Robert C.; Qian, Minping; Fu, Wenjiang J
2009-01-01
Affymetrix SNP arrays have been widely used for single-nucleotide polymorphism (SNP) genotype calling and DNA copy number variation inference. Although numerous methods have achieved high accuracy in these fields, most studies have paid little attention to the modeling of hybridization of probes to off-target allele sequences, which can affect the accuracy greatly. In this study, we address this issue and demonstrate that hybridization with mismatch nucleotides (HWMMN) occurs in all SNP probe-sets and has a critical effect on the estimation of allelic concentrations (ACs). We study sequence binding through binding free energy and then binding affinity, and develop a probe intensity composite representation (PICR) model. The PICR model allows the estimation of ACs at a given SNP through statistical regression. Furthermore, we demonstrate with cell-line data of known true copy numbers that the PICR model can achieve reasonable accuracy in copy number estimation at a single SNP locus, by using the ratio of the estimated AC of each sample to that of the reference sample, and can reveal subtle genotype structure of SNPs at abnormal loci. We also demonstrate with HapMap data that the PICR model yields accurate SNP genotype calls consistently across samples, laboratories and even across array platforms. PMID:19586935
Stereological estimation of particle shape and orientation from volume tensors.
Rafati, A H; Ziegel, J F; Nyengaard, J R; Jensen, E B Vedel
2016-09-01
In the present paper, we describe new robust methods of estimating cell shape and orientation in 3D from sections. The descriptors of 3D cell shape and orientation are based on volume tensors, which are used to construct an ellipsoid, the Miles ellipsoid, approximating the average cell shape and orientation in 3D. The estimators of volume tensors are based on observations in several optical planes through sampled cells. This type of geometric sampling design is known as the optical rotator. The statistical behaviour of the estimator of the Miles ellipsoid is studied under a flexible model for 3D cell shape and orientation. In a simulation study, the lengths of the axes of the Miles ellipsoid could be estimated with coefficients of variation of about 2% when 100 cells were sampled. Finally, we illustrate the use of the developed methods in an example involving neurons in the medial prefrontal cortex of the rat. PMID:26823192
Factors Affecting Prostate Volume Estimation in Computed Tomography Images
Yang, Cheng-Hsiu; Wang, Shyh-Jen; Lin, Alex Tong-Long; Lin, Chao-An
2011-04-01
The aim of this study was to investigate how apex-localizing methods and the computed tomography (CT) slice thickness affect CT-based prostate volume estimation. Twenty-eight volunteers underwent evaluations of prostate volume by CT, where the contour segmentations were performed by three observers. The bottom of the ischial tuberosities (ITs) and the bulb of the penis were used as reference positions to locate the apex, with distances to the apex of 1.3 and 2.0 cm, respectively. Interobserver variations in locating the ITs and the bulb of the penis were, on average, 0.10 cm (range 0.03-0.38 cm) and 0.30 cm (range 0.00-0.98 cm), respectively. The CT slice thickness ranged from 0.08 to 0.48 cm and was used to examine the influence of this variation on volume estimation. The volume deviation from the reference case (0.08 cm), which increased in tandem with the slice thickness, was within ±3 cm³, regardless of the adopted apex-locating reference position. In addition, the maximum error of apex identification was 1.5 times the slice thickness. Finally, based on the precise CT films and the methods of apex identification, strong positive correlations were found between prostate volumes estimated by CT and by transabdominal ultrasonography (r > 0.87; p < 0.0001), as confirmed by Bland-Altman analysis. These results help to identify factors that affect prostate volume calculation and contribute to improved estimation of the prostate volume based on CT images.
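CT-based volumes of this kind are typically computed by summing the contoured slice areas times the slice thickness, which makes the sensitivity to slice thickness at the apex easy to see. A minimal sketch; the planimetric summation-of-areas formula is a standard approach assumed here, not taken from the paper, and the numbers are invented:

```python
def volume_from_slices_cm3(slice_areas_cm2, slice_thickness_cm):
    """Summation-of-areas volume: each contoured slice contributes area x thickness."""
    return sum(slice_areas_cm2) * slice_thickness_cm

# Six contoured 0.5 cm slices through a prostate-like profile
v = volume_from_slices_cm3([2.0, 8.0, 12.0, 12.0, 8.0, 2.0], 0.5)  # 22.0 cm3
```

Mislocating the apex by one slice simply adds or drops one area term, so the volume error scales directly with slice thickness, consistent with the deviation growing with slice thickness as reported above.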
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity, and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
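The rating-then-filtering chain described above (index velocity, then mean channel velocity, then instantaneous discharge, then low-pass filtering) can be sketched in a few lines. The linear rating coefficients and the use of a plain average over whole tidal cycles in place of a proper low-pass filter are simplifying assumptions:

```python
import math

def residual_discharge_m3s(index_velocities, area_m2, a, b):
    """Net discharge from an index-velocity rating, averaged over whole tidal cycles.

    a, b are the linear rating coefficients (mean velocity = a + b * index
    velocity) obtained from ADCP calibration; the tidal signal is removed
    here by averaging over complete cycles instead of true low-pass filtering.
    """
    q = [area_m2 * (a + b * v) for v in index_velocities]
    return sum(q) / len(q)

# Hourly index velocities over two 24 h tidal cycles around a 0.2 m/s mean
v_index = [0.2 + 0.5 * math.sin(2.0 * math.pi * k / 24.0) for k in range(48)]
q_net = residual_discharge_m3s(v_index, 1000.0, 0.05, 0.9)  # 230.0 m3/s
```

Averaging over an integer number of tidal cycles cancels the sinusoidal tidal component exactly, leaving only the residual (net) discharge.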
MIDAS robust trend estimator for accurate GPS station velocities without step detection
NASA Astrophysics Data System (ADS)
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
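The core of the estimator, the median of slopes from data pairs separated by one year, can be sketched compactly. This is a simplified illustration only: the published MIDAS additionally relaxes the pairing for gappy series, trims outliers and recomputes the median, and estimates the trend uncertainty:

```python
import math
import statistics

def midas_trend(times_yr, values, tol=0.01):
    """Median of slopes from data pairs separated by ~1 year (MIDAS-style sketch)."""
    slopes = []
    for i in range(len(times_yr)):
        for j in range(i + 1, len(times_yr)):
            dt = times_yr[j] - times_yr[i]
            if abs(dt - 1.0) <= tol:   # 1-year pairs suppress seasonality
                slopes.append((values[j] - values[i]) / dt)
    return statistics.median(slopes)

# Three years of weekly positions: 2 mm/yr trend plus an annual cycle
t = [k / 52.0 for k in range(156)]
x = [2.0 * ti + 0.5 * math.sin(2.0 * math.pi * ti) for ti in t]
trend = midas_trend(t, x)   # the annual term cancels pair-by-pair
```

Because each pair spans exactly one seasonal period, the annual term contributes nothing to any slope, and the median recovers the 2 mm/yr trend.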
Estimating Volumes of Near-Spherical Molded Artifacts
Gilsinn, David E.; Borchardt, Bruce R.; Tebbe, Amelia
2010-01-01
The Food and Drug Administration (FDA) is conducting research on developing reference lung cancer lesions, called phantoms, to test computed tomography (CT) scanners and their software. FDA loaned two semi-spherical phantoms, called Green and Pink, to the National Institute of Standards and Technology (NIST) and asked to have the phantoms’ volumes estimated. This report describes in detail both the metrology and the computational methods used to estimate the phantoms’ volumes. Three sets of coordinate measuring machine (CMM) data were produced. One set consisted of reference surface measurements of a known calibrated metal sphere. The other two sets were measurements of the two FDA phantoms at two densities, called the coarse set and the dense set. Two computational approaches were applied to the data. In the first approach, spherical models were fit to the calibrated sphere data and to the phantom data. In the second, the data points on the boundaries of the spheres were modeled with surface B-splines and the Divergence Theorem was then used to estimate the volumes. Fitting a B-spline model to the calibrated sphere data served as a reference check on algorithm performance, giving assurance that the volumes estimated for the phantoms would be meaningful. The results for the coarse and dense data sets predicted the volumes as expected, and both computational methods showed that the Green phantom was very nearly spherical. The spherical model did not fit the Pink phantom as well, and the B-spline approach provided a better estimate of its volume in that case. PMID:27134783
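The first (spherical-model) approach can be illustrated with a crude fit: take the centroid of the surface points as the centre and the mean centre-to-point distance as the radius. This is an illustrative simplification of the least-squares fit used in the report, with invented sample points:

```python
import math

def fit_sphere_volume(points):
    """Spherical-model volume: centroid as centre, mean distance as radius."""
    n = len(points)
    centre = tuple(sum(p[d] for p in points) / n for d in range(3))
    radius = sum(math.dist(centre, p) for p in points) / n
    return 4.0 / 3.0 * math.pi * radius ** 3

# Antipodal point pairs on a radius-2 sphere centred at (1, 2, 3)
pts = []
for k in range(100):
    th, ph = 0.03 * k, 0.05 * k
    u = (math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph), math.cos(th))
    pts.append((1.0 + 2.0 * u[0], 2.0 + 2.0 * u[1], 3.0 + 2.0 * u[2]))
    pts.append((1.0 - 2.0 * u[0], 2.0 - 2.0 * u[1], 3.0 - 2.0 * u[2]))
vol = fit_sphere_volume(pts)   # close to (4/3) * pi * 2**3
```

For a near-spherical object like the Green phantom this fit is adequate; for the less spherical Pink phantom a flexible surface model (the B-spline plus Divergence Theorem route) is the better choice, as the report found.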
High-Resolution Tsunami Inundation Simulations Based on Accurate Estimations of Coastal Waveforms
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.; Furumura, T.
2015-12-01
We evaluate the accuracy of high-resolution tsunami inundation simulations in detail using the actual observational data of the 2011 Tohoku-Oki earthquake (Mw 9.0) and investigate methodologies to improve the simulation accuracy. Due to the recent development of parallel computing technologies, high-resolution tsunami inundation simulations are conducted more commonly than before. To evaluate how accurately these simulations can reproduce inundation processes, we test several types of simulation configurations on a parallel computer, where we can utilize the observational data (e.g., offshore and coastal waveforms and inundation properties) recorded during the Tohoku-Oki earthquake. Before discussing the accuracy of inundation processes on land, the incident waves at coastal sites must be accurately estimated. However, for megathrust earthquakes, it is difficult to find a tsunami source that provides accurate estimations of tsunami waveforms at every coastal site because of the complex spatiotemporal distribution of the source and the limitations of observation. To overcome this issue, we employ a site-specific source inversion approach that increases the estimation accuracy within a specific coastal site by applying appropriate weighting to the observational data in the inversion process. We applied our source inversion technique to the Tohoku tsunami and conducted inundation simulations using 5-m resolution digital elevation model (DEM) data for the coastal areas around Miyako Bay and Sendai Bay. The estimated waveforms at the coastal wave gauges of these bays agree well with the observed waveforms. However, the simulations overestimate the inundation extent, indicating the necessity to improve the inundation model. We find that the value of Manning's roughness coefficient should be modified from the often-used value of n = 0.025 to n = 0.033 to obtain proper results at both cities. In this presentation, the simulation results with several
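The role of the roughness coefficient can be seen directly from Manning's formula, v = R^(2/3) S^(1/2) / n: raising n from 0.025 to 0.033 slows the modelled flow and thus shrinks the simulated inundation extent. The hydraulic radius and slope values below are arbitrary illustrations:

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Flow velocity from Manning's formula, v = R^(2/3) * sqrt(S) / n (SI units)."""
    return hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5 / n

v_smooth = manning_velocity(0.025, 1.0, 0.001)  # often-used roughness value
v_rough = manning_velocity(0.033, 1.0, 0.001)   # value adopted in the study
```

With everything else fixed, the higher roughness reduces the velocity by the ratio 0.025/0.033, about 24%.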
Sonographic estimation of pleural fluid volume in dogs.
Newitt, Anna L M; Cripps, Peter J; Shimali, Jerry
2009-01-01
The aim of this study was to find an ultrasonographic method to estimate pleural fluid volume in dogs. Nine canine cadavers of mixed breed were studied. Using a transsternal view, linear measurements from the pleural surface of the midline of the sternebra at the center of the heart to the furthest ventrolateral point of both right and left lung edges were recorded. Isotonic saline was injected using ultrasound guidance into both right and left pleural spaces and the measurements were repeated using standard increments until 1000 ml total volume was reached. No relationship was identified between mean distance and injected volume up to 100 ml. Thereafter, the mean distance increased in an approximately linear relationship with the cube root of fluid volume. There was a high correlation (r ≥ 0.899) between the ultrasonographic measurement and fluid volume within individual dogs, but it was not possible to produce a useful equation to calculate absolute pleural fluid volume for new subjects. Nevertheless, ultrasonography may be used to semiquantitatively monitor pleural fluid volume, so that a decrease in the mean linear measurement obtained reflects a decrease in the total fluid volume. PMID:19241761
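The reported relationship, a roughly linear dependence of the ultrasonographic distance on the cube root of fluid volume above ~100 ml, can be fit with ordinary least squares. The numbers below are synthetic and purely illustrative:

```python
def cube_root_fit(volumes_ml, distances_cm):
    """Least-squares fit of distance = a + b * volume**(1/3)."""
    xs = [v ** (1.0 / 3.0) for v in volumes_ml]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(distances_cm) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, distances_cm))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthetic distances following the reported cube-root trend (>100 ml)
vols = [100.0, 200.0, 400.0, 600.0, 800.0, 1000.0]
dists = [1.0 + 0.5 * v ** (1.0 / 3.0) for v in vols]
a, b = cube_root_fit(vols, dists)   # recovers a = 1.0, b = 0.5
```

As the study notes, such a fit is only useful within an individual: the coefficients vary between dogs, so the relationship supports relative monitoring rather than absolute volume prediction.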
Soil volume estimation in debris flow areas using lidar data in the 2014 Hiroshima, Japan rainstorm
NASA Astrophysics Data System (ADS)
Miura, H.
2015-10-01
Debris flows triggered by the rainstorm in Hiroshima, Japan on August 20th, 2014 produced extensive damage in the built-up areas in the northern part of Hiroshima city. To support emergency response activities and early-stage recovery planning, it is important to evaluate the distribution of soil volumes in the debris flow areas immediately after the disaster. In this study, an automated nonlinear mapping technique is applied to light detection and ranging (LiDAR)-derived digital elevation models (DEMs) observed before and after the disaster to quickly and accurately correct geometric location errors in the data. The soil volumes generated by the debris flows are estimated by subtracting the pre-event DEM from the post-event DEM. The geomorphologic characteristics of the debris flow areas are discussed based on the distribution of the estimated soil volumes.
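Once the pre- and post-event DEMs are co-registered, the volume estimate is straightforward grid differencing. A minimal sketch; grid values and cell size are invented for illustration:

```python
def debris_volumes_m3(dem_pre, dem_post, cell_area_m2):
    """Deposited and eroded volumes from co-registered pre/post-event DEM grids."""
    deposited = eroded = 0.0
    for row_pre, row_post in zip(dem_pre, dem_post):
        for h0, h1 in zip(row_pre, row_post):
            dh = h1 - h0                      # elevation change per cell
            if dh > 0.0:
                deposited += dh * cell_area_m2
            else:
                eroded -= dh * cell_area_m2
    return deposited, eroded

# 2 m grid (cell area 4 m2): one deposit lobe, one scoured cell
pre = [[10.0, 10.0], [10.0, 10.0]]
post = [[10.5, 10.0], [9.0, 11.0]]
dep, ero = debris_volumes_m3(pre, post, cell_area_m2=4.0)  # 6.0, 4.0
```

Separating deposition from erosion matters here, since the locational correction step described above exists precisely to keep registration errors from masquerading as elevation change.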
Estimation of myocardial volume at risk from CT angiography
NASA Astrophysics Data System (ADS)
Zhu, Liangjia; Gao, Yi; Mohan, Vandana; Stillman, Arthur; Faber, Tracy; Tannenbaum, Allen
2011-03-01
The determination of myocardial volume at risk distal to coronary stenosis provides important information for prognosis and treatment of coronary artery disease. In this paper, we present a novel computational framework for estimating the myocardial volume at risk in computed tomography angiography (CTA) imagery. Initially, epicardial and endocardial surfaces, and coronary arteries are extracted using an active contour method. Then, the extracted coronary arteries are projected onto the epicardial surface, and each point on this surface is associated with its closest coronary artery using the geodesic distance measurement. The likely myocardial region at risk on the epicardial surface caused by a stenosis is approximated by the region in which all its inner points are associated with the sub-branches distal to the stenosis on the coronary artery tree. Finally, the likely myocardial volume at risk is approximated by the volume in between the region at risk on the epicardial surface and its projection on the endocardial surface, which is expected to yield computational savings over risk volume estimation using the entire image volume. Furthermore, we expect increased accuracy since, as compared to prior work using the Euclidean distance, we employ the geodesic distance in this work. The experimental results demonstrate the effectiveness of the proposed approach on pig heart CTA datasets.
Volume estimation of multidensity nodules with thoracic computed tomography.
Gavrielides, Marios A; Li, Qin; Zeng, Rongping; Myers, Kyle J; Sahiner, Berkman; Petrick, Nicholas
2016-01-01
This work focuses on volume estimation of "multidensity" lung nodules in a phantom computed tomography study. Eight objects were manufactured by enclosing spherical cores within larger spheres of double the diameter but with a different density. Different combinations of outer-shell/inner-core diameters and densities were created. The nodules were placed within an anthropomorphic phantom and scanned with various acquisition and reconstruction parameters. The volumes of the entire multidensity object as well as the inner core of the object were estimated using a model-based volume estimator. Results showed percent volume bias across all nodules and imaging protocols with slice thicknesses [Formula: see text] ranging from [Formula: see text] to 6.6% for the entire object (standard deviation ranged from 1.5% to 7.6%), and within [Formula: see text] to 5.7% for the inner-core measurement (standard deviation ranged from 2.0% to 17.7%). Overall, the estimation error was larger for the inner-core measurements, which was expected due to the smaller size of the core. Reconstructed slice thickness was found to substantially affect volumetric error for both tasks; exposure and reconstruction kernel were not. These findings provide information for understanding uncertainty in volumetry of nodules that include multiple densities such as ground glass opacities with a solid component. PMID:26844235
Improved surface volume estimates for surface irrigation balance calculations
Technology Transfer Automated Retrieval System (TEKTRAN)
Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. Typically, these calculations use the Manning formula and normal depth assumption to calculate upstream flow depth (and thus flow area), and a constant shape factor to describe the rela...
Accurate estimation of the RMS emittance from single current amplifier data
Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.
2002-05-31
This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H− ion source.
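A single step of the analysis, for one exclusion boundary out of the scan, can be sketched as follows: the mean density outside the boundary is taken as a uniform background, subtracted, and the rms emittance is computed from the points inside. Scanning the boundary scale and locating the plateaus in both returned quantities is the essence of SCUBEEx; this simplified sketch (circular boundary, gridded synthetic data) omits that scan:

```python
import math

def scubeex_step(density, xs, xps, scale):
    """One exclusion boundary: background estimate and background-subtracted rms emittance."""
    inside, outside = [], []
    for i, x in enumerate(xs):
        for j, xp in enumerate(xps):
            if (x / scale) ** 2 + (xp / scale) ** 2 <= 1.0:
                inside.append((x, xp, density[i][j]))
            else:
                outside.append(density[i][j])
    background = sum(outside) / len(outside)
    # Subtract the assumed-uniform background; clip small negatives to zero
    w = [max(d - background, 0.0) for _, _, d in inside]
    total = sum(w)
    mx = sum(wi * x for (x, _, _), wi in zip(inside, w)) / total
    mp = sum(wi * p for (_, p, _), wi in zip(inside, w)) / total
    sxx = sum(wi * (x - mx) ** 2 for (x, _, _), wi in zip(inside, w)) / total
    spp = sum(wi * (p - mp) ** 2 for (_, p, _), wi in zip(inside, w)) / total
    sxp = sum(wi * (x - mx) * (p - mp) for (x, p, _), wi in zip(inside, w)) / total
    return background, math.sqrt(max(sxx * spp - sxp ** 2, 0.0))

# Gaussian beam (unit rms) on a uniform background of 0.1
xs = [0.5 * k - 5.0 for k in range(21)]
dens = [[0.1 + math.exp(-(x * x + p * p) / 2.0) for p in xs] for x in xs]
bg, emit = scubeex_step(dens, xs, xs, scale=4.0)
```

For this synthetic beam the recovered background is close to 0.1 and the emittance close to the true unit value; in the full method one would repeat this for many scales and read both estimates off their plateaus.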
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and accurately identifying the motion blur direction and length is crucial for obtaining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting errors relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A
2016-05-01
The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and in the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model which is typically used for psychometric function estimation to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method to do correct inference for overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods typically requiring expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available. PMID:27013261
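The move from a binomial to a beta-binomial observation model can be illustrated directly. The mean/overdispersion parametrisation below (alpha = p(1/nu - 1), beta = (1 - p)(1/nu - 1)) is one common choice, assumed here rather than taken from psignifit 4:

```python
import math

def _log_beta(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def beta_binomial_pmf(k, n, p, nu):
    """P(k successes in n trials) with mean p and overdispersion nu in (0, 1)."""
    a = p * (1.0 / nu - 1.0)          # nu -> 0 recovers the plain binomial
    b = (1.0 - p) * (1.0 / nu - 1.0)
    return math.comb(n, k) * math.exp(_log_beta(k + a, n - k + b) - _log_beta(a, b))

n, p, nu = 10, 0.7, 0.3
pmf = [beta_binomial_pmf(k, n, p, nu) for k in range(n + 1)]
mean = sum(k * q for k, q in enumerate(pmf))               # = n * p
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))
# var exceeds the binomial n*p*(1-p) = 2.1 by the factor 1 + (n-1)*nu
```

Under this parametrisation the variance is n p (1-p) (1 + (n-1) nu), so a binomial fit to overdispersed data understates the spread, which is why binomial credible intervals come out too narrow.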
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method. PMID:23893759
ERIC Educational Resources Information Center
Hughes, Stephen W.
2005-01-01
A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
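The physics behind the suspension technique reduces to one line: with the object hanging fully submerged (not touching the container), the scale reading rises by exactly the weight of the displaced water, so the reading change divided by the water density gives the volume. A sketch with illustrative names and numbers:

```python
def suspension_volume_cm3(reading_before_g, reading_during_g, water_density=1.0):
    """Volume from the increase in scale reading while the object is suspended.

    By Archimedes' principle the buoyant reaction force on the container
    equals the weight of displaced water (density in g/cm3).
    """
    return (reading_during_g - reading_before_g) / water_density

# A fully submerged 25 cm3 object raises the reading from 500 g to 525 g
v = suspension_volume_cm3(500.0, 525.0)  # 25.0 cm3
```

Note that the object's own mass never enters: only the displaced-water weight registers, which is what makes the method work for objects both denser and less dense than water.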
Quick and accurate estimation of the elastic constants using the minimum image method
NASA Astrophysics Data System (ADS)
Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.
2015-04-01
A method for determining the elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that, when determining the elastic constants, the contribution of long-range interactions cannot be ignored, because doing so leads to erroneous results. In addition, the simulations reveal that including the further interactions of each particle with all of its minimum image neighbors, even for small systems, yields results very close to the elastic constants in the thermodynamic limit. This enables a quick and accurate estimation of the elastic constants using very small samples.
Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks
NASA Astrophysics Data System (ADS)
Bouchaala, F.; Ali, M. Y.
2014-12-01
The attenuation experienced by a seismic wave during propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of energy due to medium heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave's passage. Because intrinsic attenuation is directly related to the physical characteristics of the medium, it can be used for medium characterization and fluid detection, which is beneficial to the oil and gas industry. Intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; its accuracy therefore depends directly on the accuracy of both the total attenuation and the scattering. The total attenuation can be estimated from recorded waves using in-situ methods such as the spectral-ratio and frequency-shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate depends strongly on the layer thicknesses, especially for media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies proposed guidelines for choosing the layer thickness, but these show limitations, particularly for carbonate rocks. In this study we established a relationship between the layer thickness and the propagation frequency through a mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship with synthetic tests and with real data from a VSP survey carried out over an onshore oilfield in the emirate of Abu Dhabi, United Arab Emirates, composed primarily of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering.
Rain Volume Estimation over Areas Using Satellite and Radar Data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Miller, J. R., Jr.; Johnson, L. R.; Vonderhaar, T. H.; Laybe, P.
1984-01-01
The application of satellite data to a recently developed radar technique used to estimate convective rain volumes over areas in a dry environment (the northern Great Plains) is discussed. The area-time integral (ATI) technique provides a means of estimating total rain volumes over fixed and floating target areas of the order of 1,000 to 100,000 km² for clusters lasting 40 min. The basis of the method is the existence of a strong correlation between the area coverage integrated over the lifetime of the storm (the ATI) and the rain volume. One key element of this technique is that it does not require consideration of the structure of the radar intensities inside the area coverage to generate rain volumes, but only considers the rain event per se. This fact might reduce or eliminate some sources of error in applying the technique to satellite data. The second key element is that the ATI, once determined, can be converted to total rain volume using a constant factor (average rain rate) for a given locale.
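The conversion at the heart of the ATI technique, one multiplication by a locale-dependent average rain rate, can be sketched as follows (the unit choices are ours, not from the paper):

```python
def rain_volume_from_ati(ati_km2_h, mean_rain_rate_mm_per_h):
    """Total rain volume (m^3) from an area-time integral (km^2 * h).

    1 mm of rain depth over 1 km^2 equals 1000 m^3 of water, so
    volume = ATI * rain_rate * 1000.
    """
    return ati_km2_h * mean_rain_rate_mm_per_h * 1000.0

# e.g. a storm accumulating 100 km^2-hours at a locale-average 2 mm/h
volume_m3 = rain_volume_from_ati(100.0, 2.0)  # 200,000 m^3
```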
Estimating flood hydrographs and volumes for Alabama streams
Olin, D.A.; Atkins, J.B.
1988-01-01
The hydraulic design of highway drainage structures involves an evaluation of the effect of the proposed highway structures on lives, property, and stream stability. Flood hydrographs and associated flood volumes are useful tools in evaluating these effects. For design purposes, the Alabama Highway Department needs information on flood hydrographs and volumes associated with flood peaks of specific recurrence intervals (design floods) at proposed or existing bridge crossings. This report provides the engineer with a method to estimate flood hydrographs, volumes, and lagtimes for rural and urban streams in Alabama with drainage areas less than 500 sq mi. Existing computer programs and methods to estimate flood hydrographs and volumes for ungaged streams have been developed in Georgia; these programs and methods were applied to streams in Alabama. The report gives detailed instructions on how to estimate flood hydrographs for ungaged rural or urban streams in Alabama with drainage areas less than 500 sq mi, without significant in-channel storage or regulation. (USGS)
Magma generation on Mars: Estimated volumes through time
NASA Technical Reports Server (NTRS)
Greeley, Ronald; Schneid, B.
1991-01-01
Images of volcanoes and lava flows, chemical analysis by the Viking landers, and studies of meteorites show that volcanism has played an important role in the evolution of Mars. Photogeologic mapping suggests that half of Mars' surface is covered with volcanic materials. Here, researchers present results from new mappings, including estimates of volcanic deposit thicknesses based on partly buried and buried impact craters using the technique of DeHon. The researchers infer the volumes of possible associated plutonic rocks and derive the volumes of magmas on Mars generated in its post-crustal formation history. Also considered is the amount of juvenile water that might have exsolved from the magma through time.
Source parameter estimation in inhomogeneous volume conductors of arbitrary shape.
Oostendorp, T F; van Oosterom, A
1989-03-01
In this paper it is demonstrated that the use of a direct matrix inverse in the solution of the forward problem in volume conduction problems greatly facilitates the application of standard, nonlinear parameter estimation procedures for finding the strength as well as the location of current sources inside an inhomogeneous volume conductor of arbitrary shape from potential measurements at the outer surface (inverse procedure). This, in turn, facilitates the inclusion of a priori constraints. Where possible, the performance of the method is compared to that of the Gabor-Nelson method. Applications are in the fields of bioelectricity (e.g., electrocardiography and electroencephalography). PMID:2921073
Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G
2015-08-01
Accurate stroke volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of measurement devices, and current devices for indirect monitoring of SV have been shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation using readily available aortic pressure measurements and aortic cross-sectional area, using data from a porcine experiment in which medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were measured simultaneously during the experiment. From the measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross-sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess pressure component of the proximal aorta. The median difference between directly measured SV and estimated SV was -1.4 ml with 95% limits of agreement of ±6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to its effect on SV. PMID:26736434
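In reservoir-excess pressure analysis, aortic inflow is commonly approximated as the excess pressure divided by the characteristic impedance, so the beat's stroke volume follows by integrating that flow over the ejection period. A hedged sketch of that final step only (variable names and units are ours; the paper's full pipeline, including the reservoir-excess separation, is not reproduced here):

```python
import numpy as np

def stroke_volume_estimate(p_excess, z_c, dt_s):
    """Integrate estimated flow (excess pressure / characteristic impedance)
    over a beat to obtain stroke volume.

    Assumes p_excess is uniformly sampled at interval dt_s and that
    flow ~ P_excess / Zc, with z_c in units chosen so flow is in ml/s.
    """
    flow = np.asarray(p_excess, dtype=float) / z_c
    return np.trapz(flow, dx=dt_s)

# Toy beat: excess pressure equal to Zc for 1 s of samples -> 1 ml
sv_ml = stroke_volume_estimate([10.0] * 11, 10.0, 0.1)
```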
Ultrasound Fetal Weight Estimation: How Accurate Are We Now Under Emergency Conditions?
Dimassi, Kaouther; Douik, Fatma; Ajroudi, Mariem; Triki, Amel; Gara, Mohamed Faouzi
2015-10-01
The primary aim of this study was to evaluate the accuracy of sonographic estimation of fetal weight when performed at due date by first-line sonographers. This was a prospective study including 500 singleton pregnancies. Ultrasound examinations were performed by residents on delivery day. Estimated fetal weights (EFWs) were calculated and compared with the corresponding birth weights. The median absolute difference between EFW and birth weight was 200 g (100-330). This difference was within ±10% in 75.2% of the cases. The median absolute percentage error was 5.53% (2.70%-10.03%). Linear regression analysis revealed a good correlation between EFW and birth weight (r = 0.79, p < 0.0001). According to Bland-Altman analysis, bias was -85.06 g (95% limits of agreement: -663.33 to 494.21). In conclusion, EFWs calculated by residents were as accurate as those calculated by experienced sonographers. Nevertheless, predictive performance remains limited, with a low sensitivity in the diagnosis of macrosomia. PMID:26164286
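The accuracy statistics used above (median absolute percentage error, and Bland-Altman bias with 95% limits of agreement) can be computed from paired estimates and birth weights in a few lines; this is a generic sketch, not the authors' code:

```python
import numpy as np

def fetal_weight_accuracy(efw_g, birth_weight_g):
    """Median absolute % error plus Bland-Altman bias and 95% limits of agreement."""
    efw = np.asarray(efw_g, dtype=float)
    bw = np.asarray(birth_weight_g, dtype=float)
    abs_pct_error = 100.0 * np.abs(efw - bw) / bw
    diff = efw - bw
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return np.median(abs_pct_error), bias, (bias - half_width, bias + half_width)

# Toy data: one exact estimate and one 10% overestimate
mape, bias, loa = fetal_weight_accuracy([3000.0, 3300.0], [3000.0, 3000.0])
```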
NASA Astrophysics Data System (ADS)
Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray
2016-06-01
Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd and the beam attenuation coefficient c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
Correlations estimate volume distilled using gravity, boiling point
Moreno, A.; Consuelo Perez de Alba, M. del; Manriquez, L.; Guardia Mendoz, P. de la
1995-10-23
Mathematical and graphic correlations have been developed for estimating cumulative volume distilled as a function of crude API gravity and true boiling point (TBP). The correlations can be used for crudes with gravities of 21-34° API and boiling points of 150-540 °C. In distillation predictions for several Mexican and Iraqi crude oils, the correlations have exhibited accuracy comparable to that of laboratory measurements. The paper discusses the need for such a correlation and the testing of the correlation.
NASA Astrophysics Data System (ADS)
Mittag, Anja; Lenz, Dominik; Smith, Paul J.; Pach, Susanne; Tarnok, Attila
2005-04-01
Aim: In patients, e.g. with congenital heart diseases, a differential blood count is needed for diagnosis. Standard automatic analyzers require 500 μl of blood from the patients for this purpose. In the case of newborns and infants this is a substantial volume, especially after operations associated with blood loss. Therefore, the aim of this study was to develop a method to determine a differential blood picture with a substantially reduced specimen volume. Methods: To generate a differential blood picture, 10 μl EDTA blood were mixed with 10 μl of a DRAQ5 solution (500 μM, Biostatus) and 10 μl of an antibody mixture (CD45-FITC, CD14-PE, diluted with PBS). 20 μl of this cell suspension was filled into a Neubauer counting chamber. Due to the defined volume of the chamber it is possible to determine the cell count per volume. The trigger for leukocyte counting was set on the DRAQ5 signal in order to distinguish nucleated white blood cells from erythrocytes. Different leukocyte subsets could be distinguished using the fluorescence-labeled antibodies. For erythrocyte counting, the cell suspension was diluted a further 150 times. 20 μl of this dilution was analyzed in a microchamber by LSC with the trigger set on the forward scatter signal. Results: This method allows a substantial decrease of blood sample volume for generation of a differential blood picture (10 μl instead of 500 μl). There was a high correlation between our method and the results of the routine laboratory (r2=0.96, p<0.0001, n=40). For all parameters intra-assay variance was less than 7%. Conclusions: In patients with low blood volume such as neonates and critically ill infants every effort has to be taken to reduce the blood volume needed for diagnostics. With this method only 2% of the standard sample volume is needed to generate a differential blood picture. Costs are below that of routine laboratory. We suggest this method to be established in paediatric cardiology for routine diagnostics and for
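The chamber arithmetic described above (counted cells divided by the counted volume, scaled back by the dilution) is a single ratio. A sketch with illustrative numbers: the 3x dilution matches the 10 μl of blood in 30 μl of total suspension from the Methods, while the counted volume is a placeholder, since it depends on which chamber region is counted:

```python
def cells_per_microliter(counted_cells, counted_volume_ul, dilution_factor):
    """Cell concentration in the original sample from a counting-chamber count."""
    return counted_cells / counted_volume_ul * dilution_factor

# e.g. 200 cells counted in 0.1 ul of a 3x-diluted suspension
wbc_per_ul = cells_per_microliter(200, 0.1, 3)
```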
Uncertainties in peat volume and soil carbon estimated using ground penetrating radar and probing
Parsekian, Andrew D.; Slater, Lee; Ntarlagiannis, Dimitrios; Nolan, James; Sebestyen, Stephen D; Kolka, Randall K; Hanson, Paul J
2012-01-01
We evaluate the uncertainty in calculations of peat basin volume using high-resolution data to resolve the three-dimensional structure of a peat basin using both direct (push probes) and indirect geophysical (ground penetrating radar) measurements. We compared volumetric estimates from both approaches with values from the literature. We identified subsurface features that can introduce uncertainties into direct peat thickness measurements, including the presence of woody peat and soft clay or gyttja. We demonstrate that a simple geophysical technique that is easily scalable to larger peatlands can be used to rapidly and cost-effectively obtain more accurate and less uncertain estimates of peat basin volumes, critical to improving understanding of the total terrestrial carbon pool in peatlands.
Araki, Tadashi; Banchhor, Sumit K; Londhe, Narendra D; Ikeda, Nobutaka; Radeva, Petia; Shukla, Devarshi; Saba, Luca; Balestrieri, Antonella; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Suri, Jasjit S
2016-03-01
Quantitative assessment of calcified atherosclerotic volume within the coronary artery wall is vital for cardiac interventional procedures. The goal of this study is to automatically measure the calcium volume, given the borders of the coronary vessel wall, for all the frames of the intravascular ultrasound (IVUS) video. Three soft computing fuzzy classification techniques were adapted, namely Fuzzy c-Means (FCM), K-means, and Hidden Markov Random Field (HMRF), for automated segmentation of calcium regions and volume computation. These methods were benchmarked against a previously developed threshold-based method. IVUS image data sets (around 30,600 IVUS frames) from 15 patients were collected using a 40 MHz IVUS catheter (Atlantis® SR Pro, Boston Scientific®, pullback speed of 0.5 mm/s). Calcium mean volumes for FCM, K-means, HMRF and the threshold-based method were 37.84 ± 17.38 mm³, 27.79 ± 10.94 mm³, 46.44 ± 19.13 mm³ and 35.92 ± 16.44 mm³, respectively. Cross-correlation, Jaccard Index and Dice Similarity were highest between FCM and the threshold-based method: 0.99, 0.92 ± 0.02 and 0.95 ± 0.02, respectively. Student's t-test, z-test and Wilcoxon-test were also performed to demonstrate the consistency, reliability and accuracy of the results. Given the vessel wall region, the system reliably and automatically measures the calcium volume in IVUS videos. Further, we validated our system against a trained expert using scoring: K-means showed the best performance with an accuracy of 92.80%. Our procedure and protocol are in line with previously published clinical methods. PMID:26643081
Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)
1995-01-01
It has been claimed that either oculomotor or static-depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
Estimation of the standard molal heat capacities, entropies and volumes of 2:1 clay minerals
NASA Astrophysics Data System (ADS)
Ransom, Barbara; Helgeson, Harold C.
1994-11-01
The dearth of accurate values of the thermodynamic properties of 2:1 clay minerals severely hampers interpretation of their phase relations, the design of critical laboratory experiments and geologically realistic computer calculations of mass transfer in weathering, diagenetic and hydrothermal systems. Algorithms and strategies are described below for estimating to within 2% the standard molal heat capacities, entropies, and volumes of illites, smectites and other 2:1 clay minerals. These techniques can also be used to estimate standard molal thermodynamic properties of fictive endmembers of clay mineral solid solutions. Because 2:1 clay minerals like smectite and vermiculite are always hydrated to some extent in nature, the contribution of interlayer H2O to their thermodynamic properties is considered explicitly in the estimation of the standard molal heat capacities, entropies, and volumes of these minerals. Owing to the lack of accurate calorimetric data from which reliable values of the standard molal heat capacity and entropy of interlayer H2O can be retrieved, these properties were taken in a first approximation to be equal to those of zeolitic H2O in analcite. The resulting thermodynamic contributions per mole of interlayer H2O to the standard molal heat capacity, entropy, and volume of hydrous clay minerals at 1 bar and 25°C are 11.46 cal mol⁻¹ K⁻¹, 13.15 cal mol⁻¹ K⁻¹ and 17.22 cm³ mol⁻¹, respectively. Estimated standard molal heat capacities, entropies and volumes are given for a suite of smectites and illites commonly used in models of clay mineral and shale diagenesis.
Simple and accurate empirical absolute volume calibration of a multi-sensor fringe projection system
NASA Astrophysics Data System (ADS)
Gdeisat, Munther; Qudeisat, Mohammad; AlSa`d, Mohammed; Burton, David; Lilley, Francis; Ammous, Marwan M. M.
2016-05-01
This paper suggests a novel absolute empirical calibration method for a multi-sensor fringe projection system. The optical setup of the projector-camera sensor can be arbitrary. The term absolute calibration here means that the centre of the three dimensional coordinates in the resultant calibrated volume coincides with a preset centre to the three-dimensional real-world coordinate system. The use of a zero-phase fringe marking spot is proposed to increase depth calibration accuracy, where the spot centre is determined with sub-pixel accuracy. Also, a new method is proposed for transversal calibration. Depth and transversal calibration methods have been tested using both single sensor and three-sensor fringe projection systems. The standard deviation of the error produced by this system is 0.25 mm. The calibrated volume produced by this method is 400 mm×400 mm×140 mm.
NASA Astrophysics Data System (ADS)
Browning, J.; Drymoni, K.; Gudmundsson, A.
2015-12-01
An understanding of the amount of magma available to supply any given eruption is useful for determining the potential eruption magnitude and duration. Geodetic measurements and inversion techniques are often used to constrain volume changes within magma chambers, as well as their location and depth, but such models are incapable of calculating total magma storage. For example, during the 2012 unrest period at Santorini volcano, approximately 0.021 km³ of new magma entered a shallow chamber residing at around 4 km below the surface. This type of event is not unusual, and is in fact a necessary condition for the formation of a long-lived shallow chamber, which Santorini must possess. The period of unrest ended without culminating in eruption, i.e. the amount of magma which entered the chamber was insufficient to rupture the chamber and force magma further towards the surface. We combine previously published data on the volume of recent eruptions at Santorini with geodetic measurements. Measurements of dykes within the caldera wall provide an estimate of the volume of magma transported during eruptions, assuming the dyke does not become arrested. When the combined volume of a dyke and eruption (Ve) is known, it can be used, together with fracture mechanics principles and poro-elastic constraints, to estimate the size of an underlying shallow magma chamber. We present field measurements of dykes within Santorini caldera and provide an analytical method to estimate the volume of magma contained underneath Santorini caldera. In addition, we postulate the potential volume of magma required as input from deeper sources to switch the shallow magma chamber from an equilibrium state to one where the pressure inside the chamber exceeds the surrounding host rock's tensile strength, a condition necessary to form a dyke and a possible eruption.
Köhler, Christian; Recht, Raphaël; Quinternet, Marc; de Lamotte, Frederic; Delsuc, Marc-André; Kieffer, Bruno
2015-01-01
NMR spectroscopy allows measurements of very accurate values of equilibrium dissociation constants using chemical shift perturbation methods, provided that the concentrations of the binding partners are known with high precision and accuracy. The accuracy and precision of these experiments are improved if performed using individual capillary tubes, a method enabling full automation of the measurement. We provide here a protocol to set up and perform these experiments as well as a robust method to measure peptide concentrations using tryptophan as an internal standard. PMID:25749962
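Chemical shift perturbation titrations of the kind described are typically analyzed with the standard 1:1 binding isotherm, in which the observed shift change is Δδ_max times the bound fraction of the protein; this is why the total concentrations must be known precisely. A sketch of that bound fraction (the textbook quadratic solution, not code from the protocol; the fitting and capillary setup are not reproduced):

```python
import math

def fraction_bound(kd, p_total, l_total):
    """Bound fraction of protein P for 1:1 binding P + L <-> PL.

    Under fast exchange, delta_obs = delta_max * fraction_bound.
    All three arguments share the same concentration unit (e.g. molar).
    """
    s = kd + p_total + l_total
    return (s - math.sqrt(s * s - 4.0 * p_total * l_total)) / (2.0 * p_total)

# With ligand in large excess over both Kd and protein, binding saturates
f_sat = fraction_bound(1e-9, 1e-5, 1e-4)
```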
Gas Flaring Volume Estimates with Multiple Satellite Observations
NASA Astrophysics Data System (ADS)
Ziskin, D. C.; Elvidge, C.; Baugh, K.; Ghosh, T.; Hsu, F. C.
2010-12-01
Flammable gases (primarily methane) are a common by-product associated with oil wells. Where there is no infrastructure to use the gas or bring it to market, the gases are typically flared off. This practice is more common at remote sites, such as offshore drilling platforms. The Defense Meteorological Satellite Program (DMSP) is a series of satellites carrying a low-light imager called the Operational Linescan System (OLS). The OLS, which detects the flares at night, has been a valuable tool in the estimation of flared gas volume [Elvidge et al, 2009]. The Moderate Resolution Imaging Spectroradiometer (MODIS) fire product has been processed to create products suitable for an independent estimate of gas flaring on land. We present the MODIS flare product, the results of our MODIS gas flare volume analysis, and an independent validation of the published DMSP estimates. Elvidge, C. D., Ziskin, D., Baugh, K. E., Tuttle, B. T., Ghosh, T., Pack, D. W., Erwin, E. H., Zhizhin, M., 2009, "A Fifteen Year Record of Global Natural Gas Flaring Derived from Satellite Data", Energies, 2 (3), 595-622
Benchmarking of a New Finite Volume Shallow Water Code for Accurate Tsunami Modelling
NASA Astrophysics Data System (ADS)
Reis, Claudia; Clain, Stephane; Figueiredo, Jorge; Baptista, Maria Ana; Miranda, Jorge Miguel
2015-04-01
Finite volume methods used to solve the shallow-water equations with source terms have received great attention over the last two decades due to their fundamental properties: the built-in conservation property, the capacity to treat discontinuities correctly, and the ability to handle complex bathymetry configurations while preserving steady-state configurations (well-balanced schemes). Nevertheless, it is still a challenge to build an efficient numerical scheme, with very few numerical artifacts (e.g. numerical diffusion), which can be used in an operational environment and is able to better capture the dynamics of the wet-dry interface and the physical phenomena that occur in the inundation area. We present here a new finite volume code, benchmark it against analytical and experimental results, and test the performance of the code in the complex topography of the Tagus Estuary, close to Lisbon, Portugal. This work is funded by the Portugal-France research agreement, through the research project FCT-ANR/MAT-NAN/0122/2012.
NASA Astrophysics Data System (ADS)
Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.
2015-06-01
A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.
Imani, Farsad; Karimi Rouzbahani, Hamid Reza; Goudarzi, Mehrdad; Tarrahi, Mohammad Javad; Ebrahim Soltani, Alireza
2016-01-01
Background: During anesthesia, continuous body temperature monitoring is essential, especially in children. Anesthesia can increase the risk of loss of body temperature by three to four times. Hypothermia in children results in increased morbidity and mortality. Since the measurement points of the core body temperature are not easily accessible, near-core sites, like the rectum, are used. Objectives: The purpose of this study was to measure the skin temperature over the carotid artery and compare it with the rectal temperature, in order to propose a model for accurate estimation of near-core body temperature. Patients and Methods: Totally, 124 patients within the age range of 2 - 6 years, undergoing elective surgery, were selected. The temperature of the rectum and of the skin over the carotid artery was measured. Then, the patients were randomly divided into two groups (each including 62 subjects), namely the modeling (MG) and validation (VG) groups. First, in the modeling group, the average temperatures of the rectum and of the skin over the carotid artery were measured separately. The appropriate model was determined according to the significance of the model's coefficients. The obtained model was used to predict the rectal temperature in the second group (VG group). The correlation of the predicted values with the real values (the measured rectal temperature) in the second group was investigated. Also, the difference in the average values of these two groups was examined in terms of significance. Results: In the modeling group, the average rectal and carotid temperatures were 36.47 ± 0.54°C and 35.45 ± 0.62°C, respectively. The final model was obtained as follows: Rectum temperature = 0.561 × carotid temperature + 16.583. The predicted value was calculated based on the regression model and then compared with the measured rectal value, which showed no significant difference (P = 0.361). Conclusions: The present study was the first research, in which rectum temperature was compared with that
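The reported regression can be applied directly; a minimal sketch using the coefficients quoted in the abstract (illustrative only, and at best valid for the studied age range and measurement sites):

```python
def estimate_rectal_temp_c(carotid_skin_temp_c):
    """Near-core (rectal) temperature in Celsius predicted from the skin
    temperature over the carotid artery, using the abstract's linear model."""
    return 0.561 * carotid_skin_temp_c + 16.583

# Evaluated at the modeling group's mean carotid temperature, the model
# returns approximately the reported mean rectal temperature (36.47 C).
predicted = estimate_rectal_temp_c(35.45)
```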
New model to estimate mean blood pressure by heart rate with stroke volume changing influence.
Al-Jaafreh, Moha'med O; Al-Jumaily, Adel A
2006-01-01
Mean blood pressure (MBP) has a high correlation with heart rate (HR), but the relationship between them is ambiguous and nonlinear. This paper investigates establishing an accurate mathematical model to estimate MBP that considers the influence of stroke volume changes. Twenty-three cases from the MIMIC database are employed: 12 cases for training and 11 cases for verification. The mean and standard deviation for all cases are calculated and compared with real results. Our suggested mathematical model achieved encouraging results. PMID:17946482
Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.
2014-01-01
An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention, since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or on high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrates that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlights the benefits of the current approach. PMID:26158022
Effect of Volume-of-Interest Misregistration on Quantitative Planar Activity and Dose Estimation
Song, N.; He, B.; Frey, E. C.
2010-01-01
In targeted radionuclide therapy (TRT), dose estimation is essential for treatment planning and tumor dose response studies. Dose estimates are typically based on a time series of whole body conjugate view planar or SPECT scans of the patient acquired after administration of a planning dose. Quantifying the activity in the organs from these studies is an essential part of dose estimation. The Quantitative Planar (QPlanar) processing method involves accurate compensation for image degrading factors and correction for organ and background overlap via the combination of computational models of the image formation process and 3D volumes of interest defining the organs to be quantified. When the organ VOIs are accurately defined, the method intrinsically compensates for attenuation, scatter, and partial volume effects, as well as overlap with other organs and the background. However, alignment between the 3D organ volume of interest (VOIs) used in QPlanar processing and the true organ projections in the planar images is required. The goal of this research was to study the effects of VOI misregistration on the accuracy and precision of organ activity estimates obtained using the QPlanar method. In this work, we modeled the degree of residual misregistration that would be expected after an automated registration procedure by randomly misaligning 3D SPECT/CT images, from which the VOI information was derived, and planar images. Mutual information based image registration was used to align the realistic simulated 3D SPECT images with the 2D planar images. The residual image misregistration was used to simulate realistic levels of misregistration and allow investigation of the effects of misregistration on the accuracy and precision of the QPlanar method. We observed that accurate registration is especially important for small organs or ones with low activity concentrations compared to neighboring organs. In addition, residual misregistration gave rise to a loss of precision
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
Estimating mean glomerular volume using two arbitrary parallel sections.
Najafian, Behzad; Basgen, John M; Mauer, Michael
2002-11-01
The most reliable method for estimation of mean glomerular volume (MGV), the disector/Cavalieri method, is technically demanding and time consuming. Other methods suffer either from a lack of precise correlation with the gold standard or from the need for a large number of glomeruli in the sample. Here, a new method (the 2-profile method) is described; it provides a reliable estimate of MGV by measuring the profile area of glomeruli in two arbitrary parallel sections. MGV was estimated in renal biopsies from 16 diabetic patients and 13 normal subjects using both the Cavalieri and the 2-profile methods. The range of individual glomerular volumes based on the Cavalieri measurements was 0.31 to 4.02 ×10^6 µm^3. There was a high correlation between the two methods for MGV (r = 0.97; P < 0.0001). However, the 2-profile method systematically overestimated MGV (P = 0.0005, paired t test). This overestimation was corrected by introducing a multiplication factor of 0.91, after which statistical criteria of interchangeability with the Cavalieri method were met. The optimal distance between the two sections was determined to be 20 µm, with a coefficient of variation of 7.4% in repeated measurements of MGV. On the basis of findings that values for MGV stabilize after ten glomeruli are measured by the disector/Cavalieri method, it was determined that MGV estimated by the 2-profile method from eight glomeruli differed by less than 7% from the ten-glomerulus value in all cases. Thus, the 2-profile method is a practical alternative to the disector/Cavalieri method for estimating MGV, especially in small samples and blocks with limited residual tissue. PMID:12397039
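The gold-standard estimator named above can be sketched directly; the Cavalieri formula (section spacing times summed profile areas) is standard stereology, while how the 2-profile method forms its raw estimate from two profile areas is not specified in the abstract, so only the reported 0.91 correction is shown:

```python
def cavalieri_volume(profile_areas_um2, section_spacing_um):
    """Cavalieri estimator: V = t * sum(A_i) over exhaustive serial sections."""
    return section_spacing_um * sum(profile_areas_um2)

def corrected_two_profile(raw_estimate_um3, factor=0.91):
    """Apply the 0.91 multiplication factor that removed the systematic
    overestimation of the 2-profile method reported in the abstract.
    The raw 2-profile estimate itself is left abstract here, since the
    abstract does not give the combining formula."""
    return factor * raw_estimate_um3

# Ten serial sections, 20 um apart, each with 9.0e4 um^2 profile area
v = cavalieri_volume([9.0e4] * 10, 20.0)  # -> 1.8e7 um^3
```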
Estimating pore and cement volumes in thin section
Halley, R.B.
1978-01-01
Point count estimates of pore, grain and cement volumes from thin sections are inaccurate, often by more than 100 percent, even though they may be surprisingly precise (reproducibility ±3 percent). Errors are produced by: 1) inclusion of submicroscopic pore space within solid volume and 2) edge effects caused by grain curvature within a 30-micron-thick thin section. Submicroscopic porosity may be measured by various physical tests or may be visually estimated from scanning electron micrographs. Edge error takes the form of an envelope around grains and increases with decreasing grain size and sorting, increasing grain irregularity and tighter grain packing. Cements are greatly involved in edge error because of their position at grain peripheries and their generally small grain size. Edge error is minimized by methods which reduce the thickness of the sample viewed during point counting. Methods which effectively reduce thickness include use of ultra-thin thin sections or acetate peels, point counting in reflected light, or carefully focusing and counting on the upper surface of the thin section.
Running interval training and estimated plasma-volume variation.
Ben Abderrahman, Abderraouf; Prioux, Jacques; Chamari, Karim; Ben Ounis, Omar; Tabka, Zouhair; Zouhal, Hassane
2013-07-01
The effect of endurance interval training (IT) on hematocrit (Ht), hemoglobin (Hb), and estimated plasma-volume variation (PVV) in response to maximal exercise was studied in 15 male subjects (21.1 ± 1.1 y; control group, n = 6, and training group, n = 9). The training group participated in interval training 3 times a week for 7 wk. A maximal graded test (GXT) was performed to determine maximal aerobic power (MAP) and maximal aerobic speed (MAS) both before and after the training program. To determine Ht, Hb concentration, and lactate concentrations, blood was collected at rest, at the end of GXT, and after 10 and 30 min of recovery. MAP and MAS increased significantly (P < .05) after training only in the training group. Hematocrit determined at rest was significantly lower in the training group than in the control group after the training period (P < .05). IT induced a significant increase of estimated PVV at rest for the training group (P < .05), whereas there were no changes for the control group. Moreover, significant relationships were observed after training between PVV determined at the end of the maximal test and MAS (r = .60, P < .05) and MAP (r = .76, P < .05), but only for the training group. In conclusion, 7 wk of IT led to a significant increase in plasma volume that possibly contributed to the observed increase in aerobic fitness (MAP and MAS). PMID:23113934
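The abstract does not name the equation used to estimate plasma-volume variation from Hb and Ht; the Dill and Costill (1974) equations are the standard choice for this and are assumed in this sketch:

```python
def plasma_volume_variation(hb_before, hb_after, hct_before, hct_after):
    """Percent change in plasma volume from hemoglobin (g/dL) and
    hematocrit (as a fraction), following Dill & Costill (1974).
    All quantities are expressed as a percentage of the baseline
    blood volume, which cancels out of the final ratio."""
    bv_after = 100.0 * hb_before / hb_after   # post blood volume, % of baseline
    cv_after = bv_after * hct_after           # post red cell volume
    pv_after = bv_after - cv_after            # post plasma volume
    pv_before = 100.0 - 100.0 * hct_before    # baseline plasma volume
    return 100.0 * (pv_after - pv_before) / pv_before

# Hypothetical pre/post-training values showing plasma volume expansion
dpv = plasma_volume_variation(15.0, 14.2, 0.45, 0.42)
```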
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
Volume estimation using food specific shape templates in mobile image-based dietary assessment
NASA Astrophysics Data System (ADS)
Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.
2011-03-01
As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular food template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
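The core geometric idea of sizing a shape template can be sketched as follows; the sphere template, the reference areas, and the function names are illustrative assumptions, not the authors' actual template library or scaling procedure:

```python
import math

def template_volume(template_volume_cm3, template_area_px, segmented_area_px):
    """Scale a 3D food shape template to a segmented image region.
    If the projected area scales by s^2, the volume scales by s^3."""
    s = math.sqrt(segmented_area_px / template_area_px)
    return template_volume_cm3 * s ** 3

# Hypothetical example: an orange modeled as a 180 cm^3 sphere template
# whose reference projection covers 4000 px; the segmented fruit in the
# user's photo covers 5000 px, so the template is scaled up.
v = template_volume(180.0, 4000.0, 5000.0)
```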
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.
2007-09-01
BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of our calculated cross-section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.
Tumor Volume Estimation and Quasi-Continuous Administration for Most Effective Bevacizumab Therapy
Sápi, Johanna; Kovács, Levente; Drexler, Dániel András; Kocsis, Pál; Gajári, Dávid; Sápi, Zoltán
2015-01-01
Background: Bevacizumab is an exogenous inhibitor which inhibits the biological activity of human VEGF. Several studies have investigated the effectiveness of bevacizumab therapy according to different cancer types, but these days there is an intense debate on its utility. We have investigated different methods to find the best tumor volume estimation, since it creates the possibility for precise and effective drug administration with a much lower dose than in the protocol. Materials and Methods: We have examined C38 mouse colon adenocarcinoma and HT-29 human colorectal adenocarcinoma. In both cases, three groups were compared in the experiments. The first group did not receive therapy, the second group received one 200 μg bevacizumab dose for a treatment period (protocol-based therapy), and the third group received 1.1 μg bevacizumab every day (quasi-continuous therapy). Tumor volume measurement was performed by digital caliper and small animal MRI. The mathematical relationship between MRI-measured tumor volume and mass was investigated to estimate accurate tumor volume using caliper-measured data. A two-dimensional mathematical model was applied for tumor volume evaluation, and tumor- and therapy-specific constants were calculated for the three different groups. The effectiveness of bevacizumab administration was examined by statistical analysis. Results: In the case of C38 adenocarcinoma, protocol-based treatment did not result in significantly smaller tumor volume compared to the no treatment group; however, there was a significant difference between untreated mice and mice who received quasi-continuous therapy (p = 0.002). In the case of HT-29 adenocarcinoma, the daily treatment with one-twelfth total dose resulted in significantly smaller tumors than the protocol-based treatment (p = 0.038). When the tumor has a symmetrical, solid closed shape (typically without treatment), volume can be evaluated accurately from caliper-measured data with the applied two
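The study fitted its own tumor- and therapy-specific constants, which are not reproduced in the abstract; as a generic stand-in, a widely used two-measurement caliper formula (ellipsoid assumption, V = l·w²/2) illustrates the kind of two-dimensional model involved:

```python
def caliper_tumor_volume(length_mm, width_mm):
    """Common caliper-based tumor volume approximation V = l * w^2 / 2,
    where l is the longer and w the shorter perpendicular diameter.
    This generic formula is an assumption for illustration; the paper's
    fitted group-specific constants are not reproduced here."""
    return length_mm * width_mm ** 2 / 2.0

# A 10 mm x 6 mm subcutaneous tumor
v = caliper_tumor_volume(10.0, 6.0)  # -> 180.0 mm^3
```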
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Vonderhaar, T. H.
1985-01-01
An investigation of the feasibility of rain volume estimation using satellite data, following a technique recently developed with radar data called the Area Time Integral (ATI), was undertaken. Case studies were selected on the basis of existing radar and satellite data sets which match in space and time. Four multicell clusters were analyzed. Routines for navigation, remapping and smoothing of satellite images were performed. Visible counts were normalized for solar zenith angle. A radar sector of interest was defined to delineate specific radar echo clusters for each radar time throughout the radar echo cluster lifetime. A satellite sector of interest was defined by applying small adjustments to the radar sector using a manual processing technique. The radar echo area, the IR maximum counts and the IR counts matching radar echo areas were found to evolve similarly, except for the decaying phase of the cluster, where the cirrus debris keeps the IR counts high.
Volume estimation of brain abnormalities in MRI data
NASA Astrophysics Data System (ADS)
Suprijadi, Pratama, S. H.; Haryanto, F.
2014-02-01
Abnormality of brain tissue is always a crucial issue in the medical field. This medical condition can be recognized through segmentation of certain regions from medical images obtained from an MRI dataset. Image processing is one of the computational methods that is very helpful for analyzing MRI data. In this study, a combination of segmentation and image rendering was used to isolate tumor and stroke regions. Two methods of thresholding were employed to segment the abnormality occurrence, followed by filtering to reduce non-abnormality areas. Each MRI image is labeled and then used for volume estimation of the tumor- and stroke-attacked areas. The algorithms are shown to be successful in isolating tumor and stroke in MRI images, based on the thresholding parameters and the stated detection accuracy.
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Vonderhaar, T. H.
1985-01-01
The feasibility of rain volume estimation over fixed and floating areas was investigated using rapid scan satellite data following a technique recently developed with radar data, called the Area Time Integral (ATI) technique. The radar and rapid scan GOES satellite data were collected during the Cooperative Convective Precipitation Experiment (CCOPE) and the North Dakota Cloud Modification Project (NDCMP). Six multicell clusters and cells have been analyzed to date. A two-cycle oscillation emphasizing the multicell character of the clusters is demonstrated. Three clusters were selected on each day, 12 June and 2 July. The 12 June clusters occurred during the daytime, while the 2 July clusters occurred during the nighttime. A total of 86 time steps of radar and 79 time steps of satellite images were analyzed. There were approximately 12-min time intervals between radar scans on the average.
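The ATI technique referenced in both rain-volume entries relates total rain volume linearly to the time-integrated echo area. A minimal sketch, in which the calibration coefficient k is a made-up placeholder rather than a value from either study:

```python
def area_time_integral(echo_areas_km2, dt_minutes):
    """ATI = sum of instantaneous echo (or cloud) area over the cluster
    lifetime, sampled at a fixed scan interval. Units: km^2 * min."""
    return sum(a * dt_minutes for a in echo_areas_km2)

def rain_volume(ati, k=1.0e-3):
    """Rain volume estimated as V = k * ATI. The linear form is the ATI
    technique's premise; k here is an arbitrary placeholder, not a
    calibrated climatological coefficient."""
    return k * ati

# Three radar scans ~12 min apart over a multicell cluster
ati = area_time_integral([100.0, 200.0, 150.0], 12.0)  # -> 5400.0 km^2*min
```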
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
Technology Transfer Automated Retrieval System (TEKTRAN)
Aims: To simplify the determination of the nuclear condition of the pathogenic Rhizoctonia, which currently needs to be performed either using two fluorescent dyes, thus is more costly and time-consuming, or using only one fluorescent dye, and thus less accurate. Methods and Results: A red primary ...
Unmanned Aerial Vehicle Use for Wood Chips Pile Volume Estimation
NASA Astrophysics Data System (ADS)
Mokroš, M.; Tabačák, M.; Lieskovský, M.; Fabrika, M.
2016-06-01
The rapid development of unmanned aerial vehicles is a challenge for applied research: many technologies are developed, and researchers then look for their applications in different sectors. Therefore, we decided to verify the use of an unmanned aerial vehicle for wood chip pile monitoring. We compared a GNSS device and an unmanned aerial vehicle for volume estimation of four wood chip piles. We used a DJI Phantom 3 Professional with its built-in camera and a GNSS device (GeoExplorer 6000). We used Agisoft PhotoScan for processing photos and ArcGIS for processing points. Volumes calculated from the pictures were not statistically significantly different from volumes calculated from the GNSS data, and a high correlation between them was found (r = 0.9993). We conclude that using an unmanned aerial vehicle instead of a GNSS device does not lead to significantly different results. Data collection took almost 12 to 20 times less time with the UAV. Additionally, the UAV provides documentation through an orthomosaic.
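Photogrammetric pile volumes of the kind described above are typically obtained by differencing a surface model against a base surface; a minimal grid-based sketch, assuming a flat base plane (real piles require a fitted base surface):

```python
def pile_volume(dsm, base_elevation_m, cell_area_m2):
    """Cut volume above a flat base plane from a gridded surface model:
    V = sum(max(z - z0, 0)) * cell_area. A simplified sketch of what
    photogrammetry packages compute, not PhotoScan's actual algorithm."""
    return sum(max(z - base_elevation_m, 0.0)
               for row in dsm for z in row) * cell_area_m2

# 2x2 grid of elevations (m) on 1 m cells, base plane at 100 m
dsm = [[101.0, 102.0],
       [101.5, 100.0]]
v = pile_volume(dsm, 100.0, 1.0)  # -> 4.5 m^3
```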
Shen, Yan; Lou, Shuqin; Wang, Xin
2014-03-20
The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461
Technical note: tree truthing: how accurate are substrate estimates in primate field studies?
Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J
2012-04-01
Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we aim to test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods that have been used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (Mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (Mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates with a two-meter reference placed by the tree varied by 3-11 meters (Mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by a primate individual. PMID:22371099
Moore, James; Hays, David; Quinn, John; Johnson, Robert; Durham, Lisa
2013-07-01
As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)
NASA Astrophysics Data System (ADS)
Ivey, Christopher B.; Moin, Parviz
2015-11-01
This paper presents a framework for extending the height-function technique for the calculation of interface normals and curvatures to unstructured non-convex polyhedral meshes with application to the piecewise-linear interface calculation volume-of-fluid method. The methodology is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh, requiring a set of data structures and algorithms for non-convex polyhedral operations: truncation of a polyhedron by a plane, intersection of two polyhedra, joining of two convex polyhedra, volume enforcement of a polyhedron by a plane, and volume fraction initialization by a signed-distance function. By leveraging these geometric tools, a geometric interpolation strategy for embedding structured height-function stencils in unstructured meshes is developed. The embedded height-function technique is tested on surfaces with known interface normals and curvatures, namely cylinder, sphere, and ellipsoid. Tests are performed on the median duals of a uniform Cartesian mesh, a wedge mesh, and a tetrahedral mesh, and comparisons are made with conventional methods. Across the tests, the embedded height-function technique outperforms contemporary methods and its accuracy approaches the accuracy that the traditional height-function technique exemplifies on uniform Cartesian meshes.
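On a structured grid, the height-function curvature that the paper embeds into unstructured meshes reduces to finite differences of column-summed volume fractions; a minimal 2D sketch of that core formula (the paper's unstructured embedding is not reproduced):

```python
def height_function_curvature(h_left, h_mid, h_right, dx):
    """2D height-function curvature kappa = h'' / (1 + h'^2)^(3/2),
    where h is the interface height in three adjacent columns of
    volume fraction integrated along one direction."""
    hp = (h_right - h_left) / (2.0 * dx)              # first derivative h'
    hpp = (h_right - 2.0 * h_mid + h_left) / dx ** 2  # second derivative h''
    return hpp / (1.0 + hp * hp) ** 1.5

# A flat interface has zero curvature regardless of grid spacing
kappa_flat = height_function_curvature(1.0, 1.0, 1.0, 0.5)  # -> 0.0
```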
A quantitative method for estimation of volume changes in arachnoid foveae with age.
Duray, Stephen M; Martel, Stacie S
2006-03-01
Age-related changes of arachnoid foveae have been described, but objective, quantitative analyses are lacking. A new quantitative method is presented for estimation of change in total volume of arachnoid foveae with age. The pilot sample consisted of nine skulls from the Palmer Anatomy Laboratory. Arachnoid foveae were filled with sand, which was extracted using a vacuum pump. Mass was determined with an analytical balance and converted to volume. A reliability analysis was performed using intraclass correlation coefficients. The method was found to be highly reliable (intraobserver ICC = 0.9935, interobserver ICC = 0.9878). The relationship between total volume and age was then examined in a sample of 63 males of accurately known age from the Hamann-Todd collection. Linear regression analysis revealed no statistically significant relationship between total volume and age, or foveae frequency and age (alpha = 0.05). Development of arachnoid foveae may be influenced by health factors, which could limit its usefulness in aging. PMID:16566755
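The mass-to-volume conversion at the heart of the sand-filling method above is simply V = m/ρ; the bulk density value below is a typical dry-sand figure assumed for illustration, not the study's calibrated value:

```python
def sand_volume_cm3(mass_g, bulk_density_g_cm3=1.6):
    """Convert the mass of sand extracted from the arachnoid foveae to a
    volume via V = m / rho. The default 1.6 g/cm^3 bulk density is an
    assumed typical value; a real study would calibrate its own sand."""
    return mass_g / bulk_density_g_cm3

v = sand_volume_cm3(0.8)  # -> 0.5 cm^3
```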
Combining high-resolution satellite images and altimetry to estimate the volume of small lakes
NASA Astrophysics Data System (ADS)
Baup, F.; Frappart, F.; Maubant, J.
2013-12-01
This study presents an approach to determine the volume of water in small lakes (<100 ha) by combining satellite altimetry data and high-resolution (HR) images. The lake being studied is located in the south-west of France and is only used for agricultural irrigation purposes. The altimetry satellite data are provided by the RA-2 sensor on board Envisat, and the high-resolution images (<10 m) are obtained from optical (Formosat-2) and synthetic aperture radar (SAR) (TerraSAR-X and Radarsat-2) satellites. The altimetry data (obtained every 35 days) and the HR images (45) have been available since 2003 and 2010, respectively. In situ data (for the water levels and volumes) going back to 2003 have been provided by the manager of the lake. Three independent approaches are developed to estimate the lake volume and its temporal variability. The first two approaches are empirical and use synchronous ground measurements of the water volume and the satellite data. The results demonstrate that altimetry and imagery can be effectively and accurately used to monitor the temporal variations of the lake (R^2 = 0.97 and RMSE = 5.2% for altimetry; R^2 = 0.90 and RMSE = 7.4% for imagery). The third method combines altimetry (to measure the lake level) and satellite images (of the lake surface) to estimate the volume changes of the lake and produces the best results (R^2 = 0.99) of the three methods, demonstrating the potential of future Sentinel and SWOT missions to monitor small lakes and reservoirs for agricultural and irrigation applications.
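Combining an altimetric level with an image-derived surface area is commonly done with the trapezoidal rule between two observed states; a hedged sketch of that standard combination (the study's exact formula is not given in the abstract):

```python
def volume_change(h1_m, a1_m2, h2_m, a2_m2):
    """Volume variation between two (water level, surface area) states:
    dV = (A1 + A2) / 2 * (h2 - h1), the trapezoidal approximation of
    integrating area over level. A generic hypsometric sketch, not the
    paper's published method."""
    return 0.5 * (a1_m2 + a2_m2) * (h2_m - h1_m)

# Level rises 1.5 m while the lake surface grows from 40 ha to 50 ha
dv = volume_change(230.0, 4.0e5, 231.5, 5.0e5)  # -> 675000.0 m^3
```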
Estimation of Local Bone Loads for the Volume of Interest.
Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun
2016-07-01
Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational costs of the latter. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models severely limits a reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOIs in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude exhibits the greatest absolute changes in a SED distribution due to its own greatest deviation, whereas angular deviation perpendicular to a HCF provides the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations. PMID:27109554
Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2016-06-01
The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize the hydraulic actuators, the performances of the linear/linearization based techniques for the state estimation are strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent-Riccati-Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted and comparisons with the largely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE based technique for applications characterized by not negligible nonlinearities such as dead zone and frictions.
Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.
NASA Astrophysics Data System (ADS)
Crouch, Stephen; Kaylor, Brant M.; Barber, Zeb W.; Reibel, Randy R.
2015-09-01
Currently large volume, high accuracy three-dimensional (3D) metrology is dominated by laser trackers, which typically utilize a laser scanner and cooperative reflector to estimate points on a given surface. The dependency upon the placement of cooperative targets dramatically inhibits the speed at which metrology can be conducted. To increase speed, laser scanners or structured illumination systems can be used directly on the surface of interest. Both approaches are restricted in their axial and lateral resolution at longer stand-off distances due to the diffraction limit of the optics used. Holographic aperture ladar (HAL) and synthetic aperture ladar (SAL) can enhance the lateral resolution of an imaging system by synthesizing much larger apertures by digitally combining measurements from multiple smaller apertures. Both of these approaches only produce two-dimensional imagery and are therefore not suitable for large volume 3D metrology. We combined the SAL and HAL approaches to create a swept frequency digital holographic 3D imaging system that provides rapid measurement speed for surface coverage with unprecedented axial and lateral resolution at longer standoff ranges. The technique yields a "data cube" of Fourier domain data, which can be processed with a 3D Fourier transform to reveal a 3D estimate of the surface. In this paper, we provide the theoretical background for the technique and show experimental results based on an ultra-wideband frequency modulated continuous wave (FMCW) chirped heterodyne ranging system showing ~100 micron lateral and axial precisions at >2 m standoff distances.
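The processing step described above, turning the Fourier-domain "data cube" into a 3D surface estimate with a single 3D Fourier transform, can be sketched on synthetic stand-in data (no real FMCW measurements are modeled here):

```python
import numpy as np

# A Fourier-domain data cube: two aperture dimensions plus the swept
# optical frequency dimension. One nonzero spatial-frequency sample
# stands in for a single scatterer's fringe pattern.
cube = np.zeros((8, 8, 8), dtype=complex)
cube[1, 2, 3] = 1.0

# A single 3D FFT maps the cube into cross-range x cross-range x range
# space; the magnitude is the 3D surface/reflectivity estimate.
image = np.fft.fftn(cube)
profile = np.abs(image)
```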
Plethysmographic estimation of thoracic gas volume in apneic mice.
Jánosi, Tibor Z; Adamicza, Agnes; Zosky, Graeme R; Asztalos, Tibor; Sly, Peter D; Hantos, Zoltán
2006-08-01
Electrical stimulation of intercostal muscles was employed to measure thoracic gas volume (TGV) during airway occlusion in the absence of respiratory effort at different levels of lung inflation. In 15 tracheostomized and mechanically ventilated CBA/Ca mice, the value of TGV obtained from the spontaneous breathing effort available in the early phase of the experiments (TGVsp) was compared with those resulting from muscle stimulation (TGVst) at transrespiratory pressures of 0, 10, and 20 cmH2O. A very strong correlation (r2 = 0.97) was found, although with a systematically (approximately 16%) higher estimation of TGVst relative to TGVsp, attributable to the different durations of the stimulated (approximately 50 ms) and spontaneous (approximately 200 ms) contractions. Measurements of TGVst before and after injections of 0.2, 0.4, and 0.6 ml of nitrogen into the lungs in six mice resulted in good agreement between the change in TGVst and the injected volume (r2 = 0.98). In four mice, TGVsp and TGVst were compared at end expiration with air or a helium-oxygen mixture to confirm the validity of isothermal compression in the alveolar gas. The TGVst values measured at zero transrespiratory pressure in all CBA/Ca mice [0.29 +/- 0.05 (SD) ml] and in C57BL/6 (N = 6; 0.34 +/- 0.08 ml) and BALB/c (N = 6; 0.28 +/- 0.06 ml) mice were in agreement with functional residual capacity values from previous studies in which different techniques were used. This method is particularly useful when TGV is to be determined in the absence of breathing activity, when it must be known at any level of lung inflation, or under non-steady-state conditions, such as during pharmaceutical interventions. PMID:16645196
Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2006-01-01
Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.
Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle
NASA Technical Reports Server (NTRS)
VanEepoel, John; Thienel, Julie; Sanner, Robert M.
2006-01-01
In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
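The core idea here - a histogram whose bins live in a hash table, so that memory scales with the number of occupied bins rather than with the full d-dimensional grid - can be sketched in a few lines of Python. The class name and API below are illustrative only; the paper's BASH tables are a C++ implementation:

```python
from collections import defaultdict

class SparseHistogramDensity:
    """Histogram density estimator whose bins are stored in a hash table,
    so memory grows with the number of OCCUPIED bins, not with the full
    d-dimensional grid (the idea behind the paper's BASH tables)."""

    def __init__(self, bin_width):
        self.bin_width = bin_width
        self.counts = defaultdict(int)  # bin index tuple -> sample count
        self.n = 0
        self.dim = None

    def fit(self, points):
        for p in points:
            self.dim = len(p)
            key = tuple(int(x // self.bin_width) for x in p)
            self.counts[key] += 1
            self.n += 1
        return self

    def density(self, p):
        """Estimated probability density at point p (count / (n * bin volume))."""
        key = tuple(int(x // self.bin_width) for x in p)
        bin_volume = self.bin_width ** self.dim
        return self.counts[key] / (self.n * bin_volume)
```

Only occupied bins consume memory, so the footprint tracks the intrinsic sparseness of the data instead of growing as bin_count**d, which is what makes the scheme viable in high dimensions.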
Spectral estimation from laser scanner data for accurate color rendering of objects
NASA Astrophysics Data System (ADS)
Baribeau, Rejean
2002-06-01
Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are evaluated using the CIE94 color differences for several reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with an average accuracy of ΔE94 = 2.3 when the optimal wavelengths of 455 nm, 540 nm, and 610 nm are used.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing.
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. Discussing the spatio-temporal pattern of the replacement of archaic humans, including Neanderthals in Europe, by modern humans requires a massive body of data covering a wide area. Several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make the interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods for determining highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of the "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and applicable to chronological research alongside recent ages. Here, we introduce the methodological framework and archaeological applications.
Zhang, Peng; Zhou, Ning; Abdollahi, Ali
2013-09-10
A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
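As a rough illustration of why a Least Median of Squares estimator tolerates the gross outliers that destabilize least squares, here is a generic random-sampling LMS line fit in Python. This is a sketch of the estimator itself, not of the authors' egomotion pipeline; the function name and parameters are hypothetical:

```python
import random

def lms_line_fit(points, n_trials=500, seed=0):
    """Least Median of Squares fit of a line y = a*x + b.
    Repeatedly fits a candidate line to a random minimal sample (two
    points) and keeps the candidate that minimizes the MEDIAN squared
    residual, which tolerates up to roughly half the data being outliers."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical candidate; skip this minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = residuals[len(residuals) // 2]
        if med < best_med:
            best_med, best = med, (a, b)
    return best
```

Because the median of the squared residuals ignores the worst half of the data, a minority of wildly wrong flow vectors cannot pull the fit away from the dominant motion, which is the property the improved tracker exploits.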
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Accurate Angle Estimator for High-Frame-Rate 2-D Vector Flow Imaging.
Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2016-06-01
This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360° uniform behavior on the angle estimation is observed with a median angle bias of 1.01° and a median angle SD of 1.8°. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5° with SDs around 1°. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles. PMID:27093598
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980
Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle
NASA Astrophysics Data System (ADS)
Timinis, Constantinos; Pitris, Costas
2016-03-01
The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained, and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
Combining high-resolution satellite images and altimetry to estimate the volume of small lakes
NASA Astrophysics Data System (ADS)
Baup, F.; Frappart, F.; Maubant, J.
2014-05-01
This study presents an approach to determining the volume of water in small lakes (<100 ha) by combining satellite altimetry data and high-resolution (HR) images. In spite of the strong interest in monitoring surface water resources on a small scale using radar altimetry and satellite imagery, no information is available about the limits of these remote-sensing technologies for small lakes mainly used for irrigation purposes. The lake being studied is located in the south-west of France and is only used for agricultural irrigation purposes. The altimetry data are provided by the RA-2 sensor onboard Envisat, and the high-resolution images (<10 m) are obtained from optical (Formosat-2) and synthetic aperture radar (SAR) (Terrasar-X and Radarsat-2) satellites. The altimetry data (obtained every 35 days) and the HR images (77 in total) have been available since 2003 and 2010, respectively. In situ data (for the water levels and volumes) going back to 2003 have been provided by the manager of the lake. Three independent approaches are developed to estimate the lake volume and its temporal variability. The first two approaches (HRBV and ABV) are empirical and use synchronous ground measurements of the water volume and the satellite data. The results demonstrate that altimetry and imagery can be effectively and accurately used to monitor the temporal variations of the lake (R2ABV = 0.98, RMSEABV = 5%, R2HRBV = 0.90, and RMSEHRBV = 7.4%), assuming a time-varying triangular shape for the shore slope of the lake (this form is well adapted since it implies a difference of less than 2% between the theoretical volume of the lake and the one estimated from bathymetry). The third method (AHRBVC) combines altimetry (to measure the lake level) and satellite images (of the lake surface) to estimate the volume changes of the lake and produces the best results (R2AHRBVC = 0.98) of the three methods, demonstrating the potential of future Sentinel and SWOT missions to
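The level-area-volume bookkeeping behind such combined approaches can be sketched generically: integrate the imaged surface area over the altimetric level change. The function below uses a trapezoidal level-area relation rather than the paper's time-varying triangular shore model, so it should be read as an illustrative variant:

```python
def volume_change(levels, areas):
    """Cumulative lake volume change from coincident water levels
    (altimetry) and surface areas (imagery), integrating area over
    level with the trapezoidal rule. A generic level-area-volume
    sketch, not the paper's time-varying triangular shore model."""
    if len(levels) != len(areas):
        raise ValueError("need one area measurement per level measurement")
    dv = [0.0]  # volume change relative to the first observation
    for i in range(1, len(levels)):
        dh = levels[i] - levels[i - 1]
        dv.append(dv[-1] + 0.5 * (areas[i] + areas[i - 1]) * dh)
    return dv
```

With levels in metres and areas in square metres, the returned series is in cubic metres relative to the first observation; any systematic bias in either input propagates directly into the volume series, which is why the paper validates against in situ data.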
Performance benchmarking of liver CT image segmentation and volume estimation
NASA Astrophysics Data System (ADS)
Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang
2008-03-01
In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used for liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, two kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the number of CT scans to about 300 sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
Techniques for accurate estimation of net discharge in a tidal channel
Simpson, Michael R.; Bland, Roger
1999-01-01
An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
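The index-velocity calibration at the heart of this procedure is, in essence, an ordinary least-squares fit of concurrently measured mean channel velocity against the meter's index velocity, after which discharge follows from the channel area. The sketch below illustrates that step only; the function names are hypothetical, and the study's comprehensive method additionally handles stage-dependent area and error analysis:

```python
def calibrate_index_velocity(index_v, measured_v):
    """Least-squares fit v_mean ~= a * v_index + b, relating the
    ultrasonic meter's index velocity to concurrently measured mean
    channel velocity (e.g. from acoustic Doppler measurements).
    This calibration is the step the study identifies as the
    dominant error source in net-discharge estimation."""
    n = len(index_v)
    sx = sum(index_v)
    sy = sum(measured_v)
    sxx = sum(x * x for x in index_v)
    sxy = sum(x * y for x, y in zip(index_v, measured_v))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def discharge(area, index_velocity, a, b):
    """Discharge (m^3/s) from channel cross-section area (m^2) and
    the calibrated mean velocity (m/s)."""
    return area * (a * index_velocity + b)
```

In a tidal channel both velocities change sign over the cycle, so the calibration must hold through zero; small slope or intercept errors then accumulate directly into the net (tidally averaged) discharge, which is why calibration error dominated in the study.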
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1991-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
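The flavor of the result can be illustrated with a Weibayes-style zero-failure bound: for a fixed confidence level, the reliability lower bound at a mission time is a function of the unknown shape parameter Beta, and one can simply scan Beta for its minimum. This is a sketch under that standard zero-failure assumption, not the authors' proof of the existence and uniqueness of the minimum:

```python
import math

def weibull_reliability_lower_bound(test_times, mission_time, conf=0.90,
                                    betas=None):
    """Reliability lower bound at `mission_time` from zero-failure
    Weibull test data, minimized over a grid of shape parameters Beta.
    For each Beta, the zero-failure (Weibayes-style) bound on the scale
    parameter gives R_lower = exp(-k * t_m**Beta / sum(t_i**Beta)),
    with k = -ln(1 - conf); taking the worst case over Beta yields a
    bound that needs no estimate of Beta."""
    if betas is None:
        betas = [0.5 + 0.05 * i for i in range(91)]  # Beta grid 0.5 .. 5.0
    k = -math.log(1.0 - conf)
    worst, worst_beta = 1.0, None
    for beta in betas:
        total = sum(t ** beta for t in test_times)  # transformed exposure
        r_lower = math.exp(-k * mission_time ** beta / total)
        if r_lower < worst:
            worst, worst_beta = r_lower, beta
    return worst, worst_beta
```

For mission times shorter than the test times the minimum sits at the low end of the Beta grid; for other data an interior minimum can appear, and the paper's contribution is showing that under general conditions this minimum is a unique global one, so the bound is well defined without assuming Beta.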
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact on yields of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - (ii) for early warning systems, and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is thus a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not direct measurements but rather estimations of precipitation. Used as input for crop models, their quality determines the performance of the simulated yield; hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
Plant DNA Barcodes Can Accurately Estimate Species Richness in Poorly Known Floras
Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew
2011-01-01
Background Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Methodology/Principal Findings Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies, but species richness estimation accuracy proved higher, up to 89%. All combinations that included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation observed in some angiosperm families, which occurs as an inversion that obscures the monophyly of species. Conclusions/Significance We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways. PMID:22096501
Lupaşcu, Carmen Alina; Tegolo, Domenico; Trucco, Emanuele
2013-12-01
We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy. PMID:24001930
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
NASA Astrophysics Data System (ADS)
Rosenthal, Yair; Lohmann, George P.
2002-09-01
Paired δ18O and Mg/Ca measurements on the same foraminiferal shells offer the ability to independently estimate sea surface temperature (SST) changes and assess their temporal relationship to the growth and decay of continental ice sheets. The accuracy of this method is confounded, however, by the absence of a quantitative method to correct Mg/Ca records for alteration by dissolution. Here we describe dissolution-corrected calibrations for Mg/Ca-paleothermometry in which the preexponent constant is a function of size-normalized shell weight: (1) for G. ruber (212-300 μm), (Mg/Ca)ruber = (0.025 wt + 0.11) e^(0.095T), and (2) for G. sacculifer (355-425 μm), (Mg/Ca)sacc = (0.0032 wt + 0.181) e^(0.095T). The new calibrations improve the accuracy of SST estimates and are globally applicable. With this correction, eastern equatorial Atlantic SST during the Last Glacial Maximum is estimated to be 2.9° ± 0.4°C colder than today.
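The calibrations above can be inverted to recover temperature from a measured Mg/Ca ratio and the size-normalized shell weight. A minimal sketch of that inversion (function names and the round-trip test values are illustrative, not from the paper):

```python
import math

# Dissolution-corrected Mg/Ca paleothermometry, using the calibrations quoted
# in the abstract: Mg/Ca = (a*wt + b) * exp(0.095*T).  Inverting for T:
#   T = ln( (Mg/Ca) / (a*wt + b) ) / 0.095

def sst_ruber(mg_ca, wt):
    """Invert (Mg/Ca) = (0.025*wt + 0.11) * e^(0.095*T) for G. ruber (212-300 um)."""
    return math.log(mg_ca / (0.025 * wt + 0.11)) / 0.095

def sst_sacculifer(mg_ca, wt):
    """Invert (Mg/Ca) = (0.0032*wt + 0.181) * e^(0.095*T) for G. sacculifer (355-425 um)."""
    return math.log(mg_ca / (0.0032 * wt + 0.181)) / 0.095

# Round-trip check with hypothetical values: wt = 12 (size-normalized weight),
# T = 25 C -> forward model -> inverted temperature recovers 25.
wt, t = 12.0, 25.0
mg_ca = (0.025 * wt + 0.11) * math.exp(0.095 * t)
print(round(sst_ruber(mg_ca, wt), 6))  # recovers 25.0
```

The same inversion applies to either species; only the weight-dependent preexponent changes.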
Estimating Marine Aerosol Particle Volume and Number from Maritime Aerosol Network Data
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Smirnov, A.; Hsu, N. C.; Munchak, L. A.; Holben, B. N.
2012-01-01
As well as spectral aerosol optical depth (AOD), aerosol composition and concentration (number, volume, or mass) are of interest for a variety of applications. However, remote sensing of these quantities is more difficult than for AOD, as it is more sensitive to assumptions relating to aerosol composition. This study uses spectral AOD measured on Maritime Aerosol Network (MAN) cruises, with the additional constraint of a microphysical model for unpolluted maritime aerosol based on analysis of Aerosol Robotic Network (AERONET) inversions, to estimate these quantities over open ocean. When the MAN data are subset to those likely to be composed of maritime aerosol, the number and volume concentrations obtained are physically reasonable. Attempts to estimate surface concentration from columnar abundance, however, are shown to be limited by uncertainties in vertical distribution. Columnar AOD at 550 nm and aerosol number for unpolluted maritime cases are also compared with Moderate Resolution Imaging Spectroradiometer (MODIS) data, for both the present Collection 5.1 and the forthcoming Collection 6. MODIS provides a best-fitting retrieval solution, as well as the average for several different solutions, with different aerosol microphysical models. The average-solution MODIS dataset agrees more closely with MAN than the best-solution dataset. Terra tends to retrieve lower aerosol number than MAN, and Aqua higher, linked with differences in the aerosol models commonly chosen. Collection 6 AOD is likely to agree more closely with MAN over open ocean than Collection 5.1. In situations where spectral AOD is measured accurately, and aerosol microphysical properties are reasonably well-constrained, estimates of aerosol number and volume using MAN or similar data would provide for a greater variety of potential comparisons with aerosol properties derived from satellite or chemistry transport model data.
Takao, Seishin; Tadano, Shigeru; Taguchi, Hiroshi; Yasuda, Koichi; Onimaru, Rikiya; Ishikawa, Masayori; Bengua, Gerard; Suzuki, Ryusuke; Shirato, Hiroki
2011-11-01
Purpose: To establish a method for the accurate acquisition and analysis of the variations in tumor volume, location, and three-dimensional (3D) shape of tumors during radiotherapy in the era of image-guided radiotherapy. Methods and Materials: Finite element models of lymph nodes were developed based on computed tomography (CT) images taken before the start of treatment and every week during the treatment period. A surface geometry map with a volumetric scale was adopted and used for the analysis. Six metastatic cervical lymph nodes, 3.5 to 55.1 cm³ before treatment, in 6 patients with head and neck carcinomas were analyzed in this study. Three fiducial markers implanted in mouthpieces were used for the fusion of CT images. Changes in the location of the lymph nodes were measured on the basis of these fiducial markers. Results: The surface geometry maps showed convex regions in red and concave regions in blue to ensure that the characteristics of the 3D tumor geometries are simply understood visually. After the irradiation of 66 to 70 Gy in 2 Gy daily doses, the patterns of the colors had not changed significantly, and the maps before and during treatment were strongly correlated (average correlation coefficient was 0.808), suggesting that the tumors shrank uniformly, maintaining the original characteristics of the shapes in all 6 patients. The movement of the gravitational center of the lymph nodes during the treatment period was everywhere less than ±5 mm except in 1 patient, in whom the change reached nearly 10 mm. Conclusions: The surface geometry map was useful for an accurate evaluation of the changes in volume and 3D shapes of metastatic lymph nodes. The fusion of the initial and follow-up CT images based on fiducial markers enabled an analysis of changes in the location of the targets. Metastatic cervical lymph nodes in patients were suggested to decrease in size without significant changes in the 3D shape during radiotherapy. The movements of the
Higher Accurate Estimation of Axial and Bending Stiffnesses of Plates Clamped by Bolts
NASA Astrophysics Data System (ADS)
Naruse, Tomohiro; Shibutani, Yoji
Equivalent stiffness of clamped plates should be prescribed not only to evaluate the strength of bolted joints by the scheme of the “joint diagram” but also to perform structural analyses of practical structures with many bolted joints. We estimated the axial stiffness and bending stiffness of clamped plates by using Finite Element (FE) analyses while taking the contact condition on bearing surfaces and between the plates into account. The FE models were constructed for bolted joints tightened with M8, 10, 12 and 16 bolts and plate thicknesses of 3.2, 4.5, 6.0 and 9.0 mm, and the axial and bending compliances were precisely evaluated. These compliances of clamped plates were compared with those from the VDI 2230 (2003) code, in which an equivalent conical compressive stress field in the plate is assumed. The code gives axial stiffness larger by 11% and bending stiffness larger by 22%, and it cannot be applied to clamped plates of different thicknesses. Thus the code can give a lower bolt stress (an unsafe estimation). We modified the vertical angle tangent, tanφ, of the equivalent cone by adding a term in the logarithm of the thickness ratio t1/t2 and by fitting to the analysis results. The modified tanφ estimates the axial compliance with errors from -1.5% to 6.8% and the bending compliance with errors from -6.5% to 10%. Furthermore, the modified tanφ can take the thickness difference into consideration.
Accurate estimation of airborne ultrasonic time-of-flight for overlapping echoes.
Sarabia, Esther G; Llata, Jose R; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P
2013-01-01
In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
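The correlation baseline mentioned above (against which the proposed method is compared) can be illustrated with a toy simulation: the received signal is cross-correlated with the emitted pulse, and the correlation peak gives the time of flight. All signal parameters below are assumptions for illustration, not values from the paper:

```python
import math

# Classical cross-correlation TOF estimation for a single, noise-free echo.
fs = 400_000   # sampling rate, Hz (assumed)
f0 = 40_000    # transducer resonance, Hz (typical for air ultrasound)
cycles = 8     # excitation cycles (assumed)

# Emitted pulse: a Hann-windowed sinusoid
n_pulse = int(cycles * fs / f0)
pulse = [math.sin(2 * math.pi * f0 * n / fs) *
         math.sin(math.pi * n / n_pulse) ** 2 for n in range(n_pulse)]

# Received signal: an attenuated echo delayed by a known number of samples
true_delay = 300
signal = [0.0] * 1024
for n, p in enumerate(pulse):
    signal[true_delay + n] += 0.4 * p

def xcorr_peak(sig, ref):
    """Return the lag (in samples) at which sig best matches ref."""
    best, best_lag = float("-inf"), 0
    for lag in range(len(sig) - len(ref)):
        c = sum(sig[lag + k] * ref[k] for k in range(len(ref)))
        if c > best:
            best, best_lag = c, lag
    return best_lag

est = xcorr_peak(signal, pulse)
print(est, est / fs * 1e6)  # delay in samples and in microseconds
```

With overlapping echoes, the correlation peaks merge, which is exactly the failure mode the model-based method in the abstract is designed to handle.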
Luo, Xiongbiao
2014-06-15
electromagnetically navigated bronchoscopy system was constructed with accurate registration of an electromagnetic tracker and the CT volume on the basis of an improved marker-free registration approach that uses the bronchial centerlines and bronchoscope tip center information. The fiducial and target registration errors of our electromagnetic navigation system were about 6.6 and 4.5 mm in dynamic bronchial phantom validation.
An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks
Tarrío, Paula; Bernardos, Ana M.; Casar, José R.
2012-01-01
In line with recent research efforts to devise energy-saving protocols and algorithms and power-sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the received signal strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios, providing energy savings with respect to other transmission strategies while also guaranteeing a given accuracy in the distance estimates, which in turn guarantees a desired accuracy in the localization result. PMID:23202218
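The accuracy side of this energy/accuracy trade-off can be sketched with a textbook model: RSS-based ranging under log-distance path loss with log-normal shadowing, where averaging K packets tightens the distance estimate at the cost of K transmissions. The model and every constant below are assumptions, not the authors' calibration:

```python
import math
import random

# Log-distance path-loss model with log-normal shadowing (hypothetical values).
P0, n, d0 = -40.0, 3.0, 1.0   # dBm at d0, path-loss exponent, reference distance (m)
sigma = 4.0                    # dB, shadowing standard deviation
true_d = 10.0                  # m, actual sensor separation

def rss_sample(d):
    """One received-signal-strength reading under log-normal shadowing."""
    return P0 - 10 * n * math.log10(d / d0) + random.gauss(0, sigma)

def estimate_distance(d, K):
    """Average K RSS readings, then invert the log-distance path-loss model."""
    mean_rss = sum(rss_sample(d) for _ in range(K)) / K
    return d0 * 10 ** ((P0 - mean_rss) / (10 * n))

random.seed(1)
for K in (1, 10, 100):
    errs = [abs(estimate_distance(true_d, K) - true_d) for _ in range(200)]
    print(K, round(sum(errs) / len(errs), 2))  # mean absolute error shrinks with K
```

The standard deviation of the averaged RSS falls as sigma/sqrt(K), so the number of packets (and hence the transmission energy) needed to hit a target ranging accuracy can be read off directly, which is the kind of joint optimization the abstract describes.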
Cumulative Ocean Volume Estimates of the Solar System
NASA Astrophysics Data System (ADS)
Frank, E. A.; Mojzsis, S. J.
2010-12-01
Although there has been much consideration of habitability in silicate planets and icy bodies, this information has never been quantitatively gathered into a single approximation encompassing our solar system from star to cometary halo. Here we present an estimate for the total habitable volume of the solar system by constraining our definition of habitable environments to those to which terrestrial microbial extremophiles could theoretically be transplanted and yet survive. The documented terrestrial extremophile inventory stretches environmental constraints for habitable temperature and pH space to T ~ -15°C to 121°C and pH ~ 0 to 13.5, salinities >35% NaCl, and gamma radiation doses of 10,000 to 11,000 grays [1]. Pressure is likely not a limiting factor to life [2]. We applied these criteria in our analysis of the geophysical habitable potential of the icy satellites and small icy bodies. Given the broad spectrum of environmental tolerance, we are optimistic that our pessimistic estimates are conservative. Beyond the reaches of our inner solar system's conventional habitable zone (Earth, Mars and perhaps Venus) is Ceres, a dwarf planet in the habitable zone that could possess a significant liquid water ocean if that water contains anti-freezing species [3]. Yet further out, Europa is a small icy satellite that has generated much excitement for astrobiological potential due to its putative subsurface liquid water ocean. It is widely promulgated that the icy moons Enceladus, Triton, Callisto, Ganymede, and Titan have likewise sustained liquid water oceans. If oceans in Europa, Enceladus, and Triton have direct contact with a rocky mantle hot enough to melt, hydrothermal vents could provide an energy source for chemotrophic organisms. Although oceans in the remaining icy satellites may be wedged between two layers of ice, their potential for life cannot be precluded. Relative to the Jovian style of icy satellites, trans-neptunian objects (TNOs) - icy bodies
[Research on maize multispectral image accurate segmentation and chlorophyll index estimation].
Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e
2015-01-01
In order to rapidly acquire maize growth information in the field, a non-destructive method of maize chlorophyll content index measurement was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China, and the crop was Zheng-dan 958 planted in an experiment field of about 1 000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically to the ground with a vertical distance of 2 m and an angular field of 50°. The SPAD index of each sample was measured synchronously to provide the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter algorithm, the NIR maize image was selected for segmenting the maize leaves from the background, because the gray histogram showed a large difference between plant and soil background. The NIR image segmentation algorithm followed the steps of preliminary and accurate segmentation: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were compared, and the latter proved better for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. Dilation and erosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were extracted based on the segmented visible and NIR images. The average gray
The challenges of accurately estimating time of long bone injury in children.
Pickett, Tracy A
2015-07-01
The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum. PMID:26048508
NASA Astrophysics Data System (ADS)
Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick
2009-06-01
When using optical measurements of the sound field inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also their confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of the confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information - higher-order time correlations than MSMs capture - that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
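For contrast with the non-Markovian estimators the abstract advocates, the baseline MSM mean first-passage time can be written down explicitly: with lag time tau and row-stochastic transition matrix T, the MFPTs to a target state satisfy m_i = tau + sum_j T_ij m_j with m_target = 0. A sketch on a hypothetical 3-state toy chain (none of these numbers come from the protein data sets):

```python
# Toy MSM over states [U (unfolded), I (intermediate), F (folded)].
tau = 1.0  # lag time, arbitrary units

# Row-stochastic transition matrix (hypothetical probabilities)
T = [
    [0.8, 0.15, 0.05],  # from U
    [0.1, 0.80, 0.10],  # from I
    [0.0, 0.00, 1.00],  # F treated as absorbing target
]
target = 2

# Solve m_i = tau + sum_j T_ij * m_j (with m_target = 0) by fixed-point
# iteration; this converges because the chain is absorbing into F.
states = [i for i in range(len(T)) if i != target]
m = {i: 0.0 for i in states}
for _ in range(2000):
    m = {i: tau + sum(T[i][j] * m.get(j, 0.0) for j in range(len(T)))
         for i in states}

print(round(m[0], 2))  # MFPT from U to F -> 14.0 for this toy chain
```

The point of the paper is that the T estimated from real trajectories is only approximately Markovian at practical lag times, so this clean linear-algebra answer can be badly biased; the NM estimators correct it using history information.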
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
ProViDE: A software tool for accurate estimation of viral diversity in metagenomic samples
Ghosh, Tarini Shankar; Mohammed, Monzoorul Haque; Komanduri, Dinakar; Mande, Sharmila Shekhar
2011-01-01
Given the absence of universal marker genes in the viral kingdom, researchers typically use BLAST (with stringent E-values) for taxonomic classification of viral metagenomic sequences. Since the majority of metagenomic sequences originate from hitherto unknown viral groups, using stringent E-values results in most sequences remaining unclassified. Furthermore, using less stringent E-values results in a high number of incorrect taxonomic assignments. The SOrt-ITEMS algorithm provides an approach to address the above issues. Based on alignment parameters, SOrt-ITEMS follows an elaborate work-flow for assigning reads originating from hitherto unknown archaeal/bacterial genomes. In SOrt-ITEMS, alignment parameter thresholds were generated by observing patterns of sequence divergence within and across various taxonomic groups belonging to the bacterial and archaeal kingdoms. However, many taxonomic groups within the viral kingdom lack a typical Linnean-like taxonomic hierarchy. In this paper, we present ProViDE (Program for Viral Diversity Estimation), an algorithm that uses a customized set of alignment parameter thresholds, specifically suited for viral metagenomic sequences. These thresholds capture the pattern of sequence divergence and the non-uniform taxonomic hierarchy observed within/across various taxonomic groups of the viral kingdom. Validation results indicate that the percentage of 'correct' assignments by ProViDE is around 1.7 to 3 times higher than that by the widely used similarity-based method MEGAN. The misclassification rate of ProViDE is around 3 to 19% (as compared to 5 to 42% by MEGAN), indicating significantly better assignment accuracy. The ProViDE software and a supplementary file (containing the supplementary figures and tables referred to in this article) are available for download from http://metagenomics.atc.tcs.com/binning/ProViDE/ PMID:21544173
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the kind of coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with low computational complexity, owing to a maximum-likelihood procedure implemented to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
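The paper's full maximum-likelihood Gaussian fit is beyond a short sketch, but the core idea that a smooth point-spread-function model recovers sub-pixel coordinates from a pixel grid can be illustrated with the simpler intensity-weighted centroid (all numbers below are hypothetical, and this is a moment estimate, not the authors' ML procedure):

```python
import math

def gaussian_image(x0, y0, sigma=1.2, size=11, amp=1000.0):
    """Pixel values of a Gaussian spot centred at (x0, y0), in pixel units."""
    return [[amp * math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def centroid(img):
    """Intensity-weighted mean position: a simple sub-pixel estimator."""
    total = sum(sum(row) for row in img)
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    return cx, cy

img = gaussian_image(5.3, 4.7)   # hypothetical sub-pixel centre
cx, cy = centroid(img)
print(round(cx, 2), round(cy, 2))  # -> 5.3 4.7
```

A full subpixel Gaussian ML fit additionally handles noise, truncation and non-Gaussian wings, which is where the accuracy gains reported in the abstract come from.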
Figure of merit of diamond power devices based on accurately estimated impact ionization processes
NASA Astrophysics Data System (ADS)
Hiraiwa, Atsushi; Kawarada, Hiroshi
2013-07-01
Although a high breakdown voltage or field is considered a major advantage of diamond, there has been a large spread in the breakdown voltages or fields of diamond devices reported in the literature. Most of these apparently contradictory results did not correctly reflect material properties because of specific device designs, such as punch-through structures and insufficient edge termination. Once these data were removed, the remaining few results, including a record-high breakdown field of 20 MV/cm, were theoretically reproduced by exactly calculating ionization integrals based on the ionization coefficients that were obtained after compensating for possible errors in reported theoretical values. In this compensation, we developed a new method for extracting an ionization coefficient from an arbitrary relationship between breakdown voltage and doping density in the Chynoweth framework. The breakdown field of diamond was estimated to depend on the doping density more than in other materials, and accordingly must be compared at the same doping density. The figure of merit (FOM) of diamond devices, obtained using these breakdown data, was comparable to the FOMs of 4H-SiC and wurtzite GaN devices at room temperature, but was projected to be larger than the latter by more than one order of magnitude at higher temperatures of about 300 °C. Considering the relatively undeveloped state of diamond technology, there is room for further enhancement of the diamond FOM by improving breakdown voltage and mobility. Through these investigations, junction breakdown was found to be initiated by electrons or holes in a p⁻-type or n⁻-type drift layer, respectively. The breakdown voltages in the two types of drift layers differed from each other in a strict sense but were practically the same. Hence, we do not need to care about the conduction type of drift layers, but should rather exactly calculate the ionization integral without approximating ionization coefficients by a power
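The ionization-integral calculation at the heart of this analysis can be sketched for the simplest textbook case: a one-sided abrupt junction with a triangular field profile, equal electron/hole ionization coefficients following Chynoweth's law alpha(E) = a·exp(-b/E), and breakdown defined by the ionization integral reaching unity. This is a simplified illustration, not the paper's treatment, and the Chynoweth parameters below are hypothetical placeholders, not diamond values:

```python
import math

q = 1.602e-19          # elementary charge, C
eps = 5.7 * 8.854e-12  # permittivity, F/m (diamond eps_r ~ 5.7)
a, b = 1.9e8, 7.5e8    # 1/m and V/m: HYPOTHETICAL Chynoweth parameters
N = 1e22               # doping density, 1/m^3 (= 1e16 cm^-3)

def ionization_integral(Em, steps=2000):
    """Integrate alpha(E) along the triangular field E(x) = Em*(1 - x/W)."""
    W = eps * Em / (q * N)  # depletion width at peak field Em
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * W / steps
        E = Em * (1 - x / W)
        if E > 0:
            s += a * math.exp(-b / E) * W / steps
    return s

# Bisection on the peak field until the ionization integral reaches unity.
lo, hi = 1e7, 1e10
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if ionization_integral(mid) < 1.0:
        lo = mid
    else:
        hi = mid

Em = 0.5 * (lo + hi)
V_br = eps * Em ** 2 / (2 * q * N)  # breakdown voltage = area under E(x)
print(Em, V_br)  # breakdown peak field (V/m) and breakdown voltage (V)
```

Repeating the bisection over a range of N gives the breakdown-voltage vs. doping-density relationship; the paper's extraction method works this dependence in reverse, recovering alpha(E) from such a relationship.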
Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.
NASA Astrophysics Data System (ADS)
Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke
2013-04-01
temperature estimation using meteorological parameters.
References:
[1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402
[2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29
[3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646
[4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567
[5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
NASA Technical Reports Server (NTRS)
Ferguson, Connor R.; Lee, Stuart M. C.; Stenger, Michael B.; Platts, Steven H.; Laurie, Steven S.
2014-01-01
Orthostatic intolerance affects 60-80% of astronauts returning from long-duration missions, representing a significant risk to completing mission-critical tasks. While likely multifactorial, a reduction in stroke volume (SV) represents one factor contributing to orthostatic intolerance during stand and head-up tilt (HUT) tests. Current measures of SV during stand or HUT tests use Doppler ultrasound and require a trained operator and specialized equipment, restricting their use in the field. BeatScope (Finapres Medical Systems BV, The Netherlands) uses the Modelflow algorithm to estimate SV from continuous blood pressure waveforms in supine subjects; however, evidence supporting the use of Modelflow to estimate SV in subjects completing stand or HUT tests remains scarce. Furthermore, because the blood pressure device is held extended at heart level during HUT tests but allowed to rest at the side during stand tests, changes in the finger arterial pressure waveform resulting from arm position could alter Modelflow-estimated SV. The purpose of this project was to compare Doppler ultrasound and BeatScope estimates of SV to determine whether BeatScope can be used during stand or HUT tests. Finger photoplethysmography was used to acquire arterial pressure waveforms corrected for hydrostatic finger-to-heart height using the Finometer (FM) and Portapres (PP) arterial pressure devices in 10 subjects (5 men and 5 women) during a stand test, while simultaneous estimates of SV were collected using Doppler ultrasound. Measures were made after 5 minutes of supine rest and while subjects stood for 5 minutes. Next, SV estimates were reacquired while each arm was independently raised to heart level, a position similar to tilt testing. Supine SV estimates were not significantly different among the three devices (FM: 68+/-20, PP: 71+/-21, US: 73+/-21 ml/beat). Upon standing, the change in SV estimated by FM (-18+/-8 ml) was not different from PP (-21+/-12), but both were significantly
NASA Astrophysics Data System (ADS)
Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto
2015-12-01
In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to
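The element-local CFL condition that underlies the LTS idea can be sketched in a few lines. This is an illustrative 1D reduction only; the function name and the form dt_i = CFL * h_i / |u_i| are assumptions for illustration, and the paper's scheme adds MUSCL-Hancock space-time reconstruction and conservative flux synchronization on top of it:

```python
def local_time_steps(h, u, cfl=0.9):
    """Element-local time steps dt_i = cfl * h_i / |u_i| (1D schematic of an
    element-local CFL condition; a small floor on |u_i| avoids division by zero).
    """
    return [cfl * hi / max(abs(ui), 1e-12) for hi, ui in zip(h, u)]

# Two elements with different sizes and velocities get different admissible steps,
# instead of all elements being forced to the global minimum time step.
dts = local_time_steps([1.0, 2.0], [2.0, 1.0])
```

In a global-time-stepping scheme, every element would instead advance with min(dts), which is what makes LTS attractive on grids with strongly varying element sizes.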
Deepwater Horizon - Estimating surface oil volume distribution in real time
NASA Astrophysics Data System (ADS)
Lehr, B.; Simecek-Beatty, D.; Leifer, I.
2011-12-01
Spill responders to the Deepwater Horizon (DWH) oil spill required both the relative spatial distribution and the total volume of surface oil. The former was needed on a daily basis to plan and direct local surface recovery and treatment operations. The latter was needed less frequently to provide information for strategic response planning. Unfortunately, the standard spill observation methods were inadequate for an oil spill this size, and new, experimental methods were not ready to meet the operational demands of near real-time results. Traditional surface oil estimation tools for large spills include satellite-based sensors to define the spatial extent (but not thickness) of the oil, complemented with trained observers in small aircraft, sometimes supplemented by active or passive remote sensing equipment, to determine surface percent coverage of the 'thick' part of the slick, where the vast majority of the surface oil exists. These tools were also applied to DWH in the early days of the spill, but the sheer size of the spill prevented synoptic coverage of the surface slick using small aircraft. Also, satellite images of the spill, while large in number, varied considerably in image quality, requiring skilled interpretation to identify oil and eliminate false positives. Qualified staff to perform this task were soon in short supply. However, large spills are often events that overcome organizational inertia against the use of new technology. Two prime examples in DWH were the application of hyperspectral scans from a high-altitude aircraft and multi-spectral scans from more traditional fixed-wing aircraft, processed by a neural network, to determine absolute or relative oil thickness, respectively. But with new technology come new challenges. The hyperspectral instrument required special viewing conditions that were not present on a daily basis, and analysis infrastructure to process the data that was not available at the command
Driver, Nancy E.; Tasker, Gary D.
1990-01-01
Urban planners and managers need information on the quantity of precipitation and the quality and quantity of runoff in their cities and towns if they are to adequately plan for the effects of storm runoff from urban areas. As a result of this need, four sets of linear regression models were developed for estimating storm-runoff constituent loads, storm-runoff volumes, storm-runoff mean concentrations of constituents, and mean seasonal or mean annual constituent loads from physical, land-use, and climatic characteristics of urban watersheds in the United States. Thirty-four regression models of storm-runoff constituent loads and storm-runoff volumes were developed, and 31 models of storm-runoff mean concentrations were developed. Ten models of mean seasonal or mean annual constituent loads were developed by analyzing long-term storm-rainfall records using at-site linear regression models. Three statistically different regions, delineated on the basis of mean annual rainfall, were used to improve linear regression models where adequate data were available. Multiple regression analyses, including ordinary least squares and generalized least squares, were used to determine the optimum linear regression models. These models can be used to estimate storm-runoff constituent loads, storm-runoff volumes, storm-runoff mean concentrations of constituents, and mean seasonal or mean annual constituent loads at gaged and ungaged urban watersheds. The most significant explanatory variables in all linear regression models were total storm rainfall and total contributing drainage area. Impervious area, land use, and mean annual climatic characteristics also were significant in some models. Models for estimating loads of dissolved solids, total nitrogen, and total ammonia plus organic nitrogen as nitrogen generally were the most accurate, whereas models for suspended solids were the least accurate. The most accurate models were those for application in the more arid Western
Using GIS to Estimate Lake Volume from Limited Data (Lake and Reservoir Management)
Estimates of lake volume are necessary for calculating residence time and modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of...
An experimental result of estimating an application volume by machine learning techniques.
Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko
2015-01-01
In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system using machine learning techniques that periodically detects a user's context on a smartphone. We selected the Android operating system because it has the largest market share and highest flexibility of its development environment. In this paper, we describe an application that automatically adjusts application volume. Adjusting the volume can be easily forgotten because users need to push the volume buttons to alter the volume depending on the given situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set differently from ringtone volume on Android devices, and these volume settings are associated with each specific application including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, thereby estimating whether the volume should be adjusted using machine learning techniques via Weka. PMID:25713755
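The study trains its model on-device with Weka; as a rough illustration of the idea of predicting a volume setting from recorded context, the toy nearest-neighbour lookup below is an assumed stand-in, not the paper's actual classifier, and all names and feature choices here are hypothetical:

```python
# Toy stand-in for context-based volume prediction (the paper uses Weka's
# machine learning algorithms on richer attributes such as location, time,
# and foreground application).
def predict_volume(history, context):
    """history: list of (feature_vector, volume) pairs recorded from the user;
    return the volume stored for the nearest past context (squared Euclidean
    distance over the numeric feature vector)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda rec: dist(rec[0], context))[1]

# Hypothetical learning data: (normalized location features) -> volume level
history = [((0.0, 0.0), 3), ((1.0, 1.0), 7)]
```

A real deployment would periodically re-train as new (context, volume) pairs accumulate, which is how the described application adapts to each user's habits.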
Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R
2016-07-21
Owing to their steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique, is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T)-level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T)-level binding energies [within 0.3 kcal/mol of the respective full calculation (FC) results] is achieved after effecting the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising due to the approximate nature of MTA. The CCSD(T)-level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N^5), can be a suitable starting point for approximating the highly accurate CCSD(T) [which scales as O(N^7)] energies. On account of requiring only an MP2-level FC on the entire cluster, the current methodology ultimately leads to a cost-effective solution for CCSD(T)-level accurate binding energies of large-sized water clusters, even at the complete basis set limit, utilizing off-the-shelf hardware. PMID:27351269
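The grafting correction, as the abstract describes it, anchors the fragmented CCSD(T) result with a full MP2 calculation on the whole cluster. The one-liner below is a schematic of that idea only; the exact working equations are in the paper, and the function name and numbers are illustrative:

```python
def grafted_ccsdt(e_ccsdt_mta, e_mp2_mta, e_mp2_full):
    """Grafted CCSD(T) estimate: the MTA-level CCSD(T) energy corrected by the
    MP2-level fragmentation error (full-calculation MP2 minus MTA MP2).
    Schematic sketch of the grafting idea, not the paper's working equations."""
    return e_ccsdt_mta + (e_mp2_full - e_mp2_mta)

# Toy numbers (hartree-like, purely illustrative): if MTA underestimates the
# MP2 energy by 0.5, the same shift is applied to the MTA CCSD(T) energy.
e_graft = grafted_ccsdt(-100.0, -95.5, -95.0)
```

The cost argument follows directly: the only full-cluster calculation needed is at MP2, while CCSD(T) is run only on the small fragments.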
NASA Astrophysics Data System (ADS)
Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.
2011-12-01
Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). As a result of the importance of an accurate DTM in using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence the DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots that were surveyed at 0.5 m spacing in a semi-arid catchment were used for training the Random Forests algorithm along with a series of 35 variables in order to spatially predict vertical error within a LiDAR derived DTM. The final model was utilized to predict the combined error resulting from snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially-distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix
2015-12-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
Calibration Experiments for a Computer Vision Oyster Volume Estimation System
ERIC Educational Resources Information Center
Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.
2009-01-01
Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
Bae, Kyongtae T; Tao, Cheng; Wang, Jinhong; Kaya, Diana; Wu, Zhiyuan; Bae, Junu T; Chapman, Arlene B; Torres, Vicente E; Grantham, Jared J; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P
2013-01-01
Objective: To evaluate whether kidney and cyst volumes can be accurately estimated based on limited area measurements from MR images of patients with autosomal dominant polycystic kidney disease (ADPKD). Materials and Methods: MR coronal images of 178 ADPKD participants from the Consortium for Radiologic Imaging Studies of ADPKD (CRISP) were analyzed. For each MR image slice, we measured kidney and renal cyst areas using stereology and region-based thresholding methods, respectively. The kidney and cyst 'observed' volumes were calculated by summing up the area measurements of all the slices covering the kidney. To estimate the volume, we selected a coronal mid-slice in each kidney and multiplied its area by the total number of slices ('PANK2' for kidney and 'PANC2' for cyst). We then compared the kidney and cyst volumes predicted from PANK2 and PANC2, respectively, to the corresponding observed volumes, using a linear regression analysis. Results: The kidney volume predicted from PANK2 correlated extremely well with the observed kidney volume: R² = 0.994 for the right and 0.991 for the left kidney. The linear regression coefficient multiplier to PANK2 that best fit the kidney volume was 0.637 (95% CI: 0.629-0.644) for the right and 0.624 (95% CI: 0.616-0.633) for the left kidney. The correlation between the cyst volume predicted from PANC2 and the observed cyst volume was also very high: R² = 0.984 for the right and 0.967 for the left kidney. The least squares linear regression coefficient for PANC2 was 0.637 (95% CI: 0.624-0.649) for the right and 0.608 (95% CI: 0.591-0.625) for the left kidney. Conclusion: Kidney and cyst volumes can be closely approximated by multiplying the product of the mid-slice area measurement and the total number of slices in the coronal MR images of ADPKD kidneys by 0.61-0.64. This information will help save the processing time needed to estimate total kidney and cyst volumes of ADPKD kidneys. PMID:24107679
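The mid-slice approximation reduces to a single multiplication. The sketch below restates it with the abstract's coefficients; how slice thickness enters is an assumption here (the abstract folds the per-slice geometry into the area term), and the example numbers are hypothetical:

```python
def predict_kidney_volume(mid_slice_area, n_slices, slice_thickness, coeff=0.637):
    """Mid-slice kidney volume estimate: coeff * mid-slice area * number of
    slices * slice thickness. coeff = 0.637 (right kidney) or 0.624 (left)
    per the abstract; explicit slice thickness is an assumption made here so
    the result carries volume units."""
    return coeff * mid_slice_area * n_slices * slice_thickness

# Hypothetical numbers: 5000 mm^2 mid-slice area across 40 slices of 3 mm
vol_mm3 = predict_kidney_volume(5000.0, 40, 3.0)
```

The point of the method is that only one slice needs manual area measurement instead of every slice covering the kidney, which is where the time saving comes from.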
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performances for each device are computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single-engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). This program, LFILTER, also uses the U-D factorized form of the Kalman filter, as does PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used in this application for dynamics and measurements, the program description, and the program operation are presented.
NASA Astrophysics Data System (ADS)
Iyatomi, Hitoshi; Hashimoto, Jun; Yoshii, Fumuhito; Kazama, Toshiki; Kawada, Shuichi; Imai, Yutaka
2014-03-01
Discrimination between Alzheimer's disease and other dementias is clinically significant; however, it is often difficult. In this study, we developed classification models among Alzheimer's disease (AD), other dementia (OD), and/or normal subjects (NC) using patient factors and indices obtained by brain perfusion SPECT. SPECT is commonly used to assess cerebral blood flow (CBF) and allows the evaluation of the severity of hypoperfusion by introducing statistical parametric mapping (SPM). We investigated a total of 150 cases (50 cases each for AD, OD, and NC) from Tokai University Hospital, Japan. In each case, we obtained a total of 127 candidate parameters from: (A) 2 patient factors (age and sex), (B) 12 CBF parameters, and 113 SPM parameters comprising (C) 3 from specific volume analysis (SVA) and (D) 110 from voxel-based analysis stereotactic extraction estimation (vbSEE). We built linear classifiers with statistical stepwise feature selection and evaluated the performance with the leave-one-out cross-validation strategy. Our classifiers achieved very high classification performance with a reasonable number of selected parameters. In the clinically most important discrimination, namely AD from OD, our classifier achieved both a sensitivity (SE) and a specificity (SP) of 96%. Similarly, our classifiers achieved a SE of 90% and a SP of 98% for AD from NC, as well as a SE of 88% and a SP of 86% for AD from OD and NC cases. Introducing SPM indices such as SVA and vbSEE improved classification performance by around 7-15%. We confirmed that these SPM factors are quite important for diagnosing Alzheimer's disease.
Sherwood, J.M.
1993-01-01
Methods are presented for estimating flood volumes and simulating flood hydrographs of rural streams in Ohio whose drainage areas are less than 6.5 square miles. The methods were developed to assist engineers in the design of hydraulic structures for which the temporary storage of water is a critical element of the design criteria. Examples of how to use the methods also are presented. Multiple-regression equations were developed to estimate maximum flood volumes of d-hour duration and T-year recurrence interval (dVT). Flood-volume data for all combinations of six durations (1, 2, 4, 8, 16, and 32 hours) and six recurrence intervals (2, 5, 10, 25, 50, and 100 years) were analyzed. The significant independent variables in the resulting 36 equations are drainage area, average annual precipitation, main-channel slope, and forested area. Standard errors of prediction for the 36 dVT equations range from ±28 percent to ±44 percent. A method is described for simulating flood hydrographs by applying a peak discharge and an estimated basin lagtime to a dimensionless hydrograph. Peak discharge may be estimated from equations in which drainage area, main-channel slope, and storage area are the significant explanatory variables, and average standard errors of prediction range from ±33 to ±41 percent. An equation is developed for estimating basin lagtime in which main-channel slope, forested area, and storage area are the significant explanatory variables, and the average standard error of prediction is ±37 percent. A dimensionless hydrograph developed for use in Georgia was verified for use in Ohio. Step-by-step examples show how to (1) simulate flood hydrographs and compute their volumes, and (2) estimate volume-duration-frequency relations of small ungaged rural streams in Ohio. The volumes estimated by the two methods are compared. Both methods yield similar results for volume estimates of short duration, which are applicable to convective-type storm runoff. The volume
2011-01-01
Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
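The study uses the Local Ensemble Transform Kalman Filter; the minimal sketch below shows the simpler stochastic ensemble Kalman analysis step, which conveys the core forecast/update idea. This is not the LETKF itself, and the function name, dimensions, and numbers are illustrative:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic ensemble Kalman analysis step (a simpler relative of the
    LETKF used in the study). X: (n, m) ensemble of m state vectors; y: (p,)
    observation; H: (p, n) linear observation operator; R: (p, p) observation
    error covariance."""
    m = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (m - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    # Perturbed observations, one draw per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - H @ X)                       # analysis ensemble

# Toy 2-state example: observe only component 0, which pulls its mean
# toward the observed value while component 1 is updated via covariance.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2, 200))   # forecast ensemble, prior mean ~0
y = np.array([5.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, y, H, R, rng)
```

In the tumor-forecasting setting described above, X would hold ensemble tumor-state fields from the growth model and y the (synthetic) MRI-derived observations, with the cycle repeated every 60 days.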
NASA Astrophysics Data System (ADS)
Arroyo, Renaldo Josue Salazar
The Mississippi Institute for Forest Inventory (MIFI) is the only cost-effective large-scale forest inventory system in the United States with sufficient precision for producing reliable volume/weight/biomass estimates for small working-circle areas (procurement areas). When forest industry is recruited to Mississippi, proposed working circles may overlap the boundaries of bordering states, leaving a gap in inventory information, and a remote sensing-based system for augmenting missing ground inventory data is desirable. The feasibility of obtaining acceptable cubic-foot volume estimates from a Landsat-derived volume estimation model (Wilkinson 2011) was assessed by: 1) an initial study to temporally validate Landsat-derived estimates of cubic-foot volume (outside bark, to a pulpwood top) against MIFI ground-truth inventory plot estimates at two separate time periods, and 2) re-developing a regression model based on remotely sensed imagery in combination with available MIFI plot data. Initial results failed to confirm the relationships between radiance values and volume estimation reported in past research. The complete lack of influence of radiance values in the model led to a re-assessment of volume estimation schemes. Manipulation of the data by outlier trimming was found to produce the false relationships with radiance values reported in past research. Two revised volume estimation models, using age, average stand height, and trees per acre, or age and height alone, as independent variables were found sufficient to explain the variation of volume across the image. These results were used to develop a procedure for other remote sensing technologies that could produce data with sufficient precision for volume estimation where inventory data are sparse or non-existent.
Estimating the dimensions of the SEU-sensitive volume
Abdel-Kader, W.G.; McNulty, P.J.; El-Teleaty, S.; Lynch, J.E.; Khondker, A.N.
1987-12-01
Simulations of the diffusion contribution to charge collection in SEU events are carried out under the simple assumption of random walk. The results of the simulation are combined with calculations of the funneling length for the field-assisted drift components to determine the effective thickness of the sensitive volume element to be used in calculations of soft-error rates for heavy-ion-induced and proton-induced upsets in microelectronic circuits. Comparison is made between predicted and measured SEU cross-sections for devices for which the critical charges are known from electrical measurements and the dimensions of the sensitive volume used are determined by the techniques described. The agreement is sufficient to encourage confidence that SEU rates can be calculated from first principles and a knowledge of the material, structural, and electrical characteristics of the device.
A method of estimating flood volumes in western Kansas
Perry, C.A.
1984-01-01
Relationships between flood volume and peak discharge in western Kansas were developed considering basin and climatic characteristics in order to evaluate the availability of surface water in the area. Multiple-regression analyses revealed a relationship between flood volume, peak discharge, channel slope, and storm duration for basins smaller than 1,503 square miles. The equation VOL = 0.536 PEAK^1.71 SLOPE^-0.85 DUR^0.24 had a correlation coefficient of R = 0.94 and a standard error of 0.33 log units (-53 and +113 percent). A better relationship for basins smaller than 228 square miles resulted in the equation VOL = 0.483 PEAK^0.98 SLOPE^-0.74 AREA^0.30, which had a correlation coefficient of R = 0.90 and a standard error of 0.23 log units (-41 and +70 percent). (USGS)
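The first regression equation, VOL = 0.536 PEAK^1.71 SLOPE^-0.85 DUR^0.24, can be evaluated directly. The sketch below assumes inputs and output in the original report's units, which the abstract does not restate, and the function name is illustrative:

```python
def flood_volume(peak, slope, dur):
    """Flood volume from the first western Kansas regression equation:
    VOL = 0.536 * PEAK^1.71 * SLOPE^-0.85 * DUR^0.24.
    Units follow the original USGS report (not restated in the abstract)."""
    return 0.536 * peak ** 1.71 * slope ** -0.85 * dur ** 0.24

# Illustrative values only; note volume grows steeply with peak discharge
# (exponent 1.71) and decreases with channel slope (exponent -0.85).
v = flood_volume(100.0, 10.0, 6.0)
```

The negative slope exponent matches the physical intuition that steeper channels pass a flood wave more quickly, yielding less volume for a given peak.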
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original oil in place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
New method to estimate variability in groundwater volumes
NASA Astrophysics Data System (ADS)
Wendel, JoAnna
2014-04-01
As the warming climate and increasing population put stress on the world's water supply, it has become increasingly important to have a global understanding of how groundwater volumes vary from season to season and from year to year. Current global hydrological models do not include lateral groundwater flow, which plays a significant role in providing water to plants and in recharging lakes, rivers, and streams.
Space Station Furnace Facility. Volume 3: Program cost estimate
NASA Technical Reports Server (NTRS)
1992-01-01
The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. The program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBSs. Input consists of a list of similar components for which cost data exist; the number of interfaces, with their type and complexity; identification of the extent to which previous designs are applicable; and programmatic data concerning schedules and miscellaneous items (travel, off-site assignments). Output is program cost in labor hours and material dollars for each component, broken down by generic WBS task and program schedule phase.
A plan for accurate estimation of daily area-mean rainfall during the CaPE experiment
NASA Technical Reports Server (NTRS)
Duchon, Claude E.
1992-01-01
The Convection and Precipitation/Electrification (CaPE) experiment took place in east central Florida from 8 July to 18 August, 1991. There were five research themes associated with CaPE. In broad terms they are: investigation of the evolution of the electric field in convective clouds, determination of meteorological and electrical conditions associated with lightning, development of mesoscale numerical forecasts (2-12 hr) and nowcasts (less than 2 hr) of convective initiation, and remote estimation of rainfall. It is the last theme, coupled with numerous raingage and streamgage measurements, satellite and aircraft remote sensing, radiosondes, and other meteorological measurements in the atmospheric boundary layer, that provides the basis for determining the hydrologic cycle for the CaPE experiment area. The largest component of the hydrologic cycle in this region is rainfall. An accurate determination of daily area-mean rainfall is important in correctly modeling its apportionment into runoff, infiltration, and evapotranspiration. In order to achieve this goal a research plan was devised and initial analysis begun. The overall research plan is discussed with special emphasis placed on the adjustment of radar rainfall estimates to raingage rainfall.
Space tug economic analysis study. Volume 3: Cost estimates
NASA Technical Reports Server (NTRS)
1972-01-01
Cost estimates for the space tug operation are presented. The subjects discussed are: (1) research and development costs, (2) investment costs, (3) operations costs, and (4) funding requirements. The emphasis is placed on the single stage tug configuration using various types of liquid propellants.
Towards a robust method for estimation of volume changes of Mountain glaciers from ICESat data
NASA Astrophysics Data System (ADS)
Kropacek, J.; Neckel, N.
2012-04-01
Worldwide estimation of recent glacier volume changes is still a challenge that can only be faced by an instrument mounted on a revisiting satellite platform. NASA's ICESat (Ice, Cloud, and land Elevation Satellite) mission was primarily dedicated to mass balance studies of the continental ice sheets of Greenland and Antarctica. ICESat's Geoscience Laser Altimeter System (GLAS) provides accurate elevation estimates derived from the two-way travel time of the emitted laser pulse. Unlike radar altimeters, ICESat offers conveniently small footprints (~72 m) with a spacing of 170 m along the nadir track. The data were acquired during 19 campaigns, each of which provides, under favorable conditions, one flyover. The intersections of the tracks with glacier areas are random, but they provide adequate data coverage for major mountain ranges. Estimation of height changes of mountain glaciers crossed by repeated ICESat tracks is hindered by rough topography, distances between repeated tracks (up to 3000 m), saturation effects, and the often low accuracy of the available digital elevation models. The accuracy and limitations of two methods were compared: a statistical approach, in which the differences of ICESat heights from a reference DEM are averaged, and an analytical approach, in which only nearly spatially identical tracks are analyzed. In both approaches, accumulation and ablation areas were treated separately. Aletsch Glacier in the Swiss Alps, with abundant auxiliary datasets such as DEMs, topographic maps, and climate data, was used as a test bed. The ICESat data are well spaced over the glacier area and include both accumulation and ablation areas plus the glacier terminus. Preliminary results show good agreement between the two approaches.
[Estimation of forest volume in Huzhong forest area based on RS, GIS and ANN].
Liu, Zhi-Hua; Chang, Yu; Chen, Hong-Wei
2008-09-01
Based on remote sensing (RS), which has integrated and realistic characteristics, geographic information system (GIS), which has powerful spatial analysis ability, and artificial neural network (ANN), which can optimize nonlinear complex systems, the forest volume in Huzhong forest area was estimated. The results showed that there was an obvious negative correlation between the forest volume and the infrared band, indicating that the infrared band had definite potential in estimating forest volume. The forest volume also negatively correlated with the visible bands and PC1. Among the topographic factors, altitude exerted more influence than aspect and slope on the estimation of forest volume. The correlation coefficient between predicted and actual values reached 0.973 when the optimal ANN parameters, suitable GIS information, and RS bands were adopted. After principal component transformation, the amount of observation data was effectively reduced, while the prediction precision had only a small decline (R^2 = 0.934). PMID:19102299
Multi-model ensemble estimation of volume transport through the straits of the East/Japan Sea
NASA Astrophysics Data System (ADS)
Han, Sooyeon; Hirose, Naoki; Usui, Norihisa; Miyazawa, Yasumasa
2016-01-01
The volume transports measured at the Korea/Tsushima, Tsugaru, and Soya/La Perouse Straits remain quantitatively inconsistent. However, data assimilation models at least provide a self-consistent budget despite subtle differences among the models. This study examined the seasonal variation of the volume transport using multiple linear regression and ridge regression of multi-model ensemble (MME) methods to more accurately estimate the transport through these straits, using four different data assimilation models. The MME outperformed all of the single models by reducing uncertainties, with the ridge regression in particular mitigating the multicollinearity problem. However, the regression constants turned out to be inconsistent with each other if the MME was applied separately for each strait. The MME for a connected system was thus performed to find common constants for these straits. The estimation of this MME was found to be similar to the MME result of sea level difference (SLD). The estimated mean transport (2.43 Sv) was smaller than the measurement data at the Korea/Tsushima Strait, but the calibrated transport of the Tsugaru Strait (1.63 Sv) was larger than the observed data. The MME results of transport and SLD also suggested that the standard deviation (STD) of the Korea/Tsushima Strait transport is larger than the STD of the observation, whereas the estimated results were almost identical to those observed for the Tsugaru and Soya/La Perouse Straits. The similarity between MME results enhances the reliability of the present MME estimation.
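The ridge-regression step of a multi-model ensemble can be sketched as below. This is a generic illustration rather than the authors' implementation; the array shapes, variable names, and penalty value are assumptions.

```python
import numpy as np

def ridge_mme_weights(model_preds, obs, lam=1.0):
    """Ridge-regression weights for a multi-model ensemble.

    model_preds : (n_times, n_models) array of transport estimates,
                  one column per data assimilation model
    obs         : (n_times,) observed transport
    lam         : ridge penalty; damps the multicollinearity that arises
                  when the models' outputs are highly correlated
    """
    X = np.asarray(model_preds, dtype=float)
    y = np.asarray(obs, dtype=float)
    n_models = X.shape[1]
    # Solve (X'X + lam*I) w = X'y instead of ordinary least squares
    return np.linalg.solve(X.T @ X + lam * np.eye(n_models), X.T @ y)
```

The calibrated ensemble estimate is then `model_preds @ ridge_mme_weights(model_preds, obs)`; the penalty trades a small bias for much lower variance when the model columns are nearly collinear.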
Estimation of rat mammary tumor volume using caliper and ultrasonography measurements.
Faustino-Rocha, Ana; Oliveira, Paula A; Pinho-Oliveira, Jacinta; Teixeira-Guedes, Catarina; Soares-Maia, Ruben; da Costa, Rui Gil; Colaço, Bruno; Pires, Maria João; Colaço, Jorge; Ferreira, Rita; Ginja, Mário
2013-06-01
Mammary tumors similar to those observed in women can be induced in rats by intraperitoneal administration of N-methyl-N-nitrosourea. Determining tumor volume is a useful and quantitative way to monitor tumor progression. In this study, the authors measured dimensions of rat mammary tumors using a caliper and using real-time compound B-mode ultrasonography. They then used different formulas to calculate tumor volume from these tumor measurements and compared the calculated tumor volumes with the real tumor volume to identify the formulas that gave the most accurate volume calculations. They found that caliper and ultrasonography measurements were significantly correlated but that tumor volumes calculated using different formulas varied substantially. Mammary tumors seemed to take on an oblate spheroid geometry. The most accurate volume calculations were obtained using the formula V = (W^2 × L)/2 for caliper measurements and the formula V = (4/3) × π × (L/2) × (L/2) × (D/2) for ultrasonography measurements, where V is tumor volume, W is tumor width, L is tumor length and D is tumor depth. PMID:23689461
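The two best-performing formulas translate directly into code; the function names and the unit convention (all lengths in the same unit, giving volume in that unit cubed) are illustrative assumptions.

```python
import math

def tumor_volume_caliper(width, length):
    """Best caliper-based estimate: V = (W^2 * L) / 2."""
    return (width ** 2 * length) / 2

def tumor_volume_ultrasound(length, depth):
    """Best ultrasonography-based estimate (oblate spheroid):
    V = (4/3) * pi * (L/2) * (L/2) * (D/2)."""
    return (4.0 / 3.0) * math.pi * (length / 2) * (length / 2) * (depth / 2)
```

Note that the ultrasonography formula uses the length twice (as the two equatorial semi-axes) and the depth once, consistent with the oblate spheroid geometry the tumors were observed to take.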
K West Basin sludge volume estimates for integrated water treatment system
Pitner, A.L.
1998-08-19
This document provides estimates of the volume of sludge (1) expected from Integrated Process Strategy (IPS) processing of the fuel elements and (2) in the fuel storage canisters in K West Basin. The original estimates were based on visual observations of fuel element condition in the basin and laboratory measurements of KE canister sludge density. Revision 1 revised the volume estimates of sludge based on additional data from evaluations of material from the KW Basin fuel subsurface examinations and KW canister sludge characterization data. A nominal Working Estimate and an upper-level Working Bound are developed for the canister sludge and the fuel wash sludge components in the KW Basin.
K East basin sludge volume estimates for integrated water treatment system
Pearce, K.L.
1998-08-19
This document provides estimates of the volume of sludge expected from Integrated Process Strategy (IPS) processing of the fuel elements and in the fuel storage canisters in K East Basin. The original estimates were based on visual observations of fuel element condition in the basin and laboratory measurements of canister sludge density. Revision 1 revised the volume estimates of sludge from processing of the fuel elements based on additional data from evaluations of material from the KE Basin fuel subsurface examinations. A nominal Working Estimate and an upper-level Working Bound are developed for the canister sludge and the fuel wash sludge components in the KE Basin.
Estimation of Surface Area and Volume of a Nematode from Morphometric Data
Brown, Simon; Pedley, Kevin C.; Simcock, David C.
2016-01-01
Estimates of nematode volume and surface area are usually based on the inappropriate assumption that the animal is cylindrical. While nematodes are approximately circular in cross section, the radius varies longitudinally. We use standard morphometric data to obtain improved estimates of volume and surface area based on (i) a geometrical approach and (ii) a Bézier representation of the nematode. These new estimators require only the morphometric data available from Cobb's ratios, but if fewer coordinates are available the geometric approach reduces to the standard estimates. Consequently, these new estimators are better than the standard alternatives. PMID:27110427
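The abstract does not reproduce the estimators themselves; the sketch below illustrates the underlying idea with the simplest geometric refinement, summing conical frustums between successive longitudinal (position, radius) measurements. This frustum scheme is an assumption for illustration, not the authors' exact method.

```python
import math

def nematode_volume_and_area(positions, radii):
    """Approximate body volume and lateral surface area of a nematode
    whose radius varies along its length, by summing conical frustums
    between successive longitudinal (position, radius) measurements."""
    volume = 0.0
    area = 0.0
    for (x0, r0), (x1, r1) in zip(zip(positions, radii),
                                  zip(positions[1:], radii[1:])):
        h = x1 - x0
        volume += math.pi * h * (r0**2 + r0 * r1 + r1**2) / 3.0  # frustum volume
        slant = math.hypot(h, r1 - r0)
        area += math.pi * (r0 + r1) * slant  # frustum lateral surface
    return volume, area
```

With a single segment of equal radii this reduces to the standard cylindrical estimate, mirroring the property that the geometric approach falls back to the standard estimates when fewer coordinates are available.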
Budget estimates: Fiscal year 1994. Volume 2: Construction of facilities
NASA Technical Reports Server (NTRS)
1994-01-01
The Construction of Facilities (CoF) appropriation provides contractual services for the repair, rehabilitation, and modification of existing facilities; the construction of new facilities and the acquisition of related collateral equipment; the acquisition or condemnation of real property; environmental compliance and restoration activities; the design of facilities projects; and advanced planning related to future facilities needs. Fiscal year 1994 budget estimates are broken down according to facility location of project and by purpose.
Estimating tree bole volume using artificial neural network models for four species in Turkey.
Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V
2010-01-01
Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used and produced the Back propagation (BPANN) and the Cascade Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each one of them are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. PMID:19880241
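Smalian's formula, used above to compute the validation volumes, averages the cross-sectional areas at the two ends of each short section. The diameter-based interface below is a sketch (consistent units are assumed; the study used ~1 m sections).

```python
import math

def smalian_section_volume(d_lower, d_upper, length):
    """Smalian's formula: V = (A1 + A2) / 2 * L, where A1 and A2 are the
    end cross-sectional areas computed from the end diameters and L is
    the section length."""
    a1 = math.pi * (d_lower / 2) ** 2
    a2 = math.pi * (d_upper / 2) ** 2
    return (a1 + a2) / 2 * length

def bole_volume(diameters, section_length=1.0):
    """Aggregate bole volume from diameters measured at the ends of
    successive short sections (average 1 m in the study)."""
    return sum(smalian_section_volume(d1, d2, section_length)
               for d1, d2 in zip(diameters, diameters[1:]))
```

These aggregated section volumes are what the ANN predictions, taper equations, and standard volume tables were judged against.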
Lunar Architecture Team - Phase 2 Habitat Volume Estimation: "Caution When Using Analogs"
NASA Technical Reports Server (NTRS)
Rudisill, Marianne; Howard, Robert; Griffin, Brand; Green, Jennifer; Toups, Larry; Kennedy, Kriss
2008-01-01
The lunar surface habitat will serve as the astronauts' home on the moon, providing a pressurized facility for all crew living functions and serving as the primary location for a number of crew work functions. Adequate volume is required for each of these functions in addition to that devoted to housing the habitat systems and crew consumables. The time constraints of the LAT-2 schedule precluded the Habitation Team from conducting a complete "bottom-up" design of a lunar surface habitation system from which to derive true volumetric requirements. The objective of this analysis was to quickly derive an estimated total pressurized volume and pressurized net habitable volume per crewmember for a lunar surface habitat, using a principled, methodical approach in the absence of a detailed design. Five "heuristic methods" were used: historical spacecraft volumes, human/spacecraft integration standards and design guidance, Earth-based analogs, parametric "sizing" tools, and conceptual point designs. Estimates for total pressurized volume, total habitable volume, and volume per crewmember were derived using these methods. All methods were found to provide some basis for volume estimates, but values were highly variable across a wide range, with no obvious convergence of values. Best current assumptions for required crew volume were provided as a range. Results of these analyses and future work are discussed.
ERIC Educational Resources Information Center
Systems Group, Inc., Washington, DC.
Volumes 1-4 of the GSLP Loan Estimation Model present the historical and legislative background of the Guaranteed Student Loan Program, give an analysis of the data base used to develop the GSLP Loan Estimation Model, and discuss the development and operation of the model. Volume 1 provides a brief description of the legislative authority for the…
Danjon, Frédéric; Caplan, Joshua S.; Fortin, Mathieu; Meredieu, Céline
2013-01-01
Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately. PMID
NASA Astrophysics Data System (ADS)
Hibert, C.; Mangeney, A.; Grandjean, G.; Baillard, C.; Rivet, D.; Shapiro, N. M.; Satriano, C.; Maggi, A.; Boissier, P.; Ferrazzini, V.; Crawford, W.
2014-05-01
Since the collapse of the Dolomieu crater floor at Piton de la Fournaise Volcano (la Réunion) in 2007, hundreds of seismic signals generated by rockfalls have been recorded daily at the Observatoire Volcanologique du Piton de la Fournaise (OVPF). To study rockfall activity over a long period of time, automated methods are required to process the available continuous seismic records. We present a set of automated methods designed to identify, locate, and estimate the volume of rockfalls from their seismic signals. The method used to automatically discriminate seismic signals generated by rockfalls from other common events recorded at OVPF is based on fuzzy sets and has a success rate of 92%. A kurtosis-based automated picking method makes it possible to precisely pick the onset time and the final time of the rockfall-generated seismic signals. We present methods to determine rockfall locations based on these accurate pickings and a surface-wave propagation model computed for each station using a Fast Marching Method. These methods have successfully located directly observed rockfalls with an accuracy of about 100 m. They also make it possible to compute the seismic energy generated by rockfalls, which is then used to retrieve their volume. The methods developed were applied to a data set of 12,422 rockfalls that occurred over a period extending from the collapse of the Dolomieu crater floor in April 2007 to the end of the UnderVolc project in May 2011 to identify the most hazardous areas of the Piton de la Fournaise volcano summit.
On-line dialysate infusion to estimate absolute blood volume in dialysis patients.
Schneditz, Daniel; Schilcher, Gernot; Ribitsch, Werner; Krisper, Peter; Haditsch, Bernd; Kron, Joachim
2014-01-01
The aim was to measure the distribution volume and the elimination of ultra-pure dialysate in stable hemodialysis patients during on-line hemodiafiltration (HDF). Dialysate was automatically infused as a volume indicator using standard on-line HDF equipment. Indicator concentration was noninvasively measured in the arterial blood line (using the blood volume monitor, Fresenius Medical Care, Bad Homburg vor der Höhe, Germany), and its time course was analyzed to obtain the elimination rate and the distribution volume V(t) at the time of dilution. Blood volume at treatment start (V0) was calculated accounting for the degree of intradialytic hemoconcentration. Five patients (two females) were studied during 15 treatments. Two to six measurements using indicator volumes ranging from 60 to 210 ml were done in each treatment. V0 was 4.59 ± 1.15 L, larger than the volume of 4.08 ± 0.48 L estimated from anthropometric relationships. The mean half-life of infused volume was 17.2 ± 29.7 min. Given predialysis volume expansion, V0 was consistent with blood volume determined from anthropometric measurements. Information on blood volume could substantially improve volume management in hemodialysis patients and fluid therapy in intensive care patients undergoing extracorporeal blood treatment. The system has the potential for complete automation using proper control inputs for the BVM and HDF modules of the dialysis machine. PMID:24814842
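The abstract does not state the dilution equation, but the classical indicator-dilution principle it builds on can be sketched as follows, where an infused protein-free dialysate volume dilutes a blood constituent measured in the arterial line. The function and variable names are illustrative, not the authors' formulation.

```python
def blood_volume_from_dilution(v_infused, c_pre, c_post):
    """Indicator-dilution estimate of blood volume.

    An infused indicator volume Vi of constituent-free fluid dilutes a
    blood constituent from c_pre to c_post = c_pre * V / (V + Vi), so
    V = Vi * c_post / (c_pre - c_post).
    """
    if c_post >= c_pre:
        raise ValueError("post-infusion concentration must be below pre-infusion")
    return v_infused * c_post / (c_pre - c_post)
```

In practice the measured concentration must also be corrected for indicator elimination (the ~17 min half-life reported above) before applying the dilution relation.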
Volume-based thermodynamics: estimations for 2:2 salts.
Jenkins, H Donald Brooke; Glasser, Leslie
2006-02-20
The lattice energy of an ionic crystal, U(POT), can be expressed as a linear function of the inverse cube root of its formula unit volume (i.e., Vm^(-1/3)); thus, U(POT) ≈ 2I(α/Vm^(1/3) + β), where α and β are fitted constants and I is the readily calculated ionic strength factor of the lattice. The standard entropy, S°, is a linear function of Vm itself: S° ≈ kVm + c, with fitted constants k and c. The constants α and β have previously been evaluated for salts with charge ratios of 1:1, 1:2, and 2:1 and for the general case q:p, while values of k and c applicable to ionic solids generally have earlier been reported. In this paper, we obtain α and β, k and c, specifically for 2:2 salts (by studying the ionic oxides, sulfates, and carbonates), finding that U(POT)[MX 2:2]/(kJ mol^-1) ≈ 8(119/Vm^(1/3) + 60) and S°[MX 2:2]/(J K^-1 mol^-1) ≈ 1382Vm + 16. PMID:16471990
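The fitted 2:2 relations are simple enough to apply directly. The sketch below assumes the usual volume-based thermodynamics units (Vm in nm^3, U(POT) in kJ/mol, S° in J K^-1 mol^-1), which the abstract implies but does not spell out.

```python
def lattice_energy_2_2(vm_nm3):
    """VBT lattice energy for a 2:2 salt (ionic strength factor I = 4):
    U(POT)/(kJ/mol) ~= 2*I*(alpha/Vm^(1/3) + beta) = 8*(119/Vm^(1/3) + 60)."""
    return 8.0 * (119.0 / vm_nm3 ** (1.0 / 3.0) + 60.0)

def standard_entropy_2_2(vm_nm3):
    """VBT standard entropy for a 2:2 salt:
    S / (J K^-1 mol^-1) ~= 1382 * Vm + 16."""
    return 1382.0 * vm_nm3 + 16.0
```

The prefactor 8 is just 2I with I = 4, the ionic strength factor for a salt of two doubly charged ions, so only the formula unit volume is needed as input.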
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
Subramanian, Swetha; Mast, T Douglas
2015-10-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462
1998-02-01
This volume contains information on cost estimates, planning schedules, yearly cost flowcharts, and life-cycle costs for the six options described in Volume 1, Section 2: Option 1 -- Total removal clean closure; No subsequent use; Option 2 -- Risk-based clean closure; LLW fill; Option 3 -- Risk-based clean closure; CERCLA fill; Option 4 -- Close to RCRA landfill standards; LLW fill; Option 5 -- Close to RCRA landfill standards; CERCLA fill; and Option 6 -- Close to RCRA landfill standards; Clean fill. This volume is divided into two portions. The first portion contains the cost and planning schedule estimates while the second portion contains life-cycle costs and yearly cash flow information for each option.
Budget estimates: Fiscal year 1994. Volume 1: Agency summary
NASA Technical Reports Server (NTRS)
1994-01-01
The NASA FY 1994 budget request of $15,265 million concentrates on (1) investing in the development of new technologies including a particularly aggressive program in aeronautical technology to improve the competitive position of the United States, through shared involvement with industry and other government agencies; (2) continuing the nation's premier program of space exploration, to expand our knowledge of the solar system and the universe as well as the earth; and (3) providing safe and assured access to space using both the space shuttle and expendable launch vehicles. Budget estimates are presented for (1) research and development, including space station, space transportation capability development, space science and applications programs, space science, life and microgravity sciences and applications, mission to planet earth, space research and technology, commercial programs, aeronautics technology programs, safety and mission quality, academic programs, and tracking and data advanced systems; and (2) space operations, including space transportation programs, launch services, and space communications.
Estimation of adipose compartment volumes in CT images of a mastectomy specimen
NASA Astrophysics Data System (ADS)
Imran, Abdullah-Al-Zubaer; Pokrajac, David D.; Maidment, Andrew D. A.; Bakic, Predrag R.
2016-03-01
Anthropomorphic software breast phantoms have been utilized for preclinical quantitative validation of breast imaging systems. Efficacy of the simulation-based validation depends on the realism of phantom images. Anatomical measurements of the breast tissue, such as the size and distribution of adipose compartments or the thickness of Cooper's ligaments, are essential for the realistic simulation of breast anatomy. Such measurements are, however, not readily available in the literature. In this study, we assessed the statistics of adipose compartments as visualized in CT images of a total mastectomy specimen. The specimen was preserved in formalin and imaged using a standard body CT protocol and a high X-ray dose. A human operator manually segmented adipose compartments in reconstructed CT images using ITK-SNAP software and calculated the volume of each compartment. In addition, the time needed for the manual segmentation and the operator's confidence were recorded. The average volume, standard deviation, and probability distribution of compartment volumes were estimated from 205 segmented adipose compartments. We also estimated the potential correlation between the segmentation time, the operator's confidence, and the compartment volume. Statistical tests indicated that the estimated compartment volumes do not follow the normal distribution. The compartment volumes were found to be correlated with the segmentation time, but no significant correlation was found between the volume and the operator's confidence. The study is limited by the position of the mastectomy specimen. The analysis of compartment volumes will better inform the development of more realistic breast anatomy simulation.
Lopes, Marta B; Wolff, Jean-Claude; Bioucas-Dias, José M; Figueiredo, Mário A T
2010-02-15
A rapid detection of the nonauthenticity of suspect tablets is a key first step in the fight against pharmaceutical counterfeiting. The chemical characterization of these tablets is the logical next step to evaluate their impact on patient health and help authorities in tracking their source. Hyperspectral unmixing of near-infrared (NIR) image data is an emerging effective technology to infer the number of compounds, their spectral signatures, and the mixing fractions in a given tablet, with a resolution of a few tens of micrometers. In a linear mixing scenario, hyperspectral vectors belong to a simplex whose vertices correspond to the spectra of the compounds present in the sample. SISAL (simplex identification via split augmented Lagrangian), MVSA (minimum volume simplex analysis), and MVES (minimum-volume enclosing simplex) are recent algorithms designed to identify the vertices of the minimum volume simplex containing the spectral vectors and the mixing fractions at each pixel (vector). This work demonstrates the usefulness of these techniques, based on minimum volume criteria, for unmixing NIR hyperspectral data of tablets. The experiments herein reported show that SISAL/MVSA and MVES largely outperform MCR-ALS (multivariate curve resolution-alternating least-squares), which is considered the state-of-the-art in spectral unmixing for analytical chemistry. These experiments are based on synthetic data (studying the effect of noise and the presence/absence of pure pixels) and on a real data set composed of NIR images of counterfeit tablets. PMID:20095581
Elci, Hakan; Turk, Necdet
2014-01-01
Block volumes are generally estimated by analyzing discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in the Karaburun Peninsula were used to estimate the average block volumes that could be produced from them, using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to give rock block volumes of the same order as the volumetric joint count (J(v)) method. Moreover, the dimensions of the 2378 blocks produced between 2009 and 2011 in the working quarries were recorded. Assuming that each block surface is a discontinuity, the mean block volume (V(b)), the mean volumetric joint count (J(vb)) and the mean block shape factor of the blocks were determined and compared with the mean in situ block volumes (V(in)) and volumetric joint count (J(vi)) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing rock mass discontinuity spacing measurements. PMID:24696642
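The relation between joint spacings, the volumetric joint count, and block volume can be sketched as follows. This is an illustrative calculation using the standard definitions (J(v) as the sum of reciprocal mean set spacings; block volume as the product of spacings for three roughly orthogonal sets), not the authors' BQD procedure:

```python
def volumetric_joint_count(spacings_m):
    """J_v (joints per m^3): sum of reciprocal mean spacings of the joint sets."""
    return sum(1.0 / s for s in spacings_m)

def mean_block_volume(spacings_m):
    """Mean block volume (m^3) for three roughly orthogonal joint sets."""
    v = 1.0
    for s in spacings_m:
        v *= s
    return v

# Three joint sets with mean spacings of 0.5 m, 0.8 m and 1.0 m:
jv = volumetric_joint_count([0.5, 0.8, 1.0])  # 2.0 + 1.25 + 1.0 = 4.25 joints/m^3
vb = mean_block_volume([0.5, 0.8, 1.0])       # 0.5 * 0.8 * 1.0 = 0.4 m^3
```

Closer spacings raise J(v) and shrink the producible block volume, which is why the chart in the paper can map spacing statistics to expected quarry yields.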
K East Basin sludge volume estimates for integrated water treatment system
Pitner, A.L.
1998-08-12
Estimates were made of the volume of sludge expected from Integrated Process Strategy (IPS) processing of fuel elements and of the sludge in the fuel storage canisters in K East Basin. These estimates were based on visual observations of fuel element condition in the basin and on laboratory measurements of canister sludge density. The estimates, made in early 1997, are reviewed and the basic assumptions used are discussed.
Multi-scale deep networks and regression forests for direct bi-ventricular volume estimation.
Zhen, Xiantong; Wang, Zhijie; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo
2016-05-01
Direct estimation of cardiac ventricular volumes has become increasingly popular and important in cardiac function analysis due to its effectiveness and efficiency by avoiding an intermediate segmentation step. However, existing methods rely on either intensive user inputs or problematic assumptions. To realize the full capacities of direct estimation, this paper presents a general, fully learning-based framework for direct bi-ventricular volume estimation, which removes user inputs and unreliable assumptions. We formulate bi-ventricular volume estimation as a general regression framework which consists of two main full learning stages: unsupervised cardiac image representation learning by multi-scale deep networks and direct bi-ventricular volume estimation by random forests. By leveraging strengths of generative and discriminant learning, the proposed method produces high correlations of around 0.92 with ground truth by human experts for both the left and right ventricles using a leave-one-subject-out cross validation, and largely outperforms existing direct methods on a larger dataset of 100 subjects including both healthy and diseased cases with twice the number of subjects used in previous methods. More importantly, the proposed method can not only be practically used in clinical cardiac function analysis but also be easily extended to other organ volume estimation tasks. PMID:26919699
Estimation of tephra volumes from sparse and incompletely observed deposit thicknesses
NASA Astrophysics Data System (ADS)
Green, Rebecca M.; Bebbington, Mark S.; Jones, Geoff; Cronin, Shane J.; Turner, Michael B.
2016-04-01
We present a Bayesian statistical approach to estimate volumes for a series of eruptions from an assemblage of sparse proximal and distal tephra (volcanic ash) deposits. Most volume estimates are of widespread tephra deposits from large events, using isopach maps constructed from observations at exposed locations. Instead, we incorporate raw thickness measurements, focussing on tephra thickness data from cores extracted from lake sediments and through swamp deposits. This facilitates investigation into the dispersal pattern and volume of tephra from much smaller eruption events. Given the general scarcity of data and the physical phenomena governing tephra thickness attenuation, a hybrid Bayesian-empirical tephra attenuation model is required. Point thickness observations are modeled as a function of the distance and angular direction of each location. The dispersal of tephra from larger, well-estimated eruptions is used as leverage for understanding the smaller, unknown events, and uncertainty in thickness measurements can be properly accounted for. The model estimates the wind and site-specific effects on the tephra deposits in addition to volumes. Our technique is exemplified on a series of tephra deposits from Mt Taranaki (New Zealand). The resulting estimates provide a comprehensive record suitable for supporting hazard models. Posterior mean volume estimates range from 0.02 to 0.26 km³. Preliminary examination of the results suggests a size-predictable relationship.
Hosangadi, A.; Sinha, N.; Dash, S.M.
1992-01-01
A new Eulerian particulate solver whose numerical formulation is compatible with the numerics in state-of-the-art finite-volume upwind/implicit gas dynamic computer codes is presented. The heat transfer, drag, thermodynamic, and phase-change procedures in this code are derived from earlier, well established data fits and procedures. Performance for numerous flow problems with one- and two-way coupling is quite good. The solutions are nonoscillatory and robust and conserve flux balances very well. 18 refs.
Solid Waste Operations Complex W-113: Project cost estimate. Preliminary design report. Volume IV
1995-01-01
This document contains Volume IV of the Preliminary Design Report for the Solid Waste Operations Complex W-113 which is the Project Cost Estimate and construction schedule. The estimate was developed based upon Title 1 material take-offs, budgetary equipment quotes and Raytheon historical in-house data. The W-113 project cost estimate and project construction schedule were integrated together to provide a resource loaded project network.
Ju, Lili; Tian, Li; Wang, Desheng
2009-01-01
In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R³, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.
NASA Astrophysics Data System (ADS)
Liu, Yu; Yu, Xiping
2016-09-01
A coupled phase-field and volume-of-fluid method is developed to study the sensitive behavior of water waves during breaking. The THINC model is employed to solve the volume-of-fluid function over the entire domain, covered by a relatively coarse grid, while the phase-field model based on the Allen-Cahn equation is applied over the fine grid. A special algorithm that takes into account the sharpness of the diffuse interface is introduced to correlate the order parameter obtained on the fine grid with the volume-of-fluid function obtained on the coarse grid. The coupled model is then applied to the study of water waves generated by moving pressures on the free surface. The deformation process of the wave crest during the initial stage of breaking is discussed in detail. It is shown that there is a significant variation of the free nappe developed at the front side of the wave crest as the wave steepness differs: it is of a plunging type at large wave steepness and of a spilling type at small wave steepness. The numerical results also indicate that breaking occurs later and the duration of breaking is shorter for waves of smaller steepness, and vice versa. Neglecting the capillary effect leads to wave breaking with a sharper nappe and a more dynamic plunging process. The surface tension also acts to prevent the formation of a free nappe at the front side of the wave crest in some cases.
Qu, Jun; Truhan, Jr., John J
2006-01-01
Point contact is often used in unidirectional pin-on-disk and reciprocating pin-on-flat sliding friction and wear tests. The slider tip could have either a spherical shape or compound curvatures (such as an ellipsoidal shape), and the worn tip usually is not flat but has unknown curvatures. Current methods for determining the wear volumes of sliders suffer from one or more limitations. For example, the gravimetric method is not able to detect small amounts of wear, and the two-dimensional wear scar size measurement is valid only for flat wear scars. More rigorous methods can be very time consuming, such as the 3D surface profiling method that involves obtaining tedious multiple surface profiles and analyzing a large set of data. In this study, a new 'single-trace' analysis is introduced to efficiently evaluate the wear volumes of non-flat worn sliders. This method requires only the measurement of the wear scar size and one trace of profiling to obtain the curvature on the wear cap. The wear volume calculation only involves closed-form algebraic equations. This single-trace method has demonstrated much higher accuracy and fewer limitations than the gravimetric method and 2D method, and has shown good agreement with the 3D method while saving significant surface profiling and data analysis time.
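The closed-form algebra behind such a single-trace estimate can be illustrated with the spherical-cap volume formula. The sketch below is a hypothetical reconstruction, not the paper's exact equations: the wear volume of a spherical tip with a curved (non-flat) scar is taken as the difference of two spherical caps sharing the measured scar radius, with the scar curvature obtained from the single profiling trace.

```python
import math

def spherical_cap_volume(a, R):
    """Volume of a spherical cap with base radius a cut from a sphere of radius R."""
    h = R - math.sqrt(R * R - a * a)  # cap height
    return math.pi * h * (3 * a * a + h * h) / 6.0

def worn_tip_volume(a, R_tip, R_scar):
    """Hypothetical single-trace wear volume: material removed from a spherical
    tip of radius R_tip, where the worn surface retains curvature R_scar
    (R_scar >= R_tip, measured from one trace). A flat scar is the limit
    R_scar -> infinity, recovering the plain cap volume."""
    return spherical_cap_volume(a, R_tip) - spherical_cap_volume(a, R_scar)
```

Only the scar radius a and one trace (to fit R_scar) are needed, which is why the method avoids full 3D surface profiling.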
Rapid estimate of solid volume in large tuff cores using a gas pycnometer
Thies, C.; Geddis, A.M.; Guzman, A.G.
1996-09-01
A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant-volume system while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length × 0.11 m diameter) and can measure solid volumes greater than 2.20 cm³ with less than 1% error.
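The ideal-gas calculation at the heart of a two-chamber pycnometer can be sketched as below. This is a generic isothermal Boyle's-law formulation under assumed initial conditions (reference chamber pressurized, sample cell initially evacuated), not necessarily the exact plumbing of the instrument described:

```python
def solid_volume(V_ref, V_cell, P1, P2):
    """Isothermal ideal-gas estimate of the solid volume sealed in the sample
    cell: gas at pressure P1 in the reference chamber V_ref expands into the
    (initially evacuated) sample cell of empty volume V_cell, settling at P2:
        P1 * V_ref = P2 * (V_ref + V_cell - V_solid)
    """
    return V_ref + V_cell - P1 * V_ref / P2

def porosity(V_solid, V_bulk):
    """Porosity from the measured solid volume and the sample bulk volume."""
    return 1.0 - V_solid / V_bulk

# Example in consistent units (cm^3 and kPa, values hypothetical):
v_s = solid_volume(V_ref=100.0, V_cell=50.0, P1=200.0, P2=150.0)  # ~16.67 cm^3
phi = porosity(v_s, 20.0)
```

Monitoring temperature, as the abstract describes, lets the measured T be folded into the gas-law balance rather than assuming perfect isothermality.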
Hu, Tingting; Zhang, Zhen
2016-01-01
Background. The traumatic epidural hematoma (tEDH) volume is often used to assist in tEDH treatment planning and outcome prediction. ABC/2 is a well-accepted volume estimation method that can be used for tEDH volume estimation. Previous studies have proposed different variations of ABC/2; however, it is unclear which variation provides the highest accuracy. Given the promising clinical contribution of accurate tEDH volume estimations, we sought to assess the accuracy of several ABC/2 variations in tEDH volume estimation. Methods. The study group comprised 53 patients with tEDH who had undergone non-contrast head computed tomography scans. For each patient, the tEDH volume was automatically estimated by eight ABC/2 variations (four traditional and four newly derived) with an in-house program, and results were compared to those from manual planimetry. Linear regression, the closest value, percentage deviation, and Bland-Altman plots were adopted to comprehensively assess accuracy. Results. Among all ABC/2 variations assessed, the traditional variations y = 0.5 × A1B1C1 (or A2B2C1) and the newly derived variations y = 0.65 × A1B1C1 (or A2B2C1) achieved higher accuracy than the other variations. No significant differences were observed between the estimated volume values generated by these variations and those of planimetry (p > 0.05). Comparatively, the former performed better than the latter in general, with smaller mean percentage deviations (7.28 ± 5.90% and 6.42 ± 5.74% versus 19.12 ± 6.33% and 21.28 ± 6.80%, respectively) and more values closest to planimetry (18/53 and 18/53 versus 2/53 and 0/53, respectively). Besides, deviations of most cases in the former fell within the range of <10% (71.70% and 84.91%, respectively), whereas deviations of most cases in the latter were in the ranges of 10–20% and >20% (90.57% and 96.23%, respectively). Discussion. In the current study, we adopted an automatic approach to assess the accuracy of several ABC/2 variations
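Every variation compared above reduces to a coefficient times the product of three measured diameters. A minimal sketch (the measurement conventions A1/A2, B1/B2 are as defined in the paper; this helper only illustrates the arithmetic):

```python
def abc_variation(A, B, C, coeff=0.5):
    """Ellipsoid-style hematoma volume estimate y = coeff * A * B * C.
    coeff = 0.5 gives the traditional ABC/2; coeff = 0.65 gives one of the
    newly derived variations. A, B, C are the three measured diameters (cm),
    so the result is in cm^3."""
    return coeff * A * B * C

# Hypothetical hematoma measuring 4 x 2 cm on the largest axial slice,
# extending 3 cm vertically:
v_traditional = abc_variation(4.0, 2.0, 3.0)          # 0.5  * 24 = 12.0 cm^3
v_derived = abc_variation(4.0, 2.0, 3.0, coeff=0.65)  # 0.65 * 24 = 15.6 cm^3
```

The comparison in the paper is then simply which coefficient/measurement convention tracks planimetry best for epidural (lens-shaped) bleeds.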
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. PMID:27164045
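The cross-sectional-area part of the correction can be sketched as follows. This is an illustrative calculation, not the paper's full error model (which also corrects for the off-axis beam): for a fixed mean velocity, volume flow scales with lumen area, so assuming a circle where the vessel is elliptical biases the estimate.

```python
import math

def area_error_circular_assumption(semi_major, semi_minor):
    """Relative volume-flow error from assuming a circular lumen of radius
    semi_minor when the cross-section is actually an ellipse with the given
    semi-axes. Negative values mean underestimation."""
    circular = math.pi * semi_minor ** 2
    elliptical = math.pi * semi_major * semi_minor
    return (circular - elliptical) / elliptical

# A major axis 8.6% longer than the minor axis, as in the average fistula,
# gives roughly an 8% underestimation if the lumen is treated as circular.
err = area_error_circular_assumption(1.086, 1.0)
```

The study's 31.2% to 24.3% error reduction combines this ellipse correction with the off-axis beam correction.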
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations from referenced values of aerodynamic coefficients, inertia, center-of-gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
Reliability of respiratory tidal volume estimation by means of ambulatory inductive plethysmography.
Grossman, Paul; Spoerle, Monika; Wilhelm, Frank H
2006-01-01
Ambulatory monitoring of ventilatory parameters in everyday life, field research, and clinical situations may offer new insights into respiratory functioning in health and disease. Recent technological advances that employ ambulatory inductive plethysmography could make monitoring of respiration outside the clinic and laboratory feasible. Inductive plethysmography provides a method for nonintrusive assessment of both timing (e.g. respiration rate) and volumetric parameters (e.g. tidal volume and minute ventilation), by which tidal volume is initially calibrated to direct measures of volume. Estimates of tidal volume assessed by this technique have been validated in laboratory investigations, usually examining within-individual relations to direct measures over a large range of tidal volume variation. However, the reliability of individual differences in tidal volume or other breathing parameters has not been tested under naturalistic measurement conditions using inductive plethysmography. We examined the test-retest reliability of respiration rate, tidal volume and other volumetric parameters of breathing over a period of six weeks of repeated measurements during baseline conditions and breathing exercises with 16 healthy, freely moving volunteers in a Yoga course. Reliability of measurement was evaluated by calculating the average week-to-week between-subject correlation coefficients for each physiological measure. Additionally, because body-mass index has previously been positively correlated with tidal volume, we also assessed this relationship as an external criterion of validity of tidal volume estimation. Regarding the latter, correlations similar to those of previous studies were found (r = 0.6). Furthermore, reliability estimates were high and consistent across respiratory measures (typically r = 0.7-0.8). These results suggest the validity of ambulatory inductive plethysmographic measurement of respiration, at least under relatively sedentary conditions.
Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N.J.
2014-01-01
Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in the injectable volume to between 0.6 and 2.4 ml, and injection was delayed due to a required syringe filling step. An improved MR-compatible injector system has been developed that measures the pH of the injected substrate, uses flow control to reduce dead volume within the injection cannula, and can be operated over a larger volume range. The delay time to injection has been minimized by removing the syringe filling step through use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. was 9 μl in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3–4 s post-injection trigger signal and at 9–12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump. PMID:24355621
Peyramond, D; Tholly, F; Bertoye, A
1980-03-01
The theoretical fluid volume of 41 normal children (mean age 8 years 9 months) was estimated from anthropometric data: height, weight, wrist circumference, and body surface area. The correlation between this method and the conventional methods of determining total body water using tritiated water, or extracellular fluid volume using stable bromide or bromide-82, is very good. The real fluid volumes were measured using total body electrical impedance at low frequency (Z at 5 kHz) and high frequency (Z at 1 MHz). The correlation of these results with those obtained by anthropometry is very satisfactory (r = 0.89; p < 0.001). PMID:7469697
Optimal volume Wegner estimate for random magnetic Laplacians on Z^2
NASA Astrophysics Data System (ADS)
Hasler, David; Luckett, Daniel
2013-03-01
We consider a two-dimensional magnetic Schrödinger operator on a square lattice with a spatially stationary random magnetic field. We prove a Wegner estimate with optimal volume dependence. The Wegner estimate holds around the spectral edges, and it implies Hölder continuity of the integrated density of states in this region. The proof is based on the Wegner estimate obtained in Erdős and Hasler ["Wegner estimate for random magnetic Laplacians on Z^2," Ann. Henri Poincaré 12, 1719-1731 (2012)], 10.1007/s00023-012-0177-9.
Ashouri, Hazar; Orlandic, Lara; Inan, Omer T.
2016-01-01
Unobtrusive and inexpensive technologies for monitoring the cardiovascular health of heart failure (HF) patients outside the clinic can potentially improve their continuity of care by enabling therapies to be adjusted dynamically based on the changing needs of the patients. Specifically, cardiac contractility and stroke volume (SV) are two key aspects of cardiovascular health that change significantly for HF patients as their condition worsens, yet these parameters are typically measured only in hospital/clinical settings, or with implantable sensors. In this work, we demonstrate accurate measurement of cardiac contractility (based on pre-ejection period, PEP, timings) and SV changes in subjects using ballistocardiogram (BCG) signals detected via a high bandwidth force plate. The measurement is unobtrusive, as it simply requires the subject to stand still on the force plate while holding electrodes in the hands for simultaneous electrocardiogram (ECG) detection. Specifically, we aimed to assess whether the high bandwidth force plate can provide accuracy beyond what is achieved using modified weighing scales we have developed in prior studies, based on timing intervals, as well as signal-to-noise ratio (SNR) estimates. Our results indicate that the force plate BCG measurement provides more accurate timing information and allows for better estimation of PEP than the scale BCG (r2 = 0.85 vs. r2 = 0.81) during resting conditions. This correlation is stronger during recovery after exercise due to more significant changes in PEP (r2 = 0.92). The improvement in accuracy can be attributed to the wider bandwidth of the force plate. ∆SV (i.e., changes in stroke volume) estimations from the force plate BCG resulted in an average error percentage of 5.3% with a standard deviation of ±4.2% across all subjects. Finally, SNR calculations showed slightly better SNR in the force plate measurements among all subjects but the small difference confirmed that SNR is limited by
An Approach to the Use of Depth Cameras for Weed Volume Estimation.
Andújar, Dionisio; Dorado, José; Fernández-Quintanilla, César; Ribeiro, Angela
2016-01-01
The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology combining height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dicots. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them. PMID:27347972
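The dual height/RGB rule can be sketched per reconstructed point as below. The thresholds and the excess-green vegetation index are illustrative assumptions, not the parameters used in the study:

```python
def excess_green(r, g, b):
    """Normalized excess-green index; positive values indicate vegetation."""
    total = r + g + b + 1e-9  # avoid division by zero on black pixels
    return (2 * g - r - b) / total

def classify_point(height_m, rgb, crop_min_height=0.3, exg_thresh=0.1):
    """Toy dual classifier: tall points are crop (maize); short points are
    weed if their color looks vegetative, otherwise soil microrelief."""
    if height_m > crop_min_height:
        return "crop"
    return "weed" if excess_green(*rgb) > exg_thresh else "soil"

labels = [classify_point(h, c) for h, c in
          [(0.50, (50, 200, 50)),    # tall, green  -> crop
           (0.08, (30, 180, 40)),    # short, green -> weed
           (0.05, (120, 100, 90))]]  # short, brown -> soil
```

Summing the voxel volumes of points labeled "weed" then yields the volumetric measurements that the paper correlates with weed biomass.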
Estimation of convective rain volumes utilizing the area-time-integral technique
NASA Technical Reports Server (NTRS)
Johnson, L. Ronald; Smith, Paul L.
1990-01-01
Interest in the possibility of developing useful estimates of convective rainfall with Area-Time Integral (ATI) methods is increasing. The basis of the ATI technique is the observed strong correlation between rainfall volumes and ATI values. This means that rainfall can be estimated simply by determining the ATI values, provided prior knowledge of the relationship to rain volume is available to calibrate the technique. Examples are provided of the application of the ATI approach to gage, radar, and satellite measurements. For radar data, the degree of transferability in time and among geographical areas is examined. Recent results on transferability of the satellite ATI calculations are presented.
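The calibration step at the heart of the ATI technique can be sketched as a one-parameter fit through the origin, V ≈ k × ATI; the numbers below are illustrative, not from the report:

```python
# Calibrate V = k * ATI from paired storm observations, then apply k to a
# new storm's ATI. All values are made-up for illustration.

def fit_ati_coefficient(ati_values, rain_volumes):
    """Least-squares slope through the origin for V = k * ATI."""
    num = sum(a * v for a, v in zip(ati_values, rain_volumes))
    den = sum(a * a for a in ati_values)
    return num / den

# Calibration storms: ATI in km^2*h, volume in 10^3 m^3 (hypothetical).
ati = [120.0, 300.0, 80.0, 510.0]
vol = [430.0, 1090.0, 290.0, 1840.0]
k = fit_ati_coefficient(ati, vol)
print(round(k, 2))          # fitted multiplier
print(round(k * 200.0, 1))  # volume estimate for a new storm with ATI = 200
```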
NASA Astrophysics Data System (ADS)
Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.
2015-12-01
Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS), the accurate determination of the hotspot's size and temperature is problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (the Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary below which most remotely sensed volcanic hotspots fall. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
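A minimal sketch of the Dual-Band idea for a sub-pixel hotspot, assuming two observed band radiances, a known background temperature, and graybody (unit emissivity) behavior; the grid-search solver and all numbers are our illustration, not the authors' processing chain:

```python
import math

# Physical constants for the Planck radiance law.
C1 = 2 * 6.62607015e-34 * (2.99792458e8) ** 2       # 2hc^2
C2 = 6.62607015e-34 * 2.99792458e8 / 1.380649e-23   # hc/k

def planck(wl_m, temp_k):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1)."""
    return C1 / (wl_m ** 5 * (math.exp(C2 / (wl_m * temp_k)) - 1.0))

def dual_band_unmix(l_mir, l_tir, wl_mir, wl_tir, t_background):
    """Recover (pixel fraction, hotspot temperature) from two band radiances.

    For each candidate hotspot temperature, the fraction that explains the
    MIR radiance is computed analytically, then scored against the TIR band."""
    best = (None, None, float("inf"))
    for t_hot in range(t_background + 10, 1800):
        b_hot, b_bg = planck(wl_mir, t_hot), planck(wl_mir, t_background)
        p = (l_mir - b_bg) / (b_hot - b_bg)
        if not 0.0 < p <= 1.0:
            continue
        l_tir_pred = (p * planck(wl_tir, t_hot)
                      + (1 - p) * planck(wl_tir, t_background))
        err = abs(l_tir_pred - l_tir)
        if err < best[2]:
            best = (p, t_hot, err)
    return best[0], best[1]

# Synthetic mixed pixel: 3 % of the pixel at 873 K (~600 C), rest at 300 K.
wl_mir, wl_tir, p_true, t_true, t_bg = 3.9e-6, 11.0e-6, 0.03, 873, 300
l_mir = p_true * planck(wl_mir, t_true) + (1 - p_true) * planck(wl_mir, t_bg)
l_tir = p_true * planck(wl_tir, t_true) + (1 - p_true) * planck(wl_tir, t_bg)
p_est, t_est = dual_band_unmix(l_mir, l_tir, wl_mir, wl_tir, t_bg)
print(round(p_est, 3), t_est)
```

The 3 % fraction used here deliberately matches the resolution boundary the experiment identified.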
NASA Technical Reports Server (NTRS)
Dewberry, B.
2000-01-01
Electrical impedance spectrometry involves measurement of the complex resistance of a load at multiple frequencies. With this information in the form of impedance magnitude and phase, or resistance and reactance, the basic structure or function of the load can be estimated. The "load" targeted for measurement and estimation in this study consisted of the water-bearing tissues of the human calf. It was proposed and verified that by measuring the electrical impedance of the human calf and fitting these data to a model of fluid compartments, the lumped-model volumes of the intracellular and extracellular spaces could be estimated. By performing this estimation over time, the volume dynamics during application of stimuli which affect the direction of gravity can be viewed. The resulting data can form a basis for further modeling and verification of cardiovascular and compartmental modeling of fluid reactions to microgravity, as well as countermeasures to the headward shift of fluid during head-down tilt or spaceflight.
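A hedged sketch of the lumped two-compartment estimation described above, assuming the common model of extracellular resistance Re in parallel with an intracellular branch (Ri in series with membrane capacitance), so that |Z| → Re at low frequency and |Z| → Re·Ri/(Re+Ri) at high frequency, and assuming cylinder-conductor volume scaling V = ρL²/R. All resistivities and measurements are illustrative:

```python
# Two-frequency compartment estimation (hypothetical numbers throughout).

def compartment_resistances(z_low, z_high):
    """Re from the low-frequency limit, Ri from the high-frequency limit."""
    r_e = z_low                                # membrane blocks current at DC
    r_i = z_high * r_e / (r_e - z_high)        # from Re*Ri/(Re+Ri) = z_high
    return r_e, r_i

def cylinder_volume_litres(rho_ohm_cm, length_cm, resistance_ohm):
    """Cylinder-conductor volume V = rho * L^2 / R, converted cm^3 -> L."""
    return rho_ohm_cm * length_cm ** 2 / resistance_ohm / 1000.0

z_low, z_high, calf_len = 60.0, 40.0, 35.0     # ohms, ohms, cm (illustrative)
r_e, r_i = compartment_resistances(z_low, z_high)
ecv = cylinder_volume_litres(40.0, calf_len, r_e)    # extracellular fluid
icv = cylinder_volume_litres(100.0, calf_len, r_i)   # intracellular fluid
print(round(r_i, 1), round(ecv, 2), round(icv, 2))
```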
Estimating stem volume and biomass of Pinus koraiensis using LiDAR data.
Kwak, Doo-Ahn; Lee, Woo-Kyun; Cho, Hyun-Kook; Lee, Seung-Ho; Son, Yowhan; Kafatos, Menas; Kim, So-Ra
2010-07-01
The objective of this study was to estimate the stem volume and biomass of individual trees using the crown geometric volume (CGV), which was extracted from small-footprint light detection and ranging (LiDAR) data. Attempts were made to analyze the stem volume and biomass of Korean Pine stands (Pinus koraiensis Sieb. et Zucc.) for three classes of tree density: low (240 N/ha), medium (370 N/ha), and high (1,340 N/ha). To delineate individual trees, extended maxima transformation and watershed segmentation image-processing methods were applied, as in one of our previous studies. As the next step, the crown base height (CBH) of individual trees was determined from the LiDAR point cloud data using k-means clustering. The LiDAR-derived stem volume can then be estimated on the basis of the proportional relationship between the CGV and stem volume. As a result, low tree-density plots had the best performance for LiDAR-derived CBH, CGV, and stem volume (R² = 0.67, 0.57, and 0.68, respectively) and accuracy was lowest for high tree-density plots (R² = 0.48, 0.36, and 0.44, respectively). For medium tree-density plots, accuracy was R² = 0.51, 0.52, and 0.62, respectively. The LiDAR-derived stem biomass can be predicted from the stem volume using the wood basic density of coniferous trees (0.48 g/cm³), and the LiDAR-derived above-ground biomass can then be estimated using the biomass conversion and expansion factor (BCEF, 1.29) proposed by the Korea Forest Research Institute (KFRI). PMID:20182905
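The final conversion chain can be sketched directly from the factors quoted in the abstract. The input stem volume is a made-up example, and applying the BCEF to the density-derived stem biomass is our reading of the text:

```python
# Stem volume -> stem biomass -> above-ground biomass, using the quoted
# factors (0.48 g/cm^3 = 0.48 Mg/m^3 basic density; BCEF = 1.29).

WOOD_DENSITY = 0.48   # Mg per m^3, coniferous basic density (from abstract)
BCEF = 1.29           # KFRI biomass conversion and expansion factor

def stem_biomass_mg(stem_volume_m3):
    return stem_volume_m3 * WOOD_DENSITY

def above_ground_biomass_mg(stem_volume_m3):
    # Assumption: BCEF expands stem biomass to total above-ground biomass.
    return stem_biomass_mg(stem_volume_m3) * BCEF

v = 0.85  # m^3, hypothetical LiDAR-derived stem volume of one tree
print(round(stem_biomass_mg(v), 3), round(above_ground_biomass_mg(v), 3))
# -> 0.408 Mg stem biomass, 0.526 Mg above-ground biomass
```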
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In mixed old-growth broadleaf stands of the Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within a plot: the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected with a systematic random design in Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) measures the height of the first surface on the ground, including terrain features, trees, buildings, etc., providing a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes varied from 1 to 10 m in 1 m steps. DSMs were checked manually for probable errors. For the pixels corresponding to each ground sample, the standard deviation and range of DSM values were calculated. For modeling, non-linear regression was used. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor. The relative bias and RMSE of the estimates were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaf forests, these results are encouraging. One major problem with this method occurs when the tree canopy cover is completely closed: in this situation, the standard deviation of height is low while stand volume is high. Future studies could examine applying forest stratification.
Seevers, P.M.; Sadowski, F.C.; Lauer, D.T.
1990-01-01
Retrospective satellite image data were evaluated for their ability to demonstrate the influence of center-pivot irrigation development in western Nebraska on spectral change and climate-related factors for the region. Periodic images of an albedo index and a normalized difference vegetation index (NDVI) were generated from calibrated Landsat multispectral scanner (MSS) data and used to monitor spectral changes associated with irrigation development from 1972 through 1986. The albedo index was not useful for monitoring irrigation development. For the NDVI, it was found that proportions of counties in irrigated agriculture, as discriminated by a threshold, were more highly correlated with reported ground estimates of irrigated agriculture than were county mean greenness values. A similar result was achieved when using coarse resolution Advanced Very High Resolution Radiometer (AVHRR) image data for estimating irrigated agriculture. The NDVI images were used to evaluate a procedure for making areal estimates of actual evapotranspiration (ET) volumes. Estimates of ET volumes for test counties, using reported ground acreages and corresponding standard crop coefficients, were correlated with the estimates of ET volume using crop coefficients scaled to NDVI values and pixel counts of crop areas. These county estimates were made under the assumption that soil water availability was unlimited. For nonirrigated vegetation, this may result in over-estimation of ET volumes. Ground information regarding crop types and acreages is required to derive the NDVI scaling factor. Potential ET, estimated with the Jensen-Haise model, is common to both methods. These results, achieved with both MSS and AVHRR data, show promise for providing climatologically important land surface information for regional and global climate models. © 1990 Kluwer Academic Publishers.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-06
... California; Use of Estimated Trade Demand to Compute Volume Regulation Percentages AGENCY: Agricultural... estimated trade demand figure to compute volume regulation percentages for 2010- 11 crop Natural (sun-dried... the date of the entry of the ruling. This proposal invites comments on using an estimated trade...
Radar volume reflectivity estimation using an array of ground-based rainfall drop size detectors
NASA Astrophysics Data System (ADS)
Lane, John; Merceret, Francis; Kasparis, Takis; Roy, D.; Muller, Brad; Jones, W. Linwood
2000-08-01
Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained by these single point measurements has historically been important in the study of applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that only gravity and terminal velocity due to atmospheric drag within the sampling volume influence hydrometeor dynamics. The sampling volume is characterized by wind velocities, which are input parameters to the 3D-DSD model, composed of vertical and horizontal components. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
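On the disdrometer side, the reflectivity factor follows from the DSD by Z = ∫ N(D) D⁶ dD. A sketch with the classic Marshall-Palmer exponential DSD; these parameters are standard textbook values, not data from the Melbourne storm:

```python
import math

# Radar reflectivity factor Z (mm^6 m^-3) from an exponential DSD,
# integrated numerically and reported in dBZ.

N0 = 8000.0  # m^-3 mm^-1, Marshall-Palmer intercept

def marshall_palmer_dsd(d_mm, rain_rate_mm_h):
    """Drop concentration N(D) for the Marshall-Palmer exponential DSD."""
    lam = 4.1 * rain_rate_mm_h ** -0.21   # slope parameter, mm^-1
    return N0 * math.exp(-lam * d_mm)

def reflectivity_dbz(rain_rate_mm_h, d_max=8.0, bins=800):
    """Z = sum N(D) D^6 dD over the drop spectrum, in dBZ."""
    dd = d_max / bins
    z = sum(marshall_palmer_dsd((i + 0.5) * dd, rain_rate_mm_h)
            * ((i + 0.5) * dd) ** 6 * dd
            for i in range(bins))
    return 10.0 * math.log10(z)

for r in (1.0, 10.0, 50.0):
    print(r, "mm/h ->", round(reflectivity_dbz(r), 1), "dBZ")
```

For this DSD the integral reproduces the familiar Z ≈ 200 R^1.6 power law, so 1 mm/h lands near 25 dBZ.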
NASA Astrophysics Data System (ADS)
De Vuyst, Florian
2004-01-01
This exploratory work presents first results of a novel approach for the numerical approximation of solutions of hyperbolic systems of conservation laws. The objective is to define stable and "reasonably" accurate numerical schemes while being free from any upwind process and from any computation of derivatives or mean Jacobian matrices; that is, we only want to perform flux evaluations. This would be useful for "complicated" systems like those of two-phase models, where solutions of Riemann problems are hard, if not impossible, to compute. For Riemann or Roe-like solvers, each fluid model needs the particular computation of the Jacobian matrix of the flux, and the hyperbolicity property, which can be conditional for some of these models, means the matrices are not R-diagonalizable everywhere in the admissible state space. In this paper, we instead propose numerical schemes where stability is obtained using convexity considerations. A certain rate of accuracy is also expected. For that, we build numerical hybrid fluxes that are convex combinations of the second-order Lax-Wendroff scheme flux and the first-order modified Lax-Friedrichs scheme flux, with an "optimal" combination rate that ensures both minimal numerical dissipation and good accuracy. The resulting scheme is a central-scheme-like method. We also need, and propose, a definition of local dissipation by convexity for hyperbolic or elliptic-hyperbolic systems. This convexity argument allows us to overcome the difficulty of nonexistence of classical entropy-flux pairs for certain systems. We emphasize the systematic feature of the method, which can be quickly implemented or adapted to any kind of system, with general analytical or data-tabulated equations of state. The numerical results presented in the paper are not superior to many existing state-of-the-art numerical methods for conservation laws such as ENO, MUSCL or the central schemes of Tadmor and coworkers. The interest is rather
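The hybrid-flux construction can be sketched for the scalar Burgers equation u_t + (u²/2)_x = 0, using only flux evaluations. A fixed blending weight θ is used here for simplicity, whereas the paper selects the combination rate locally from a dissipation/convexity criterion:

```python
# Hybrid flux F = theta*F_LW + (1-theta)*F_LF for Burgers' equation,
# with the Richtmyer two-step Lax-Wendroff flux (no Jacobians needed)
# and a Lax-Friedrichs-type dissipative flux. Fixed theta is our
# simplification of the paper's locally optimized combination rate.

def flux(u):
    return 0.5 * u * u

def step(u, dx, dt, theta=0.8):
    n = len(u)
    f_iface = []
    for i in range(n - 1):
        ul, ur = u[i], u[i + 1]
        # Richtmyer two-step Lax-Wendroff flux: predict a midpoint state,
        # then evaluate the flux there (flux evaluations only).
        u_half = 0.5 * (ul + ur) - dt / (2 * dx) * (flux(ur) - flux(ul))
        f_lw = flux(u_half)
        # Lax-Friedrichs flux with grid-based dissipation.
        f_lf = 0.5 * (flux(ul) + flux(ur)) - 0.5 * dx / dt * (ur - ul)
        f_iface.append(theta * f_lw + (1 - theta) * f_lf)
    new = u[:]  # boundary cells held fixed
    for i in range(1, n - 1):
        new[i] = u[i] - dt / dx * (f_iface[i] - f_iface[i - 1])
    return new

# Propagate a right-moving shock: u=1 on the left, u=0 on the right.
nx, dx, dt = 100, 0.01, 0.004   # CFL = u_max*dt/dx = 0.4
u = [1.0 if i < nx // 2 else 0.0 for i in range(nx)]
for _ in range(50):
    u = step(u, dx, dt)
print(round(u[10], 3), round(u[-10], 3))  # far-field values remain near 1 and 0
```

By Rankine-Hugoniot the shock travels at speed 1/2, so after t = 0.2 it sits about 10 cells right of its initial position.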
Gas hydrate volume estimations on the South Shetland continental margin, Antarctic Peninsula
Jin, Y.K.; Lee, M.W.; Kim, Y.; Nam, S.H.; Kim, K.J.
2003-01-01
Multi-channel seismic data acquired on the South Shetland margin, northern Antarctic Peninsula, show that Bottom Simulating Reflectors (BSRs) are widespread in the area, implying large volumes of gas hydrates. In order to estimate the volume of gas hydrate in the area, interval velocities were determined using a 1-D velocity inversion method and porosities were deduced from their relationship with sub-bottom depth for terrigenous sediments. Because data such as well logs are not available, we made two baseline models for the velocities and porosities of non-gas hydrate-bearing sediments in the area, considering the velocity jump observed at the shallow sub-bottom depth due to joint contributions of gas hydrate and a shallow unconformity. The difference between the results of the two models is not significant. The parameters used to estimate the total volume of gas hydrate in the study area were 145 km of total length of BSRs identified on seismic profiles, 350 m thickness and 15 km width of gas hydrate-bearing sediments, and 6.3% of the average volume gas hydrate concentration (based on the second baseline model). Assuming that gas hydrates exist only where BSRs are observed, the total volume of gas hydrates along the seismic profiles in the area is about 4.8 × 10¹⁰ m³ (7.7 × 10¹² m³ volume of methane at standard temperature and pressure).
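A back-of-envelope check of the quoted totals; the 160:1 methane expansion ratio at STP is a commonly used value that we assume here, since the abstract does not state the exact ratio used:

```python
# Verify the abstract's volume arithmetic: slab of hydrate-bearing sediment
# times average hydrate concentration, then expansion to methane at STP.

length_m = 145e3        # total BSR length on profiles
width_m = 15e3          # width of hydrate-bearing sediments
thickness_m = 350.0     # thickness of hydrate-bearing sediments
concentration = 0.063   # average volume concentration of hydrate

hydrate_m3 = length_m * width_m * thickness_m * concentration
methane_m3 = hydrate_m3 * 160.0   # assumed STP expansion ratio per unit hydrate
print(f"{hydrate_m3:.1e}")  # 4.8e+10, matching the abstract
print(f"{methane_m3:.1e}")  # 7.7e+12
```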
F. SOUTO; A HEGER
2001-02-01
Aqueous homogeneous solution reactors have been proposed for the production of medical isotopes. However, the reactivity effects of fuel solution volume change, due to formation of radiolytic gas bubbles and thermal expansion, have to be mitigated to allow steady-state operation of solution reactors. The results of the free-run experiments analyzed indicate that the proposed model to estimate the void volume due to radiolytic gas bubbles and thermal expansion in solution reactors can accurately describe the observed behavior during the experiments. This void volume due to radiolytic gas bubbles and fuel solution thermal expansion can then be used in the investigation of reactivity effects in fissile solutions. In addition, these experiments confirm that the radiolytic gas bubbles are formed at a higher temperature than the fuel solution temperature. These experiments also indicate that the mole-weighted average size of the radiolytic gas bubbles in uranyl fluoride solutions is about 1 µm. Finally, it should be noted that another model, currently under development, would simulate the power behavior during the transient given the initial fuel solution level and density. That model is based on Monte Carlo simulation with the MCNP computer code [Briesmeister, 1997] to obtain the reactor reactivity as a function of the fuel solution density, which, in turn, changes due to thermal expansion and radiolytic gas bubble formation.
McQuaid, Sarah J; Kijewski, Marie Foley; Moore, Stephen C
2013-01-01
A new method of compensating for tissue-fraction and count-spillover effects, which requires tissue segmentation only within a small volume surrounding the primary lesion of interest, was evaluated for SPECT imaging. Tissue-activity concentration estimates are obtained by fitting the measured projection data to a statistical model of the segmented tissue projections. Multiple realizations of two simulated human-torso phantoms, each containing 20 spherical “tumours”, 1.6 cm in diameter, with tumour-to-background ratios of 8:1 and 4:1, were simulated. Estimates of tumour- and background-activity concentration values for homogeneous as well as inhomogeneous tissue activities were compared to standard SUV metrics on the basis of accuracy and precision. For perfectly registered, high-contrast, superficial lesions in a homogeneous background without scatter, the method yielded accurate (<0.4% bias) and precise (<6.1%) recovery of the simulated activity values, significantly outperforming the SUV metrics. Tissue inhomogeneities, greater tumour depth and lower contrast ratios degraded precision (up to 11.7%), but the estimates remained almost unbiased. The method was comparable in accuracy but more precise than a well-established matrix inversion approach, even when errors in tumor size and position were introduced to simulate moderate inaccuracies in segmentation and image registration. Photon scatter in the object did not significantly affect the accuracy or precision of the estimates. PMID:22241591
A comparison of gradient estimation methods for volume rendering on unstructured meshes.
Correa, Carlos D; Hero, Robert; Ma, Kwan-Liu
2011-03-01
This paper presents a study of gradient estimation methods for rendering unstructured-mesh volume data. Gradient estimation is necessary for rendering shaded isosurfaces and specular highlights, which provide important cues for shape and depth. Gradient estimation has been widely studied and deployed for regular-grid volume data to achieve local illumination effects, but much less so for unstructured-mesh data. As a result, most of the unstructured-mesh volume visualizations made so far were unlit. In this paper, we present a comprehensive study of gradient estimation methods for unstructured meshes with respect to their cost and performance. Through a number of benchmarks, we discuss the effects of mesh quality and scalar function complexity on the accuracy of the reconstruction, and their impact on lighting-enabled volume rendering. Based on our study, we also propose two heuristic improvements to the gradient reconstruction process. The first heuristic improves the rendering quality with a hybrid algorithm that combines the results of multiple reconstruction methods, based on the properties of a given mesh. The second heuristic improves the efficiency of the GPU implementation by restricting the computation of the gradient to a fixed-size local neighborhood. PMID:21233515
Cost and price estimate of Brayton and Stirling engines in selected production volumes
NASA Technical Reports Server (NTRS)
Fortgang, H. R.; Mayers, H. F.
1980-01-01
The methods used to determine the production costs and required selling price of Brayton and Stirling engines modified for use in solar power conversion units are presented. Each engine part, component and assembly was examined and evaluated to determine the costs of its material and the method of manufacture based on specific annual production volumes. Cost estimates are presented for both the Stirling and Brayton engines in annual production volumes of 1,000, 25,000, 100,000 and 400,000. At annual production volumes above 50,000 units, the costs of both engines are similar, although the Stirling engine costs are somewhat lower. It is concluded that modifications to both the Brayton and Stirling engine designs could reduce the estimated costs.
Gross-merchantable timber volume estimation using an airborne lidar system
NASA Technical Reports Server (NTRS)
Maclean, G. A.; Krabill, W. B.
1986-01-01
A preliminary study to determine the utility of an airborne laser as a tool for use by forest managers to estimate gross-merchantable timber volume was conducted near the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center, Wallops Flight Facility utilizing the NASA Airborne Oceanographic Lidar (AOL) system. Measured timber volume was regressed against the cross-sectional area of an AOL-generated profile of forest at the same location. The AOL profile area was found to be a very significant variable in the estimation of gross-merchantable timber volume. Significant improvements were obtained when the data were stratified by species. The overall R-squared value obtained was 0.921 with the regression significant at the one percent level.
NASA Astrophysics Data System (ADS)
Kim, K. M.
2016-06-01
Traditional field methods for measuring tree heights are often too costly and time consuming. An alternative remote sensing approach is to measure tree heights from digital stereo photographs, which is more practical for forest managers and less expensive than LiDAR or synthetic aperture radar. This work proposes an estimation of stand height and forest volume (m³/ha) using a normalized digital surface model (nDSM) from high-resolution stereo photography (25 cm resolution) and a forest type map. The study area was located in the Mt. Maehwa model forest in Hong Chun-Gun, South Korea. The forest type map has four attributes per stand: major species, age class, DBH class, and crown density class. Overlapping aerial photos were taken in September 2013 and a digital surface model (DSM) was created by photogrammetric methods (aerial triangulation, digital image matching). Then, a digital terrain model (DTM) was created by filtering the DSM, and the DTM was subtracted from the DSM pixel by pixel, resulting in an nDSM which represents object heights (buildings, trees, etc.). Two independent variables from the nDSM were used to estimate forest stand volume: crown density (%) and stand height (m). First, crown density was calculated using a canopy segmentation method considering live crown ratio. Next, stand height was produced by averaging individual tree heights in a stand using Esri's ArcGIS and the USDA Forest Service's FUSION software. Finally, stand volume was estimated and mapped using aerial photo stand volume equations by species, which have two independent variables, crown density and stand height. South Korea has a historical imagery archive which can show forest change over 40 years of successful forest rehabilitation. In a future study, a forest volume change map (1970s-present) will be produced using this stand volume estimation method and the historical imagery archive.
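The nDSM construction and the two stand predictors can be sketched as follows; the toy grids, the 2 m canopy cutoff, and the volume equation coefficients are all illustrative placeholders, not values from the study:

```python
# nDSM = DSM - DTM, pixel by pixel; then crown density and mean stand height
# feed a placeholder aerial-photo volume equation.

dsm = [[212.0, 214.5, 215.1],
       [211.2, 213.8, 216.0],
       [210.5, 210.9, 214.2]]   # surface elevations, m
dtm = [[210.0, 210.2, 210.4],
       [210.1, 210.3, 210.5],
       [210.2, 210.4, 210.6]]   # terrain elevations, m

ndsm = [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

canopy = [h for row in ndsm for h in row if h >= 2.0]   # assumed tree cutoff
n_pixels = sum(len(row) for row in ndsm)
crown_density = 100.0 * len(canopy) / n_pixels          # percent
stand_height = sum(canopy) / len(canopy)                # metres

def stand_volume_m3_ha(density_pct, height_m, b0=0.02, b1=1.1):
    """Placeholder form of a species-specific aerial photo volume equation."""
    return b0 * density_pct * height_m ** b1

print(round(crown_density, 1), round(stand_height, 2))
```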
Estimation of single cell volume from 3D confocal images using automatic data processing
NASA Astrophysics Data System (ADS)
Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.
2012-06-01
Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of single-cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess the cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings and is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool to investigate changes in cell volume during normal as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.
NASA Astrophysics Data System (ADS)
Liang, Xian-hua; Sun, Wei-dong
2011-06-01
Inventory checking is one of the most significant tasks for grain reserves, and plays a very important role in the macro-control of food supply and food security. A simple, fast and accurate method is needed to obtain internal structure information and further estimate the volume of the grain storage. In our developed system, a specially designed multi-site laser scanning system is used to acquire range data clouds of the internal structure of the grain storage. Because of the seriously uneven distribution of the range data, the data are first preprocessed by an adaptive re-sampling method to reduce data redundancy as well as noise. The range data are then segmented and useful features, such as plane and cylinder information, are extracted. With these features a coarse registration between all of the single-site range data is done, and then an Iterative Closest Point (ICP) algorithm is carried out to achieve fine registration. Taking advantage of the facts that the structure of the grain storage is well defined and that the types of storage are limited, a fast automatic registration method based on a priori models is proposed to register the multi-site range data more efficiently. After the integration of the multi-site range data, the grain surface is finally reconstructed by a Delaunay-based algorithm and the grain volume is estimated by numerical integration. This proposed method has been applied to two common types of grain storage, and experimental results show that it is effective and accurate, and that it also avoids the cumulative effect of errors when registering the overlapped areas pairwise.
Rahimi, A.F.; Kato, K.; Stadlin, W.; Ansari, S.H. |; Brandwajn, V.; Bose, A.
1995-04-01
The single largest source of error in state estimation, an inadequate external system model, affects the usefulness of energy management system (EMS) applications. EPRI has developed comprehensive guidelines to help utilities enhance external system modeling for state estimation and has demonstrated use of the guidelines on three host utility systems without data exchange. These guidelines address network topology, analog measurement, inter-utility data exchange, and application procedures and recommendations. They include specific guidelines for utility types and network analysis applications, and validate the Normalized Level of Impact (NLI) as a key index for external system modeling. This report provides valuable insight to the veteran, as well as first-time state estimator implementors and users. A useful reference source, the extensive guidelines supply answers and helpful advice, as well as recommendations for future work. Volume 1 contains external system modeling guidelines, and Volume 2 is a summary of responses to the utility and EMS supplier survey questionnaire used in this project.
NASA Technical Reports Server (NTRS)
Bagri, Durgadas S.; Majid, Walid
2009-01-01
At present, spacecraft angular position with the Deep Space Network (DSN) is determined using group delay estimates from very long baseline interferometer (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to half a fringe cycle accuracy using time variations between measured and calculated phases, as the Earth rotates, on DSN VLBI baseline(s). Combining the fringe location of the target with the phase allows a high-accuracy estimate of the spacecraft's angular position. This can be achieved using telemetry signals of at least 4-8 MSamples/sec data rate, or DOR tones.
A progressive black top hat transformation algorithm for estimating valley volumes on Mars
NASA Astrophysics Data System (ADS)
Luo, Wei; Pingel, Thomas; Heo, Joon; Howard, Alan; Jung, Jaehoon
2015-02-01
The depth of valley incision and valley volume are important parameters in understanding the geologic history of early Mars, because they are related to the amount of sediment eroded and the quantity of water needed to create the valley networks (VNs). With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. Previous studies typically use a single window size for extracting the valley features and a single threshold value for removing noise, resulting in finer features such as tributaries not being extracted and in underestimation of valley volume. Inspired by similar algorithms used in LiDAR data analysis to remove above-ground features and obtain bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the window size is progressively increased to extract valleys of different orders. In addition, a slope factor is introduced so that the noise threshold can be automatically adjusted for windows of different sizes. Independently derived VN lines were used to select mask polygons that spatially overlap the VN lines. Volume is calculated as the sum of valley depth within the selected mask multiplied by cell area. Application of the PBTH to a simulated landform (for which the amount of erosion is known) achieved an overall relative accuracy of 96%, compared with only 78% for BTH. Application of PBTH to Ma'adim Vallis on Mars not only produced total volume estimates consistent with previous studies, but also revealed the detailed spatial distribution of valley depth. The highly automated PBTH algorithm shows great promise for estimating the volume of VNs on Mars on a global scale, which is important for understanding the planet's early hydrologic cycle.
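The PBTH procedure described above (progressively larger windows, a noise threshold scaled by a slope factor, and volume as summed depth times cell area) can be sketched as follows. The window schedule, the linear threshold rule, and the function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy import ndimage

def pbth_valley_depth(dem, window_sizes, slope_factor, cell_area):
    """Progressive Black Top Hat: estimate valley depth and volume from a DEM.

    A sketch of the PBTH idea; the paper's exact thresholding rule and
    window schedule may differ.
    """
    depth = np.zeros_like(dem, dtype=float)
    for w in window_sizes:                       # progressively larger windows
        footprint = np.ones((w, w))
        # Black top hat = morphological closing minus the original surface
        bth = ndimage.grey_closing(dem, footprint=footprint) - dem
        # Noise threshold grows with window size via the slope factor (assumed linear)
        bth[bth < slope_factor * w] = 0.0
        depth = np.maximum(depth, bth)           # keep the deepest estimate per cell
    volume = depth.sum() * cell_area             # depth integrated over cell area
    return depth, volume
```

On a synthetic DEM with a trench of known volume, the summed depth reproduces the excavated volume, which mirrors the simulated-landform check described in the abstract.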
Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng
2015-01-01
Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
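Before any sensor fusion, a magnetometer heading is typically tilt-compensated using the accelerometer's gravity estimate. A minimal sketch of that preprocessing step is below, assuming z-up axes at rest (az ≈ +1 g) and standard roll/pitch conventions; the paper's quaternion UKF, which fuses this with gyroscope rates, is not reproduced here.

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Heading (degrees, 0-360) from accelerometer and magnetometer readings.

    Axis conventions are assumptions: z-up at rest (az ~ +1 g), x forward.
    Only the tilt-compensation step is shown, not the UKF fusion.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic field vector into the horizontal plane
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-myh, mxh)) % 360.0
```

With the device level (roll = pitch = 0), this reduces to the familiar planar compass formula atan2(-my, mx).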
A semi-automatic method for left ventricle volume estimate: an in vivo validation study
NASA Technical Reports Server (NTRS)
Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.
2001-01-01
This study aims to validate left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data, following a defined validation protocol. The protocol was applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming MRI estimates (x) as a reference, an excellent correlation was found with the volume measured by the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable to human hearts in clinical practice.
Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.
2015-01-01
In this paper, we propose the use of the blanket fractal dimension (BFD) to estimate tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times: first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values obtained with a Respitrace system, which was considered the reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we also estimated tidal volume from the same sounds using SE. The estimation performance of the BFD and SE methods was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE, with an NRMSE at least twice as small. The smallest NRMSE of 15.877% ± 9.246% (mean ± standard deviation) was obtained with the BFD and the exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
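The NRMSE used to score both methods can be computed as below. Normalizing by the range of the reference is one common convention; the paper may normalize differently (e.g., by the mean), so this is a sketch rather than the authors' exact metric.

```python
import numpy as np

def nrmse(estimated, reference):
    """Normalized root-mean-squared error, in percent.

    Normalization by the range of the reference values is assumed here.
    """
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((estimated - reference) ** 2))
    return 100.0 * rmse / (reference.max() - reference.min())
```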
Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris
2012-01-01
Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95%CI = 652 – 964) and 704 birds in 2011 (95%CI = 579 – 837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95%CI = 2,037 – 3,965) and 2,461 birds in 2011 (95%CI = 1,682 – 3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, to produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving precision and the resulting population size and trend estimation. The method is also better suited to the steep and uneven terrain of Nihoa.
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus in this investigation is put on the de-correlation of different geodetic parameter groups due to the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
Mello, Beatriz; Schrago, Carlos G
2014-01-01
Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333
Belzer, D.B.; Serot, D.E.; Kellogg, M.A.
1991-03-01
Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner to allow evaluation of current trends in capacity and the consideration of uncertainties in various data inputs and in engineering assumptions. This study developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key nonmanufacturing sectors. This volume lays out the general concepts and methods used to develop the emergency operating estimates. The historical analysis of capacity extends from 1974 through 1986. Some nonmanufacturing industries are included. In addition to mining and utilities, key industries in transportation, communication, and services were analyzed. Physical capacity and efficiency of production were measured. 3 refs., 2 figs., 12 tabs. (JF)
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The objective of this contract was to provide definitions of alternate propulsion systems for both earth-to-orbit (ETO) and in-space vehicles (upper stages and space transfer vehicles). For such propulsion systems, technical data describing performance, weight, dimensions, etc., were provided, along with programmatic information such as cost, schedule, needed facilities, etc. Advanced technology and advanced development needs were determined and provided. This volume separately presents the various program cost estimates that were generated under three tasks: the F-1A Restart Task, the J-2S Restart Task, and the SSME Upper Stage Use Task. The conclusions, technical results, and program cost estimates are described in more detail in Volume I - Executive Summary and in the individual Final Task Reports.
Baseline estimate of the retained gas volume in Tank 241-C-106
Stewart, C.W.; Chen, G.
1998-06-01
This report presents the results of a study of the retained gas volume in Hanford Tank 241-C-106 (C-106) using the barometric pressure effect method. This estimate is required to establish the baseline conditions for sluicing the waste from C-106 into AY-102, scheduled to begin in the fall of 1998. The barometric pressure effect model is described, and the data reduction and detrending techniques are detailed. Based on the response of the waste level to the larger barometric pressure swings that occurred between October 27, 1997, and March 4, 1998, the best estimate and conservative (99% confidence) retained gas volumes in C-106 are 24 scm (840 scf) and 50 scm (1,770 scf), respectively. This is equivalent to average void fractions of 0.025 and 0.053, respectively.
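The barometric pressure effect method rests on isothermal compression of the retained gas: a waste-level change dL over surface area A for a pressure change dP implies V_gas = -A·P·(dL/dP). A minimal sketch under that assumption follows; the report's detrending of the level record is omitted, and the function name and units are illustrative.

```python
import numpy as np

def retained_gas_volume(level_m, pressure_kpa, surface_area_m2):
    """Estimate retained gas volume from the barometric pressure effect.

    Fits the waste-level response to barometric pressure swings; for an
    ideal gas compressed isothermally, V_gas = -A * P * dL/dP. This is a
    sketch of the method named in the abstract, not the report's full
    data-reduction and detrending procedure.
    """
    # Least-squares slope of level vs. pressure (expected to be negative)
    slope, _ = np.polyfit(pressure_kpa, level_m, 1)
    return -surface_area_m2 * np.mean(pressure_kpa) * slope
```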
Wei, Hua-Liang; Zheng, Ying; Pan, Yi; Coca, Daniel; Li, Liang-Min; Mayhew, J E W; Billings, Stephen A
2009-06-01
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV. PMID:19174333
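Classical TLS, which the RTLS method extends, treats noise in both the regressor matrix and the response, and is solved from the SVD of the augmented matrix [A | b]. A minimal sketch is below; the regularization term that distinguishes RTLS is not shown.

```python
import numpy as np

def total_least_squares(A, b):
    """Classical TLS fit of b ~ A x when both A and b carry noise.

    The solution comes from the right singular vector associated with the
    smallest singular value of [A | b]. A sketch of plain TLS only; the
    paper's RTLS adds regularization on top of this.
    """
    n = A.shape[1]
    Z = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]            # right singular vector for the smallest singular value
    return -v[:n] / v[n]  # [x; -1] direction, rescaled to give x
```

For noise-free, consistent data, TLS coincides with ordinary least squares, which makes it easy to sanity-check.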
Monitoring and Estimation of Reservoir Water Volume using Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Bhat, Nagaraj; Gouda, Krushna Chandra; Vh, Manumohan; Bhat, Reshma
2015-04-01
Water reservoirs are the main source of water supply for many settlements, as well as for power generation, so reservoir water volume and extent need to be monitored at regular intervals for efficient usage and to avoid disasters such as extreme rainfall events and floods. Reservoirs are generally remotely located, which makes them difficult to monitor well. With the growth of remote sensing and GIS in HPC environments and of modeling techniques, however, it is possible to monitor, estimate, and even predict reservoir water volumes in advance using numerical modeling and satellite remote sensing data. This work monitors and estimates the volume of water in the Krishna Raja Sagar (KRS) reservoir in Karnataka state, India. Multispectral images from different sources, such as Landsat TRS and IRS LISS III (IRS: Indian Remote Sensing; LISS: Linear Imaging Self-Scanning), and a Digital Elevation Model (DEM) from ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are used. The methodology involves GIS and image processing techniques such as mosaicing and georeferencing the raw satellite data, identifying the reservoir water level, and segmenting the waterbody using pixel-level analysis. Calculating area and depth for each pixel, the total water volume is computed with an empirical model developed from past validated data. The water-spread area calculated by water indexing is converted into a vector polygon using ArcGIS tools. Water volumes obtained by this method were compared with ground-based observations of the reservoir, and the comparison matches well in 80% of cases.
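The per-pixel calculation described above can be sketched as follows, assuming a DEM of the reservoir bed and an observed water level; the empirical model calibrated on past data is replaced here by a plain depth-times-area sum, so this illustrates the geometry only.

```python
import numpy as np

def reservoir_area_volume(dem, water_level, pixel_area_m2):
    """Water-spread area and volume from a bed DEM and an observed water level.

    Depth at each submerged pixel is (water_level - ground elevation);
    volume is depth summed over the water mask times pixel area. A
    simplified sketch of the pixel-level approach in the abstract.
    """
    depth = water_level - dem
    depth[depth < 0] = 0.0              # pixels above the water line hold no water
    area = np.count_nonzero(depth) * pixel_area_m2
    volume = depth.sum() * pixel_area_m2
    return area, volume
```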
Automatic atlas-based volume estimation of human brain regions from MR images
Andreasen, N.C.; Rajarethinam, R.; Cizadlo, T.; Arndt, S.
1996-01-01
MRI offers many opportunities for noninvasive in vivo measurement of structure-function relationships in the human brain. Although automated methods are now available for whole-brain measurements, an efficient and valid automatic method for volume estimation of subregions such as the frontal or temporal lobes is still needed. We adapted the Talairach atlas to the study of brain subregions. We supplemented the atlas with additional boxes to include the cerebellum. We assigned all the boxes to 1 of 12 regions of interest (ROIs) (frontal, parietal, temporal, and occipital lobes, cerebellum, and subcortical regions on the right and left sides of the brain). Using T1-weighted MR scans collected with an SPGR sequence (slice thickness = 1.5 mm), we manually traced these ROIs and produced volume estimates. We then transformed the scans into Talairach space and compared the volumes produced by the two methods ("traced" versus "automatic"). The traced measurements were considered the "gold standard" against which the automatic measurements were compared. The automatic method was found to produce measurements nearly identical to the traced method. We compared absolute measurements of volume produced by the two methods, as well as the sensitivity and specificity of the automatic method. We also compared measurements of cerebral blood flow obtained through [15O]H2O PET studies in a sample of nine subjects. Absolute measurements of volume produced by the two methods were very similar, and the sensitivity and specificity of the automatic method were found to be high for all regions. The flow values were also found to be very similar by both methods. The automatic atlas-based method for measuring the volume of brain subregions produces results that are similar to those of manual techniques. 39 refs., 4 figs., 3 tabs.
A Progressive Black Top Hat Transformation Algorithm for Estimating Valley Volumes from DEM Data
NASA Astrophysics Data System (ADS)
Luo, W.; Pingel, T.; Heo, J.; Howard, A. D.
2013-12-01
The amount of valley incision and valley volume are important parameters in geomorphology and hydrology research, because they are related to the amount of erosion (and thus the volume of sediments) and the amount of water needed to create the valley. This is the case not only for terrestrial research but also for planetary research, such as determining how much water was once on Mars. With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. However, previous studies typically use a single structuring element size for extracting the valley features and a single threshold value for removing noise, resulting in some finer features such as tributaries not being extracted and in underestimation of valley volume. Inspired by similar algorithms used in LiDAR data analysis to separate above-ground features from bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the structuring element size is progressively increased to extract valleys of different orders. In addition, a slope-based threshold was introduced to automatically adjust the threshold values for structuring elements of different sizes. Connectivity and shape parameters of the masked regions were used to keep the long linear valleys while removing other, smaller non-connected regions. Preliminary application of the PBTH to the Grand Canyon and two sites on Mars has produced promising results. More testing and fine-tuning is in progress. The ultimate goal of the project is to apply the algorithm to estimate the volume of valley networks on Mars and the volume of water needed to form the valleys we observe today, and thus infer the nature of the hydrologic cycle on early Mars. The project is funded by NASA's Mars Data Analysis program.
How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates
ERIC Educational Resources Information Center
Otterbach, Steffen; Sousa-Poza, Alfonso
2010-01-01
This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…
Marshall, Brian D; Neymark, Leonid A; Peterman, Zell E
2003-01-01
Low-temperature calcite and opal record the past seepage of water into open fractures and lithophysal cavities in the unsaturated zone at Yucca Mountain, Nevada, site of a proposed high-level radioactive waste repository. Systematic measurements of calcite and opal coatings in the Exploratory Studies Facility (ESF) tunnel at the proposed repository horizon are used to estimate the volume of calcite at each site of calcite and/or opal deposition. By estimating the volume of water required to precipitate the measured volumes of calcite in the unsaturated zone, seepage rates of 0.005 to 5 liters/year (l/year) are calculated at the median and 95th percentile of the measured volumes, respectively. These seepage rates are at the low end of the range of seepage rates from recent performance assessment (PA) calculations, confirming the conservative nature of the performance assessment. However, the distribution of the calcite and opal coatings indicates that a much larger fraction of the potential waste packages would be contacted by this seepage than is calculated in the performance assessment. PMID:12714293
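The conversion from a deposited calcite volume to a seepage rate amounts to a mass balance: the calcite mass divided by the dissolved CaCO3 load carried per litre of water, spread over the deposition time. A sketch with illustrative density and concentration values (assumptions, not the paper's calibrated figures):

```python
def seepage_rate_l_per_yr(calcite_volume_cm3, age_yr,
                          density_g_cm3=2.71, caco3_mg_per_l=150.0):
    """Water volume per year needed to precipitate a measured calcite volume.

    Assumes each litre of seepage deposits its full dissolved CaCO3 load.
    The default density and concentration are illustrative assumptions.
    """
    mass_mg = calcite_volume_cm3 * density_g_cm3 * 1000.0  # g -> mg
    water_l = mass_mg / caco3_mg_per_l                     # litres of seepage
    return water_l / age_yr
```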
Volume and Mass Estimation of Three-Phase High Power Transformers for Space Applications
NASA Technical Reports Server (NTRS)
Kimnach, Greg L.
2004-01-01
Spacecraft historically have had sub-1 kW(sub e) electrical requirements for GN&C, science, and communications: Galileo at 600 W(sub e) and Cassini at 900 W(sub e), for example. Because most missions have had power requirements of the same order of magnitude, the Power Distribution Systems (PDS) use existing, space-qualified technology and are DC. As science payload and mission duration requirements increase, however, the required electrical power increases. Subsequently, this requires a change from passive energy conversion (solar arrays and batteries) to dynamic conversion (alternator, solar dynamic, etc.), because dynamic conversion has higher thermal and conversion efficiencies, has higher power densities, and scales more readily to higher power levels. Furthermore, increased power requirements and physical distribution lengths are best served with high-voltage, multi-phase AC to maintain distribution efficiency and minimize voltage drops. The generated AC voltage must be stepped up (or down) to interface with various subsystems or electrical hardware. Part of the trade-space design for AC distribution systems is volume and mass estimation of high-power transformers. The volume and mass are functions of the power rating, operating frequency, the ambient and allowable temperature rise, the types and amount of heat transfer available, the core material and shape, the required flux density in a core, the maximum current density, etc. McLyman has tabulated the performance of a number of transformer cores and derived a "cookbook" methodology to determine the volume of transformers, whereas Schwarze derived an empirical method to estimate the mass of single-phase transformers. Based on the work of McLyman and Schwarze, it is the intent herein to derive an empirical solution to the volume and mass estimation of three-phase, laminated EI-core power transformers, having radiated and conducted heat transfer mechanisms available. Estimation of the mounting hardware, connectors
Use of PCA in Tephra Correlation and Importance of Thickness Values for Volume Estimation
NASA Astrophysics Data System (ADS)
Bursik, M. I.; Pouget, S.; Cortes, J. A.
2014-12-01
Discontinuous tephra layers were discovered at Burney Spring Mountain, northern California, USA. Stratigraphic relationships suggest that they are two distinct primary fall deposits. Geochemistry of the tephras from electron probe microanalysis was compared with the geochemistry of known layers found in the region to test for potential correlations, first using traditional binary plots and standard similarity coefficients. Then, using principal component analysis, we were able to bound our uncertainty in the correlation of the two tephra layers. After removal of outliers, within the 95% prediction interval, we can say that the lower tephra layer is likely the Rockland tephra, aged 565-610 ka, and the upper layer is likely the Trego Hot Springs tephra from Mt Mazama, aged ~29 ka, based on the majority of the glass shards. The results were used to estimate the erupted volumes of the two deposits, by using the new thickness measurement together with others in the literature. The volume of the Rockland tephra was estimated to be 150-250 cu km, and that of the Trego Hot Springs tephra, 10-50 cu km. It was found, however, that reported thickness measurements are not always detailed, and some reworked material is often included in measurements. As a result, the erupted volumes may be greatly over-estimated. Checking measured thickness against expected (model) thickness as a function of isopach area can provide useful information about potential redeposition. This would avoid over-estimation of primary fall deposit volume by giving an indication of locations where the original description should be studied in detail and new thickness measurements made, if possible.
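Correlating tephras by principal component analysis amounts to projecting shard geochemistry into a low-dimensional score space and asking whether samples fall within a reference layer's 95% prediction region. A minimal PCA projection via the covariance eigendecomposition is sketched below; the outlier removal and prediction-interval test described in the abstract are not shown.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project glass-shard geochemistry (rows = shards, cols = oxides)
    onto its leading principal components.

    A minimal PCA via the covariance eigendecomposition; samples from an
    unknown layer that plot inside a reference layer's 95% prediction
    region in this score space are candidate correlatives.
    """
    Xc = X - X.mean(axis=0)                   # center each oxide column
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]                # scores, highest variance first
```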
Profiling river surface velocities and volume flow estimation with bistatic UHF RiverSonde radar
Barrick, D.; Teague, C.; Lilleboe, P.; Cheng, R.; Gartner, J.
2003-01-01
From the velocity profiles across the river, estimates of total volume flow for the four methods were calculated based on knowledge of the bottom depth versus position across the river. It was found that the flow comparisons for the American River were much closer, within 2% of each other among all of the methods. Sources of positional biases and anomalies in the RiverSonde measurement patterns along the river were identified and discussed.
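Given a cross-channel velocity profile and the bottom depth at the same positions, total volume flow follows from integrating velocity times depth across the channel. A sketch using the trapezoid rule, treating the surface velocity as depth-averaged (a real discharge estimate would apply a velocity-index coefficient):

```python
import numpy as np

def volume_flow(position_m, velocity_ms, depth_m):
    """Total volume flow (m^3/s) across a channel by the trapezoid rule.

    Assumes surface velocity equals the depth-averaged velocity, which
    overstates discharge; a velocity-index coefficient would correct this.
    """
    q = np.asarray(velocity_ms) * np.asarray(depth_m)   # per-unit-width discharge
    x = np.asarray(position_m)
    return float(np.sum(0.5 * (q[1:] + q[:-1]) * (x[1:] - x[:-1])))
```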
ICESat Estimates of Elevation and Volume Changes of Greenland Ice Caps
NASA Astrophysics Data System (ADS)
Robbins, J. W.; Zwally, J.; Yi, D.; Li, J.; Saba, J. L.
2012-12-01
ICESat laser altimetry acquired over the period 2003-2008 has been processed to provide estimates of changes in elevation for each aligned laser footprint. These are then interpolated geographically, yielding estimates of volume change on nearly two dozen peripheral ice caps, mostly located in northern Greenland. Ice cap edges are defined by the Greenland Ice Mapping Project (GIMP) 90 m high-resolution ice mask. The results provide a geometric measure of sub-decadal ice cap gain or loss, with the outcome that more ice caps are losing volume than gaining. Ice caps ranging in size from 200 to 7500 square km have been considered. Over the five years, ice cap volume changes range from -1.586 cubic km for the Ikke Opmålt cap (2965.1 sq. km areal extent) to +0.582 cubic km on the Kronprins Christian Land cap (7414.6 sq. km). The corresponding averaged rates of elevation change range from -0.535 m/yr to +0.079 m/yr, respectively. Estimates of elevation changes from variations in the rate of firn compaction are also applied. Additionally, examination of time histories of ICESat elevation profiles crossing select ice caps reveals seasonal losses and gains.
NASA Astrophysics Data System (ADS)
Wu, Shunguang; Hong, Lang
2008-04-01
A framework is given for simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable using only a single scan of HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel using multi-scan measurements. Moreover, to improve the estimation accuracy in large-noise and/or false-alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors are significantly reduced by using the template information.
Sherwood, J.M.
1986-01-01
Methods are presented for estimating peak discharges, flood volumes and hydrograph shapes of small (less than 5 sq mi) urban streams in Ohio. Examples of how to use the various regression equations and estimating techniques also are presented. Multiple-regression equations were developed for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The significant independent variables affecting peak discharge are drainage area, main-channel slope, average basin-elevation index, and basin-development factor. Standard errors of regression and prediction for the peak discharge equations range from ±37% to ±41%. An equation also was developed to estimate the flood volume of a given peak discharge. Peak discharge, drainage area, main-channel slope, and basin-development factor were found to be the significant independent variables affecting flood volumes for given peak discharges. The standard error of regression for the volume equation is ±52%. A technique is described for estimating the shape of a runoff hydrograph by applying a specific peak discharge and the estimated lagtime to a dimensionless hydrograph. An equation for estimating the lagtime of a basin was developed. Two variables, main-channel length divided by the square root of the main-channel slope and basin-development factor, have a significant effect on basin lagtime. The standard error of regression for the lagtime equation is ±48%. The data base for the study was established by collecting rainfall-runoff data at 30 basins distributed throughout several metropolitan areas of Ohio. Five to eight years of data were collected at a 5-min record interval. The USGS rainfall-runoff model A634 was calibrated for each site. The calibrated models were used in conjunction with long-term rainfall records to generate a long-term streamflow record for each site. Each annual peak-discharge record was fitted to a Log-Pearson Type III frequency curve. Multiple
NASA Astrophysics Data System (ADS)
Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim
2011-09-01
Digital image processing methods represent a viable and well-acknowledged alternative to strain gauges and interferometric techniques for determining full-field displacements and strains in materials under stress. This paper presents an image-adaptive technique for dense motion and strain estimation using high-resolution speckle images that show the analyzed material in its original and deformed states. The algorithm starts by dividing the speckle image showing the original state into irregular cells, taking into consideration both the spatial and gradient image information present. Subsequently, the Newton-Raphson digital image correlation technique is applied to calculate the corresponding motion for each cell. Adaptive spatial regularization in the form of the Geman-McClure robust spatial estimator is employed to increase the spatial consistency of the motion components of a cell with respect to the components of neighbouring cells. To obtain the final strain information, local least-squares fitting using a linear displacement model is performed on the horizontal and vertical displacement fields. To evaluate the presented image partitioning and strain estimation techniques, two numerical and two real experiments are employed. The numerical experiments simulate the deformation of a specimen with constant strain across the surface as well as small rigid-body rotations, while the real experiments consist of specimens that undergo uniaxial stress. The results indicate very good accuracy of the recovered strains as well as better rotation insensitivity compared to classical techniques.
Generating human reliability estimates using expert judgment. Volume 2. Appendices. [PWR; BWR
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results.
Generating human reliability estimates using expert judgment. Volume 1. Main report
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
Detection and Volume Estimation of Large Landslides by Using Multi-temporal Remote Sensing Data
NASA Astrophysics Data System (ADS)
Hsieh, Yu-chung; Hou, Chin-Shyong; Chan, Yu-Chang; Hu, Jyr-Ching; Fei, Li-Yuan; Chen, Hung-Jen; Chiu, Cheng-Lung
2014-05-01
Large landslides are frequently triggered by strong earthquakes and heavy rainfall in the mountainous areas of Taiwan. The heavy rainfall brought by Typhoon Morakot triggered a large number of landslides. The most unfortunate case occurred in Xiaolin village, which was totally demolished by a catastrophic landslide in less than a minute. Continued and detailed study of the characteristics of large landslides is urgently needed to mitigate loss of lives and property in the future. Traditionally known techniques cannot effectively extract landslide parameters, such as depth, amount and volume, which are essential in all phases of landslide assessment. In addition, it is very important to record the changes of landslide deposits after the landslide events as accurately as possible to better understand the landslide erosion process. The acquisition of digital elevation models (DEMs) is considered necessary for achieving accurate, effective and quantitative landslide assessments. A new technique is presented in this study for quickly assessing extensive areas of large landslides. The technique uses DEMs extracted from several remote sensing approaches, including aerial photogrammetry, airborne LiDAR and UAV photogrammetry. We chose a large landslide event that occurred after Typhoon Sinlaku at Mount Meiyuan, central Taiwan, in 2008. We collected and processed six data sets, including aerial photos, airborne LiDAR data and UAV photos, at different times from 2005 to 2013. Our analyses show a landslide volume of 17.14 × 10^6 cubic meters, a deposition volume of 12.75 × 10^6 cubic meters, and about 4.38 × 10^6 cubic meters washed out of the region. The residual deposition ratio of this area was about 74% in 2008, while after a few years it dropped below 50%. We also analyzed riverbed changes and sediment transfer patterns from 2005 to 2013 by multi-temporal remote sensing data with desirable accuracy. The developed
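The volume figures above come from differencing multi-temporal DEMs. A minimal sketch of that step, assuming two co-registered DEMs on a common grid (the arrays and cell size below are hypothetical, not the study's data):

```python
import numpy as np

def landslide_volumes(dem_before, dem_after, cell_size):
    """Estimate erosion and deposition volumes from two co-registered DEMs.

    dem_before, dem_after: 2D arrays of elevations (m) on the same grid.
    cell_size: grid spacing (m); each cell covers cell_size**2 square metres.
    Returns (eroded_volume, deposited_volume) in cubic metres.
    """
    dz = dem_after - dem_before            # elevation change per cell
    cell_area = cell_size ** 2
    eroded = -dz[dz < 0].sum() * cell_area     # material removed (positive)
    deposited = dz[dz > 0].sum() * cell_area   # material added
    return eroded, deposited

# Hypothetical 3x3 grid with 10 m cells: a scarp row lowered by 2 m
# and a toe row raised by 1 m between the two survey dates.
dem_2005 = np.full((3, 3), 100.0)
dem_2009 = dem_2005.copy()
dem_2009[0, :] -= 2.0
dem_2009[2, :] += 1.0
eroded, deposited = landslide_volumes(dem_2005, dem_2009, cell_size=10.0)
```

The difference between eroded and deposited volume corresponds to the material washed out of the surveyed region, as in the abstract's budget.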
Clisby, Nathan
2010-02-01
We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33 × 10^6 steps. Consequently the critical exponent ν for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is ν = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
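For readers unfamiliar with the pivot algorithm, a deliberately naive sketch on the cubic lattice follows. Clisby's fast implementation uses far more sophisticated data structures; this version checks self-avoidance in O(N) per move and draws from a subset of the lattice symmetry group (axis permutations with sign flips):

```python
import random

def random_symmetry():
    """Return a random octahedral symmetry: permute axes, then flip signs."""
    axes = [0, 1, 2]
    random.shuffle(axes)
    signs = [random.choice((-1, 1)) for _ in range(3)]
    def apply(v):
        return tuple(signs[i] * v[axes[i]] for i in range(3))
    return apply

def pivot_step(walk):
    """Attempt one pivot move; return the new walk, or the old one if rejected."""
    k = random.randrange(1, len(walk) - 1)   # pivot site (not an endpoint)
    sym = random_symmetry()
    p = walk[k]
    head = walk[:k + 1]
    tail = []
    for q in walk[k + 1:]:
        # Rotate/reflect the arm beyond the pivot about the pivot site.
        d = sym(tuple(q[i] - p[i] for i in range(3)))
        tail.append(tuple(p[i] + d[i] for i in range(3)))
    new_walk = head + tail
    # Accept only if the proposed walk is still self-avoiding.
    return new_walk if len(set(new_walk)) == len(new_walk) else walk

random.seed(2)
walk = [(i, 0, 0) for i in range(20)]   # start from a straight rod
for _ in range(200):
    walk = pivot_step(walk)
```

Each accepted move changes a macroscopic fraction of the walk, which is why the pivot algorithm equilibrates global observables such as the end-to-end distance so quickly.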
NASA Astrophysics Data System (ADS)
Kropáček, J.; Neckel, N.; Bauder, A.
2013-07-01
Worldwide estimation of recent changes in glacier volume is challenging, but becomes more feasible with the help of present and future remote sensing missions. NASA's Ice, Cloud and land Elevation Satellite (ICESat) mission provides accurate elevation estimates derived from the two-way travel time of the emitted laser pulse. In this study two different methods were employed for the derivation of surface elevation changes from ICESat records, using the Aletsch Glacier as an example. A statistical approach relies on elevation differences of ICESat points to a reference DEM, while an analytical approach compares spatially similar ICESat tracks. Using the statistical approach, in the upper and lower parts of the ablation area, the surface lowering was found to be from -2.1 ± 0.15 m yr^-1 to -2.6 ± 0.10 m yr^-1 and from -3.3 ± 0.36 m yr^-1 to -5.3 ± 0.39 m yr^-1, respectively, depending on the DEM used. Employing the analytical method, the surface lowering in the upper part of the ablation area was estimated as -2.5 ± 1.3 m yr^-1 between 2006 and 2009. In the accumulation area both methods revealed no significant trend. The trend in surface lowering derived by the statistical method allows an estimation of the mean mass balance in the period 2003-2009, assuming constant ice density and a linear change of glacier surface lowering with altitude in the ablation area. The resulting mass balance was validated by comparison to another geodetic approach based on the subtraction of two DEMs for the years 2000 and 2009. We conclude that ICESat data are a valid source of information on surface elevation changes and on the mass balance of mountain glaciers.
Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E
2014-03-01
Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r^2 = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
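A limited sampling strategy of the kind described, predicting AUC12 from pre-dose, 1 h and 3 h concentrations by multiple linear regression, can be sketched as follows. All concentrations and AUC values below are hypothetical training data, not values from the cited studies:

```python
import numpy as np

# Hypothetical training set: three-point concentration profiles (mg/L) and the
# corresponding AUC12 values (mg·h/L) obtained from full concentration-time
# profiling. Columns of `conc`: pre-dose, 1 h and 3 h post-dose.
conc = np.array([
    [1.2, 10.5, 4.1],
    [2.0, 15.2, 6.0],
    [0.8,  8.3, 3.2],
    [1.5, 12.0, 5.1],
    [2.4, 18.1, 7.4],
])
auc12_full = np.array([35.0, 52.0, 28.0, 42.0, 63.0])

# Fit AUC12 ~ b0 + b1*C0 + b2*C1h + b3*C3h by ordinary least squares.
X = np.column_stack([np.ones(len(conc)), conc])
coef, *_ = np.linalg.lstsq(X, auc12_full, rcond=None)

def predict_auc12(c0, c1h, c3h):
    """Estimate AUC12 from the three-point limited sampling strategy."""
    return coef[0] + coef[1] * c0 + coef[2] * c1h + coef[3] * c3h
```

In validation studies, the fitted equation's bias and imprecision are then assessed against full-profile AUC12 in an independent patient group.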
Liao, Hsiao-Wei; Lin, Shu-Wen; Chen, Guan-Yuan; Kuo, Ching-Hua
2016-06-21
Dried blood spots (DBSs) have a long history in disease screening of newborns but have gained attention in recent years in the medical care of adults because of the growing importance of personalized medicine. DBSs have several advantages, such as easy transportation, cost-effectiveness, and minimally invasive biological sampling. There are two strategies to process DBS samples: one takes a subsample of fixed diameter, and the other requires extraction of the whole spot. The whole-spot extraction method is less affected by hematocrit-caused errors, but it requires calibration of the blood volume. We propose a novel strategy using a postcolumn infused-internal standard (PCI-IS) method with liquid chromatography-electrospray ionization mass spectrometry (LC-ESI-MS) for estimating and correcting blood volume variations on DBS cards. By using the PCI-IS to measure the extent of ion suppression in the first ion suppression zone in the chromatogram, the blood volume on the DBS cards can be calculated and further calibrated. We used reference blood samples with different volumes (5 to 25 μL) to construct a calibration curve between the blood volume and the extent of ion suppression. The calibration curve was used to estimate the blood volume on DBS cards collected from 6 volunteers, with 5 designated volumes from each volunteer. The estimation accuracy of the PCI-IS method was between 74.5% and 120.3%. The validated PCI-IS method was used to estimate and calibrate blood volume variation and also to quantify the voriconazole concentration for 26 patients undergoing voriconazole therapy. A high correlation was found between the quantification results for the DBS samples and the conventionally used plasma samples (r = 0.97). The PCI-IS method was demonstrated to be a simple and accurate method for estimating and calibrating the blood volume variation on DBS cards, which greatly facilitates using the DBS method for therapeutic drug monitoring (TDM) for
Cost and price estimate of Brayton and Stirling engines in selected production volumes
Fortgang, H.R.; Mayers, H.F.
1980-05-31
This report details the methods used to determine the production costs and required selling price of Brayton and Stirling engines modified for use in solar power conversion units. The Brayton engine, designed by Garrett AiResearch Manufacturing Company, was upgraded to a 20 kW design. The Stirling 30 kW engine was designed by United Stirling of Sweden for non-solar applications. Each engine part, component and assembly was examined and evaluated to determine the costs of its material and the method of manufacture based on specific annual production volumes. Cost estimates are presented for both the Stirling and Brayton engines in annual production volumes of 1000, 25,000, 100,000, and 400,000. At annual production volumes above 50,000 units, the costs of both engines are similar, although the Stirling engine costs are somewhat lower. It was concluded that modifications to both the Brayton and Stirling engine designs could reduce the estimated costs.
NASA Astrophysics Data System (ADS)
Martínez-Sánchez, J.; Puente, I.; González-Jorge, H.; Riveiro, B.; Arias, P.
2016-06-01
When ground conditions are weak, particularly in free-formed tunnel linings or retaining walls, sprayed concrete (shotcrete) can be applied on the exposed surfaces immediately after excavation. In these situations, shotcrete is normally applied conjointly with rock bolts and mesh, thereby supporting the loose material that causes many of the small ground falls. On the other hand, contractors want to determine the thickness and volume of sprayed concrete for both technical and economic reasons: to guarantee its structural strength, but also to not deliver excess material that they will not be paid for. In this paper, we first introduce a terrestrial LiDAR-based method for the automatic detection of rock bolts, as typically used in anchored retaining walls. These ground support elements are segmented based on their geometry, and they serve as control points for the co-registration of two successive scans, before and after shotcreting. We then compare both point clouds to estimate the sprayed concrete thickness and the volume expended on the wall. This novel methodology is demonstrated on repeated scan data from a retaining wall in the city of Vigo (Spain), resulting in a rock bolt detection rate of 91%, which permits detailed thickness information to be obtained and a total volume of 3597 litres of concrete to be calculated. These results have verified the effectiveness of the developed approach, increasing productivity and improving previous empirical proposals for real-time thickness estimation.
A statistical method to estimate outflow volume in case of levee breach due to overtopping
NASA Astrophysics Data System (ADS)
Brandimarte, Luigia; Martina, Mario; Dottori, Francesco; Mazzoleni, Maurizio
2015-04-01
The aim of this study is to propose a statistical method to assess the outflowing water volume through a levee breach, due to overtopping, for three different grades of grass cover quality. The first step in the proposed methodology is the definition of the reliability function, i.e. the relation between loading and resistance conditions on the levee system, in case of overtopping. Secondly, the fragility curve, which relates the probability of failure to the loading condition on the levee system, is estimated once the stochastic variables in the reliability function have been defined. Thus, different fragility curves are assessed for different scenarios of grass cover quality. Then, a levee breach model is implemented and combined with a 1D hydrodynamic model in order to assess the outflow hydrograph given the water level in the main channel and stochastic values of the breach width. Finally, the water volume is estimated as a combination of the probability density function of the breach width and the probability of levee failure. The case study is located in a 98 km braided reach of the Po River, Italy, between the cross-sections of Cremona and Borgoforte. The analysis showed how different countermeasures, different grass cover qualities in this case, can reduce the probability of failure of the levee system. In particular, for a given breach width, a good levee cover quality can significantly reduce the outflowing water volume compared to a bad cover quality, inducing a consequently lower flood risk within the flood-prone area.
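The combination of a fragility curve, a stochastic breach width and an outflow model lends itself to a Monte Carlo sketch. The logistic fragility curve, lognormal breach-width distribution and weir-type outflow formula below are illustrative assumptions, not the study's actual models:

```python
import math
import random

def failure_probability(water_level, midpoint=8.0, spread=0.5):
    """Toy logistic fragility curve: probability of breach vs water level (m)."""
    return 1.0 / (1.0 + math.exp(-(water_level - midpoint) / spread))

def breach_outflow_volume(width, head, duration_s, cd=1.7):
    """Broad-crested-weir style outflow volume (m^3) for a rectangular breach."""
    q = cd * width * head ** 1.5      # discharge in m^3/s
    return q * duration_s

def expected_outflow(water_level, crest_level=7.0, duration_s=3600.0, n=20000):
    """Monte Carlo expectation of outflow volume over breach width and failure."""
    random.seed(0)
    total = 0.0
    for _ in range(n):
        # Breach width (m): an assumed lognormal distribution.
        width = random.lognormvariate(math.log(30.0), 0.4)
        if random.random() < failure_probability(water_level):
            head = max(water_level - crest_level, 0.0)
            total += breach_outflow_volume(width, head, duration_s)
    return total / n

v_low = expected_outflow(water_level=7.5)
v_high = expected_outflow(water_level=9.0)
```

Improving the cover quality corresponds to shifting the fragility curve's midpoint upward, which lowers the failure probability and hence the expected outflow volume at a given water level.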
Estimated limits of IMRT dose escalation using varied planning target volume margins
NASA Astrophysics Data System (ADS)
Goulet, Christopher C.; Herman, Michael G.; Hillman, David W.; Davis, Brian J.
2008-07-01
To estimate the limits of dose escalation for prostate cancer as a function of planning target volume (PTV) margins, the maximum achievable dose (MAD) was determined through iterative plan optimizations from data sets of 18 patients until the dose constraints for rectum, bladder and PTV could no longer be met. PTV margins of 10, 5 and 3 mm yielded a mean MAD of 83.0 Gy (range, 73.8-108.0 Gy), 113.1 Gy (range, 90.0-151.2 Gy) and 135.9 Gy (range, 102.6-189.0 Gy), respectively. All comparisons of MAD among margin groups were statistically significant (P < 0.001). Comparison of prostate volumes of 30-50 mL (n = 8) with volumes of 51-70 mL (n = 7) and 71-105 mL (n = 3) showed an inverse relationship with MAD. Decreases in PTV margin significantly decreased the PTV overlap of the rectum (P < 0.001 for all margin comparisons). With decreases in the PTV margin and maintenance of identical dose constraints, doses well above those currently prescribed for treatment of localized prostate cancer appear feasible. However, the dose escalation suggested by these findings is a theoretical estimate, and additional dose constraints will likely be necessary to limit toxicity to normal tissue.
Astrometric telescope facility. Preliminary systems definition study. Volume 3: Cost estimate
NASA Technical Reports Server (NTRS)
Sobeck, Charlie (Editor)
1987-01-01
The results of the Astrometric Telescope Facility (ATF) Preliminary System Definition Study conducted in the period between March and September 1986 are described. The main body of the report consists primarily of the charts presented at the study final review which was held at NASA Ames Research Center on July 30 and 31, 1986. The charts have been revised to reflect the results of that review. Explanations for the charts are provided on the adjoining pages where required. Note that charts which have been changed or added since the review are dated 10/1/86; unchanged charts carry the review date 7/30/86. In addition, a narrative summary is presented of the study results and two appendices. The first appendix is a copy of the ATF Characteristics and Requirements Document generated as part of the study. The second appendix shows the inputs to the Space Station Mission Requirements Data Base submitted in May 1986. The report is issued in three volumes. Volume 1 contains an executive summary of the ATF mission, strawman design, and study results. Volume 2 contains the detailed study information. Volume 3 has the ATF cost estimate, and will have limited distribution.
Functional changes in CSF volume estimated using measurement of water T2 relaxation.
Piechnik, S K; Evans, J; Bary, L H; Wise, R G; Jezzard, P
2009-03-01
Cerebrospinal fluid (CSF) provides hydraulic suspension for the brain. The general concept of bulk CSF production, circulation, and reabsorption is well established, but the mechanisms of momentary CSF volume variation corresponding to vasoreactive changes are far less understood. Nine individuals were studied in a 3T MR scanner with a protocol that included visual stimulation using a 10-Hz reversing checkerboard and administration of a 5% CO2 mix in air. We acquired PRESS-localized spin-echoes (TR = 12 s, TE = 26 ms to 1.5 s) from an 8-mL voxel located in the visual cortex. Echo amplitudes were fitted to a two-compartmental model of relaxation to estimate the partial volume of CSF and the T2 relaxation times of the tissues. CSF signal contributed 10.7 ± 3% of the total, with T2,csf = 503.0 ± 64.3 ms and T2,brain = 61.0 ± 2 ms. The relaxation time of tissue increased during physiological stimulation, while the fraction of signal contributed by CSF decreased significantly, by 5-6% with visual stimulation (P < 0.03) and by 3% under CO2 inhalation (P < 0.08). The CSF signal fraction is shown to represent well the volume changes under viable physiological scenarios. In conclusion, CSF plays a significant role in buffering the changes in cerebral blood volume, especially during rapid functional stimuli. PMID:19132756
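With the two T2 values treated as known, the two-compartment fit reduces to a linear least-squares problem for the compartment amplitudes, from which the CSF signal fraction follows. A sketch on noise-free synthetic echoes (the echo times and fraction are illustrative, loosely based on the values above; the full study also fits the T2 values themselves):

```python
import numpy as np

# Fixed relaxation times (ms), close to the reported values.
T2_CSF, T2_BRAIN = 503.0, 61.0
te = np.array([26.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1500.0])  # echo times, ms

def synth_echoes(f_csf, s0=100.0):
    """Two-compartment echo decay for a given CSF signal fraction."""
    return s0 * (f_csf * np.exp(-te / T2_CSF) + (1 - f_csf) * np.exp(-te / T2_BRAIN))

def fit_csf_fraction(signal):
    """Recover the CSF signal fraction by linear least squares on the
    two exponential basis functions (valid when both T2 values are known)."""
    basis = np.column_stack([np.exp(-te / T2_CSF), np.exp(-te / T2_BRAIN)])
    (a_csf, a_brain), *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return a_csf / (a_csf + a_brain)

measured = synth_echoes(f_csf=0.107)   # 10.7% CSF, as in the study
estimate = fit_csf_fraction(measured)
```

The long-TE echoes are what make the fit well conditioned: beyond a few hundred milliseconds, essentially only the CSF compartment still contributes signal.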
Direct Measurement of the Adsorbed Film Volume for Estimating Heats of Adsorption
NASA Astrophysics Data System (ADS)
Gillespie, Andrew; Dohnke, Elmar; Rash, Tyler; Stalla, David; Knight, Ernest; Seydel, Florian; Sweany, Mark; Pfeifer, Peter
Compressed hydrogen and methane require extremely high pressures or low temperatures in order to compete with the energy density of conventional fossil fuels. Adsorbent materials provide a means to increase the energy density of these gases up to 6 times that of compressed gas at the same temperature and pressure. One major concern in engineering adsorbed gas systems is thermal management during charging and discharging. Adsorption is an exothermic process, releasing heat during charging and absorbing heat during discharging. To estimate the heat of adsorption, it is common to analyze excess adsorption isotherms by converting to absolute adsorption and employing the Clausius-Clapeyron relation. However, this method requires an assumed volume of the adsorbed state. It is common for researchers to assume that the adsorbed film occupies the entire pore volume of the adsorbent material. However, the adsorbed film only occupies a fraction of the total pore volume. This yields heats of adsorption that are underestimated by as much as 10 kJ/mol at high coverage. In this talk, we present a method to directly measure the adsorbed film volume as a function of temperature and present the resulting heats of adsorption for both methane and hydrogen.
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.
2009-10-05
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of the data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo
2016-01-01
Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255
Reinitz, László Z; Bajzik, Gábor; Garamvölgyi, Rita; Petneházy, Örs; Lassó, András; Abonyi-Tóth, Zsolt; Lőrincz, Borbála; Sótonyi, Péter
2015-01-01
Dosages for myelography procedures in dogs are based on a hypothetical proportional relationship between bodyweight and cerebrospinal fluid (CSF) volume. Anecdotal radiographic evidence and recent studies have challenged the existence of such a defined relationship in dogs. The objectives of this prospective cross-sectional study were to describe CSF volumes using magnetic resonance imaging (MRI) in a group of clinically healthy dogs, measure the accuracy of MRI CSF volumes, and compare MRI CSF volumes with dog physical measurements. A sampling perfection with application-optimized contrast using different flip-angle evolution (SPACE) MRI examination of the central nervous system was carried out on 12 healthy, male mongrel dogs, aged between 3 and 5 years with a bodyweight range of 7.5-35.0 kg. The images were processed with image analysis freeware (3D Slicer) in order to calculate the volume of extracranial CSF. Cylindrical phantoms of known volume were included in scans and used to calculate the accuracy of MRI volume estimates. The accuracy of MRI volume estimates was 99.8%. Extracranial compartment CSF volumes ranged from 20.21 to 44.06 ml. The overall volume of the extracranial CSF increased linearly with bodyweight, but the proportional volume (ml/kg bodyweight) of the extracranial CSF was inversely proportional to bodyweight. Relative ratios of volumes in the cervical, thoracic, and lumbosacral regions were constant. Findings indicated that the current standard method of using body weight to calculate dosages of myelographic contrast agents in dogs may need to be revised. PMID:26311617
Intramyocardial capillary blood volume estimated by whole-body CT: validation by micro-CT
NASA Astrophysics Data System (ADS)
Dong, Yue; Beighley, Patricia E.; Eaker, Diane R.; Zamir, Mair; Ritman, Erik L.
2008-03-01
Fast CT has shown that myocardial perfusion (F) is related to myocardial intramuscular blood volume (Bv) as Bv = A·F + B·F^(1/2), where A and B are constant coefficients. The goal of this study was to estimate the range of diameters of the vessels that are represented by the A·F term. Pigs were placed in an Electron Beam CT (EBCT) scanner for a perfusion CT scan sequence over 40 seconds after an IV contrast agent injection. Intramyocardial blood volume (Bv) and flow (F) were calculated in a region of the myocardium perfused by the LAD. Coefficients A and B were estimated over the range F = 1-5 ml/g/min. After the CT scan, the LAD was injected with Microfil® contrast agent, following which the myocardium was scanned by micro-CT at 20 μm, 4 μm and 2.5 μm cubic voxel resolutions. The Bv of the intramyocardial vessels was calculated for diameter ranges d = 0-5, 5-10, 10-15, 15-20 μm, etc. EBCT-derived data were presented so that they could be directly compared with the micro-CT data. The results indicated that the blood in vessels less than 10 μm in lumen diameter occupied 0.27-0.42 of total intravascular blood volume, which is in good agreement with the EBCT-based values of 0.28-0.48 (R^2 = 0.96). We conclude that whole-body CT image data obtained during the passage of a bolus of IV contrast agent can provide a measure of the intramyocardial intracapillary blood volume.
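Because the relation Bv = A·F + B·F^(1/2) is linear in the coefficients A and B, they can be estimated from paired (F, Bv) measurements by ordinary least squares. A sketch on synthetic data generated from assumed coefficients (not the study's values):

```python
import numpy as np

# Perfusion values (ml/g/min) spanning the range used in the study.
flow = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Synthetic Bv data generated from assumed coefficients A = 0.010, B = 0.075,
# purely to illustrate that least squares recovers them from (F, Bv) pairs.
A_TRUE, B_TRUE = 0.010, 0.075
bv = A_TRUE * flow + B_TRUE * np.sqrt(flow)   # ml/g

# The model is linear in (A, B): Bv = A*F + B*sqrt(F).
X = np.column_stack([flow, np.sqrt(flow)])
(A, B), *_ = np.linalg.lstsq(X, bv, rcond=None)

def predict_bv(f):
    """Predicted intramyocardial blood volume at perfusion f (ml/g/min)."""
    return A * f + B * np.sqrt(f)
```

With real, noisy EBCT data the recovered coefficients would carry uncertainty; the square-root term dominates at low flow, the linear term at high flow.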
Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.
Gingerich, W.H.; Pityer, R.A.; Rach, J.J.
1987-01-01
Total blood volume and relative blood volumes in selected tissues were determined in non-anesthetized, confined rainbow trout by using ⁵¹Cr-labelled trout erythrocytes as a vascular space marker. Mean total blood volume was estimated to be 4.09 ± 0.55 ml/100 g, or about 75% of that estimated with the commonly used plasma space marker Evans blue dye. Relative tissue blood volumes were greatest in highly perfused tissues such as kidney, gills, brain, and liver, and least in mosaic muscle. Estimates of tissue vascular spaces made using radiolabelled erythrocytes were only 25-50% of those based on plasma space markers. The consistently smaller vascular volumes obtained with labelled erythrocytes could be explained by assuming that commonly used plasma space markers diffuse from the vascular compartment.
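The indicator-dilution principle behind this measurement is simple mass balance: the injected tracer activity divided by its equilibrium concentration gives the volume it mixed into. A sketch with hypothetical numbers (not taken from the study):

```python
# Indicator-dilution estimate of total blood volume: inject a known activity
# of labelled erythrocytes, allow mixing, then measure the activity
# concentration of a blood sample. All numbers below are hypothetical.
injected_activity_cpm = 50_000.0      # counts/min injected
sample_cpm_per_ml = 1_222.0           # counts/min per ml blood after mixing
fish_mass_g = 1_000.0

blood_volume_ml = injected_activity_cpm / sample_cpm_per_ml
relative_volume = 100.0 * blood_volume_ml / fish_mass_g   # ml per 100 g
print(round(relative_volume, 2))  # same order of magnitude as the reported 4.09 ml/100 g
```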
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick
1987-01-01
The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using the regression analysis to generate the regression equation used to obtain the ATI; the ATI versus rain volume relation is then employed to compute rain volume. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
Smith, S. Jerrod
2013-01-01
From the 1890s through the 1970s the Picher mining district in northeastern Ottawa County, Oklahoma, was the site of mining and processing of lead and zinc ore. When mining ceased in about 1979, as much as 165–300 million tons of mine tailings, locally referred to as “chat,” remained in the Picher mining district. Since 1979, some chat piles have been mined for aggregate materials and have decreased in volume and mass. Currently (2013), the land surface in the Picher mining district is covered by thousands of acres of chat, much of which remains on Indian trust land owned by allottees. The Bureau of Indian Affairs manages these allotted lands and oversees the sale and removal of chat from these properties. To help the Bureau of Indian Affairs better manage the sale and removal of chat, the U.S. Geological Survey, in cooperation with the Bureau of Indian Affairs, estimated the 2005 and 2010 volumes and masses of selected chat piles remaining on allotted lands in the Picher mining district. The U.S. Geological Survey also estimated the changes in volume and mass of these chat piles for the period 2005 through 2010. The 2005 and 2010 chat-pile volume and mass estimates were computed for 34 selected chat piles on 16 properties in the study area. All computations of volume and mass were performed on individual chat piles and on groups of chat piles in the same property. The Sooner property had the greatest estimated volume (4.644 million cubic yards) and mass (5.253 ± 0.473 million tons) of chat in 2010. Five of the selected properties (Sooner, Western, Lawyers, Skelton, and St. Joe) contained estimated chat volumes exceeding 1 million cubic yards and estimated chat masses exceeding 1 million tons in 2010. Four of the selected properties (Lucky Bill Humbah, Ta Mee Heh, Bird Dog, and St. Louis No. 6) contained estimated chat volumes of less than 0.1 million cubic yards and estimated chat masses of less than 0.1 million tons in 2010. The total volume of all
Estimating Volume, Biomass, and Carbon in Hedmark County, Norway Using a Profiling LiDAR
NASA Technical Reports Server (NTRS)
Nelson, Ross; Naesset, Erik; Gobakken, T.; Gregoire, T.; Stahl, G.
2009-01-01
A profiling airborne LiDAR is used to estimate the forest resources of Hedmark County, Norway, a 27390 square kilometer area in southeastern Norway on the Swedish border. One hundred five profiling flight lines totaling 9166 km were flown east-west over the entire county. The lines, spaced 3 km apart north-south, duplicate the systematic pattern of the Norwegian Forest Inventory (NFI) ground plot arrangement, enabling the profiler to transit 1290 circular, 250 square meter fixed-area NFI ground plots while collecting the systematic LiDAR sample. Seven hundred sixty-three of the 1290 plots were overflown within 17.8 m of plot center. Laser measurements of canopy height and crown density are extracted along fixed-length, 17.8 m segments closest to the center of the ground plot and related to basal area, timber volume, and above- and belowground dry biomass. Linear, nonstratified equations that estimate ground-measured total aboveground dry biomass report an R² = 0.63, with a regression RMSE = 35.2 t/ha. Nonstratified model results for the other biomass components, volume, and basal area are similar, with R² values for all models ranging from 0.58 (belowground biomass, RMSE = 8.6 t/ha) to 0.63. Consistently, the most useful single profiling LiDAR variable is quadratic mean canopy height, h̄_qa. Two-variable models typically include h̄_qa or mean canopy height, h̄_a, with a canopy density or a canopy height standard deviation measure. Stratification by productivity class did not improve the nonstratified models, nor did stratification by pine/spruce/hardwood. County-wide profiling LiDAR estimates are reported by land cover type and compared to NFI estimates.
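The quadratic mean canopy height named above is the root of the mean of squared heights, which weights taller canopy returns more heavily than the arithmetic mean. A minimal sketch with hypothetical height samples:

```python
import math

def quadratic_mean_height(heights):
    """Quadratic mean canopy height: sqrt of the mean of squared heights."""
    return math.sqrt(sum(h * h for h in heights) / len(heights))

# Hypothetical canopy-height samples (m) along one 17.8 m profile segment.
heights = [4.0, 6.0, 10.0, 12.0]
print(round(quadratic_mean_height(heights), 2))  # larger than the arithmetic mean (8.0)
```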
Herzog, Mark P; Ackerman, Joshua T; Eagles-Smith, Collin A; Hartman, C Alex
2016-05-01
In egg contaminant studies, it is necessary to calculate egg contaminant concentrations on a fresh wet weight basis, and this requires accurate estimates of egg density and egg volume. We show that the inclusion or exclusion of the eggshell can influence egg contaminant concentrations, and we provide estimates of egg density (both with and without the eggshell) and egg-shape coefficients (used to estimate egg volume from egg morphometrics) for American avocet (Recurvirostra americana), black-necked stilt (Himantopus mexicanus), and Forster's tern (Sterna forsteri). Egg densities (g/cm³) estimated for whole eggs (1.056 ± 0.003) were higher than egg densities estimated for egg contents (1.024 ± 0.001), and were 1.059 ± 0.001 and 1.025 ± 0.001 for avocets, 1.056 ± 0.001 and 1.023 ± 0.001 for stilts, and 1.053 ± 0.002 and 1.025 ± 0.002 for terns. The egg-shape coefficients for egg volume (K_v) and egg mass (K_w) also differed depending on whether the eggshell was included (K_v = 0.491 ± 0.001; K_w = 0.518 ± 0.001) or excluded (K_v = 0.493 ± 0.001; K_w = 0.505 ± 0.001), and varied among species. Although egg contaminant concentrations are rarely meant to include the eggshell, we show that the typical inclusion of the eggshell in egg density and egg volume estimates results in egg contaminant concentrations being underestimated by 6-13%. Our results demonstrate that the inclusion of the eggshell significantly influences estimates of egg density, egg volume, and fresh egg mass, which leads to egg contaminant concentrations that are biased low. We suggest that egg contaminant concentrations be calculated on a fresh wet weight basis using only internal egg-content densities, volumes, and masses appropriate for the species. For the three waterbirds in our study, these corrected coefficients are 1.024 ± 0.001 for egg density, 0.493 ± 0.001 for K_v, and 0.505 ± 0.001 for K_w. PMID:26932462
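The egg-shape coefficients reported above are typically used in the standard Hoyt-style relation V = K_v · L · B² (an assumption here; the abstract does not spell out the formula). A sketch applying the paper's egg-content coefficients to a hypothetical egg:

```python
def egg_content_volume(length_cm, breadth_cm, kv=0.493):
    """Hoyt-style volume estimate V = Kv * L * B^2 (cm^3), using the
    egg-content coefficient reported for the three study species."""
    return kv * length_cm * breadth_cm ** 2

def egg_content_fresh_mass(length_cm, breadth_cm, kv=0.493, density=1.024):
    """Fresh wet mass of the egg contents (g): content density times volume."""
    return density * egg_content_volume(length_cm, breadth_cm, kv)

# Hypothetical avocet-sized egg: 4.5 cm long, 3.1 cm wide.
print(round(egg_content_volume(4.5, 3.1), 2))
print(round(egg_content_fresh_mass(4.5, 3.1), 2))
```

Using the whole-egg coefficients (K_v = 0.491, density = 1.056) instead would inflate the fresh-mass denominator and thus bias contaminant concentrations low, as the abstract describes.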
Estimating Wood Volume for Pinus Brutia Trees in Forest Stands from QUICKBIRD-2 Imagery
NASA Astrophysics Data System (ADS)
Patias, Petros; Stournara, Panagiota
2016-06-01
Knowledge of forest parameters, such as wood volume, is required for sustainable forest management. Collecting such information in the field is laborious and sometimes not feasible in inaccessible areas. In this study, tree wood volume is estimated using remote sensing techniques, which can facilitate the extraction of relevant information. The study area is the University Forest of Taxiarchis, located in central Chalkidiki, Northern Greece, covering an area of 58 km². The tree species under study is the conifer evergreen species P. brutia (Calabrian pine). Three plot surfaces of 10 m radius were used. VHR Quickbird-2 images are used in combination with an allometric relationship connecting the tree crown with the diameter at breast height (Dbh), and a volume table developed for Greece. The overall methodology is based on individual tree crown delineation, using (a) the marker-controlled watershed segmentation approach and (b) the GEographic Object-Based Image Analysis approach. The aim of the first approach is to extract separate segments, each of them including a single tree and eventual lower vegetation, shadows, etc. The aim of the second approach is to detect and remove the "noisy" background. In the application of the first approach, the Blue, Green, Red, Infrared, and PCA-1 bands are tested separately. In the application of the second approach, NDVI and image brightness thresholds are utilized. The achieved results are evaluated against field plot data. The observed differences are between -5% and +10%.
Estimating Basin Snow Volume Using Aerial LiDAR and Binary Regression Trees (Invited)
NASA Astrophysics Data System (ADS)
Shallcross, A. T.; McNamara, J. P.; Flores, A. N.; Marshall, H.; Marks, D. G.; Glenn, N. F.
2010-12-01
Snow cover derived from airborne LiDAR (Light Detection And Ranging) is combined with binary regression trees to improve the prediction of total basin snow volume for the Dry Creek Experimental Watershed (DCEW), ID. These methods are used to identify site-specific topographic controls on the spatial distribution of snow so that future point measurements of snow depth can be distributed through space efficiently. LiDAR is used to map snow cover by differencing the digital elevation models (DEMs) obtained from a snow-covered overflight and a snow-free overflight. Topographic parameters known to control snow distribution are calculated from the snow free LiDAR dataset. Here, mean vegetation height, slope, aspect, solar radiation, and elevation are used to predict snow depth via a binary regression tree using ten-fold cross-validation. The branches leading to the terminal nodes of the regression tree are used to segment the watershed into homogeneous snow distribution units. Preliminary results indicate that 23 statistically significant discrete units exist. Thus, during future field campaigns, point measurements of snow depth can be gathered and distributed throughout these units. Mean measured SWE/depth of each unit can be summed to determine the total basin snow volume. This method should decrease field time and improve the accuracy of basin snow volume estimates for watershed analyses.
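Binary regression trees partition the predictor space with threshold splits that minimize within-leaf variance. A minimal sketch of one such split in pure Python, on a single predictor (elevation) with hypothetical snow-depth data:

```python
def best_split(x, y):
    """Find the threshold on one predictor that minimizes the summed
    squared error of the two resulting mean-value leaves."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]
    best = None
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        sse = sum((v - sum(left) / len(left)) ** 2 for v in left) \
            + sum((v - sum(right) / len(right)) ** 2 for v in right)
        if best is None or sse < best[0]:
            best = (sse, (xs[k - 1] + xs[k]) / 2)  # midpoint threshold
    return best[1]

# Hypothetical snow depths (m) against elevation (m): a clear break near 1500 m.
elev = [1200, 1300, 1400, 1600, 1700, 1800]
depth = [0.2, 0.3, 0.25, 1.1, 1.2, 1.15]
print(best_split(elev, depth))  # midpoint between the two clusters
```

A full regression tree applies this split recursively over all predictors (vegetation height, slope, aspect, radiation, elevation); the terminal nodes then define the homogeneous snow distribution units described above.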
Estimation of Residual Peritoneal Volume Using Technetium-99m Sulfur Colloid Scintigraphy.
Katopodis, Konstantinos P; Fotopoulos, Andrew D; Balafa, Olga C; Tsiouris, Spyridon Th; Triandou, Eleni G; Al-Bokharhli, Jichad B; Kitsos, Athanasios C; Dounousi, Evagelia C; Siamopoulos, Konstantinos C
2015-01-01
Residual peritoneal volume (RPV) may contribute to the development of ultrafiltration failure in patients with normal transcapillary ultrafiltration. The aim of this study was to estimate the RPV using intraperitoneal technetium-99m sulfur colloid (Tc). Twenty patients on peritoneal dialysis were studied. RPV was estimated by: 1) intraperitoneal instillation of Tc (RPV-Tc) and 2) classic Twardowski calculations using endogenous solutes, such as urea (RPV-u), creatinine (RPV-cr), and albumin (RPV-alb). Each method's reproducibility was assessed in a subgroup of patients in two consecutive measurements 48 h apart. Both methods displayed reproducibility (r = 0.93, p = 0.001 for RPV-Tc and r = 0.90, p = 0.001 for RPV-alb) between days 1 and 2, respectively. We found a statistically significant difference between RPV-Tc and RPV-cr measurements (347.3 ± 116.7 vs. 450.0 ± 67.8 ml; p = 0.001) and RPV-u (515.5 ± 49.4 ml; p < 0.001), but not with RPV-alb (400.1 ± 88.2 ml; p = 0.308). A good correlation was observed only between RPV-Tc and RPV-alb (p < 0.001). The Tc method can estimate the RPV as efficiently as the high-molecular-weight endogenous solute measurement method. It can also provide an imaging estimate of the intraperitoneal distribution of RPV. PMID:25806615
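Endogenous-solute RPV estimates rest on a dilution mass balance: solute left behind in the residual fluid is diluted by the fresh fill. The sketch below is a generic dilution calculation, not the exact Twardowski formulation, and the concentrations are hypothetical:

```python
def residual_volume(fresh_volume_ml, c_drain, c_mix):
    """Dilution estimate of residual peritoneal volume. Solute mass left
    behind (RV * c_drain) is diluted by the fresh fill, so
    c_mix = RV * c_drain / (RV + V_fresh), giving
    RV = V_fresh * c_mix / (c_drain - c_mix)."""
    return fresh_volume_ml * c_mix / (c_drain - c_mix)

# Hypothetical urea concentrations (mmol/l): 20.0 in the last drained
# effluent, 4.0 sampled just after a 2000 ml fresh fill.
print(round(residual_volume(2000.0, 20.0, 4.0), 1))
```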
Elliott, John G.; Flynn, Jennifer L.; Bossong, Clifford R.; Char, Stephen J.
2011-01-01
The subwatersheds with the greatest potential postwildfire and postprecipitation hazards are those with both high probabilities of debris-flow occurrence and large estimated volumes of debris-flow material. The high probabilities of postwildfire debris flows, the associated large estimated debris-flow volumes, and the densely populated areas along the creeks and near the outlets of the primary watersheds indicate that Indiana, Pennsylvania, and Spruce Creeks are associated with a relatively high combined debris-flow hazard.
A method to estimate the ice volume and ice-thickness distribution of alpine glaciers
NASA Astrophysics Data System (ADS)
Farinotti, Daniel; Huss, Matthias; Bauder, Andreas; Funk, Martin; Truffer, Martin
Sound knowledge of the ice volume and ice-thickness distribution of a glacier is essential for many glaciological applications. However, direct measurements of ice thickness are laborious, not feasible everywhere and necessarily restricted to a small number of glaciers. In this paper, we present a method to estimate the ice-thickness distribution and the total ice volume of alpine glaciers. This method is based on glacier mass turnover and principles of ice-flow mechanics. The required input data are the glacier surface topography, the glacier outline and a set of borders delineating different 'ice-flow catchments'. Three parameters describe the distribution of the 'apparent mass balance', which is defined as the difference between the glacier surface mass balance and the rate of ice-thickness change, and two parameters define the ice-flow dynamics. The method was developed and validated on four alpine glaciers located in Switzerland, for which the bedrock topography is partially known from radio-echo soundings. The ice thickness along 82 cross-profiles can be reproduced with an average deviation of about 25% between the calculated and the measured ice thickness. The cross-sectional areas differ by less than 20% on average. This shows the potential of the method for estimating the ice-thickness distribution of alpine glaciers without the use of direct measurements.
Volume estimation of tonsil phantoms using an oral camera with 3D imaging.
Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh
2016-04-01
Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance (MRI) imaging are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D-printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667
Harbers, Jasper V; Huijbregts, Mark A J; Posthuma, Leo; Van de Meent, Dik
2006-03-01
Although many chemicals are in use, the environmental impacts of only a few have been established, usually on a per-chemical basis. Uncertainty remains about the overall impact of chemicals. This paper estimates the combined toxic pressure on coastal North Sea ecosystems from 343 high-production-volume chemicals used within the catchment of the rivers Rhine, Meuse, and Scheldt. Multimedia fate modeling and species sensitivity distribution-based effects estimation are applied. Calculations start from production volumes and emission rates and use physicochemical substance properties and aquatic ecotoxicity data. Parameter uncertainty is addressed by Monte Carlo simulations. Results suggest that the procedure is technically feasible. The combined toxic pressure of all 343 chemicals in coastal North Sea water is 0.025 (2.5% of the species are exposed to concentration levels above EC50 values), with a wide confidence interval of nearly 0-1. This uncertainty appears to be largely due to uncertainties in interspecies variances of aquatic toxicities and, to a lesser extent, to uncertainties in emissions and degradation rates. Due to these uncertainties, the results support only gross ranking of chemicals into categories: negligible and possibly relevant contributions. With 95% confidence, 283 of the 343 chemicals (83%) contribute negligibly (less than 0.1%) to the overall toxic pressure, and only 60 (17%) need further consideration. PMID:16568772
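Species sensitivity distributions give, for each chemical, the potentially affected fraction (PAF) of species at a given exposure; multi-substance toxic pressure is commonly combined under a response-addition assumption. A minimal sketch under those assumptions (a log-normal SSD; all numbers hypothetical):

```python
from math import erf, log10, prod, sqrt

def paf(conc, log_ec50_mean, log_ec50_sd):
    """Potentially affected fraction from a log-normal SSD: the fraction of
    species whose EC50 (log10 scale) lies below the exposure concentration."""
    z = (log10(conc) - log_ec50_mean) / log_ec50_sd
    return 0.5 * (1 + erf(z / sqrt(2)))

def combined_toxic_pressure(pafs):
    """Multi-substance PAF under response addition: 1 - prod(1 - PAF_i)."""
    return 1 - prod(1 - p for p in pafs)

print(round(paf(1.0, 2.0, 1.0), 4))                 # small exposure, small PAF
print(round(combined_toxic_pressure([0.01, 0.02]), 4))
```

In the study itself, per-chemical exposures come from multimedia fate modeling of emissions, and the SSD parameters carry the interspecies-variance uncertainty that dominates the wide confidence interval.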
Limitations of Stroke Volume Estimation by Non-Invasive Blood Pressure Monitoring in Hypergravity
2015-01-01
Background Altitude and gravity changes during aeromedical evacuations induce exacerbated cardiovascular responses in unstable patients. Non-invasive cardiac output monitoring is difficult to perform in this environment with limited access to the patient. We evaluated the feasibility and accuracy of stroke volume estimation by finger photoplethysmography (SVp) in hypergravity. Methods Finger arterial blood pressure (ABP) waveforms were recorded continuously in ten healthy subjects before, during, and after exposure to +Gz accelerations in a human centrifuge. The protocol consisted of 2-min and 8-min exposures up to +4 Gz. SVp was computed from ABP using the Liljestrand, systolic area, and Windkessel algorithms, and compared with reference values measured by echocardiography (SVe) before and after the centrifuge runs. Results The ABP signal could be used in 83.3% of cases. After calibration with echocardiography, SVp changes did not differ from SVe, and the values were linearly correlated (p<0.001). The three algorithms gave comparable SVp. Reproducibility between SVp and SVe was best with the systolic area algorithm (limits of agreement −20.5 and +38.3 ml). Conclusions Non-invasive ABP photoplethysmographic monitoring is an interesting technique for estimating relative stroke volume changes in moderate and sustained hypergravity. This method may aid physicians in aeronautic patient monitoring. PMID:25798613
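Of the three algorithms, the Liljestrand-Zander estimator is the simplest: stroke volume is taken proportional to pulse pressure divided by the sum of systolic and diastolic pressures, with the proportionality constant calibrated against a reference. A sketch with hypothetical pressures and a hypothetical echo calibration value:

```python
def liljestrand_sv(sbp, dbp, k=1.0):
    """Liljestrand-Zander pulse-pressure estimator: SV ≈ k * PP / (SBP + DBP).
    k must be calibrated against a reference method (here, echocardiography)."""
    return k * (sbp - dbp) / (sbp + dbp)

# Calibrate k so the estimate matches a reference SV at baseline ...
sv_ref = 70.0            # ml, hypothetical echo value
sbp0, dbp0 = 120.0, 80.0
k = sv_ref / liljestrand_sv(sbp0, dbp0)

# ... then track relative changes from the ABP waveform alone, e.g. under +Gz.
sv_now = liljestrand_sv(110.0, 85.0, k)
print(round(sv_now, 1))
```

This is why the abstract reports *relative* stroke volume changes: without the calibration step, the estimator has no absolute scale.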
Wille, Marie-Luise; Langton, Christian M
2016-02-01
The acceptance of broadband ultrasound attenuation (BUA) for the assessment of osteoporosis suffers from a limited understanding of both ultrasound wave propagation through cancellous bone and its exact dependence upon the material and structural properties. It has recently been proposed that ultrasound wave propagation in cancellous bone may be described by a concept of parallel sonic rays; the transit time of each ray is defined by the proportion of bone and marrow propagated. A Transit Time Spectrum (TTS) describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface aperture of the receive ultrasound transducer. The aim of this study was to test the hypothesis that the solid volume fraction (SVF) of simplified bone:marrow replica models may be reliably estimated from the corresponding ultrasound transit time spectrum. Transit time spectra were derived via digital deconvolution of the experimentally measured input and output ultrasonic signals, and compared to predicted TTS based on the parallel sonic ray concept, demonstrating agreement in both position and amplitude of spectral peaks. Solid volume fraction was calculated from the TTS; agreement of true (geometric calculation) with predicted (computer simulation) and experimentally derived values was R² = 99.9% and R² = 97.3%, respectively. It is therefore envisaged that ultrasound transit time spectroscopy (UTTS) offers the potential to reliably estimate bone mineral density and hence the established T-score parameter for clinical osteoporosis assessment. PMID:26455950
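Under the parallel-ray concept, a ray's transit time is a volume-fraction-weighted mix of the all-bone and all-marrow transit times, so the solid fraction can be recovered by inverting that linear relation. A sketch with assumed nominal sound speeds (not the paper's values):

```python
def solid_volume_fraction(transit_time_us, thickness_mm,
                          c_bone=4000.0, c_marrow=1500.0):
    """Invert the parallel-ray transit-time model
    t = d*(f/c_bone + (1-f)/c_marrow), giving
    f = (d/c_marrow - t) / (d/c_marrow - d/c_bone).
    Sound speeds (m/s) are assumed nominal values."""
    d = thickness_mm * 1e-3          # m
    t = transit_time_us * 1e-6       # s
    t_marrow = d / c_marrow          # all-marrow transit time
    t_bone = d / c_bone              # all-bone transit time
    return (t_marrow - t) / (t_marrow - t_bone)

# Example: a 10 mm sample traversed in ~5.42 µs implies f ≈ 0.30.
print(solid_volume_fraction(5.4167, 10.0))
```

The TTS extends this from one ray to a distribution of transit times over the transducer aperture; the SVF estimate then follows from the spectrum rather than a single value.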
A novel optical method for estimating the near-wall volume fraction in granular flows
NASA Astrophysics Data System (ADS)
Sarno, Luca; Nicolina Papa, Maria; Carleo, Luigi; Tai, Yih-Chin
2016-04-01
Geophysical phenomena, such as debris flows, pyroclastic flows, and rock avalanches, involve the rapid flow of granular mixtures. Today the dynamics of these flows is far from being deeply understood, due to their huge complexity compared to clear water or monophasic fluids. In this regard, physical models at laboratory scale represent important tools for understanding the still unclear properties of granular flows and their constitutive laws under simplified experimental conditions. Besides the velocity and the shear rate, the volume fraction is also strongly interlinked with the rheology of granular materials. Yet a reliable estimation of this quantity is not easy through non-invasive techniques. In this work a novel cost-effective optical method for estimating the near-wall volume fraction is presented and then applied to a laboratory study on steady-state granular flows. A preliminary numerical investigation, through Monte Carlo generations of grain distributions under controlled illumination conditions, allowed us to find the stochastic relationship between the near-wall volume fraction, c3D, and a measurable quantity (the two-dimensional volume fraction), c2D, obtainable through an appropriate binarization of gray-scale images captured by a camera placed in front of the transparent boundary. Such a relation can be well described by c3D = a·exp(b·c2D), with parameters only depending on the angle of incidence of light, ζ. An experimental validation of the proposed approach is carried out on dispersions of white plastic grains immersed in various ambient fluids. The mixture, confined in a box with a transparent window, is illuminated by a flicker-free LED lamp, placed so as to form a given ζ with the measuring surface, and is photographed by a camera placed in front of the same window. The predicted exponential law is found to be in sound agreement with experiments for a wide range of ζ (10° < ζ < 45°). The technique is, then, applied to steady-state dry
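Calibrating the exponential law c3D = a·exp(b·c2D) reduces to a linear fit once the 3D fractions are log-transformed. A minimal sketch with synthetic calibration pairs (the values a = 0.1, b = 2.0 are illustrative, not the study's):

```python
import math

def fit_exponential(c2d, c3d):
    """Fit c3D = a*exp(b*c2D) by linear least squares on log(c3D)."""
    n = len(c2d)
    x, y = c2d, [math.log(v) for v in c3d]
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic calibration pairs generated with a = 0.1, b = 2.0.
c2d = [0.2, 0.4, 0.6, 0.8]
c3d = [0.1 * math.exp(2.0 * v) for v in c2d]
a, b = fit_exponential(c2d, c3d)
print(round(a, 3), round(b, 3))  # recovers a ≈ 0.1, b ≈ 2.0
```

In the actual method, the (c2D, c3D) pairs come from the Monte Carlo grain generations, and a separate (a, b) pair is obtained for each light-incidence angle ζ.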
Tomiyama, Yuuki; Yoshinaga, Keiichiro; Fujii, Satoshi; Ochi, Noriki; Inoue, Mamiko; Nishida, Mutumi; Aziki, Kumi; Horie, Tatsunori; Katoh, Chietsugu; Tamaki, Nagara
2015-01-01
Increasing vascular diameter and attenuated vascular elasticity may be reliable markers for atherosclerotic risk assessment. However, previous measurements have been complex, operator-dependent, or invasive. Recently, we developed a new automated oscillometric method to measure a brachial artery's estimated area (eA) and volume elastic modulus (VE). The aim of this study was to investigate the reliability of the new automated oscillometric measurement of eA and VE. Resting eA and VE were measured using the recently developed automated detector with the oscillometric method. eA was estimated using pressure/volume curves, and VE was defined as VE = Δpressure / (100 × Δarea/area) mm Hg/%. Sixteen volunteers (age 35.2 ± 13.1 years) underwent the oscillometric measurements and brachial ultrasound at rest and under nitroglycerin (NTG) administration. Oscillometric measurement was performed twice on different days. The resting eA correlated with ultrasound-measured brachial artery area (r = 0.77, P < 0.001). Resting eA and VE measurements showed good reproducibility (eA: intraclass correlation coefficient (ICC) = 0.88, VE: ICC = 0.78). Under NTG stress, eA was significantly increased (12.3 ± 3.0 vs. 17.1 ± 4.6 mm², P < 0.001), similar to the ultrasound evaluation (4.46 ± 0.72 vs. 4.73 ± 0.75 mm, P < 0.001). VE was also decreased (0.81 ± 0.16 vs. 0.65 ± 0.11 mm Hg/%, P < 0.001) after NTG. Cross-sectional vascular area calculated using this automated oscillometric measurement correlated with ultrasound measurement and showed good reproducibility. Therefore, this is a reliable approach, and this modality may have practical application in automatically assessing muscular artery diameter and elasticity in clinical or epidemiological settings. PMID:25693851
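The volume elastic modulus defined above is the pressure change per 1% change in cross-sectional area. A direct sketch of that formula, with hypothetical readings (the inputs below reuse the abstract's area values only for scale):

```python
def volume_elastic_modulus(delta_pressure_mmhg, area_mm2, delta_area_mm2):
    """VE = ΔP / (100 * ΔA / A): pressure change per 1% change in
    cross-sectional area (mm Hg per %). A stiffer artery gives a larger VE."""
    return delta_pressure_mmhg / (100.0 * delta_area_mm2 / area_mm2)

# Hypothetical oscillometric readings: 40 mm Hg pressure change across an
# area excursion from 12.3 to 17.1 mm^2.
ve = volume_elastic_modulus(40.0, 12.3, 17.1 - 12.3)
print(ve)
```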
Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda
2014-09-01
Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded probable perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion
NASA Astrophysics Data System (ADS)
Oberreuter, J.; Gacitúa, G.; Uribe, J.; Rivera, A.; Zamora, R.; Loriaux, T.
2013-12-01
Central Chilean glaciers (33-35°S) are an important meltwater resource for human consumption, agriculture, mining, and industrial activities in this, the most populated region of the country. These glaciers have been retreating and shrinking during recent decades in response to ongoing climatic changes. As a result, there is increasing concern about future water availability, especially during dry summers, when glaciers are thought to make their maximum contribution to runoff. In spite of their importance, very little is known about the total volume of water equivalent stored in these glaciers. In order to improve our knowledge of this issue, we have utilized a new airborne radar system, developed at CECs and specially designed to penetrate temperate and cold ice, working at central frequencies between 20 and 60 MHz depending on the penetration range capacity at each glacier. This system has been installed on helicopters, where the metal antenna structure (receiver and transmitter) is carried as a hanging load while flying along pre-designated tracks, enabling the survey of steep and remote glacier areas, many of them without any ice thickness data to date. The helicopter is geo-located using dual-frequency GPS receivers and an inertial navigation unit installed onboard, and each measurement is geo-referenced using a pointing laser located at the radar antenna. The antenna must be flown at 40 m above the glacier surface at an air speed of 40 knots. This system has been successfully used on 24 glaciers representing 16% of the total glacier area of the Aconcagua, Maipo, and Rapel basins. A mean ice thickness of 168 m and a maximum of 342 m were detected among the surveyed glaciers. Crossing points between overlapping surveyed tracks resulted in mean differences of near 20 m (less than 10% of the total ice thickness). Subsequent ice volumes were calculated by interpolating radar data collected along tracks. These volumetric estimations correlated
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user-supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
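A hypothetical sketch of a price-per-unit estimate in the spirit of IPEG4: annualized costs divided by production volume. The cost categories follow the abstract, but the formula and the capital-recovery rate are assumed illustrations, not the actual IPEG4 model:

```python
# Illustrative price-per-unit calculation using the input categories named
# in the abstract. The simple annualized-cost formula and the 20% capital
# recovery rate are assumptions, not the IPEG4 methodology itself.
def price_per_unit(equipment_cost, space_cost, labor_cost, materials_cost,
                   utilities_cost, production_volume,
                   capital_recovery_rate=0.2):
    annual_cost = (capital_recovery_rate * equipment_cost  # amortized capital
                   + space_cost + labor_cost
                   + materials_cost + utilities_cost)      # recurring costs
    return annual_cost / production_volume

price = price_per_unit(1_000_000, 50_000, 300_000, 400_000, 50_000, 100_000)
# price is 10.0 (currency units per unit of production)
```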
Cortical thickness measurement from magnetic resonance images using partial volume estimation
NASA Astrophysics Data System (ADS)
Zuluaga, Maria A.; Acosta, Oscar; Bourgeat, Pierrick; Hernández Hoyos, Marcela; Salvado, Olivier; Ourselin, Sébastien
2008-03-01
Measurement of the cortical thickness from 3D Magnetic Resonance Imaging (MRI) can aid diagnosis and longitudinal studies of a wide range of neurodegenerative diseases. We estimate the cortical thickness using a Laplacian approach whereby equipotentials analogous to layers of tissue are computed. The thickness is then obtained using an Eulerian approach in which partial differential equations (PDE) are solved, avoiding the explicit tracing of trajectories along the streamline gradients. This method has the advantage of being relatively fast and ensures unique correspondence points between the inner and outer boundaries of the cortex. The original method is challenged when the thickness of the cortex is of the same order of magnitude as the image resolution, since the partial volume (PV) effect is not taken into account at the gray matter (GM) boundaries. We propose a novel way to take PV into account which substantially improves accuracy and robustness. We model PV by computing a mixture of pure Gaussian probability distributions and use this estimate to initialize the cortical thickness estimation. In experiments on synthetic phantoms, the errors were divided by three, while reproducibility was improved when the same patient was scanned three consecutive times.
D'Alessandro, Brian; Dhawan, Atam P
2012-11-01
Subsurface information about skin lesions, such as the blood volume beneath the lesion, is important for the analysis of lesion severity towards early detection of skin cancer such as malignant melanoma. Depth information can be obtained from diffuse reflectance based multispectral transillumination images of the skin. An inverse volume reconstruction method is presented which uses a genetic algorithm optimization procedure with a novel population initialization routine and nudge operator based on the multispectral images to reconstruct the melanin and blood layer volume components. Forward model evaluation for fitness calculation is performed using a parallel processing voxel-based Monte Carlo simulation of light in skin. Reconstruction results for simulated lesions show excellent volume accuracy. Preliminary validation is also done using a set of 14 clinical lesions, categorized into lesion severity by an expert dermatologist. Using two features, the average blood layer thickness and the ratio of blood volume to total lesion volume, the lesions can be classified into mild and moderate/severe classes with 100% accuracy. The method therefore has excellent potential for detection and analysis of pre-malignant lesions. PMID:22829392
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Loubère, Raphaël
2016-08-01
In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
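The a posteriori detection step summarized in the abstract can be sketched in one dimension as a relaxed discrete-maximum-principle check. The neighborhood definition, tolerance, and data below are illustrative, not the paper's exact unstructured-mesh criteria:

```python
import numpy as np

# A candidate cell average is flagged "troubled" if it is non-finite or
# falls outside the local min/max of the old solution over the cell and
# its immediate neighbors (relaxed by a small tolerance). Illustrative
# 1-D sketch only; the paper works on unstructured simplex meshes.
def troubled_cells(u_old, u_candidate, eps=1e-7):
    n = len(u_old)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        nbr = u_old[max(i - 1, 0):min(i + 2, n)]     # cell plus neighbors
        lo, hi = nbr.min() - eps, nbr.max() + eps    # relaxed local bounds
        ok = np.isfinite(u_candidate[i]) and lo <= u_candidate[i] <= hi
        flags[i] = not ok
    return flags

u_old = np.array([1.0, 1.0, 0.5, 0.0, 0.0])       # solution at t^n
u_cand = np.array([1.0, 1.2, 0.4, -0.3, 0.0])     # unlimited candidate
flags = troubled_cells(u_old, u_cand)             # cells 1 and 3 flagged
```

In the paper, flagged cells would then be recomputed with the robust sub-cell TVD finite volume scheme rather than keeping the candidate solution.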
Using LiDAR to Estimate Surface Erosion Volumes within the Post-storm 2012 Bagley Fire
NASA Astrophysics Data System (ADS)
Mikulovsky, R. P.; De La Fuente, J. A.; Mondry, Z. J.
2014-12-01
The total post-storm 2012 Bagley fire sediment budget of the Squaw Creek watershed in the Shasta-Trinity National Forest was estimated using several methods. A portion of the budget was quantitatively estimated using LiDAR. Simple workflows were designed to estimate the eroded volumes of debris slides, fill failures, gullies, altered channels and streams. LiDAR was also used to estimate depositional volumes. Thorough manual mapping of large erosional features using the ArcGIS 10.1 Geographic Information System was required, as these mapped features determined the eroded volume boundaries in 3D space. The 3D pre-erosional surface for each mapped feature was interpolated based on the boundary elevations. A surface difference calculation was run using the estimated pre-erosional surfaces and LiDAR surfaces to determine the volume of sediment potentially delivered into the stream system. In addition, cross sections of altered channels and streams were taken using stratified random selection based on channel gradient and stream order, respectively. The original pre-storm surfaces of channel features were estimated using the cross sections and erosion depth criteria. The open-source software Inkscape was used to estimate cross-sectional areas for randomly selected channel features, which were then averaged for each channel gradient and stream order class. The average areas were then multiplied by the length of each class to estimate the total eroded volume of altered channels and streams. Finally, reservoir and in-channel depositional volumes were estimated by mapping channel forms and generating specific reservoir elevation zones associated with depositional events. The in-channel areas and zones within the reservoir were multiplied by estimated and field-observed sediment thicknesses to obtain a best-guess sediment volume. In-channel estimates included re-occupying stream channel cross sections established before the fire. Once volumes were calculated, other erosion processes of the Bagley
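The surface-difference calculation described above can be sketched as a DEM subtraction: subtract the post-storm LiDAR surface from the interpolated pre-erosion surface and sum positive differences times the cell area. The tiny grids and 1 m cell size are illustrative stand-ins for real DEMs:

```python
import numpy as np

# Illustrative DEM differencing for an eroded-volume estimate. Real
# workflows would clip to mapped feature boundaries and handle nodata.
def eroded_volume(pre_surface, post_surface, cell_size):
    diff = pre_surface - post_surface       # positive where material was lost
    return float(np.sum(diff[diff > 0]) * cell_size ** 2)

pre = np.array([[10.0, 10.0], [10.0, 10.0]])    # pre-erosion elevations, m
post = np.array([[9.0, 10.0], [8.5, 10.0]])     # post-storm LiDAR, m
volume = eroded_volume(pre, post, cell_size=1.0)  # 2.5 cubic meters
```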
A maximum volume density estimator generalized over a proper motion-limited sample
NASA Astrophysics Data System (ADS)
Lam, Marco C.; Rowell, Nicholas; Hambly, Nigel C.
2015-07-01
The traditional Schmidt density estimator has been proven to be unbiased and effective in a magnitude-limited sample. Previously, efforts have been made to generalize it for populations with non-uniform density and proper motion-limited cases. This work shows that the then-good assumptions for a proper motion-limited sample are no longer sufficient to cope with modern data. Populations with larger differences in the kinematics as compared to the local standard of rest are most severely affected. We show that this systematic bias can be removed by treating the discovery fraction inseparable from the generalized maximum volume integrand. The treatment can be applied to any proper motion-limited sample with good knowledge of the kinematics. This work demonstrates the method through application to a mock catalogue of a white dwarf-only solar neighbourhood for various scenarios and compared against the traditional treatment using a survey with Pan-STARRS-like characteristics.
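A toy version of the classical Schmidt 1/Vmax estimator referenced above may help fix ideas: each object contributes the reciprocal of the largest volume in which it would still pass the survey magnitude limit. The paper's generalization additionally folds the proper-motion discovery fraction into the volume integrand; that refinement is omitted in this sketch:

```python
# Magnitude-limited 1/Vmax sketch. The cone-volume geometry and the
# example numbers are illustrative, not drawn from the paper.
def v_max(abs_mag, m_limit, solid_angle_sr):
    # Distance modulus: m - M = 5 log10(d/pc) - 5  =>  d_max for m = m_limit
    d_max_pc = 10.0 ** (0.2 * (m_limit - abs_mag) + 1.0)
    return solid_angle_sr / 3.0 * d_max_pc ** 3      # cone volume, pc^3

def schmidt_density(abs_mags, m_limit, solid_angle_sr):
    return sum(1.0 / v_max(M, m_limit, solid_angle_sr) for M in abs_mags)

# Two M = 15 objects in a survey complete to m = 20 over 1 steradian:
rho = schmidt_density([15.0, 15.0], m_limit=20.0, solid_angle_sr=1.0)
```

Each object here has d_max = 100 pc, so Vmax = 10^6/3 pc^3 and the summed density is 6e-6 objects per cubic parsec.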
MCNP ESTIMATE OF THE SAMPLED VOLUME IN A NON-DESTRUCTIVE IN SITU SOIL CARBON ANALYSIS.
WIELOPOLSKI, L.; DIOSZEGI, I.; MITRA, S.
2004-05-03
Global warming, promoted by anthropogenic CO2 emission into the atmosphere, is partially mitigated by the photosynthesis processes of terrestrial ecosystems, which act as atmospheric CO2 scrubbers and sequester carbon in soil. Switching from till to no-till soil management practices in agriculture further augments this process. Carbon sequestration is also advanced by putting forward a carbon "credit" system whereby these credits can be traded between CO2 producers and sequesterers. Implementation of carbon "credit" trade will be further promulgated by the recent development of a non-destructive in situ carbon monitoring system based on inelastic neutron scattering (INS). Volumes and depth distributions defined by the 0.1, 1.0, 10, 50, and 90 percent neutron isofluxes, from a point source located at either 5 or 30 cm above the surface, were estimated using Monte Carlo calculations.
Water volume estimates of the Greenland Perennial Firn Aquifer from in situ measurements
NASA Astrophysics Data System (ADS)
Koenig, L.; Miege, C.; Forster, R. R.; Brucker, L.
2013-12-01
Improving our understanding of the complex Greenland hydrologic system is necessary for assessing change across the Greenland Ice Sheet and its contribution to sea level rise (SLR). A new component of the Greenland hydrologic system, a Perennial Firn Aquifer (PFA), was discovered in April 2011. The PFA represents a large storage of liquid water within the Greenland Ice Sheet, with an area of 70,000 ± 10,000 km² simulated by the RACMO2/GR regional climate model, which closely follows airborne radar-derived mapping (Forster et al., in press). The average top surface depth of the PFA as detected by radar is 23 m. In April 2013, our team drilled through the PFA for the first time to gain an understanding of the firn structure constraining the PFA, to estimate the water volume within the PFA, and to measure PFA temperatures and densities. At our drill site in Southeast Greenland (~100 km northwest of Kulusuk), water fills or partially fills the available firn pore space from depths of ~12 to 37 m. The temperature within the PFA depths is constant at 0.1 ± 0.1 °C, while the 12 m of seasonally dry firn above the PFA has a temperature profile dominated by surface temperature forcing. Near the bottom of the PFA, water completely fills the available pore space as the firn is compressed to ice, entrapping water-filled bubbles, as opposed to air-filled bubbles, which then start to refreeze. A PFA maximum density is reached as the water filling the pore space, increasing density, begins refreezing back into ice at a lower density. We define this depth as the pore water refreeze depth and use it as the bottom of the PFA to calculate volume. It is certain, however, that a small amount of water exists below this depth, which we do not account for. The density profile obtained from the ACT11B firn core, the closest seasonally dry firn core, is compared to both gravitational densities and high-resolution densities derived from a neutron density probe at the PFA site. The
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date, there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Statistical analyses of nuclear waste level measurements to estimate retained gas volumes
NASA Astrophysics Data System (ADS)
Whitney, Paul D.; Chen, Guang
1999-01-01
The Hanford site is home to 177 large, underground nuclear waste storage tanks. Numerous safety and environmental concerns surround these tanks and their contents. One such concern is the propensity for the waste in these tanks to generate and retain flammable gases. The surface level of the waste in these tanks is routinely monitored to assess whether the tanks are leaking. For some of the tanks, the waste surface level measurements fluctuated synchronously with atmospheric pressure changes. The current best explanation for these synchronous fluctuations is that the waste contains gas-phase material that changes volume in response to the atmospheric pressure changes. This paper describes: (1) the exploratory data analysis that led to the discovery of the phenomenon; (2) a physical model based on the ideal gas law that explains the phenomenon and, additionally, allows one to obtain estimates of the retained gas volume in the tank waste; (3) a statistical procedure for detecting retained gas based on the physical model and tank surface level measurements; and (4) a Kalman filter model for analyzing the dynamics of retained gas. It is also shown how the filter can be used to detect abrupt changes in the system.
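The ideal-gas estimate described above can be sketched numerically. For a trapped gas pocket at pressure ~P, the ideal gas law (PV = const) gives dV = -(V/P) dP; since dV = A dL for a tank of cross-section A, regressing surface level L on barometric pressure P yields V ≈ -P·A·(dL/dP). The tank area and all data below are synthetic illustrations, not Hanford values:

```python
import numpy as np

# Recover a synthetic retained gas volume from a level-vs-pressure slope.
A = 410.0                                       # cross-sectional area, m^2
P = np.array([99.0, 100.0, 101.0, 102.0])       # barometric pressure, kPa
true_V = 8.0                                    # retained gas volume, m^3
L = 5.0 - (true_V / (A * 100.0)) * (P - 100.0)  # linearized level response, m

slope = np.polyfit(P, L, 1)[0]                  # dL/dP from least squares
V_est = -100.0 * A * slope                      # evaluated at P ~ 100 kPa
```

On this noise-free linear data the regression recovers the 8 m^3 of retained gas exactly; with real level measurements the statistical procedure in the paper is needed to separate this signal from noise and drift.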
Scatter to volume registration for model-free respiratory motion estimation from dynamic MRIs.
Miao, S; Wang, Z J; Pan, L; Butler, J; Moran, G; Liao, R
2016-09-01
Respiratory motion is one major complicating factor in many image acquisition applications and image-guided interventions. Existing respiratory motion estimation and compensation methods typically rely on breathing motion models learned from certain training data, and therefore may not be able to effectively handle intra-subject and/or inter-subject variations of respiratory motion. In this paper, we propose a respiratory motion compensation framework that directly recovers motion fields from sparsely spaced and efficiently acquired dynamic 2-D MRIs without using a learned respiratory motion model. We present a scatter-to-volume deformable registration algorithm to register dynamic 2-D MRIs with a static 3-D MRI to recover dense deformation fields. Practical considerations and approximations are provided to solve the scatter-to-volume registration problem efficiently. The performance of the proposed method was investigated on both synthetic and real MRI datasets, and the results showed significant improvements over the state-of-the-art respiratory motion modeling methods. We also demonstrated a potential application of the proposed method on MRI-based motion corrected PET imaging using hybrid PET/MRI. PMID:27180910
Head motion during MRI acquisition reduces gray matter volume and thickness estimates.
Reuter, Martin; Tisdall, M Dylan; Qureshi, Abid; Buckner, Randy L; van der Kouwe, André J W; Fischl, Bruce
2015-02-15
Imaging biomarkers derived from magnetic resonance imaging (MRI) data are used to quantify normal development, disease, and the effects of disease-modifying therapies. However, motion during image acquisition introduces image artifacts that, in turn, affect derived markers. A systematic effect can be problematic since factors of interest like age, disease, and treatment are often correlated with both a structural change and the amount of head motion in the scanner, confounding the ability to distinguish biology from artifact. Here we evaluate the effect of head motion during image acquisition on morphometric estimates of structures in the human brain using several popular image analysis software packages (FreeSurfer 5.3, VBM8 SPM, and FSL Siena 5.0.7). Within-session repeated T1-weighted MRIs were collected on 12 healthy volunteers while performing different motion tasks, including two still scans. We show that volume and thickness estimates of the cortical gray matter are biased by head motion with an average apparent volume loss of roughly 0.7%/mm/min of subject motion. Effects vary across regions and remain significant after excluding scans that fail a rigorous quality check. In view of these results, the interpretation of reported morphometric effects of movement disorders or other conditions with increased motion tendency may need to be revisited: effects may be overestimated when not controlling for head motion. Furthermore, drug studies with hypnotic, sedative, tranquilizing, or neuromuscular-blocking substances may contain spurious "effects" of reduced atrophy or brain growth simply because they affect motion distinct from true effects of the disease or therapeutic process. PMID:25498430
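A back-of-envelope use of the bias figure reported above (an apparent cortical gray-matter volume loss of roughly 0.7% per mm/min of head motion) shows the scale of the confound. The function and all numbers are illustrative only:

```python
# Apparent volume loss (ml) mimicked by a given motion level, using the
# ~0.7%/mm/min bias from the abstract; inputs are illustrative.
def apparent_volume_loss(volume_ml, motion_mm_per_min, bias_pct=0.7):
    return volume_ml * (bias_pct / 100.0) * motion_mm_per_min

# A 600 ml cortical gray-matter volume with 2 mm/min of motion would
# appear roughly 8.4 ml smaller than it really is:
loss = apparent_volume_loss(600.0, 2.0)
```

An apparent loss of that size is comparable to a year or more of true atrophy in some conditions, which is why the authors argue for controlling for motion.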
Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz
2009-09-01
The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their widespread use in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch" in which a hybrid 2D-3D registration scheme, combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration, was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform toolkits Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform. PMID:19328585
Jennesseaux, C; Metz, D; Maillier, B; Nazeyrollas, P; Maes, D; Tassan, S; Chabert, J P; Elaerts, J
1996-07-01
The object of this study was to assess the reliability of measurements of left ventricular volumes and ejection fraction by acoustic quantification using the method of summation of discs in acute myocardial infarction. Thirty-two patients with an average age of 55.9 ± 12 years were studied prospectively, on average 6 ± 2 days after the onset of myocardial infarction. Within 48 hours, the patients underwent TM echocardiography (Teichholz's method) and two-dimensional echocardiography (Simpson's method on freeze frames, and acoustic quantification) before left ventricular angiography and isotopic ventriculography, considered the reference methods for comparing left ventricular volumes and ejection fractions. The data displayed in real time by acoustic quantification correlated well with the results of left ventricular angiography (r = 0.77; p = 0.0001) and moderately underestimated (+4.1 ± 11.9%) the ejection fraction, but were relatively disappointing for estimating volumes. When compared with the isotopic ejection fraction, the correlation coefficient was r = 0.71 (p = 0.0004) and the values were overestimated. In this study, acoustic quantification was the most reliable echocardiographic method of assessing the left ventricular ejection fraction with reference to contrast angiography (Teichholz: r = 0.56; p = 0.0014; Simpson: r = 0.76; p = 0.001). The authors conclude that assessing the left ventricular ejection fraction with acoustic quantification is reliable in acute myocardial infarction. However, the method is not very accurate in measuring end-systolic and end-diastolic volumes. PMID:8869245
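The "summation of discs" (Simpson's method) volume estimate used in the comparisons above can be sketched directly: the ventricular cavity is sliced into n discs of equal height along the long axis, and each disc of diameter d_i contributes π·d_i²/4 times the disc height. The diameters and long-axis length below are illustrative, not patient data:

```python
import math

# Simpson's summation-of-discs volume estimate; inputs are illustrative.
def lv_volume_ml(diameters_cm, long_axis_cm):
    h = long_axis_cm / len(diameters_cm)          # disc height, cm
    return sum(math.pi * d ** 2 / 4.0 * h for d in diameters_cm)  # cm^3 = ml

vol = lv_volume_ml([4.0, 4.5, 3.5, 2.0], long_axis_cm=8.0)  # about 82.5 ml
```

Clinical implementations typically use many more discs (often 20) and, for the biplane variant, pairs of orthogonal diameters per disc.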
NASA Astrophysics Data System (ADS)
Hammer, Patrick; Naguib, Ahmed; Koochesfahani, Manoochehr
2015-11-01
The proper estimation of thrust is very important for understanding the aerodynamics of oscillating airfoils at low chord Reynolds number Re. Although direct force measurement is possible, force values at low Re are often small, and separation of the test-model's inertia forces from the data may not be straightforward. A common alternative is a control-volume (CV) approach, where terms in the integral momentum equation are computed from measured wake velocity profiles. Although it is acceptable to use only the mean streamwise-velocity profile in estimating the streamwise force on stationary airfoils, recent work has highlighted the importance of terms relating the velocity fluctuation and pressure distribution in the wake for unsteady airfoils. The goal of the present work is to capitalize on 2D computational data for a harmonically pitching airfoil at Re in the range 2,000-22,000, where all terms in the momentum-integral equation are accessible, to evaluate the importance of the various terms in the equation and assess the accuracy of the assumptions that are typically made in experiments due to the difficulty in measuring certain terms (such as the wake pressure distribution) by comparing the CV results with the actual computed thrust. This work was supported by AFOSR grant number FA9550-10-1-0342.
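The simplest control-volume estimate discussed above keeps only the mean momentum-deficit term, F' = ρ∫u(U∞ − u) dy per unit span, with negative F' indicating thrust; the fluctuation and wake-pressure terms are neglected, and that omission is exactly what the study assesses. A minimal sketch with a made-up jet-like (thrust-producing) wake profile:

```python
import numpy as np

# Mean-flow momentum-deficit force per unit span from a wake traverse.
# The Gaussian velocity-excess profile and all constants are synthetic.
def momentum_deficit_force(y, u, u_inf, rho=1.2):
    f = rho * u * (u_inf - u)                       # integrand
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))  # trapezoid

y = np.linspace(-0.5, 0.5, 201)                 # wake traverse, m
u = 10.0 + 2.0 * np.exp(-((y / 0.1) ** 2))      # velocity excess in the wake
F = momentum_deficit_force(y, u, u_inf=10.0)    # negative: thrust
```

Because u > U∞ across the wake (a momentum excess), the integral is negative, i.e. the airfoil produces thrust under this approximation.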
Programmatic methods for addressing contaminated volume uncertainties.
DURHAM, L.A.; JOHNSON, R.L.; RIEMAN, C.R.; SPECTOR, H.L.; Environmental Science Division; U.S. ARMY CORPS OF ENGINEERS BUFFALO DISTRICT
2007-01-01
Accurate estimates of the volumes of contaminated soils or sediments are critical to effective program planning and to successfully designing and implementing remedial actions. Unfortunately, data available to support the preremedial design are often sparse and insufficient for accurately estimating contaminated soil volumes, resulting in significant uncertainty associated with these volume estimates. The uncertainty in the soil volume estimates significantly contributes to the uncertainty in the overall project cost estimates, especially since excavation and off-site disposal are the primary cost items in soil remedial action projects. The Army Corps of Engineers Buffalo District's experience has been that historical contaminated soil volume estimates developed under the Formerly Utilized Sites Remedial Action Program (FUSRAP) often underestimated the actual volume of subsurface contaminated soils requiring excavation during the course of a remedial activity. In response, the Buffalo District has adopted a variety of programmatic methods for addressing contaminated volume uncertainties. These include developing final status survey protocols prior to remedial design, explicitly estimating the uncertainty associated with volume estimates, investing in predesign data collection to reduce volume uncertainties, and incorporating dynamic work strategies and real-time analytics in predesign characterization and remediation activities. This paper describes some of these experiences in greater detail, drawing from the knowledge gained at Ashland1, Ashland2, Linde, and Rattlesnake Creek. In the case of Rattlesnake Creek, these approaches provided the Buffalo District with an accurate predesign contaminated volume estimate and resulted in one of the first successful FUSRAP fixed-price remediation contracts for the Buffalo District.
Puigdellívol-Sánchez, A; Prats-Galino, A; Reina, M A; Machés, F; Hernández, J M; De Andrés, J; van Zundert, A
2011-01-01
Three-dimensional (3D) image reconstruction of structures inside the spinal canal produces data of clear relevance to regional anesthesia. Nowadays, hospital MRI equipment is designed mainly for clinical diagnostic purposes. In order to overcome these limitations, we produced more accurate images of structures contained inside the spinal canal using different software, validating our quantitative results against those obtained with standard hospital MRI equipment. Neuroanatomical 3D reconstruction using Amira software, including detailed manual editing, was compared with semi-automatic 3D segmentation for CSF volume calculations by commonly available software linked to the MR equipment (MR hospital). Axial sections from seven patients were grouped in two aligned blocks (T1 Fast Field Echo 3D and T2 Balanced Fast Field Echo 3D; resolution 0.65 × 0.65 × 0.65 mm, 130 mm length, 400 sections per case). T2-weighted images were used for CSF volume estimations. The selected program allowed us to reconstruct 3D images of human vertebrae, dural sac, epidural fat, CSF and nerve roots. The CSF volume, including the amount contained inside nerve roots, was calculated. Different segmentation thresholds were used, but the CSF volume estimations showed high correlation between both teams (Pearson coefficient = 0.98, p = 0.003 for lower blocks; Pearson 0.89, p = 0.042 for upper blocks). The mean estimated value of CSF volume in lower blocks (L3-S1) was 15.8 ± 2.9 ml (Amira software) and 13.1 ± 1.9 ml (software linked to the MR equipment), and in upper blocks (T11-L2) was 21 ± 4.47 ml and 18.9 ± 3.5 ml, respectively. A high variability was detected among cases, without correlation with weight, height or body mass index. Aspects concerning the partial volume effect are also discussed. Quick semi-automatic hospital 3D reconstructions give results close to detailed neuroanatomical 3D reconstruction and could be used in the future for individual quantification of
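The core of a segmentation-based volume estimate like the CSF calculation above is simple: count the voxels carrying the label of interest and multiply by the voxel volume. The 0.65 mm isotropic resolution matches the acquisitions described; the mask itself is a synthetic illustration:

```python
import numpy as np

# Volume of a labeled region from a boolean segmentation mask.
def segmented_volume_ml(mask, voxel_mm=0.65):
    voxel_ml = voxel_mm ** 3 / 1000.0     # mm^3 per voxel -> ml
    return int(mask.sum()) * voxel_ml

mask = np.zeros((100, 100, 100), dtype=bool)
mask[20:80, 20:80, 20:80] = True          # 60^3 voxels labeled as CSF
csf_ml = segmented_volume_ml(mask)        # about 59.3 ml
```

Partial volume effects at region boundaries, discussed in the abstract, make the hard-threshold count an approximation; soft (fractional) labels would be summed instead of counted.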
A New Approach for Deep Gray Matter Analysis Using Partial-Volume Estimation
Bonnier, Guillaume; Kober, Tobias; Schluep, Myriam; Du Pasquier, Renaud; Krueger, Gunnar; Meuli, Reto
2016-01-01
Introduction The existence of partial volume effects in brain MR images makes it challenging to understand physio-pathological alterations underlying signal changes due to pathology across groups of healthy subjects and patients. In this study, we implement a new approach to disentangle gray and white matter alterations in the thalamus and the basal ganglia. The proposed method was applied to a cohort of early multiple sclerosis (MS) patients and healthy subjects to evaluate tissue-specific alterations related to diffuse inflammatory or neurodegenerative processes. Method Forty-three relapsing-remitting MS (RRMS) patients and nineteen healthy controls (HC) underwent 3T MRI including: (i) fluid-attenuated inversion recovery, double inversion recovery, and magnetization-prepared gradient echo for lesion count, and (ii) T1 relaxometry. We applied a partial volume estimation algorithm to the T1 relaxometry maps to estimate gray and white matter local concentrations as well as T1 values characteristic of gray and white matter in the thalamus and the basal ganglia. Statistical tests were performed to compare groups in terms of global T1 values, tissue-characteristic T1 values, and tissue concentrations. Results Significant increases in global T1 values were observed in the thalamus (p = 0.038) and the putamen (p = 0.026) in RRMS patients compared to HC. In the thalamus, the T1 increase was associated with a significant increase in gray matter characteristic T1 (p = 0.0016) with no significant effect in white matter. Conclusion The presented methodology provides additional information to standard MR signal averaging approaches that holds promise to identify the presence and nature of diffuse pathology in neuro-inflammatory and neurodegenerative diseases. PMID:26845760
Estimation of the possible flood discharge and volume of stormwater for designing water storage.
Kirzhner, Felix; Kadmon, Avri
2011-01-01
The shortage of good-quality water resources is an important issue in arid and semiarid zones. Stormwater-harvesting systems that are capable of delivering good-quality wastewater for non-potable uses, while taking into account environmental and health requirements, must be developed. For this reason, the availability of water resources of marginal quality, like stormwater, can make a significant contribution to the water supply. Current stormwater management practices worldwide require the creation of control systems that monitor the quality and quantity of the water and the development of stormwater basins to store increased runoff volumes. Public health and safety must also be taken into account. Urban and suburban development, with the creation of buildings and roads and innumerable related activities, turns rain and snow into unwitting agents of damage to waterways. This urban and suburban runoff, legally known as stormwater, is one of the most significant sources of water pollution in the world. Based on factors such as water quality, runoff flow rate and speed, and the topography involved, stormwater can be directed into basins, purification plants, or to the sea. Accurate floodplain maps are the key to better floodplain management. The aim of this work is to use geographic information systems (GIS) to monitor and control the effect of stormwater. The graphic and mapping capabilities of GIS provide strong tools for conveying information and forecasts of different stormwater flow and buildup scenarios. Analyses of hydrologic processes, rainfall simulations, and spatial patterns of water resources were performed with GIS; that is, water flow was introduced into the GIS based on an integrated data set. Two cases in Israel were analyzed: the Hula Project (the Jordan River floods over the peat soil area) and the Kishon River floodplains in the Yizrael Valley. PMID:22435327
Neubauer, Simon; Gunz, Philipp; Weber, Gerhard W; Hublin, Jean-Jacques
2012-04-01
Estimation of endocranial volume in Australopithecus africanus is important in interpreting early hominin brain evolution. However, the number of individuals available for investigation is limited and most of these fossils are, to some degree, incomplete and/or distorted. Uncertainties of the required reconstruction ('missing data uncertainty') and the small sample size ('small sample uncertainty') both potentially bias estimates of the average and within-group variation of endocranial volume in A. africanus. We used CT scans, electronic preparation (segmentation), mirror-imaging and semilandmark-based geometric morphometrics to generate and reconstruct complete endocasts for Sts 5, Sts 60, Sts 71, StW 505, MLD 37/38, and Taung, and measured their endocranial volumes (EV). To get a sense of the reliability of these new EV estimates, we then used simulations based on samples of chimpanzees and humans to: (a) test the accuracy of our approach, (b) assess missing data uncertainty, and (c) appraise small sample uncertainty. Incorporating missing data uncertainty of the five adult individuals, A. africanus was found to have an average adult endocranial volume of 454-461 ml with a standard deviation of 66-75 ml. EV estimates for the juvenile Taung individual range from 402 to 407 ml. Our simulations show that missing data uncertainty is small given the missing portions of the investigated fossils, but that small sample sizes are problematic for estimating species average EV. It is important to take these uncertainties into account when different fossil groups are being compared. PMID:22365336
Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera.
Reyes, Bersain A; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H
2016-01-01
A smartphone-based tidal volume (VT) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference VT measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and VT to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of -0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis. PMID:26999152
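The calibration step described above reduces to fitting a linear model mapping the smartphone chest-signal amplitude to the reference volumes read from the incentive spirometer. A minimal sketch, with purely illustrative amplitudes and volume readings (not data from the study):

```python
import numpy as np

# Hypothetical calibration data: peak-to-peak chest-signal amplitudes
# (arbitrary units) and the matching incentive-spirometer volumes (L).
amplitude = np.array([0.8, 1.1, 1.6, 2.0, 2.7])
vt_liters = np.array([0.45, 0.62, 0.88, 1.10, 1.49])

# Least-squares fit of the linear calibration model VT = a*amplitude + b.
a, b = np.polyfit(amplitude, vt_liters, 1)

def estimate_vt(amp):
    """Map a new smartphone amplitude to an estimated tidal volume (L)."""
    return a * amp + b
```

In this scheme the spirometer is needed only during a brief calibration session; afterwards the smartphone signal alone yields tidal volume estimates.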
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
Predicting traffic volumes and estimating the effects of shocks in massive transportation systems
Silva, Ricardo; Kang, Soong Moon; Airoldi, Edoardo M.
2015-01-01
Public transportation systems are an essential component of major cities. The widespread use of smart cards for automated fare collection in these systems offers a unique opportunity to understand passenger behavior at a massive scale. In this study, we use network-wide data obtained from smart cards in the London transport system to predict future traffic volumes, and to estimate the effects of disruptions due to unplanned closures of stations or lines. Disruptions, or shocks, force passengers to make different decisions concerning which stations to enter or exit. We describe how these changes in passenger behavior lead to possible overcrowding and model how stations will be affected by given disruptions. This information can then be used to mitigate the effects of these shocks because transport authorities may prepare in advance alternative solutions such as additional buses near the most affected stations. We describe statistical methods that leverage the large amount of smart-card data collected under the natural state of the system, where no shocks take place, as variables that are indicative of behavior under disruptions. We find that features extracted from the natural regime data can be successfully exploited to describe different disruption regimes, and that our framework can be used as a general tool for any similar complex transportation system. PMID:25902504
1995-09-01
The Solid Waste Retrieval Facility--Phase 1 (Project W113) will provide the infrastructure and the facility required to retrieve from Trench 04, Burial ground 4C, contact handled (CH) drums and boxes at a rate that supports all retrieved TRU waste batching, treatment, storage, and disposal plans. This includes (1) operations related equipment and facilities, viz., a weather enclosure for the trench, retrieval equipment, weighing, venting, obtaining gas samples, overpacking, NDE, NDA, shipment of waste and (2) operations support related facilities, viz., a general office building, a retrieval staff change facility, and infrastructure upgrades such as supply and routing of water, sewer, electrical power, fire protection, roads, and telecommunication. Title I design for the operations related equipment and facilities was performed by Raytheon/BNFL, and that for the operations support related facilities including infrastructure upgrade was performed by KEH. These two scopes were combined into an integrated W113 Title II scope that was performed by Raytheon/BNFL. This volume represents the total estimated costs for the W113 facility. Operating Contractor Management costs have been incorporated as received from WHC. The W113 Facility TEC is $19.7 million. This includes an overall project contingency of 14.4% and escalation of 17.4%. A January 2001 construction contract procurement start date is assumed.
Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions
NASA Astrophysics Data System (ADS)
White, Randall; McCausland, Wendy
2016-01-01
We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly, we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity including: swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and large non-double couple component to focal mechanisms. Most importantly, we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity from: log10 V = 0.77 log10(ΣMoment) - 5.32, with volume, V, in cubic meters and seismic moment in newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
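The moment-to-volume relation quoted above can be applied directly; a minimal sketch, with the units stated in the abstract (V in cubic meters, cumulative seismic moment in newton meters):

```python
import math

def intruded_volume_m3(cum_moment_nm):
    """Intruded magma volume from cumulative VT seismic moment,
    using the relation log10(V) = 0.77*log10(sum(M0)) - 5.32,
    with V in cubic meters and moment in newton meters."""
    return 10 ** (0.77 * math.log10(cum_moment_nm) - 5.32)
```

For example, a cumulative moment of 1e15 N·m maps to 10**6.23, roughly 1.7 million cubic meters of intruded magma.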
Voxel-Based Approach for Estimating Urban Tree Volume from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Vonderach, C.; Voegtle, T.; Adler, P.
2012-07-01
The importance of single trees and the determination of related parameters has been recognized in recent years, e.g. for forest inventories or management. For urban areas an increasing interest in the data acquisition of trees can be observed concerning aspects like urban climate, CO2 balance, and environmental protection. Urban trees differ significantly from natural systems with regard to the site conditions (e.g. technogenic soils, contaminants, lower groundwater level, regular disturbance), climate (increased temperature, reduced humidity) and species composition and arrangement (habitus and health status) and therefore allometric relations cannot be transferred from natural sites to urban areas. To overcome this problem an extended approach was developed for a fast and non-destructive extraction of branch volume, DBH (diameter at breast height) and height of single trees from point clouds of terrestrial laser scanning (TLS). For data acquisition, the trees were scanned at the highest scan resolution from several (up to five) positions located around the tree. The resulting point clouds (20 to 60 million points) are analysed with an algorithm based on a voxel (volume element) structure, leading to an appropriate data reduction. In a first step, two kinds of noise reduction are carried out: the elimination of isolated voxels as well as voxels with marginal point density. To obtain correct volume estimates, voxels inside the stem and branches (interior voxels), which contain no laser points, must also be accounted for. For this filling process, an easy and robust approach was developed based on a layer-wise (horizontal layers of the voxel structure) intersection of four orthogonal viewing directions. However, this procedure also generates several erroneous "phantom" voxels, which have to be eliminated. For this purpose the previous approach was extended by a special region growing algorithm. In a final step the volume is determined layer-wise based on the extracted
Thrippleton, Michael J.; Munro, Kirsty I.; McKillop, Graham; Newby, David E.; Marshall, Ian; Roberts, Neil
2015-01-01
The aim of our study was to develop a reliable technique for measuring the volume of the fibroid uterus using Magnetic Resonance Imaging. We applied the Cavalieri method and standard calliper technique to measure the volume of the uterus and largest fibroid in 26 patients, and results were compared with “gold-standard” planimetry measurements. We found Cavalieri measurements to be unbiased, while calliper measurements systematically underestimated uterine volume (−13.2%, P < 10⁻⁵) and had greater variance. Repeatability was similar for the 2 techniques (standard deviation [SD] = 4.0%-6.9%). Reproducibility of Cavalieri measurements was higher for measurement of uterine (SD = 9.0%) than fibroid volume (SD = 19.1%), whereas the reproducibility of calliper measurements was higher for fibroid (SD = 9.1%) than uterine volume (SD = 15.9%). The additional measurement time for the Cavalieri method was approximately 1 to 2 minutes. In conclusion, the Cavalieri method permits more accurate measurement of uterine and fibroid volumes and is suitable for application in both clinical practice and scientific research. PMID:25332217
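The Cavalieri method referenced above estimates an object's volume by summing the areas of systematically spaced parallel sections and multiplying by the section spacing. A minimal sketch with hypothetical section areas (not measurements from the study):

```python
def cavalieri_volume(section_areas_cm2, slice_spacing_cm):
    """Cavalieri estimator: V = d * sum(A_i), where A_i are the areas
    of systematically spaced parallel sections (e.g. MRI slices) and
    d is the spacing between them."""
    return slice_spacing_cm * sum(section_areas_cm2)

# Hypothetical slice areas (cm^2) traced on consecutive MRI sections
# 0.5 cm apart; the estimated volume is 0.5 * 77.0 = 38.5 cm^3.
volume = cavalieri_volume([12.0, 18.5, 21.0, 16.5, 9.0], 0.5)
```

In stereological practice the section areas themselves are often estimated by point counting (points hit times the area associated with each grid point) rather than exact planimetry.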
NASA Technical Reports Server (NTRS)
McCurry, J. B.
1995-01-01
The purpose of the TA-2 contract was to provide advanced launch vehicle concept definition and analysis to assist NASA in the identification of future launch vehicle requirements. Contracted analysis activities included vehicle sizing and performance analysis, subsystem concept definition, propulsion subsystem definition (foreign and domestic), ground operations and facilities analysis, and life cycle cost estimation. The basic period of performance of the TA-2 contract was from May 1992 through May 1993. No-cost extensions were exercised on the contract from June 1993 through July 1995. This document is part of the final report for the TA-2 contract. The final report consists of three volumes: Volume 1 is the Executive Summary, Volume 2 is Technical Results, and Volume 3 is Program Cost Estimates. The document-at-hand, Volume 3, provides a work breakdown structure dictionary, user's guide for the parametric life cycle cost estimation tool, and final report developed by ECON, Inc., under subcontract to Lockheed Martin on TA-2 for the analysis of heavy lift launch vehicle concepts.
NASA Astrophysics Data System (ADS)
Kiattisin, Supaporn; Chamnongthai, Kosin
Bone Mineral Density (BMD) is an indicator of osteoporosis, an increasingly serious disease, particularly for the elderly. To calculate BMD, we need to measure the volume of the femur in a noninvasive way. In this paper, we propose a noninvasive bone volume measurement method using x-ray attenuation on radiography and medical knowledge. The absolute thickness at one reference pixel and the relative thickness at all pixels of the bone in the x-ray image are used to calculate the volume and the BMD. First, the absolute bone thickness of one particular pixel is estimated by the known geometric shape of a specific bone part as medical knowledge. The relative bone thicknesses of all pixels are then calculated by x-ray attenuation of each pixel. Finally, given the absolute bone thickness of the reference pixel, the absolute bone thickness of all pixels is mapped. To evaluate the performance of the proposed method, experiments on 300 subjects were performed. We found that the method provides good estimates of real BMD values of the femur. Estimates show a high linear correlation of 0.96 between the volume Bone Mineral Density (vBMD) of CT-SCAN and computed vBMD (all P<0.001). The BMD results reveal a 3.23% difference in volume from the BMD of CT-SCAN.
ERIC Educational Resources Information Center
Weiner, Neil S.; And Others
The conceptual framework behind the model of aggregate U.S. Employment Service (ES) productivity is described in this report (and the companion volume of appendixes) along with the illustrative estimates of ES productivity using the model. Chapter 1 introduces the question of productivity measurement in a social purpose. Chapter 2 contains a…
ERIC Educational Resources Information Center
Institute for Interdisciplinary Studies, Minneapolis, Minn.
Eight appendixes to a final report "Alternative Federal Day Care Strategies for the 1970's" comprise this volume. The appendixes are as follows: A. References for Estimation and Evaluation of Impacts upon Children and Parents--contains a list of 292 studies, articles, and reports published between 1958 and 1971; B. Impacts of Preschool…
Noorafshan, Ali; Motamedifar, Mohammad; Karbalay-Doust, Saied
2016-01-01
Background: Morphological changes of the cells infected with rubella virus cannot be observed easily. Estimation of the size of the cultured cells can be a valuable parameter in this condition. This study was conducted to answer the following questions: How long after infection with rubella virus do the volume and surface area of the Vero cells and their nuclei begin to change? How can stereological methods be applied to estimate the volume and surface area of the cultured cells using the invariator, nucleator, and surfactor techniques? Methods: The cultured Vero cells were infected with rubella virus. The cells of the control and experimental groups were harvested at 2, 4, 8, 24, and 48 hours following the incubation period. The cells were processed and embedded in paraffin. The invariator, nucleator, and surfactor were applied to estimate the size of the Vero cells and their nuclei. Results: The cell volume decreased by 15-24% at 48 hours after infection in comparison to the non-infected cells. In addition, the cell surface area decreased by 13% at 48 hours after infection. However, no changes were detected in the nuclei. The values of the standard deviation and coefficient of variation of the cells, estimated by the invariator, were lower compared to those measured by the nucleator or surfactor. Conclusion: In this study, the volume and surface area of the Vero cells were reduced by rubella virus 48 hours after infection. The invariator is a more precise method compared to the nucleator or surfactor. PMID:26722143
Automated estimation of the volume of topographic depressions based on low quality image data
NASA Astrophysics Data System (ADS)
Rasztovits, Sascha; Dorninger, Peter; Székely, Balázs; Molnár, Gábor
2013-04-01
To compute the volume of topographic depressions, Digital Terrain Models (DTMs) are commonly used. For huge sites, DTMs are generally derived from airborne laser scanning data or from image data. For spatially limited areas (e.g. landslide monitoring), Terrestrial Laser Scanning (TLS) is commonly used as well. The achievable accuracy is highly correlated to the quality of the data. In particular, structures that are not part of the DTM (e.g. vegetation), as well as shadowed areas (data holes), may reduce the resulting accuracy significantly. For many geologically relevant regions, airborne datasets are not available. Additionally, there is no possibility to use high-end geodetic equipment such as TLS due to restrictions in the local infrastructure at outlying sites. In those cases, images, captured by a non-photogrammetric expert, often with restricted local possibilities (accessibility of optimal view-points, etc.), and using non-calibrated cameras, are the only data source for DTM generation. We investigated the potential of automated feature point extraction for estimating the relative orientation of the image scene. Different photogrammetric approaches (e.g. epipolar geometry and self-calibration of the cameras) were used to filter outliers in the pure matching result. The final orientation parameters were determined by bundle adjustment. The bundle adjustment provides accuracy measures of the 3D points, and consequently for the accuracy of the given envelope and/or scene. We tested our approach on a series of images showing a Lavaka, an erosional feature common in Madagascar. Such geologically interesting landscapes are typically holes on the side of hills, characterized by steep flanks. Our testing site has an extension of approximately 2,500 m² and a volume of approximately 20,000 m³. The images are taken in a complex viewpoint configuration (steep angles, little overlap). Additionally, GPS positions and north-directions are available. The automatically
NASA Astrophysics Data System (ADS)
Bogner, Simon; Rüde, Ulrich; Harting, Jens
2016-04-01
The free surface lattice Boltzmann method (FSLBM) is a combination of the hydrodynamic lattice Boltzmann method with a volume-of-fluid (VOF) interface capturing technique for the simulation of incompressible free surface flows. Capillary effects are modeled by extracting the curvature of the interface from the VOF indicator function and imposing a pressure jump at the free boundary. However, obtaining accurate curvature estimates from a VOF description can introduce significant errors. This article reports numerical results for three different surface tension models in standard test cases and compares the according errors in the velocity field (spurious currents). Furthermore, the FSLBM is shown to be suited to simulate wetting effects at solid boundaries. To this end, a new method is developed to represent wetting boundary conditions in a least-squares curvature reconstruction technique. The main limitations of the current FSLBM are analyzed and are found to be caused by its simplified advection scheme. Possible improvements are suggested.
2012-01-01
Background The paper presents a newly researched acoustic system for blood volume measurements for the developed family of Polish ventricular assist devices. The pneumatic heart-supporting devices are still the preferred solution in some cases, and monitoring of their operation, especially the temporary blood volume, is yet to be solved. Methods The prototype of the POLVAD-EXT prosthesis developed by the Foundation of Cardiac Surgery Development, Zabrze, Poland, is equipped with the newly researched acoustic blood volume measurement system based on the principle of Helmholtz’s acoustic resonance. The results of static volume measurements acquired using the acoustic sensor were verified by measuring the volume of the liquid filling the prosthesis. Dynamic measurements were conducted on the hybrid model of the human cardiovascular system at the Foundation, with the Transonic T410 (11PLX transducer - 5% uncertainty) ultrasound flow rate sensor used as the reference. Results The statistical analysis of a series of static tests has shown that the sensor solution provides blood volume measurement results with uncertainties (understood as a standard mean deviation) of less than 10%. Dynamic tests show a high correlation between the results of the acoustic system and those obtained by flow rate measurements using an ultrasound transit-time type sensor. Conclusions The results show that noninvasive, online temporary blood volume measurement in the POLVAD-EXT prosthesis, making use of the newly developed acoustic system, provides accurate static and dynamic measurement results. The research conducted provides a preliminary view of the possibility of reducing the additional sensor chamber volume in the future. PMID:22998766
Estimating the body portion of CT volumes by matching histograms of visual words
NASA Astrophysics Data System (ADS)
Feulner, Johannes; Zhou, S. Kevin; Seifert, Sascha; Cavallaro, Alexander; Hornegger, Joachim; Comaniciu, Dorin
2009-02-01
Being able to automatically determine which portion of the human body is shown by a CT volume image offers various possibilities like automatic labeling of images or initializing subsequent image analysis algorithms. This paper presents a method that takes a CT volume as input and outputs the vertical body coordinates of its top and bottom slice in a normalized coordinate system whose origin and unit length are determined by anatomical landmarks. Each slice of a volume is described by a histogram of visual words: Feature vectors consisting of an intensity histogram and a SURF descriptor are first computed on a regular grid and then classified into the closest visual words to form a histogram. The vocabulary of visual words is a quantization of the feature space by offline clustering a large number of feature vectors from prototype volumes into visual words (or cluster centers) via the K-Means algorithm. For a set of prototype volumes whose body coordinates are known the slice descriptions are computed in advance. The body coordinates of a test volume are computed by a 1D rigid registration of the test volume with the prototype volumes in axial direction. The similarity of two slices is measured by comparing their histograms of visual words. Cross validation on a dataset of 44 volumes proved the robustness of the results. Even for test volumes of ca. 20 cm height, the average error was 15.8 mm.
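The slice-matching idea above (compare per-slice visual-word histograms, then search for the best axial offset) can be sketched as follows. This is an illustrative simplification, not the authors' implementation: histogram intersection stands in for their similarity measure, and the exhaustive offset search stands in for the 1D rigid registration.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two visual-word histograms (1.0 = identical)."""
    return np.minimum(h1, h2).sum() / max(h1.sum(), h2.sum(), 1e-9)

def best_offset(test_slices, proto_slices):
    """1D search for the axial offset of a test volume against a
    prototype: maximize the mean slice-to-slice histogram similarity."""
    n, m = len(test_slices), len(proto_slices)
    scores = [
        np.mean([histogram_intersection(test_slices[i], proto_slices[off + i])
                 for i in range(n)])
        for off in range(m - n + 1)
    ]
    return int(np.argmax(scores))
```

With the offset (and a known slice spacing) the normalized body coordinates of the test volume's top and bottom slices follow directly from the prototype's known coordinates.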
Kidoh, Masafumi; Utsunomiya, Daisuke; Oda, Seitaro; Funama, Yoshinori; Yuki, Hideaki; Nakaura, Takeshi; Kai, Noriyuki; Nozaki, Takeshi; Yamashita, Yasuyuki
2015-12-01
Size-specific dose estimate (SSDE) takes into account the patient size but remains to be fully validated for adult coronary computed tomography angiography (CCTA). We investigated the appropriateness of SSDE for accurate estimation of patient dose by comparing the SSDE and the volume CT dose index (CTDIvol) in adult CCTA. This prospective study received institutional review board approval, and informed consent was obtained from each patient. We enrolled 37 adults who underwent CCTA with a 320-row CT. High-sensitivity metal oxide semiconductor field effect transistor dosimeters were placed on the anterior chest. CTDIvol reported by the scanner based on a 32-cm phantom was recorded. We measured chest diameter to convert CTDIvol to SSDE. Using linear regression, we then correlated SSDE with the mean measured skin dose. We also performed linear regression analyses between the skin dose/CTDIvol and the body mass index (BMI), and the skin dose/SSDE and BMI. There was a strong linear correlation (r = 0.93, P < 0.001) between SSDE (mean 37 ± 22 mGy) and mean skin dose (mean 17.7 ± 10 mGy). There was a moderate negative correlation between the skin dose/CTDIvol and BMI (r = 0.45, P < 0.01). The skin dose/SSDE was not affected by BMI (r = 0.06, P > 0.76). SSDE yields a more accurate estimation of the radiation dose without estimation errors attributable to the body size of adult patients undergoing CCTA. PMID:26440660
Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente
2012-11-30
An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to provide simultaneously information about the accurate wine content in 20 relevant aroma compounds and about their relative transfer rates to the headspace and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample were introduced in a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of Multiple Extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample and a second parameter, β, which is simply the proportion of volatile not transferred to the trap in one extraction cycle, but that seems to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample has revealed the existence of significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long chain fatty acid ethyl esters. It is expected that these differences, linked likely to sulphur dioxide and some unknown specific compositional aspects of the wine matrix, can be responsible for relevant sensory changes, and may even be the cause explaining why the same aroma composition can produce different aroma perceptions in two different wines. PMID:23102525
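The classical multiple-extraction theory invoked above rests on the fact that successive peak areas decay geometrically, so the total analyte amount is the sum of a geometric series. A minimal sketch with illustrative peak areas (not data from the study):

```python
import math

def mhe_estimate(areas):
    """Multiple headspace extraction: successive peak areas follow
    A_i = A_1 * beta**(i-1), where beta is the fraction of analyte
    NOT transferred in one extraction cycle. A log-linear fit of
    ln(A_i) versus cycle index gives beta, and the total amount is
    the geometric-series sum A_1 / (1 - beta)."""
    n = len(areas)
    xs = list(range(n))
    ys = [math.log(a) for a in areas]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    beta = math.exp(slope)          # per-cycle retention fraction
    a1 = math.exp(ybar - slope * xbar)
    return a1 / (1 - beta), beta

# Illustrative areas halving each cycle: beta = 0.5, total = 200.
total, beta = mhe_estimate([100.0, 50.0, 25.0, 12.5])
```

Here beta plays the role of the abstract's β: a matrix that binds an odorant strongly retains more of it per cycle (larger β), which is exactly the volatility indicator the method exploits.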
Hulse, R.A.
1991-08-01
Planning for storage or disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) requires characterization of that waste to estimate volumes, radionuclide activities, and waste forms. Data from existing literature, disposal records, and original research were used to estimate the characteristics and project volumes and radionuclide activities to the year 2035. GTCC LLW is categorized as nuclear utilities waste, sealed sources waste, DOE-held potential GTCC LLW, and other generator waste. It has been determined that the largest volume of those wastes, approximately 57%, is generated by nuclear power plants. The Other Generator waste category contributes approximately 10% of the total GTCC LLW volume projected to the year 2035. Waste held by the Department of Energy, which is potential GTCC LLW, accounts for nearly 33% of all waste projected to the year 2035; however, no disposal determination has been made for that waste. Sealed sources are less than 0.2% of the total projected volume of GTCC LLW.
NASA Astrophysics Data System (ADS)
Cook, Geoffrey W.; Wolff, John A.; Self, Stephen
2016-02-01
The 1.60 Ma caldera-forming eruption of the Otowi Member of the Bandelier Tuff produced Plinian and coignimbrite fall deposits, outflow and intracaldera ignimbrite, all of it deposited on land. We present a detailed approach to estimating and reconstructing the original volume of the eroded, partly buried large ignimbrite and distal ash-fall deposits. Dense rock equivalent (DRE) volume estimates for the eruption are 89 + 33/-10 km3 of outflow ignimbrite and 144 ± 72 km3 of intracaldera ignimbrite. Also, there was at least 65 km3 (DRE) of Plinian fall when extrapolated distally, and 107 + 40/-12 km3 of coignimbrite ash was "lost" from the outflow sheet to form an unknown proportion of the distal ash fall. The minimum total volume is 216 km3 and the maximum is 550 km3; hence, the eruption overlaps the low end of the super-eruption spectrum (VEI ˜8.0). Despite an abundance of geological data for the Otowi Member, the errors attached to these estimates do not allow us to constrain the proportions of intracaldera (IC), outflow (O), and distal ash (A) to better than a factor of three. We advocate caution in applying the IC/O/A = 1:1:1 relation of Mason et al. (2004) to scaling up mapped volumes of imperfectly preserved caldera-forming ignimbrites.
Not Available
1994-09-01
The Department of Energy's (DOE's) planning for the disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) requires characterization of the waste. This report estimates volumes, radionuclide activities, and waste forms of GTCC LLW to the year 2035. It groups the waste into four categories, representative of the type of generator or holder of the waste: Nuclear Utilities, Sealed Sources, DOE-Held, and Other Generator. GTCC LLW includes activated metals (activation hardware from reactor operation and decommissioning), process wastes (i.e., resins, filters, etc.), sealed sources, and other wastes routinely generated by users of radioactive material. Estimates reflect the possible effect that packaging and concentration averaging may have on the total volume of GTCC LLW. Possible GTCC mixed LLW is also addressed. Nuclear utilities will probably generate the largest future volume of GTCC LLW with 65--83% of the total volume. The other generators will generate 17--23% of the waste volume, while GTCC sealed sources are expected to contribute 1--12%. A legal review of DOE's obligations indicates that the current DOE-Held wastes described in this report will not require management as GTCC LLW because of the contractual circumstances under which they were accepted for storage. This report concludes that the volume of GTCC LLW should not pose a significant management problem from a scientific or technical standpoint. The projected volume is small enough to indicate that a dedicated GTCC LLW disposal facility may not be justified. Instead, co-disposal with other waste types is being considered as an option.
NASA Technical Reports Server (NTRS)
1990-01-01
Cost estimates for phase C/D of the laser atmospheric wind sounder (LAWS) program are presented. This information provides a framework for cost, budget, and program planning estimates for LAWS. Volume 3 is divided into three sections. Section 1 details the approach taken to produce the cost figures, including the assumptions regarding the schedule for phase C/D and the methodology and rationale for costing the various work breakdown structure (WBS) elements. Section 2 shows a breakdown of the cost by WBS element, with the cost divided in non-recurring and recurring expenditures. Note that throughout this volume the cost is given in 1990 dollars, with bottom line totals also expressed in 1988 dollars (1 dollar(88) = 0.93 1 dollar(90)). Section 3 shows a breakdown of the cost by year. The WBS and WBS dictionary are included as an attachment to this report.
NASA Astrophysics Data System (ADS)
1990-05-01
Cost estimates for phase C/D of the laser atmospheric wind sounder (LAWS) program are presented. This information provides a framework for cost, budget, and program planning estimates for LAWS. Volume 3 is divided into three sections. Section 1 details the approach taken to produce the cost figures, including the assumptions regarding the schedule for phase C/D and the methodology and rationale for costing the various work breakdown structure (WBS) elements. Section 2 shows a breakdown of the cost by WBS element, with the cost divided in non-recurring and recurring expenditures. Note that throughout this volume the cost is given in 1990 dollars, with bottom line totals also expressed in 1988 dollars (1 dollar(88) = 0.93 1 dollar(90)). Section 3 shows a breakdown of the cost by year. The WBS and WBS dictionary are included as an attachment to this report.
Cheng, Lishui; Hobbs, Robert F.; Sgouros, George; Frey, Eric C.
2014-01-01
Purpose: Three-dimensional (3D) dosimetry has the potential to provide better prediction of response of normal tissues and tumors and is based on 3D estimates of the activity distribution in the patient obtained from emission tomography. Dose–volume histograms (DVHs) are an important summary measure of 3D dosimetry and a widely used tool for treatment planning in radiation therapy. Accurate estimates of the radioactivity distribution in space and time are desirable for accurate 3D dosimetry. The purpose of this work was to develop and demonstrate the potential of penalized SPECT image reconstruction methods to improve DVHs estimates obtained from 3D dosimetry methods. Methods: The authors developed penalized image reconstruction methods, using maximum a posteriori (MAP) formalism, which intrinsically incorporate regularization in order to control noise and, unlike linear filters, are designed to retain sharp edges. Two priors were studied: one is a 3D hyperbolic prior, termed single-time MAP (STMAP), and the second is a 4D hyperbolic prior, termed cross-time MAP (CTMAP), using both the spatial and temporal information to control noise. The CTMAP method assumed perfect registration between the estimated activity distributions and projection datasets from the different time points. Accelerated and convergent algorithms were derived and implemented. A modified NURBS-based cardiac-torso phantom with a multicompartment kidney model and organ activities and parameters derived from clinical studies were used in a Monte Carlo simulation study to evaluate the methods. Cumulative dose-rate volume histograms (CDRVHs) and cumulative DVHs (CDVHs) obtained from the phantom and from SPECT images reconstructed with both the penalized algorithms and OS-EM were calculated and compared both qualitatively and quantitatively. The STMAP method was applied to patient data and CDRVHs obtained with STMAP and OS-EM were compared qualitatively. Results: The results showed that the
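The cumulative DVHs compared in this study are, at their core, the fraction of a structure's voxels receiving at least each dose level. A minimal sketch (a hypothetical helper, not the authors' reconstruction code):

```python
def cumulative_dvh(doses, dose_levels):
    """Cumulative dose-volume histogram: for each dose level, the fraction
    of the structure's voxels receiving at least that dose.  `doses` is a
    flat list of per-voxel dose values for one organ or tumor volume."""
    n = len(doses)
    return [sum(1 for d in doses if d >= level) / n for level in dose_levels]
```

The same construction applies to cumulative dose-rate volume histograms, with per-voxel dose rates in place of doses.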
Krongold, Mark; Almekhlafi, Mohammed A.; Demchuk, Andrew M.; Coutts, Shelagh B.; Frayne, Richard; Eilaghi, Armin
2014-01-01
Purpose We aim to characterize infarct volume evolution within the first month post-ischemic stroke and to determine the effect of recanalization status on early infarct volume estimation. Methods Ischemic stroke patients recruited for the MONITOR and VISION studies were retrospectively screened, and patients who had infarcts on diffusion-weighted imaging (DWI) at baseline and at least two follow-up MR scans (n = 56) were included. Pre-defined target imaging time points, obtained on a 3-T MR scanner, were 12 hours (h), 24 h, 7 days, and ≥30 days post-stroke. Infarct tissue was manually traced blinded to the images at the other time points. Infarct expansion index was calculated by dividing the infarct volume at each follow-up time point by the baseline DWI infarct volume. Recanalization was assessed within 24 h post-stroke. Correlation and statistical comparison analyses were performed using the Spearman, Mann–Whitney, and Kruskal–Wallis tests. Results Follow-up infarct volumes were positively correlated with the baseline infarct volume (ρ > 0.81; p < 0.001), with the strongest correlation between baseline and 7-day post-stroke infarct volumes (ρ = 0.92; p < 0.001). The strongest correlation among the follow-up imaging was found between infarct volumes at the 7-day and ≥30-day time points (ρ = 0.93; p < 0.001). Linear regression showed a close-to-unity slope between 7-day and final infarct volumes (slope = 1.043; p < 0.001). Infarct expansion was higher in the non-recanalized group than in the recanalized group at the 7-day (p = 0.001) and ≥30-day (p = 0.038) time points. Conclusions Final infarct volume can be approximated as early as 7 days post-stroke. Final infarct volume approximation is significantly associated with recanalization status. PMID:25429356
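The expansion index and Spearman correlation used above can be sketched in a few lines; the Spearman implementation below is a tie-free simplification for illustration only:

```python
def expansion_index(baseline_ml, followup_ml):
    """Infarct expansion index as defined in the abstract: follow-up
    infarct volume divided by the baseline DWI infarct volume."""
    return followup_ml / baseline_ml

def spearman_rho(xs, ys):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))
    formula (assumes no tied values, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

An index above 1 indicates infarct growth relative to the baseline DWI volume.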
Båth, Magnus Svalkvist, Angelica; Söderman, Christina
2014-10-15
Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.
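How the field-size correction factor enters a retrospective DAP estimate can be sketched as follows; the abstract does not give the exact formula, so the per-projection scout-derived input and both function names are hypothetical:

```python
FIELD_SIZE_CORRECTION = 0.935  # value reported in the abstract

def estimate_tomo_dap(scout_dap_per_projection, n_projections):
    """Hypothetical sketch: scale a scout-derived per-projection DAP by the
    number of tomosynthesis projections and by the correction factor that
    accounts for the field-size change with projection angle."""
    return scout_dap_per_projection * n_projections * FIELD_SIZE_CORRECTION

def percent_error(estimated, actual):
    """Relative difference between estimated and registered DAP, in %."""
    return 100.0 * (estimated - actual) / actual
```

The study's validation compared such estimates against the DAP registered for the acquired projection radiographs, finding a mean difference of 0.2%.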
van Venrooij, Ger E P M; Eckhardt, Mardy D; Gisolf, Karel W H; Boon, Tom A
2002-01-01
The aim was to examine associations of filling cystometric estimated compliance, capacities, and prevalence of bladder instability with data from frequency-volume charts in a well-defined group of men with lower urinary tract symptoms (LUTS) suggestive of benign prostatic hyperplasia (BPH). Men with LUTS suggestive of BPH were included if they met the criteria of the International Consensus Committee on BPH, i.e., they voided more than 150 mL during uroflowmetry, their residual volume and prostate size were estimated, and they completed frequency-volume charts correctly. From the frequency-volume charts, voiding habits and fluid intake in the daytime and at night were evaluated. Filling cystometric studies were performed in these men as well. Decreased compliance was an exceptional finding. Cystometric capacity, and especially effective capacity (cystometric capacity minus residual volume), corresponded significantly with the maximum voided volume on the frequency-volume charts. Effective capacity was almost twice as high as the average voided volume. Minimum voided volume on frequency-volume charts was not related to filling cystometric data. The presence of instability in the supine or sitting position, or in both positions, was not significantly associated with smaller voided volumes, higher nocturia, or diuria. Filling cystometric capacities were strongly associated with maximal and mean voided volumes derived from frequency-volume charts. The presence of detrusor instability during filling cystometry did not significantly affect voided volumes, diuria, or nocturia. PMID:11857662
Object size can influence perceived weight independent of visual estimates of the volume of material
Plaisier, Myrthe A.; Smeets, Jeroen B.J.
2015-01-01
The size-weight illusion is the phenomenon that the smaller of two equally heavy objects is perceived to be heavier than the larger object when lifted. One explanation for this illusion is that heaviness perception is influenced by our expectations, and larger objects are expected to be heavier than smaller ones because they contain more material. If this would be the entire explanation, the illusion should disappear if we make objects larger while keeping the volume of visible material the same (i.e. objects with visible holes). Here we tested this prediction. Our results show that perceived heaviness decreased with object size regardless of whether objects visibly contained the same volume of material or not. This indicates that object size can influence perceived heaviness, even when it can be seen that differently sized objects contain the same volume of material. PMID:26626051
Constantin, Julian Gelman; Schneider, Matthias; Corti, Horacio R
2016-06-01
The glass transition temperature of trehalose, sucrose, glucose, and fructose aqueous solutions has been predicted as a function of the water content by using the free volume/percolation model (FVPM). This model only requires the molar volume of water in the liquid and supercooled regimes, the molar volumes of the hypothetical pure liquid sugars at temperatures below their pure glass transition temperatures, and the molar volumes of the mixtures at the glass transition temperature. The model is simplified by assuming that the excess thermal expansion coefficient is negligible for saccharide-water mixtures, and this ideal FVPM becomes identical to the Gordon-Taylor model. It was found that the behavior of the water molar volume in trehalose-water mixtures at low temperatures can be obtained by assuming that the FVPM holds for this mixture. The temperature dependence of the water molar volume in the supercooled region of interest seems to be compatible with the recent hypothesis on the existence of two structures of liquid water, with high-density liquid water being the state of water in the sugar solutions. The idealized FVPM describes the measured glass transition temperature of sucrose, glucose, and fructose aqueous solutions with much better accuracy than both the Gordon-Taylor model, based on an empirical kGT constant dependent on the saccharide glass transition temperature, and the Couchman-Karasz model, which uses the experimental heat capacity changes of the components at the glass transition temperature. Thus, FVPM seems to be an excellent tool to predict the glass transition temperature of other aqueous saccharide and polyol solutions by resorting to easily available volumetric information. PMID:27176640
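The Gordon-Taylor model that the ideal FVPM reduces to can be written in a few lines; the parameter values in the test are placeholders, not the paper's fitted constants:

```python
def gordon_taylor_tg(w2, tg1, tg2, k):
    """Gordon-Taylor glass-transition temperature of a binary mixture:
    Tg = (w1*Tg1 + k*w2*Tg2) / (w1 + k*w2).

    w2  : mass fraction of component 2 (e.g. water)
    tg1 : Tg of the dry saccharide (K)
    tg2 : Tg of pure water (K, ~136 K is often assumed)
    k   : Gordon-Taylor constant (empirical, or derived from the ideal FVPM)
    """
    w1 = 1.0 - w2
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)
```

The mixture Tg interpolates between the two pure-component values, with k controlling the plasticizing strength of water.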
Goodenow, T.C.; Shipman, R.L.; Holland, H.M.
1995-06-01
Epoch Engineering, Incorporated (EEI) has completed a series of vibration measurements comparing their newly developed Robust Laser Interferometer (RLI) with accelerometer-based instrumentation systems. EEI has successfully demonstrated, on several pieces of commonplace machinery, that non-contact, line-of-sight measurements are practical and yield results equal to or, in some cases, better than customary field implementations of accelerometers. The demonstration included analysis and comparison of such phenomena as nonlinearity, transverse sensitivity, harmonics, and signal-to-noise ratio. Fast Fourier Transformations were performed on the accelerometer and laser system outputs to provide a comparison basis. The RLI was demonstrated, within the limits of the task, to be a viable, line-of-sight, non-contact alternative to accelerometer systems. Several different kinds of machinery were instrumented and compared, including a small pump, a gear-driven cement mixer, a rotor kit, and two small fans. Known machinery vibration sources were verified, and RLI system output file formats were verified to be compatible with commercial computer programs used for vibration monitoring and trend analysis. The RLI was also observed to be less subject to electromagnetic interference (EMI) and more capable at very low frequencies. This document, Volume 2, provides the appendices to this report.
IUS/TUG orbital operations and mission support study. Volume 5: Cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
The costing approach, methodology, and rationale utilized for generating cost data for composite IUS and space tug orbital operations are discussed. Summary cost estimates are given along with cost data initially derived for the IUS program and space tug program individually, and cost estimates for each work breakdown structure element.
NASA Astrophysics Data System (ADS)
Rebello, N. Sanjay
2012-02-01
Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on each exam. Students received extra credit worth 1% of the exam points for estimating their own score within 2% of the actual score, and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimations with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.
NASA Astrophysics Data System (ADS)
Verkaik, A. C.; Beulen, B. W. A. M. M.; Bogaerds, A. C. B.; Rutten, M. C. M.; van de Vosse, F. N.
2009-02-01
To monitor biomechanical parameters related to cardiovascular disease, it is necessary to perform correct volume flow estimations of blood flow in arteries based on local blood velocity measurements. In clinical practice, estimates of flow are currently made using a straight-tube assumption, which may lead to inaccuracies since most arteries are curved. Therefore, this study will focus on the effect of curvature on the axial velocity profile for flow in a curved tube in order to find a new volume flow estimation method. The study is restricted to steady flow, enabling the use of analytical methods. First, analytical approximation methods for steady flow in curved tubes at low Dean numbers (Dn) and low curvature ratios (δ) are investigated. From the results a novel volume flow estimation method, the cos θ-method, is derived. Simulations for curved tube flow in the physiological range (1≤Dn≤1000 and 0.01≤δ≤0.16) are performed with a computational fluid dynamics (CFD) model. The asymmetric axial velocity profiles of the analytical approximation methods are compared with the velocity profiles of the CFD model. Next, the cos θ-method is validated and compared with the currently used Poiseuille method by using the CFD results as input. Comparison of the axial velocity profiles of the CFD model with the approximations derived by Topakoglu [J. Math. Mech. 16, 1321 (1967)] and Siggers and Waters [Phys. Fluids 17, 077102 (2005)] shows that the derived velocity profiles agree very well for Dn≤50 and are fair for 50
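The quantities governing the comparison above can be sketched directly: the Dean number from the curvature ratio, and the straight-tube Poiseuille estimate that the cos θ-method is meant to replace (the cos θ correction itself is not reproduced here):

```python
import math

def dean_number(reynolds, tube_radius, bend_radius):
    """Dean number Dn = Re * sqrt(delta), where the curvature ratio
    delta is the tube radius over the radius of curvature of the bend."""
    delta = tube_radius / bend_radius
    return reynolds * math.sqrt(delta), delta

def poiseuille_flow_from_centerline(v_max, tube_radius):
    """Straight-tube (Poiseuille) volume flow estimate from the measured
    centerline velocity: Q = v_max * pi * R**2 / 2, since the mean velocity
    of a parabolic profile is half the centerline velocity."""
    return 0.5 * v_max * math.pi * tube_radius ** 2
```

In a curved artery the axial profile skews toward the outer wall, so this straight-tube estimate becomes biased as Dn grows, which is the motivation for the curvature-aware method.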
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Astrophysics Data System (ADS)
Peffley, Al F.
1991-04-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
Oldrini, Guillaume; Harter, Valentin; Witte, Yannick; Martrille, Laurent; Blum, Alain
2016-01-01
Age estimation is commonly of interest in a judicial context. In adults, it is less well documented than in children. The aim of this study was to evaluate age estimation in adults using CT images of the sternal plastron with volume rendering technique (VRT). The evaluation criteria are derived from known methods used for age estimation and are applicable in living or dead subjects. The VRT images of 456 patients were analyzed. Two radiologists performed age estimation independently from an anterior view of the plastron. Interobserver agreement and correlation coefficients between each reader's classification and real age were calculated. The interobserver agreement was 0.86, and the correlation coefficients between the readers' classifications and real age classes were 0.60 and 0.65. Spearman correlation coefficients were, respectively, 0.89, 0.67, and 0.71. Analysis of the plastron using VRT allows quick in vivo age estimation, with results similar to those of methods such as Iscan, Suchey-Brooks, and radiographs used to estimate age at death. PMID:27092960
A New, Effective and Low-Cost Three-Dimensional Approach for the Estimation of Upper-Limb Volume
Buffa, Roberto; Mereu, Elena; Lussu, Paolo; Succa, Valeria; Pisanu, Tonino; Buffa, Franco; Marini, Elisabetta
2015-01-01
The aim of this research was to validate a new procedure (SkanLab) for the three-dimensional estimation of total arm volume. SkanLab is based on a single structured-light Kinect sensor (Microsoft, Redmond, WA, USA) and on Skanect (Occipital, San Francisco, CA, USA) and MeshLab (Visual Computing Lab, Pisa, Italy) software. The volume of twelve plastic cylinders was measured using geometry, as the reference, water displacement and SkanLab techniques (two raters and repetitions). The right total arm volume of thirty adults was measured by water displacement (reference) and SkanLab (two raters and repetitions). The bias and limits of agreement (LOA) between techniques were determined using the Bland–Altman method. Intra- and inter-rater reliability was assessed using the intraclass correlation coefficient (ICC) and the standard error of measurement. The bias of SkanLab in measuring the volume of the cylinders was −21.9 mL (−5.7%) (LOA: −62.0 to 18.2 mL; −18.1% to 6.7%) and in measuring the arms' volume was −9.9 mL (−0.6%) (LOA: −49.6 to 29.8 mL; −2.6% to 1.4%). SkanLab's intra- and inter-rater reliabilities were very high (ICC >0.99). In conclusion, SkanLab is a fast, safe and low-cost method for assessing total arm volume, with high levels of accuracy and reliability. SkanLab represents a promising tool in clinical applications. PMID:26016917
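The Bland-Altman bias and limits of agreement reported above follow a standard recipe: the mean of the paired differences plus or minus 1.96 times their standard deviation. A minimal sketch with hypothetical paired measurements:

```python
import math

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods,
    following the Bland-Altman approach: mean difference +/- 1.96 * SD of
    the paired differences (sample SD, n-1 denominator)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

The interval (lower LOA, upper LOA) is expected to contain about 95% of future differences if they are approximately normal.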
NASA Technical Reports Server (NTRS)
Chin, M. M.; Goad, C. C.; Martin, T. V.
1972-01-01
A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.
Xie, Wen-Jia; Wu, Xiao; Xue, Ren-Liang; Lin, Xiang-Ying; Kidd, Elizabeth A.; Yan, Shu-Mei; Zhang, Yao-Hong; Zhai, Tian-Tian; Lu, Jia-Yang; Wu, Li-Li; Zhang, Hao; Huang, Hai-Hua; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
Purpose: To more accurately define clinical target volume for cervical cancer radiation treatment planning by evaluating tumor microscopic extension toward the uterus body (METU) in International Federation of Gynecology and Obstetrics stage Ib-IIa squamous cell carcinoma of the cervix (SCCC). Patients and Methods: In this multicenter study, surgical resection specimens from 318 cases of stage Ib-IIa SCCC that underwent radical hysterectomy were included. Patients who had undergone preoperative chemotherapy, radiation, or both were excluded from this study. Microscopic extension of primary tumor toward the uterus body was measured. The association between other pathologic factors and METU was analyzed. Results: Microscopic extension toward the uterus body was not common, with only 12.3% of patients (39 of 318) demonstrating METU. The mean (±SD) distance of METU was 0.32 ± 1.079 mm (range, 0-10 mm). Lymphovascular space invasion was associated with METU distance and occurrence rate. A margin of 5 mm added to gross tumor would adequately cover 99.4% and 99% of the METU in the whole group and in patients with lymphovascular space invasion, respectively. Conclusion: According to our analysis of 318 SCCC specimens for METU, using a 5-mm gross tumor volume to clinical target volume margin in the direction of the uterus should be adequate for International Federation of Gynecology and Obstetrics stage Ib-IIa SCCC. Considering the discrepancy between imaging and pathologic methods in determining gross tumor volume extent, we recommend a safer 10-mm margin in the uterine direction as the standard for clinical practice when using MRI for contouring tumor volume.
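Checking what fraction of cases a candidate GTV-to-CTV margin covers, as done for the 5-mm margin above, is a simple empirical computation (the distance data here are hypothetical):

```python
def margin_coverage(metu_distances_mm, margin_mm):
    """Fraction of cases whose microscopic extension toward the uterus
    (METU) is covered by a given GTV-to-CTV margin.  Cases without METU
    are entered as distance 0 and are covered by any margin."""
    n = len(metu_distances_mm)
    return sum(1 for d in metu_distances_mm if d <= margin_mm) / n
```

Sweeping the margin over a grid of candidate values and reading off the coverage curve is one way to pick the smallest margin reaching a target such as 99%.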
A method for estimating both the solubility parameters and molar volumes of liquids
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1974-01-01
Development of an indirect method of estimating the solubility parameter of high molecular weight polymers. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
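Group-additive estimation in the Fedors style computes the solubility parameter as δ = (ΣΔE/ΣΔV)^(1/2) alongside the molar volume ΣΔV. A minimal sketch with two illustrative groups; the numerical contributions are assumptions for the example, not values quoted from the paper:

```python
import math

# Illustrative group contributions (cohesive energy in J/mol, molar volume
# in cm^3/mol).  These two entries are assumed values for the sketch.
GROUPS = {
    "CH3": (4710.0, 33.5),
    "CH2": (4940.0, 16.1),
}

def fedors(group_counts):
    """Solubility parameter (in (J/cm^3)**0.5, i.e. MPa**0.5) and molar
    volume (cm^3/mol) by additive group contributions:
    delta = sqrt(sum(E_i) / sum(V_i)), V = sum(V_i)."""
    e = sum(n * GROUPS[g][0] for g, n in group_counts.items())
    v = sum(n * GROUPS[g][1] for g, n in group_counts.items())
    return math.sqrt(e / v), v
```

With these assumed values, n-hexane (2 CH3 + 4 CH2) comes out near 14.9 MPa^0.5, in the neighborhood of commonly quoted figures, which illustrates why only the structural formula is needed.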
NASA Astrophysics Data System (ADS)
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
Cell volume changes are often associated with important physiological and pathological processes in the cell. These changes may be the means by which the cell interacts with its surrounding. Astroglial cells change their volume and shape under several circumstances that affect the central nervous system. Following an incidence of brain damage, such as a stroke or a traumatic brain injury, one of the first events seen is swelling of the astroglial cells. In order to study this and other similar phenomena, it is desirable to develop technical instrumentation and analysis methods capable of detecting and characterizing dynamic cell shape changes in a quantitative and robust way. We have developed a technique to monitor and to quantify the spatial and temporal volume changes in a single cell in primary culture. The technique is based on two- and three-dimensional fluorescence imaging. The temporal information is obtained from a sequence of microscope images, which are analyzed in real time. The spatial data is collected in a sequence of images from the microscope, which is automatically focused up and down through the specimen. The analysis of spatial data is performed off-line and consists of photobleaching compensation, focus restoration, filtering, segmentation and spatial volume estimation.
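The final step of the pipeline described above, spatial volume estimation from a segmented focus stack, reduces to counting labeled voxels and scaling by the physical voxel size. A minimal sketch (function name and spacings are assumptions for illustration):

```python
import numpy as np

def cell_volume_um3(mask, dx_um, dy_um, dz_um):
    """Volume from a boolean (z, y, x) segmentation mask of the focus stack:
    voxel count times physical voxel volume."""
    return int(mask.sum()) * dx_um * dy_um * dz_um

# Toy example: a 10x10x10 block of "cell" voxels,
# 0.5 um lateral sampling and 1.0 um axial (between-focal-plane) spacing.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True
print(cell_volume_um3(mask, 0.5, 0.5, 1.0))  # 1000 voxels * 0.25 um^3 = 250.0
```

In practice the mask would come out of the segmentation stage after photobleaching compensation and filtering; the volume step itself stays this simple.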
Tug fleet and ground operations schedules and controls. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
Cost data for the tug DDT&E and operations phases are presented. Option 6 is the recommended option selected from the seven options considered and was used as the basis for ground processing estimates. Option 6 provides for processing the tug in a factory clean environment in the low bay area of the VAB with subsequent cleaning to visibly clean. The basis and results of the trade study to select the Option 6 processing plan are included. Cost estimating methodology, a work breakdown structure, and a dictionary of WBS definitions are also provided.
White matter atlas of the human spinal cord with estimation of partial volume effect.
Lévy, S; Benhamou, M; Naaman, C; Rainville, P; Callot, V; Cohen-Adad, J
2015-10-01
Template-based analysis has proven to be an efficient, objective and reproducible way of extracting relevant information from multi-parametric MRI data. Using common atlases, it is possible to quantify MRI metrics within specific regions without the need for manual segmentation. This method is therefore free from user bias and amenable to group studies. While template-based analysis is a common procedure for the brain, there is currently no atlas of the white matter (WM) spinal pathways. The goals of this study were: (i) to create an atlas of the white matter tracts compatible with the MNI-Poly-AMU template and (ii) to propose methods to quantify metrics within the atlas that account for partial volume effect. The WM atlas was generated by: (i) digitizing an existing WM atlas from a well-known source (Gray's Anatomy), (ii) registering this atlas to the MNI-Poly-AMU template at the corresponding slice (C4 vertebral level), (iii) propagating the atlas throughout all slices of the template (C1 to T6) using regularized diffeomorphic transformations and (iv) computing partial volume values for each voxel and each tract. Several approaches were implemented and validated to quantify metrics within the atlas, including weighted-average and Gaussian mixture models. Proof-of-concept application was done in five subjects for quantifying magnetization transfer ratio (MTR) in each tract of the atlas. The resulting WM atlas showed consistent topological organization and smooth transitions along the rostro-caudal axis. The median MTR across tracts was 26.2. Significant differences were detected across tracts, vertebral levels and subjects, but not across laterality (right-left). Among the different tested approaches to extract metrics, the maximum a posteriori showed highest performance with respect to noise, inter-tract variability, tract size and partial volume effect. This new WM atlas of the human spinal cord overcomes the biases associated with manual delineation and partial volume effects.
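The weighted-average approach mentioned above can be sketched in a few lines: each voxel's metric value is weighted by the tract's partial-volume fraction in that voxel, so voxels only partially occupied by the tract contribute proportionally less.

```python
import numpy as np

def weighted_metric(metric_map, pv_map):
    """Partial-volume-weighted average of a metric (e.g. MTR) within a tract.
    metric_map, pv_map: same-shape arrays; pv_map holds fractions in [0, 1]."""
    w = pv_map.ravel()
    m = metric_map.ravel()
    return float(np.sum(w * m) / np.sum(w))

# Toy 2x2 slice: one pure tract voxel, two partial-volume voxels, one outside.
mtr = np.array([[30.0, 25.0], [20.0, 0.0]])   # metric values
pv  = np.array([[1.0,  0.5],  [0.25, 0.0]])   # tract partial-volume fractions
print(weighted_metric(mtr, pv))  # (30 + 12.5 + 5) / 1.75 = 27.142857...
```

The maximum a posteriori estimator the study found best is more involved (it models each voxel as a mixture across tracts), but this weighted average is the baseline it improves on.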
Glacier Volume Change Estimation Using Time Series of Improved Aster Dems
NASA Astrophysics Data System (ADS)
Girod, Luc; Nuth, Christopher; Kääb, Andreas
2016-06-01
Volume change data is critical to the understanding of glacier response to climate change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system carried on board the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available to date, the potential of ASTER data lies in its long consistent time series, which is unrivaled, though not fully exploited for change analysis due to a lack of data accuracy and precision. Here, we developed an improved method for ASTER DEM generation and implemented it in the open source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficients (RPC) model and the detection and correction of cross-track sensor jitter in order to compute DEMs. ASTER data are strongly affected by attitude jitter, mainly of approximately 4 km and 30 km wavelength, and improving the generation of ASTER DEMs requires removal of this effect. Our sensor modeling does not require ground control points and thus potentially allows for the automatic processing of large data volumes. As a proof of concept, we chose a set of glaciers with reference DEMs available to assess the quality of our measurements. We use time series of ASTER scenes from which we extracted DEMs with a ground sampling distance of 15 m. Our method directly measures and accounts for the cross-track component of jitter so that the resulting DEMs are not contaminated by this process. Since the along-track component of jitter has the same direction as the stereo parallaxes, the two cannot be separated and the extracted elevations are thus contaminated by along-track jitter. Initial tests reveal no clear relation between the cross-track and along-track components, so that the latter seems not to be
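Once a time series of DEMs exists, the glacier volume change itself is simple DEM differencing: sum the elevation differences over the glacier outline and multiply by the grid cell area. A minimal sketch, assuming two co-registered DEM arrays and a glacier mask:

```python
import numpy as np

def volume_change_m3(dem_t1, dem_t0, glacier_mask, gsd_m=15.0):
    """Volume change between two co-registered DEMs over a glacier mask,
    at the ASTER-like 15 m ground sampling distance by default."""
    dh = np.where(glacier_mask, dem_t1 - dem_t0, 0.0)
    return float(dh.sum() * gsd_m * gsd_m)

# Toy example: a 4x4 glacier patch that thinned uniformly by 2 m.
dem0 = np.full((4, 4), 1000.0)
dem1 = dem0 - 2.0
mask = np.ones((4, 4), dtype=bool)
print(volume_change_m3(dem1, dem0, mask))  # 16 cells * -2 m * 225 m^2 = -7200.0
```

In real processing the hard part is exactly what the abstract addresses, jitter correction and co-registration, so that dh reflects glacier change rather than sensor artifacts.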
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
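The essence of a multiplicative error model is that the measurement standard deviation scales with the (unknown) true value, so ordinary LS weights are wrong. One common remedy, sketched below under the assumption that variances are proportional to the squared fitted values, is iteratively reweighted least squares; this is an illustration of the model class, not the specific adjustments derived in the paper:

```python
import numpy as np

def multiplicative_wls(A, y, n_iter=5):
    """Weighted LS for y_i = (A x)_i * (1 + e_i): start from ordinary LS,
    then reweight with var_i proportional to the current fitted value squared."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(A @ x, 1e-12) ** 2
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return x

# Synthetic data with 5% multiplicative noise.
rng = np.random.default_rng(1)
A = np.column_stack([np.ones(200), np.linspace(1, 10, 200)])
truth = A @ np.array([2.0, 3.0])
y = truth * (1 + 0.05 * rng.standard_normal(200))
print(multiplicative_wls(A, y))  # close to [2, 3]
```

Treating the same data as if the noise were additive would over-weight the large-valued measurements, which is precisely the DEM-construction pitfall the abstract simulates.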
Estimating removal rates of bacteria from poultry carcasses using two whole-carcass rinse volumes
Technology Transfer Automated Retrieval System (TEKTRAN)
Rinse sampling is a common method for determining the level of microbial contamination on poultry carcasses. One of the advantages of rinse sampling, over other carcass sampling methods, is that the results can be used for both process control applications and to estimate the total microbial level o...
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes the engine performance data is described. The method estimates the installed performance of aircraft gas turbine engines. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag.
NASA Technical Reports Server (NTRS)
Daly, J. K.
1974-01-01
The programming techniques used to implement the equations and mathematical techniques of the Houston Operations Predictor/Estimator (HOPE) orbit determination program on the UNIVAC 1108 computer are described. Detailed descriptions are given of the program structure, the internal program structure, the internal program tables and program COMMON, modification and maintenance techniques, and individual subroutine documentation.
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. A user oriented description of the program input requirements, program output, deck setup, and operating instructions is presented.
NASA Astrophysics Data System (ADS)
Pandey, Apoorva; Venkataraman, Chandra
2014-12-01
Urbanization and rising household incomes in India have led to growing transport demand, particularly during 1990-2010. Emissions from transportation have been implicated in air quality and climate effects. In this work, emissions of particulate matter (PM2.5, or the mass concentration of particles smaller than 2.5 μm in diameter), black carbon (BC) and organic carbon (OC) were estimated from the transport sector in India, using detailed technology divisions and regionally measured emission factors. Modes of transport addressed in this work include road transport, railways, shipping and aviation, but exclude off-road equipment like diesel machinery and tractors. For road transport, a vehicle fleet model was used, with parameters derived from vehicle sales, registration data, and surveyed age profiles. The fraction of extremely high emitting vehicles, or superemitters, which is highly uncertain, was assumed to be 20%. Annual vehicle utilization estimates were based on regional surveys and user population. For railways, shipping and aviation, a top-down approach was applied, using nationally reported fuel consumption. Fuel use and emissions from on-road vehicles were disaggregated at the state level, with separate estimates for 30 cities in India. The on-road fleet was dominated by two-wheelers, followed by four- and three-wheelers, with new vehicles comprising the majority of the fleet for each vehicle type. A total of 276 (-156, 270) Gg/y PM2.5, 144 (-99, 207) Gg/y BC, and 95 (-64, 130) Gg/y OC emissions were estimated, with over 97% contribution from on-road transport. The largest emitters were identified as heavy duty diesel vehicles for PM2.5 and BC, but two-stroke vehicles and superemitters for OC. Old vehicles (pre-2005) contributed significantly more (∼70%) emissions, while their share in the vehicle fleet was smaller (∼45%). Emission estimates were sensitive to the assumed superemitter fraction. Improvement of emission estimates requires on-road emission factor measurements.
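The bottom-up fleet calculation underlying such inventories is essentially vehicles × annual kilometres × emission factor, with the superemitter fraction emitting at an elevated factor. A minimal sketch; the 5× superemitter multiplier and all input numbers are illustrative assumptions, not the study's values:

```python
def fleet_emissions_gg(n_vehicles, km_per_year, ef_g_per_km,
                       superemitter_frac=0.20, superemitter_multiplier=5.0):
    """Annual emissions (Gg/y) for one vehicle category, splitting the fleet
    into normal vehicles and a superemitter fraction with an elevated factor."""
    normal = n_vehicles * (1 - superemitter_frac) * km_per_year * ef_g_per_km
    supers = (n_vehicles * superemitter_frac * km_per_year
              * ef_g_per_km * superemitter_multiplier)
    return (normal + supers) / 1e9  # grams -> gigagrams

# e.g. 1 million two-wheelers, 8000 km/y each, 0.05 g PM2.5/km
print(f"{fleet_emissions_gg(1e6, 8000, 0.05):.3f} Gg/y")
```

Summing such terms over vehicle types, vintages and states, then adding top-down railway, shipping and aviation terms, reproduces the structure of the inventory described above; the abstract's sensitivity finding corresponds to varying `superemitter_frac`.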
Programmatic methods for addressing contaminated volume uncertainties
Rieman, C.R.; Spector, H.L.; Durham, L.A.; Johnson, R.L.
2007-07-01
Accurate estimates of the volumes of contaminated soils or sediments are critical to effective program planning and to successfully designing and implementing remedial actions. Unfortunately, data available to support the pre-remedial design are often sparse and insufficient for accurately estimating contaminated soil volumes, resulting in significant uncertainty associated with these volume estimates. The uncertainty in the soil volume estimates significantly contributes to the uncertainty in the overall project cost estimates, especially since excavation and off-site disposal are the primary cost items in soil remedial action projects. The U.S. Army Corps of Engineers Buffalo District's experience has been that historical contaminated soil volume estimates developed under the Formerly Utilized Sites Remedial Action Program (FUSRAP) often underestimated the actual volume of subsurface contaminated soils requiring excavation during the course of a remedial activity. In response, the Buffalo District has adopted a variety of programmatic methods for addressing contaminated volume uncertainties. These include developing final status survey protocols prior to remedial design, explicitly estimating the uncertainty associated with volume estimates, investing in pre-design data collection to reduce volume uncertainties, and incorporating dynamic work strategies and real-time analytics in pre-design characterization and remediation activities. This paper describes some of these experiences in greater detail, drawing from the knowledge gained at Ashland 1, Ashland 2, Linde, and Rattlesnake Creek. In the case of Rattlesnake Creek, these approaches provided the Buffalo District with an accurate pre-design contaminated volume estimate and resulted in one of the first successful FUSRAP fixed-price remediation contracts for the Buffalo District. (authors)
NASA Astrophysics Data System (ADS)
Scherbaum, Frank; Wyss, Max
1990-08-01
A new method to simultaneously invert for Q structure and source parameters was used on a set of 635 microearthquakes (0.9 < M < 2.0) in the Kaoiki area of southern Hawaii. Approximately 2800 signals were analyzed which had been recorded by 6 short period vertical seismographs at epicentral distances of a few to 10 km. The hypocentral depths ranged from 0 to 14 km, with the bulk of the sources in the 7.5-10.5 km range. The hypothesis to be tested was that the source volume of the M = 6.6 Kaoiki main shock of November 16, 1983, may be heterogeneous in attenuation distribution. We assumed that the observed P wave displacement spectra could be modelled by a source spectrum with an ω-2 high-frequency decay, a single-layer resonance filter to account for local site resonances, and whole path attenuation along the ray path. In the next step the attenuation factor Q was constrained by tomographically reconstructing the three-dimensional Q structure for the source region and using it as the starting model for a nonlinear inversion of the corner frequency, the seismic moment M0, and a new Q value. This process was iterated until the results changed less than 0.1% and were accepted as final. The average Q was approximately constant and very low (≈105). Q values in the NW part of the source volume were larger by approximately 10-15% compared to those in the SE part. This 12% contrast correlates with other evidence for heterogeneity. The NW region shows high coda-Q, no surface faulting, medium seismicity rate, and medium precursory quiescence, while the SE region shows low coda-Q, pervasive faulting, high seismicity rate, strong precursory quiescence, and possibly low b values. We conclude that the Kaoiki source volume is strongly heterogeneous with the contrasting density of cracks controlling the difference in crustal
NASA Technical Reports Server (NTRS)
Martin, T. V.; Mullins, N. E.
1972-01-01
The operating and set-up procedures for the multi-satellite, multi-arc GEODYN- Orbit Determination program are described. All system output is analyzed. The GEODYN Program is the nucleus of the entire GEODYN system. It is a definitive orbit and geodetic parameter estimation program capable of simultaneously processing observations from multiple arcs of multiple satellites. GEODYN has two modes of operation: (1) the data reduction mode and (2) the orbit generation mode.
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
Xu, Ming; Lei, Zhipeng; Yang, James
2015-01-01
N95 filtering facepiece respirator (FFR) dead space is an important factor for respirator design. The dead space refers to the cavity between the internal surface of the FFR and the wearer's facial surface. This article presents a novel method to estimate the dead space volume of FFRs and experimental validation. In this study, six FFRs and five headforms (small, medium, large, long/narrow, and short/wide) are used for various FFR and headform combinations. Microsoft Kinect Sensors (Microsoft Corporation, Redmond, WA) are used to scan the headforms without respirators and then scan the headforms with the FFRs donned. The FFR dead space is formed through geometric modeling software, and finally the volume is obtained through LS-DYNA (Livermore Software Technology Corporation, Livermore, CA). In the experimental validation, water is used to measure the dead space. The simulation and experimental dead space volumes are 107.5-167.5 mL and 98.4-165.7 mL, respectively. Linear regression analysis is conducted to correlate the results from Kinect and water, and R(2) = 0.85. PMID:25800663
Kjelstrom, L.C.; Berenbrock, C.
1996-12-31
Estimates of the 100-year peak flows and flow volumes that could enter the INEL area from the Big Lost River and Birch Creek are needed as input data for models that will be used to delineate the extent of the 100-year flood plain at the INEL. The methods, procedures, and assumptions used to estimate the 100-year peak flows and flow volumes are described in this report.
NASA Astrophysics Data System (ADS)
Gusyev, Maksym; Yamazaki, Yusuke; Morgenstern, Uwe; Stewart, Mike; Kashiwaya, Kazuhisa; Hirai, Yasuyuki; Kuribayashi, Daisuke; Sawano, Hisaya
2015-04-01
The goal of this study is to estimate subsurface water transit times and volumes in headwater catchments of Hokkaido, Japan, using the New Zealand high-accuracy tritium analysis technique. Transit time provides insights into the subsurface water storage and therefore provides a robust and quick approach to quantifying the subsurface groundwater volume. Our method is based on tritium measurements in river water. Tritium is a component of meteoric water, decays with a half-life of 12.32 years, and is inert in the subsurface after the water enters the groundwater system. Therefore, tritium is ideally suited for characterization of the catchment's responses and can provide information on mean water transit times up to 200 years. Only in recent years has it become possible to use tritium for dating of stream and river water, due to the fading impact of the bomb-tritium from thermo-nuclear weapons testing, and due to improved measurement accuracy for the extremely low natural tritium concentrations. Transit time of the water discharge is one of the most crucial parameters for understanding the response of catchments and estimating subsurface water volume. While many tritium transit time studies have been conducted in New Zealand, only a limited number of tritium studies have been conducted in Japan. In addition, the meteorological, orographic and geological conditions of Hokkaido Island are similar to those in parts of New Zealand, allowing for comparison between these regions. In 2014, three field trips were conducted in Hokkaido in June, July and October to sample river water at river gauging stations operated by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). These stations have altitudes between 36 m and 860 m MSL and drainage areas between 45 and 377 km2. Each sampled point is located upstream of MLIT dams, with hourly measurements of precipitation and river water levels enabling us to distinguish between the snow melt and baseflow contributions
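The dating principle above can be made concrete: tritium decays with a 12.32-year half-life, so under the simplest (piston-flow) assumption the ratio of stream-water to rain-water tritium directly yields a mean transit time. Real studies fit richer lumped-parameter transit-time distributions, so this is only the idealized core of the method:

```python
import math

HALF_LIFE_Y = 12.32                      # tritium half-life in years
LAMBDA = math.log(2) / HALF_LIFE_Y       # decay constant

def transit_time_years(ratio_stream_to_rain):
    """Piston-flow mean transit time from the decayed tritium ratio
    C_stream / C_rain, with 0 < ratio <= 1."""
    return -math.log(ratio_stream_to_rain) / LAMBDA

print(f"{transit_time_years(0.5):.2f} y")   # one half-life
print(f"{transit_time_years(0.25):.2f} y")  # two half-lives
```

Multiplying the inferred transit time by the mean discharge then gives the subsurface storage volume, which is the catchment quantity the study is after.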
Estimating lesion volume in low-dose chest CT: How low can we go?
NASA Astrophysics Data System (ADS)
Young, Stefano; McNitt-Gray, Michael F.
2014-03-01
Purpose: To examine the potential for dose reduction in chest CT studies where lesion volume is the primary output (e.g. in therapy-monitoring applications). Methods: We added noise to the raw sinogram data from 15 chest exams with lung lesions to simulate a series of reduced-dose scans for each patient. We reconstructed the reduced-dose data on the clinical workstation and imported the resulting image series into our quantitative imaging database for lesion contouring. One reader contoured the lesions (one per patient) at the clinical reference dose (100%) and 8 simulated fractions of the clinical dose (50, 25, 15, 10, 7, 5, 4, and 3%). Dose fractions were hidden from the reader to reduce bias. We compared clinical and reduced-dose volumes in terms of bias error and variability (4x the standard deviation of the percent differences). Results: Averaging over all lesions, the bias error ranged from -0.6% to 10.6%. Variability ranged from 92% at 3% of clinical dose to 54% at 50% of clinical dose. Averaging over only the smaller lesions (<1cm equivalent diameter), bias error ranged from -9.2% to 14.1% and variability ranged from 125% at 3% dose to 33.9% at 50% dose. Conclusions: The reader's variability decreased with dose, especially for smaller lesions. However, these preliminary results are limited by potential recall bias, a small patient cohort, and an overly-simplified task. Therapy monitoring often involves checking for new lesions, which may influence the reader's clinical dose threshold for acceptable performance.
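The two summary statistics used above, bias error and variability (defined as 4× the standard deviation of the percent differences), are easy to state in code. The lesion volumes below are synthetic, purely to show the computation:

```python
import numpy as np

def bias_and_variability(v_reduced, v_clinical):
    """Percent-difference bias and variability (4x SD of percent differences)
    between reduced-dose and clinical-dose lesion volumes."""
    pct_diff = 100.0 * (v_reduced - v_clinical) / v_clinical
    return float(pct_diff.mean()), float(4.0 * pct_diff.std(ddof=1))

v_clin = np.array([10.0, 4.0, 25.0, 8.0, 15.0])   # synthetic volumes, mL
v_low  = np.array([10.8, 3.6, 26.0, 9.1, 14.2])   # same lesions, reduced dose
bias, variability = bias_and_variability(v_low, v_clin)
print(f"bias = {bias:.1f}%, variability = {variability:.1f}%")
```

Running this per dose fraction, and separately for the sub-1-cm lesions, reproduces the structure of the results reported above.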
Sherwood, J.M.
1993-01-01
Methods are presented to estimate peak-frequency relations, flood hydrographs, and volume-duration-frequency relations of urban streams in Ohio with drainage areas less than 6.5 square miles. The methods were developed to assist planners in the design of hydraulic structures for which hydrograph routing is required or where the temporary storage of water is an important element of the design criteria. Examples of how to use the methods also are presented. The data base for the analyses consisted of 5-minute rainfall-runoff data collected for a period of 5 to 8 years at 62 small drainage basins distributed throughout Ohio. The U.S. Geological Survey rainfall-runoff model A634 was used and was calibrated for each site. The calibrated models were used in conjunction with long-term (66-87 years) rainfall and evaporation records to synthesize a long-term series of flood-hydrograph records at each site. A method was developed and used to increase the variance of the synthetic flood characteristics in order to make them more representative of observed flood characteristics. Multiple-regression equations were developed to estimate peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The explanatory variables in the peak-discharge equations are drainage area, average annual precipitation, and basin development factor. Average standard errors of prediction for the peak-frequency equations range from ±34 to ±40 percent. A method is presented to estimate flood hydrographs by applying a specific peak discharge and basin lagtime to a dimensionless hydrograph. An equation was developed to estimate basin lagtime in which main-channel length divided by the square root of the main-channel slope (L/SL) and basin-development factor are the explanatory variables and the average standard error of prediction is ±53 percent. A dimensionless hydrograph originally developed by the U.S. Geological Survey for use in Georgia was verified for use in urban areas of
Orbital Spacecraft Consumables Resupply System (OSCRS). Volume 3: Program Cost Estimate
NASA Technical Reports Server (NTRS)
Perry, D. L.
1986-01-01
A cost analysis for the design, development, qualification, and production of the monopropellant and bipropellant Orbital Spacecraft Consumable Resupply System (OSCRS) tankers, their associated avionics located in the Orbiter payload bay, and the unique ground support equipment (GSE) and airborne support equipment (ASE) required to support operations is presented. Monopropellant resupply for the Gamma Ray Observatory (GRO) in calendar year 1991 is the first defined resupply mission with bipropellant resupply missions expected in the early to mid 1990's. The monopropellant program estimate also includes contractor costs associated with operations support through the first GRO resupply mission.
Budget estimates: Fiscal year 1994. Volume 3: Research and program management
NASA Technical Reports Server (NTRS)
1994-01-01
The research and program management (R&PM) appropriation provides the salaries, other personnel and related costs, and travel support for NASA's civil service workforce. This FY 1994 budget funds costs associated with 23,623 full-time equivalent (FTE) work years. Budget estimates are provided for all NASA centers by categories such as space station and new technology investments, space flight programs, space science, life and microgravity sciences, advanced concepts and technology, center management and operations support, launch services, mission to planet earth, tracking and data programs, aeronautical research and technology, and safety, reliability, and quality assurance.
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. The use of two data base files to represent the engine and the inlet/nozzle/aftbody performance characteristics is discussed. The existing library of performance characteristics for inlets and nozzle/aftbodies and an example of the 1000 series of engine data tables is presented.
NASA Astrophysics Data System (ADS)
Gates, W. R.
1983-02-01
Estimated future energy cost savings associated with the development of cost-competitive solar thermal technologies (STT) are discussed. Analysis is restricted to STT in electric applications for 16 high-insolation/high-energy-price states. Three fuel price scenarios and three 1990 STT system costs are considered, reflecting uncertainty over future fuel prices and STT cost projections. STT R&D is found to be unacceptably risky for private industry in the absence of federal support. Energy cost savings were projected to range from $0 to $10 billion (1990 values in 1981 dollars), depending on the system cost and fuel price scenario. Normal R&D investment risks are accentuated because the Organization of Petroleum Exporting Countries (OPEC) cartel can artificially manipulate oil prices and undercut growth of alternative energy sources. Federal participation in STT R&D to help capture the potential benefits of developing cost-competitive STT was found to be in the national interest.
Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M
2016-08-01
Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model over COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. PMID:27179237
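The time-to-equilibrium figures quoted above follow from first-order passive uptake: the sampled fraction of equilibrium capacity is f(t) = 1 − exp(−t/τ), where the time constant τ depends on KPDMS-Air and the sampler geometry. A sketch, with τ values chosen only to illustrate the volatile-to-involatile spread (they are assumptions, not the study's fitted constants):

```python
import math

def time_to_fraction(tau_days, fraction=0.25):
    """Days for a first-order passive sampler to reach the given fraction
    of its equilibrium capacity: solve 1 - exp(-t/tau) = fraction for t."""
    return -tau_days * math.log(1.0 - fraction)

# Illustrative time constants spanning a volatile vs. a very involatile SVOC.
for name, tau in [("alpha-HCH-like", 3.5), ("TTBPP-like", 6.3e5)]:
    print(f"{name}: {time_to_fraction(tau):.3g} days to 25% of equilibrium")
```

Note that t25 = τ·ln(4/3) ≈ 0.29τ, so compounds with very large KPDMS-Air (hence very large τ) stay in the linear-uptake regime for the whole deployment.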
Sidle, John E.; Wamalwa, Emmanuel S.; Okumu, Thomas O.; Bryant, Kendall L.; Goulet, Joseph L.; Maisto, Stephen A.; Braithwaite, R. Scott; Justice, Amy C.
2010-01-01
Traditional homemade brew is believed to represent the highest proportion of alcohol use in sub-Saharan Africa. In Eldoret, Kenya, two types of brew are common: chang’aa, spirits, and busaa, maize beer. Local residents refer to the amount of brew consumed by the amount of money spent, suggesting a culturally relevant estimation method. The purposes of this study were to analyze ethanol content of chang’aa and busaa; and to compare two methods of alcohol estimation: use by cost, and use by volume, the latter the current international standard. Laboratory results showed mean ethanol content was 34% (SD = 14%) for chang’aa and 4% (SD = 1%) for busaa. Standard drink unit equivalents for chang’aa and busaa, respectively, were 2 and 1.3 (US) and 3.5 and 2.3 (Great Britain). Using a computational approach, both methods demonstrated comparable results. We conclude that cost estimation of alcohol content is more culturally relevant and does not differ in accuracy from the international standard. PMID:19015972
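The two estimation methods compared above differ only in how the consumed volume is obtained: directly, or inferred from money spent. A sketch of both, where the serving size and price are hypothetical placeholders (the study's field values are not given here) and the US standard drink is taken as 14 g of ethanol:

```python
ETHANOL_DENSITY_G_PER_ML = 0.789
US_STANDARD_DRINK_G = 14.0  # grams of ethanol per US standard drink

def drinks_by_volume(volume_ml, ethanol_fraction):
    """Volume-based estimate: grams of ethanol / grams per standard drink."""
    grams = volume_ml * ethanol_fraction * ETHANOL_DENSITY_G_PER_ML
    return grams / US_STANDARD_DRINK_G

def drinks_by_cost(money_spent, price_per_serving, serving_ml, ethanol_fraction):
    """Cost-based estimate: infer servings from money spent, then convert."""
    servings = money_spent / price_per_serving
    return drinks_by_volume(servings * serving_ml, ethanol_fraction)

# e.g. a hypothetical 250 mL serving of chang'aa at the measured 34% ethanol:
print(f"{drinks_by_volume(250, 0.34):.1f} US standard drinks per serving")
```

Because the cost-based estimate reduces to the volume-based one once the local serving size and price are known, the two agree whenever those conversion factors are stable, which is the computational comparison the study reports.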
Gates, W.R.
1983-02-01
Estimated future energy cost savings associated with the development of cost-competitive solar thermal technologies (STT) are discussed. Analysis is restricted to STT in electric applications for 16 high-insolation/high-energy-price states. Three fuel price scenarios and three 1990 STT system costs are considered, reflecting uncertainty over future fuel prices and STT cost projections. STT R and D is found to be unacceptably risky for private industry in the absence of federal support. Energy cost savings were projected to range from $0 to $10 billion (1990 values in 1981 dollars), depending on the system cost and fuel price scenario. Normal R and D investment risks are accentuated because the Organization of Petroleum Exporting Countries (OPEC) cartel can artificially manipulate oil prices and undercut growth of alternative energy sources. Federal participation in STT R and D to help capture the potential benefits of developing cost-competitive STT was found to be in the national interest. Analysis is also provided regarding two federal incentives currently in use: the Federal Business Energy Tax Credit and direct R and D funding. These mechanisms can be expected to provide the required incentives to establish a viable self-sustaining private STT industry. Discussions of STT impacts on the environment and oil imports are also included.
Estimating the Cold War mortgage: The 1995 baseline environmental management report. Volume 1
1995-03-01
This is the first annual report on the activities and potential costs required to address the waste, contamination, and surplus nuclear facilities that are the responsibility of the Department of Energy's Environmental Management program. The Department's Office of Environmental Management, established in 1989, manages one of the largest environmental programs in the world, with more than 130 sites and facilities in over 30 States and territories. The primary focus of the program is to reduce health and safety risks from radioactive waste and contamination resulting from the production, development, and testing of nuclear weapons. The program also is responsible for the environmental legacy from, and ongoing waste management for, nuclear energy research and development, and basic science research. In an attempt to better oversee this effort, Congress required the Secretary of Energy to submit a Baseline Environmental Management Report with annual updates. The 1995 Baseline Environmental Management Report provides life-cycle cost estimates, tentative schedules, and projected activities necessary to complete the Environmental Management program.
He, Bin; Frey, Eric C.
2010-01-01
Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ
NASA Astrophysics Data System (ADS)
Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.
2016-01-01
Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.
Gamble, C.R.
1989-01-01
A dimensionless hydrograph developed for a variety of basin conditions in Georgia was tested for its applicability to streams in East and West Tennessee by comparing it to a similar dimensionless hydrograph developed for streams in East and West Tennessee. Hydrographs of observed discharge at 83 streams in East Tennessee and 38 in West Tennessee were used in the study. Statistical analyses were performed by comparing simulated (or computed) hydrographs, derived by application of the Georgia dimensionless hydrograph, and dimensionless hydrographs developed from Tennessee data, with the observed hydrographs at 50 and 75% of their peak-flow widths. Results of the tests indicate that the Georgia dimensionless hydrograph is virtually the same as the one developed for streams in East Tennessee, but that it is different from the dimensionless hydrograph developed for streams in West Tennessee. Because of the extensive testing of the Georgia dimensionless hydrograph, it was determined to be applicable for East Tennessee, whereas the dimensionless hydrograph developed from data on streams in West Tennessee was determined to be applicable in West Tennessee. As part of the dimensionless hydrograph development, an average lagtime in hours for each study basin, and the volume in inches of flood runoff for each flood event were computed. By use of multiple-regression analysis, equations were developed that relate basin lagtime to drainage area size, basin length, and percent impervious area. Similarly, flood volumes were related to drainage area size, peak discharge, and basin lagtime. These equations, along with the appropriate dimensionless hydrograph, can be used to estimate a typical (average) flood hydrograph and volume for recurrence intervals up to 100 years at any ungaged site draining less than 50 sq mi in East and West Tennessee. (USGS)
NASA Astrophysics Data System (ADS)
Ponomarenko, P. V.; St-Maurice, J.-P.; Waters, C. L.; Gillies, R. G.; Koustov, A. V.
2009-11-01
Ionospheric E×B plasma drift velocities derived from the Super Dual Auroral Radar Network (SuperDARN) Doppler data exhibit systematically smaller (by 20-30%) magnitudes than those measured by the Defense Meteorological Satellite Program (DMSP) satellites. A part of the disagreement was previously attributed to the change in the E/B ratio due to the altitude difference between the satellite orbit and the location of the effective scatter volume for the radar signals. Another important factor arises from the free-space propagation assumption used in converting the measured Doppler frequency shift into the line-of-sight velocity. In this work, we have applied numerical ray-tracing to identify the location of the effective scattering volume of the ionosphere and to estimate the ionospheric refractive index. The simulations show that the major contribution to the radar echoes should be provided by the Pedersen and/or escaping rays that are scattered in the vicinity of the F-layer maximum. This conclusion is supported by a statistical analysis of the experimental elevation angle data, which have a signature consistent with scattering from the F-region peak. A detailed analysis of the simulations has allowed us to propose a simple velocity correction procedure, which we have successfully tested against the SuperDARN/DMSP comparison data set.
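To first order, correcting for the free-space assumption amounts to dividing the measured line-of-sight velocity by the refractive index at the scattering volume; a minimal sketch (the paper's actual correction procedure is more detailed):

```python
def correct_los_velocity(v_free_space, refractive_index):
    """Scale a line-of-sight velocity derived under the free-space assumption
    (v = c * f_D / (2 * f)) by the plasma refractive index n at the
    scattering volume: v_true = v_free_space / n, with 0 < n <= 1."""
    if not 0.0 < refractive_index <= 1.0:
        raise ValueError("plasma refractive index should satisfy 0 < n <= 1")
    return v_free_space / refractive_index
```

With n near 0.8 at the F-region peak, the correction boosts velocities by about 25%, consistent in magnitude with the 20-30% underestimation noted above.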
Hogrel, Jean-Yves; Barnouin, Yoann; Azzabou, Noura; Butler-Browne, Gillian; Voit, Thomas; Moraux, Amélie; Leroux, Gaëlle; Behin, Anthony; McPhee, Jamie S; Carlier, Pierre G
2015-06-01
Muscle mass is particularly relevant to follow during aging, owing to its link with physical performance and autonomy. The objectives of this work were to assess muscle volume (MV) and intramuscular fat (IMF) for all the muscles of the thigh in a large population of young and elderly healthy individuals using magnetic resonance imaging (MRI), to test the effect of gender and age on MV and IMF, and to determine the best representative slice for the estimation of MV and IMF. The study enrolled 105 healthy young (range 20-30 years) and older (range 70-80 years) subjects. MRI scans were acquired along the femur length using a three-dimensional three-point Dixon proton density-weighted gradient echo sequence. MV and IMF were estimated from all the slices. The effects of age and gender on MV and IMF were assessed. Predictive equations for MV and IMF were established using a single slice at various femur levels for each muscle in order to reduce the analysis process. MV was decreased with aging in both genders, particularly in the quadriceps femoris. IMF was largely increased with aging in men and, to a lesser extent, in women. Percentages of MV decrease and IMF increase with aging varied according to the muscle. Predictive equations to predict MV and IMF from single slices are provided and were validated. This study is the first to provide muscle volume and intramuscular fat infiltration in all the muscles of the thigh in a large population of young and elderly healthy subjects. PMID:26040416
NASA Astrophysics Data System (ADS)
Charbonneau, David; Harps-N Collaboration
2015-01-01
Although the NASA Kepler Mission has determined the physical sizes of hundreds of small planets, and we have in many cases characterized the star in detail, we know virtually nothing about the planetary masses: There are only 7 planets smaller than 2.5 Earth radii for which there exist published mass estimates with a precision better than 20 percent, the bare minimum value required to begin to distinguish between different models of composition. HARPS-N is an ultra-stable fiber-fed high-resolution spectrograph optimized for the measurement of very precise radial velocities. We have 80 nights of guaranteed time per year, of which half are dedicated to the study of small Kepler planets. In preparation for the 2014 season, we compared all available Kepler Objects of Interest to identify the ones for which our 40 nights could be used most profitably. We analyzed the Kepler light curves to constrain the stellar rotation periods, the lifetimes of active regions on the stellar surface, and the noise that would result in our radial velocities. We assumed various mass-radius relations to estimate the observing time required to achieve a mass measurement with a precision of 15%, giving preference to stars that had been well characterized through asteroseismology. We began by monitoring our long list of targets. Based on preliminary results we then selected our final short list, gathering typically 70 observations per target during summer 2014. These resulting mass measurements will have a significant impact on our understanding of these so-called super-Earths and small Neptunes. They would form a core dataset with which the international astronomical community can meaningfully seek to understand these objects and their formation in a quantitative fashion. HARPS-N was funded by the Swiss Space Office, the Harvard Origin of Life Initiative, the Scottish Universities Physics Alliance, the University of Geneva, the Smithsonian Astrophysical Observatory, the Italian National
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
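The finite-difference-plus-extrapolation idea can be illustrated on a simpler problem, here a first derivative rather than a Schrödinger eigenvalue; the Richardson tableau construction is the same:

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h, levels=4):
    """Richardson extrapolation of the central difference: each tableau
    column cancels the next even power of h in the error expansion."""
    T = [[central_diff(f, x, h / 2.0**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            T[i].append((4.0**j * T[i][j - 1] - T[i - 1][j - 1]) / (4.0**j - 1))
    return T[-1][-1]
```

Even with a coarse starting step, a few levels of extrapolation improve the crude-mesh estimate by many orders of magnitude, which is the same mechanism the abstract exploits for expectation values.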
Tang, Robert Y.; McDonald, Nancy; Laamanen, Curtis; LeClair, Robert J.
2014-11-01
Purpose: To develop a method to estimate the mean fractional volume of fat (ν̄_fat) within a region of interest (ROI) of a tissue sample for wide-angle x-ray scatter (WAXS) applications. A scatter signal from the ROI was obtained and use of ν̄_fat in a WAXS fat subtraction model provided a way to estimate the differential linear scattering coefficient μ_s of the remaining fatless tissue. Methods: The efficacy of the method was tested using animal tissue from a local butcher shop. Formalin fixed samples, 5 mm in diameter and 4 mm thick, were prepared. The two main tissue types were fat and meat (fibrous). Pure as well as composite samples consisting of a mixture of the two tissue types were analyzed. For the latter samples, ν_fat for the tissue columns of interest were extracted from corresponding pixels in CCD digital x-ray images using a calibration curve. The means ν̄_fat were then calculated for use in a WAXS fat subtraction model. For the WAXS measurements, the samples were interrogated with a 2.7 mm diameter 50 kV beam and the 6° scattered photons were detected with a CdTe detector subtending a solid angle of 7.75 × 10^−5 sr. Using the scatter spectrum, an estimate of the incident spectrum, and a scatter model, μ_s was determined for the tissue in the ROI. For the composite samples, a WAXS fat subtraction model was used to estimate the μ_s of the fibrous tissue in the ROI. This signal was compared to μ_s of fibrous tissue obtained using a pure fibrous sample. Results: For chicken and beef composites, ν̄_fat = 0.33 ± 0.05 and 0.32 ± 0.05, respectively. The subtractions of these fat components from the WAXS composite signals provided estimates of μ_s for chicken and beef fibrous tissue. The differences between the estimates and μ_s of fibrous tissue obtained with a pure sample were calculated as a function of the momentum transfer x. A t-test showed that the mean of the
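The fat-subtraction step implies a linear two-component mixture model; a sketch under that assumption (the function name and checks are illustrative, not the paper's code):

```python
def subtract_fat(mu_composite, mu_fat, vbar_fat):
    """Recover the fatless-tissue scattering coefficient from a composite
    WAXS measurement, assuming a linear two-component mixture:
        mu_composite = vbar_fat * mu_fat + (1 - vbar_fat) * mu_fibrous."""
    if not 0.0 <= vbar_fat < 1.0:
        raise ValueError("mean fat fraction must lie in [0, 1)")
    return (mu_composite - vbar_fat * mu_fat) / (1.0 - vbar_fat)
```

Given the measured mean fat fraction (about 0.33 for the composites above) and the scattering coefficient of pure fat, the fibrous-tissue coefficient follows by rearranging the mixture equation.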
Hayward, R.K.
2011-01-01
The Mars Global Digital Dune Database (MGD3) now extends from 90°N to 65°S. The recently released north polar portion (MC-1) of MGD3 adds ~844 000 km2 of moderate- to large-size dark dunes to the previously released equatorial portion (MC-2 to MC-29) of the database. The database, available in GIS- and tabular-format in USGS Open-File Reports, makes it possible to examine global dune distribution patterns and to compare dunes with other global data sets (e.g. atmospheric models). MGD3 can also be used by researchers to identify areas suitable for more focused studies. The utility of MGD3 is demonstrated through three example applications. First, the uneven geographic distribution of the dunes is discussed and described. Second, dune-derived wind direction and its role as ground truth for atmospheric models is reviewed. Comparisons between dune-derived winds and global and mesoscale atmospheric models suggest that local topography may have an important influence on dune-forming winds. Third, the methods used here to estimate north polar dune volume are presented and these methods and estimates (1130 km3 to 3250 km3) are compared with those of previous researchers (1158 km3 to 15 000 km3). In the near future, MGD3 will be extended to include the south polar region. © 2011 by John Wiley and Sons, Ltd.
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
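A minimal Logan-plot estimator of the distribution volume, checked against a simulated one-tissue compartment model (rate constants are arbitrary illustrative values; the two-tissue analysis in the paper is more involved):

```python
import numpy as np

def logan_vt(t, cp, ct, t_star):
    """Distribution volume V_T from the slope of the Logan plot:
    y = intCt/Ct versus x = intCp/Ct, fit over times t >= t_star."""
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)))
    mask = (t >= t_star) & (ct > 0)
    slope, _ = np.polyfit(int_cp[mask] / ct[mask], int_ct[mask] / ct[mask], 1)
    return slope

# One-tissue compartment simulation, where the true distribution volume is K1/k2:
dt = 0.01
t = np.arange(0.0, 90.0, dt)
cp = np.exp(-0.05 * t)                 # plasma input curve
k1, k2 = 0.1, 0.2
ct = np.zeros_like(t)
for i in range(1, len(t)):             # forward-Euler tissue curve
    ct[i] = ct[i - 1] + dt * (k1 * cp[i - 1] - k2 * ct[i - 1])
vt = logan_vt(t, cp, ct, t_star=30.0)  # should approach K1/k2 = 0.5
```

For the one-tissue case the Logan relation is exact asymptotically, so the fitted slope recovers V_T; the bias discussed in the abstract appears when a two-tissue system is forced through this same linear fit.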
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2011 CFR
2011-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
Star, Hazha; Thevissen, Patrick; Jacobs, Reinhilde; Fieuws, Steffen; Solheim, Tore; Willems, Guy
2011-01-01
Secondary dentine is responsible for a decrease in the volume of the dental pulp cavity with aging. The aim of this study is to evaluate a human dental age estimation method based on the ratio between the volume of the pulp and the volume of its corresponding tooth, calculated on clinically taken cone beam computed tomography (CBCT) images from monoradicular teeth. On the 3D images of 111 clinically obtained CBCT images (Scanora(®) 3D dental cone beam unit) of 57 female and 54 male patients ranging in age between 10 and 65 years, the pulp-tooth volume ratio of 64 incisors, 32 canines, and 15 premolars was calculated with Simplant(®) Pro software. A linear regression model was fit with age as dependent variable and ratio as predictor, allowing for interactions of specific gender or tooth type. The obtained pulp-tooth volume ratios were most strongly related to age for incisors. PMID:21182523
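The underlying model is an ordinary linear regression of age on the pulp-tooth volume ratio. A sketch with made-up data (the coefficients obtained below are not the study's):

```python
import numpy as np

def fit_age_model(ratios, ages):
    """Ordinary least-squares fit of age = b0 + b1 * (pulp/tooth volume ratio)."""
    b1, b0 = np.polyfit(ratios, ages, 1)
    return b0, b1

def predict_age(b0, b1, ratio):
    return b0 + b1 * ratio

# Made-up training data: the ratio shrinks as secondary dentine is deposited.
ratios = np.array([0.12, 0.10, 0.08, 0.06, 0.05, 0.04])
ages = np.array([12.0, 20.0, 31.0, 42.0, 50.0, 61.0])
b0, b1 = fit_age_model(ratios, ages)
```

Because secondary dentine accumulates with age, the fitted slope is negative: smaller pulp-tooth ratios predict older individuals.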
Jacob J. Jacobson; Erin Searcy; Md. S. Roni; Sandra D. Eksioglu
2014-06-01
This article analyzes rail transportation costs of products that have physical properties similar to densified biomass and biofuel. The results of this cost analysis are useful to understand the relationship and quantify the impact of a number of factors on rail transportation costs of densified biomass and biofuel. These results will help evaluate the economic feasibility of high-volume and long-haul transportation of biomass and biofuel. High-volume and long-haul rail transportation of biomass is a viable transportation option for biofuel plants, and for coal plants that consider biomass co-firing. Using rail minimizes both transportation costs and greenhouse gas (GHG) emissions from transportation. Increasing bioenergy production would consequently result in lower GHG emissions due to displacing fossil fuels. To estimate rail transportation costs we use the carload waybill data, provided by the Department of Transportation’s Surface Transportation Board, for products such as grain and liquid type commodities for 2009 and 2011. We used regression analysis to quantify the relationship between variable transportation unit cost ($/ton) and car type, shipment size, rail movement type, commodity type, etc. The results indicate that: (a) transportation costs for liquid is $2.26/ton–$5.45/ton higher than grain type commodity; (b) transportation costs in 2011 were $1.68/ton–$5.59/ton higher than 2009; (c) transportation costs for single car shipments are $3.6/ton–$6.68/ton higher than transportation costs for multiple car shipments of grains; (d) transportation costs for multiple car shipments are $8.9/ton–$17.15/ton higher than transportation costs for unit train shipments of grains.
Azarm, M.A.; Hsu, F.; Martinez-Guridi, G.; Vesely, W.E.
1993-07-01
This report introduces a new perspective on the basic concept of dependent failures where the definition of dependency is based on clustering in failure times of similar components. This perspective has two significant implications: firstly, it relaxes the conventional assumption that dependent failures must be simultaneous and result from a severe shock; secondly, it allows the analyst to use all the failures in a time continuum to estimate the potential for multiple failures in a window of time (e.g., a test interval), therefore arriving at a more accurate value for system unavailability. In addition, the models developed here provide a method for plant-specific analysis of dependency, reflecting the plant-specific maintenance practices that reduce or increase the contribution of dependent failures to system unavailability. The proposed methodology can be used for screening analysis of failure data to estimate the fraction of dependent failures among the failures. In addition, the proposed method can evaluate the impact of the observed dependency on the system unavailability and plant risk. The formulations derived in this report have undergone various levels of validation through computer simulation studies and pilot applications. The pilot applications of these methodologies showed that the contribution of dependent failures of diesel generators in one plant was negligible, while in another plant, it was quite significant. It also showed that in the plant with significant contribution of dependency to Emergency Power System (EPS) unavailability, the contribution changed with time. Similar findings were reported for the Containment Fan Cooler breakers. Drawing such conclusions about system performance would not have been possible with any other reported dependency methodologies.
NASA Astrophysics Data System (ADS)
Trofymow, J. A.; Coops, N.; Hayhurst, D.
2012-12-01
Following forest harvest, residues left on site and roadsides are often disposed of to reduce fire risk and free planting space. In coastal British Columbia burn piles are the main method of disposal, particularly for accumulations from log processing. Quantification of residue wood in piles is required for smoke emission estimates, C budget calculations, billable waste assessment, harvest efficiency monitoring, and determination of bioenergy potentials. A second-growth Douglas-fir dominated (DF1949) site on eastern Vancouver Island, the subject of C flux and budget studies since 1998, was clearcut in winter 2011, residues piled in spring and burned in fall. Prior to harvest, the site was divided into 4 blocks to account for harvest plans and ecosite conditions. Total harvested wood volume was scaled for each block. Residue pile wood volume was determined by a standard Waste and Residue Survey (WRS) using field estimates of pile base area and plot density (wood volume / 0.005 ha plot) on 2 piles per block, by a smoke emissions geometric method with pile volumes estimated as ellipsoidal paraboloids and packing ratios (wood volume / pile volume) for 2 piles per block, as well as by five other GIS methods using pile volumes and areas from LiDAR and orthophotography flown August 2011, a LiDAR derived digital elevation model (DEM) from 2008, and total scaled wood volumes of 8 sample piles disassembled November 2011. A weak but significant negative relationship was found between pile packing ratio and pile volume. Block level avoidable+unavoidable residue pile wood volumes from the WRS method (20.0 m3 ha-1 SE 2.8) were 30%-50% of the geometric (69.0 m3 ha-1 SE 18.0) or five GIS/LiDAR (48.0 to 65.7 m3 ha-1) methods. Block volumes using the 2008 LiDAR DEM (unshifted 48.0 m3 ha-1 SE 3.9, shifted 53.6 m3 ha-1 SE 4.2) to account for pre-existing humps or hollows beneath piles were not different from those using the 2011 LiDAR DEM (50.3 m3 ha-1 SE 4.0). The block volume ratio
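The geometric method treats each pile as an ellipsoidal (elliptic) paraboloid and scales the gross volume by a packing ratio; a sketch of that calculation (the dimensions and packing ratio used below are illustrative):

```python
import math

def paraboloid_pile_volume(length_m, width_m, height_m):
    """Gross volume of a pile idealized as an elliptic paraboloid:
    half the elliptical base area times the height = pi * L * W * H / 8."""
    return math.pi * length_m * width_m * height_m / 8.0

def pile_wood_volume(length_m, width_m, height_m, packing_ratio):
    """Solid wood volume: gross geometric volume scaled by the packing
    ratio (wood volume / pile volume)."""
    return packing_ratio * paraboloid_pile_volume(length_m, width_m, height_m)
```

The weak negative relationship between packing ratio and pile volume reported above means a single fixed packing ratio will tend to overestimate wood in large piles, one plausible source of the gap between the geometric and WRS estimates.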
Hinaman, Kurt
2005-01-01
The Powder River Basin in Wyoming and Montana is an important source of energy resources for the United States. Coalbed methane gas is contained in Tertiary and upper Cretaceous hydrogeologic units in the Powder River Basin. This gas is released when water pressure in coalbeds is lowered, usually by pumping ground water. Issues related to disposal and uses of by-product water from coalbed methane production have developed, in part, due to uncertainties in hydrologic properties. One hydrologic property of primary interest is the amount of water contained in Tertiary and upper Cretaceous hydrogeologic units in the Powder River Basin. The U.S. Geological Survey, in cooperation with the Bureau of Land Management, conducted a study to describe the hydrogeologic framework and to estimate ground-water volumes in different facies of Tertiary and upper Cretaceous hydrogeologic units in the Powder River Basin in Wyoming. A geographic information system was used to compile and utilize hydrogeologic maps, to describe the hydrogeologic framework, and to estimate the volume of ground water in Tertiary and upper Cretaceous hydrogeologic units in the Powder River structural basin in Wyoming. Maps of the altitudes of potentiometric surfaces, altitudes of the tops and bottoms of hydrogeologic units, thicknesses of hydrogeologic units, percent sand of hydrogeologic units, and outcrop boundaries for the following hydrogeologic units were used: Tongue River-Wasatch aquifer, Lebo confining unit, Tullock aquifer, Upper Hell Creek confining unit, and the Fox Hills-Lower Hell Creek aquifer. Literature porosity values of 30 percent for sand and 35 percent for non-sand facies were used to calculate the volume of total ground water in each hydrogeologic unit. Literature specific yield values of 26 percent for sand and 10 percent for non-sand facies, and literature specific storage values of 0.0001 ft-1 (1/foot) for sand facies and 0.00001 ft-1 for non-sand facies, were used to calculate a
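The volume calculation described, partitioning each unit into sand and non-sand facies and weighting by the quoted literature porosity and specific-yield values, can be sketched as:

```python
def water_volume(unit_volume, sand_fraction,
                 phi_sand=0.30, phi_nonsand=0.35):
    """Total ground water in a saturated hydrogeologic unit: the unit volume
    is split into sand and non-sand facies, each times its porosity."""
    sand = unit_volume * sand_fraction
    nonsand = unit_volume * (1.0 - sand_fraction)
    return sand * phi_sand + nonsand * phi_nonsand

def drainable_volume(unit_volume, sand_fraction,
                     sy_sand=0.26, sy_nonsand=0.10):
    """Gravity-drainable water, using specific yield in place of porosity."""
    sand = unit_volume * sand_fraction
    nonsand = unit_volume * (1.0 - sand_fraction)
    return sand * sy_sand + nonsand * sy_nonsand
```

Specific yield is always less than porosity, so the drainable volume is a fraction of the total; the non-sand facies in particular holds much water (porosity 0.35) but yields little (specific yield 0.10).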
Richards, Joseph M.; Green, W. Reed
2013-01-01
Millwood Lake, in southwestern Arkansas, was constructed and is operated by the U.S. Army Corps of Engineers (USACE) for flood-risk reduction, water supply, and recreation. The lake was completed in 1966 and it is likely that with time sedimentation has resulted in the reduction of storage capacity of the lake. The loss of storage capacity can cause less water to be available for water supply, and lessens the ability of the lake to mitigate flooding. Excessive sediment accumulation also can cause a reduction in aquatic habitat in some areas of the lake. Although many lakes operated by the USACE have periodic bathymetric and sediment surveys, none have been completed for Millwood Lake. In March 2013, the U.S. Geological Survey (USGS), in cooperation with the USACE, surveyed the bathymetry of Millwood Lake to prepare an updated bathymetric map and area/capacity table. The USGS also collected sediment thickness data in June 2013 to estimate the volume of sediment accumulated in the lake.
NASA Astrophysics Data System (ADS)
Shamsalsadati, Sharmin; Weiss, Chester J.
2012-09-01
From a theoretical perspective, perfect Green's function recovery in diffusive systems is based on cross-correlation of time-series measured at distinct locations arising from background fluctuations from an infinite set of uncorrelated sources, either naturally occurring or engineered. Clearly such a situation is impossible in practice, and a relevant question to ask, then, is how does an imperfect set of noise sources affect the quality of the resulting empirical Green's function (EGF)? We narrow down this broad question by exploring the effect of source location and make no distinction between whether the noise sources are natural or man made. Following the theory of EGF recovery, the only requirement is that the sources are uncorrelated and endowed with the same (or nearly so) frequency spectrum and amplitude. As such, our intuition suggests that noise sources proximal to the observation points are likely to contribute more to the Green's function estimate than distal ones. However, in what manner and over what spatial extent our intuition is less clear. Thus, in this short note we specifically ask the question, 'Where are the noise sources that contribute most to the Green's function estimate in heterogeneous, lossy systems?' We call such a region the volume of relevance (VoR). Our analysis builds upon recent work on 1-D homogeneous systems by examining the effect of heterogeneity, dimensionality and receiver location in both one and two dimensions. Following the strategy of previous work in the field, the analysis is conducted out of mathematical convenience in the frequency domain although we stress that the sources need not be monochromatic. We find that for receivers located symmetrically across an interface between regions of contrasting diffusivity, the VoR rapidly shifts from one side of the interface to the other, and back again, as receiver separation increases. For the case where the receiver pair is located on the interface itself, the shifting is
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
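A simplified version of the delay, gain, and signal-to-noise ratio estimators described can be built from a cross-correlation peak and a least-squares fit (this real-valued sketch is an assumption, not the paper's exact formulation):

```python
import numpy as np

def estimate_delay_gain_snr(x, y):
    """Estimate integer sample delay, gain, and SNR of an observed signal y
    relative to a reference x, assuming y = gain * shift(x, delay) + noise."""
    # Delay: lag maximizing the cross-correlation of y against x.
    corr = np.correlate(y, x, mode="full")
    delay = int(np.argmax(corr)) - (len(x) - 1)
    x_aligned = np.roll(x, delay)          # circular shift; fine for this demo
    # Gain: least-squares projection of y onto the aligned reference.
    gain = np.dot(y, x_aligned) / np.dot(x_aligned, x_aligned)
    # SNR: fitted-signal power over residual (noise) power.
    residual = y - gain * x_aligned
    snr = np.sum((gain * x_aligned) ** 2) / np.sum(residual ** 2)
    return delay, gain, snr

# Demo: noisy, delayed, scaled copy of a random reference.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = 2.0 * np.roll(x, 5) + 0.01 * rng.standard_normal(256)
delay, gain, snr = estimate_delay_gain_snr(x, y)
```

In a simulation the reference waveform is known exactly, which is what makes this kind of correlation-based estimator attractive for evaluating simulated communication links.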
Huizinga, Richard J.
2014-01-01
The rainfall-runoff pairs from the storm-specific GUH analysis were further analyzed against various basin and rainfall characteristics to develop equations to estimate the peak streamflow and flood volume based on a quantity of rainfall on the basin.