NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma-0 (σ0), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of σ0 are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.
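A minimal numerical sketch of the point-estimation idea above, assuming a proportional (Kp-style) noise model in which the standard deviation of a σ0 measurement is a known multiple of its expected value; the Kp value, sample size, and true σ0 are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(mu, samples, kp=0.1):
    """Normal negative log-likelihood where the standard deviation is a
    known function of the mean (here assumed proportional: sd = kp * mu)."""
    sd = kp * mu
    return np.sum(0.5 * np.log(2 * np.pi * sd**2)
                  + (samples - mu) ** 2 / (2 * sd**2))

rng = np.random.default_rng(0)
true_mu = 0.05                                    # expected sigma-0, linear units
samples = rng.normal(true_mu, 0.1 * true_mu, 25)  # repeated sigma-0 measurements

res = minimize_scalar(neg_log_likelihood, bounds=(1e-4, 1.0),
                      args=(samples,), method="bounded")
print("ML point estimate of expected sigma-0:", res.x)
```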
Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi
2015-01-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639
Accuracy and Precision of USNO GPS Carrier-Phase Time Transfer
2010-01-01
values. Comparison measures used include estimates obtained from two-way satellite time/frequency transfer (TWSTFT), and GPS-based estimates obtained ... the IGS are used as a benchmark in the computation. Frequency values have a few times 10^-15 fractional frequency uncertainty. TWSTFT values confirm ... obtained from two-way satellite time/frequency transfer (TWSTFT), BIPM Circular T, and the International GNSS Service (IGS). At present, it is known that ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogorelov, A. A.; Suslov, I. M.
2008-06-15
New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that the usual field-theoretical estimates implicitly assume smoothness of the coefficient functions. This assumption is open to discussion in view of the existence of an oscillating contribution to the coefficient functions. An appropriate interpretation of this contribution is necessary both for estimating the systematic errors of the standard values and for a further increase in accuracy.
Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio
2017-01-10
The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
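A toy illustration of the calibration step described above, fitting Tmax as a linear function of TD and MTT by least squares; the per-voxel values below are hypothetical stand-ins for the digital-phantom maps:

```python
import numpy as np

# Hypothetical per-voxel perfusion metrics (real values would come from
# Bayesian-algorithm TD/MTT maps and SVD-derived Tmax of the phantom).
td = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # tracer arrival delay (s)
mtt = np.array([4.0, 5.0, 6.0, 8.0, 10.0])      # mean transit time (s)
tmax_svd = np.array([2.6, 3.6, 5.1, 7.1, 9.1])  # reference Tmax from SVD (s)

# Fit Tmax ~ a*TD + b*MTT + c by linear least squares.
A = np.column_stack([td, mtt, np.ones_like(td)])
(a, b, c), *_ = np.linalg.lstsq(A, tmax_svd, rcond=None)

tmax_bayes = a * td + b * mtt + c
r = np.corrcoef(tmax_bayes, tmax_svd)[0, 1]
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, Pearson r={r:.3f}")
```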
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
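For contrast with the nonparametric approach, the classical parametric workflow the study benchmarks against can be sketched in a few lines with SciPy; the simulated annual maxima are stand-ins for station data:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Stand-in maximum ozone values; real data would come from AURN stations.
maxima = genextreme.rvs(c=-0.1, loc=80, scale=10, size=40, random_state=rng)

# Fit the GEV and compute the T-year return level (the 1 - 1/T quantile).
c, loc, scale = genextreme.fit(maxima)
T = 10
return_level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(f"{T}-year return level: {return_level:.1f}")
```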
Time prediction of failure of a type of lamp using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the mean predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable is modeled with an exponential distribution as the baseline, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The model obtained is then used to predict the mean failure time for this type of lamp. The data were grouped into several intervals with the mean failure value computed on each interval, the mean failure time of the model was calculated on each interval, and the p-value obtained from the goodness-of-fit test is 0.3296.
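A minimal sketch of the exponential baseline described above: with a constant hazard λ, the maximum likelihood estimate is the reciprocal of the sample mean, and the fitted survival function can be compared against the empirical one. The failure times are hypothetical:

```python
import numpy as np

# Hypothetical lamp failure times in hours.
t = np.array([120, 340, 560, 700, 950, 1200, 1500, 2100, 2600, 3400])

# Exponential baseline: S(t) = exp(-lambda*t), constant hazard; the ML
# estimate of lambda is 1 / mean(t), and the mean failure time is 1/lambda.
lam = 1.0 / t.mean()
print("hazard rate:", lam, " mean failure time:", 1.0 / lam)

# Empirical vs fitted survival at the observed times, mirroring the
# comparison of the model with the empirical cumulative function.
t_sorted = np.sort(t)
S_emp = 1.0 - np.arange(1, len(t) + 1) / len(t)
S_fit = np.exp(-lam * t_sorted)
print(np.round(S_emp, 2), np.round(S_fit, 2), sep="\n")
```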
Comparison of cumulant expansion and q-space imaging estimates for diffusional kurtosis in brain.
Mohanty, Vaibhav; McKinnon, Emilie T; Helpern, Joseph A; Jensen, Jens H
2018-05-01
To compare estimates for the diffusional kurtosis in brain as obtained from a cumulant expansion (CE) of the diffusion MRI (dMRI) signal and from q-space (QS) imaging. For the CE estimates of the kurtosis, the CE was truncated to quadratic order in the b-value and fit to the dMRI signal for b-values from 0 up to 2000 s/mm². For the QS estimates, b-values ranging from 0 up to 10,000 s/mm² were used to determine the diffusion displacement probability density function (dPDF) via Stejskal's formula. The kurtosis was then calculated directly from the second and fourth order moments of the dPDF. These two approximations were studied for in vivo human data obtained on a 3T MRI scanner using three orthogonal diffusion encoding directions. The whole brain mean values for the CE and QS kurtosis estimates differed by 16% or less in each of the considered diffusion encoding directions, and the Pearson correlation coefficients all exceeded 0.85. Nonetheless, there were large discrepancies in many voxels, particularly those with either very high or very low kurtoses relative to the mean values. Estimates of the diffusional kurtosis in brain obtained using CE and QS approximations are strongly correlated, suggesting that they encode similar information. However, for the choice of b-values employed here, there may be substantial differences, depending on the properties of the diffusion microenvironment in each voxel. Copyright © 2018 Elsevier Inc. All rights reserved.
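The CE half of the comparison can be sketched directly: truncate ln S(b) at quadratic order in b and recover D and K from the polynomial coefficients. The signal below is synthesized from the same expansion, so the fit recovers the inputs exactly:

```python
import numpy as np

# Simulate a dMRI signal with D = 1e-3 mm^2/s and kurtosis K = 1 via the
# cumulant expansion itself (a stand-in for measured data):
#   ln S(b) = -b*D + (b*D)^2 * K / 6
D, K = 1.0e-3, 1.0
b = np.linspace(0, 2000, 11)                 # s/mm^2
S = np.exp(-b * D + (b * D) ** 2 * K / 6.0)

# CE estimate: fit ln S as a quadratic in b, then recover D and K.
c2, c1, c0 = np.polyfit(b, np.log(S), 2)     # ln S ~ c2*b^2 + c1*b + c0
D_fit = -c1
K_fit = 6.0 * c2 / D_fit**2
print(f"D = {D_fit:.2e} mm^2/s, K = {K_fit:.2f}")
```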
7 CFR 765.353 - Determining market value.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Determining market value. (a) Security proposed for disposition. (1) The Agency will obtain an appraisal of... estimated value is less than $25,000. (b) Security remaining after disposition. The Agency will obtain an...
43 CFR 11.84 - Damage determination phase-implementation guidance.
Code of Federal Regulations, 2012 CFR
2012-10-01
... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...
Comparison of GPS receiver DCB estimation methods using a GPS network
NASA Astrophysics Data System (ADS)
Choi, Byung-Kyu; Park, Jong-Uk; Roh, Kyoung-Min; Lee, Sang-Jeong
2013-07-01
Two approaches for receiver differential code biases (DCB) estimation using the GPS data obtained from the Korean GPS network (KGN) in South Korea are suggested: the relative and single (absolute) methods. The relative method uses a GPS network, while the single method determines DCBs from a single station only. Their performance was assessed by comparing the receiver DCB values obtained from the relative method with those estimated by the single method. The daily averaged receiver DCBs obtained from the two different approaches showed good agreement for 7 days. The root mean square (RMS) value of those differences is 0.83 nanoseconds (ns). The standard deviation of the receiver DCBs estimated by the relative method was smaller than that of the single method. From these results, it is clear that the relative method can obtain more stable receiver DCBs compared with the single method over a short-term period. Additionally, the comparison between the receiver DCBs obtained by the Korea Astronomy and Space Science Institute (KASI) and those of the IGS Global Ionosphere Maps (GIM) showed a good agreement at 0.3 ns. As the accuracy of DCB values significantly affects the accuracy of ionospheric total electron content (TEC), more studies are needed to ensure the reliability and stability of the estimated receiver DCBs.
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
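A compact sketch of the approach under stated assumptions: a forward-Euler three-element Windkessel generates pressure from flow, R1 is held fixed, and (C, R2) pairs are drawn at random, keeping the pair with the lowest mean-squared pressure error. The waveform, parameter values, and search ranges are illustrative:

```python
import numpy as np

def wk3_pressure(q, dt, r1, r2, c, p0=80.0):
    """3-element Windkessel by forward Euler: P = R1*Q + Pc,
    with C * dPc/dt = Q - Pc/R2."""
    pc = p0 - r1 * q[0]
    p = np.empty_like(q)
    for i, qi in enumerate(q):
        p[i] = r1 * qi + pc
        pc += dt * (qi - pc / r2) / c
    return p

# Hypothetical one-beat aortic flow (mL/s) sampled at 1 kHz.
dt = 1e-3
t = np.arange(0, 0.8, dt)
q = np.where(t < 0.3, 400 * np.sin(np.pi * t / 0.3), 0.0)
p_meas = wk3_pressure(q, dt, r1=0.05, r2=1.1, c=1.4)  # stand-in "measured" pressure

# Monte Carlo search: R1 held fixed (derived from flow, as in the paper);
# C and R2 drawn at random, keeping the best-fitting pair.
rng = np.random.default_rng(2)
best_params, best_err = None, np.inf
for _ in range(1000):
    c_try, r2_try = rng.uniform(0.5, 3.0), rng.uniform(0.5, 2.0)
    err = np.mean((wk3_pressure(q, dt, 0.05, r2_try, c_try) - p_meas) ** 2)
    if err < best_err:
        best_params, best_err = (c_try, r2_try), err
print("estimated (C, R2):", best_params)
```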
Time and temperature dependent modulus of pyrrone and polyimide moldings
NASA Technical Reports Server (NTRS)
Lander, L. L.
1972-01-01
A method is presented by which the modulus obtained from a stress relaxation test can be used to estimate the modulus which would be obtained from a sonic vibration test. The method was applied to stress relaxation, sonic vibration, and high speed stress-strain data which was obtained on a flexible epoxy. The modulus as measured by the three test methods was identical for identical test times, and a change of test temperature was equivalent to a shift in the logarithmic time scale. An estimate was then made of the dynamic modulus of moldings of two Pyrrones and two polyimides, using stress relaxation data and the method of analysis which was developed for the epoxy. Over the common temperature range (350 to 500 K) in which data from both types of tests were available, the estimated dynamic modulus value differed by only a few percent from the measured value. As a result, it is concluded that, over the 500 to 700 K temperature range, the estimated dynamic modulus values are accurate.
Stochastic estimation of transmissivity fields conditioned to flow connectivity data
NASA Astrophysics Data System (ADS)
Freixas, Genis; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2017-04-01
Most methods for hydraulic parameter interpretation rely on a number of simplifications regarding the homogeneity of the underlying porous media. This way, the actual heterogeneity of any natural parameter, such as transmissivity, is transferred to the estimates in a way that depends heavily on the interpretation method used. An example is a pumping test, in most cases interpreted by means of the Cooper-Jacob method, which implicitly assumes a homogeneous, isotropic, confined aquifer. It has been shown that the estimates obtained from this method when applied to a real site are not local values, but still have a physical meaning: the estimated transmissivity is equal to the effective transmissivity characteristic of the regional scale, while the log-ratio of the estimated storage coefficient to the actual value (assumed constant) is an indicator of flow connectivity, representative of the scale given by the distance between the pumping and observation wells. In this work we propose a methodology to use this connectivity indicator together with actual measurements of the log-transmissivity at selected points to obtain a map of the best local transmissivity estimates using cokriging. Since the interpolation involves two variables measured at different support scales, a critical point is the estimation of the covariance and cross-covariance matrices, involving some quadratures that are obtained using a simplified approach. The method was applied to a synthetic field displaying statistical anisotropy, showing that the use of connectivity indicators mixed with the local values provides a better representation of the local value map, in particular regarding an enhanced representation of the continuity of structures corresponding to either high or low values.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
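A small demonstration of how initialization strategies can be compared in practice, using scikit-learn's GaussianMixture with two of its built-in initializers and multiple restarts; the data set is synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two overlapping Gaussian components in 2-D.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(2.5, 1.0, (200, 2))])

# Compare two common initialization strategies over several random starts;
# the best (highest-likelihood) EM solution is retained for each strategy.
for init in ("kmeans", "random"):
    gm = GaussianMixture(n_components=2, init_params=init,
                         n_init=10, random_state=0).fit(X)
    print(init, "best total log-likelihood:", round(gm.score(X) * len(X), 1))
```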
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
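A heavily simplified sketch of the stochastic half of this comparison: a random-walk Metropolis sampler over two Cole-Cole parameters (chargeability m and time constant τ), with the other two held fixed and flat priors implied. A real SIP inversion would sample all parameters with explicit priors:

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Cole-Cole complex resistivity model."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

omega = 2 * np.pi * np.logspace(-2, 3, 30)
rng = np.random.default_rng(4)
noise = rng.normal(0, 0.2, omega.size) + 1j * rng.normal(0, 0.2, omega.size)
data = cole_cole(omega, 100.0, 0.2, 0.01, 0.5) + noise  # synthetic SIP data

def log_like(m, tau):
    resid = data - cole_cole(omega, 100.0, m, tau, 0.5)  # rho0, c fixed
    return -0.5 * np.sum(np.abs(resid) ** 2) / 0.2**2

# Random-walk Metropolis over (m, log tau), started from arbitrary values.
m, ltau = 0.5, np.log(0.1)
ll = log_like(m, np.exp(ltau))
chain = []
for _ in range(5000):
    m2, ltau2 = m + rng.normal(0, 0.02), ltau + rng.normal(0, 0.1)
    if 0 < m2 < 1:                        # keep chargeability physical
        ll2 = log_like(m2, np.exp(ltau2))
        if np.log(rng.uniform()) < ll2 - ll:
            m, ltau, ll = m2, ltau2, ll2  # accept proposal
    chain.append((m, np.exp(ltau)))
burned = np.array(chain[1000:])
print("posterior means (m, tau):", burned.mean(axis=0))
```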
NASA Astrophysics Data System (ADS)
Sukono; Lesmana, E.; Susanti, D.; Napitupulu, H.; Hidayat, Y.
2017-11-01
Value-at-Risk has become a standard measure that financial institutions must compute, both for internal purposes and for regulatory compliance. In this paper, the estimation of the Value-at-Risk of several stocks using an econometric-model approach is analyzed. We assume that the stock returns follow a time series model, and we use ARMA models to estimate the mean value and FIGARCH models to estimate the variance. The mean and variance estimators are then used to estimate the Value-at-Risk. The results of the analysis show that for the five stocks PRUF, BBRI, MPPA, BMRI, and INDF, the Value-at-Risk values obtained are 0.01791, 0.06037, 0.02550, 0.06030, and 0.02585, respectively. Since Value-at-Risk represents the maximum risk of each stock at a 95% level of significance, it can be taken into consideration in determining the investment policy on stocks.
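A sketch of this general workflow with the `arch` package, using an AR(1) mean and a GARCH(1,1) variance for brevity (the paper uses ARMA-FIGARCH; `arch` also offers vol="FIGARCH"); the return series is simulated:

```python
import numpy as np
from arch import arch_model
from scipy.stats import norm

rng = np.random.default_rng(5)
returns = rng.standard_t(df=6, size=1000)  # stand-in daily returns, %

# AR(1) mean with GARCH(1,1) conditional variance.
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")

# One-step-ahead mean and variance forecasts -> parametric 95% VaR.
f = res.forecast(horizon=1)
mu = f.mean.values[-1, 0]
sigma = np.sqrt(f.variance.values[-1, 0])
var_95 = -(mu + sigma * norm.ppf(0.05))
print(f"one-day 95% VaR: {var_95:.4f}")
```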
Real-Valued Covariance Vector Sparsity-Inducing DOA Estimation for Monostatic MIMO Radar
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Jing
2015-01-01
In this paper, a real-valued covariance vector sparsity-inducing method for direction of arrival (DOA) estimation is proposed in monostatic multiple-input multiple-output (MIMO) radar. Exploiting the special configuration of monostatic MIMO radar, low-dimensional real-valued received data can be obtained by using the reduced-dimensional transformation and unitary transformation technique. Then, based on the Khatri–Rao product, a real-valued sparse representation framework of the covariance vector is formulated to estimate DOA. Compared to the existing sparsity-inducing DOA estimation methods, the proposed method provides better angle estimation performance and lower computational complexity. Simulation results verify the effectiveness and advantage of the proposed method. PMID:26569241
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for predicting oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without errors due to exhaustive calculations. PMID:19641642
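The core of the replicate-estimate idea is the indirect Fick equation evaluated over several predicted oxygen-consumption values; the oxygen contents and VO2 predictions below are hypothetical stand-ins for the paper's five prediction equations:

```python
import numpy as np

# Hypothetical oxygen contents (mL O2 per L blood) for one patient.
ca_o2, cv_o2 = 180.0, 140.0    # systemic arterial / mixed venous
cpv_o2, cpa_o2 = 190.0, 160.0  # pulmonary venous / pulmonary arterial

# Several predicted VO2 values (mL/min), one per prediction model.
vo2_predictions = np.array([110.0, 120.0, 128.0, 135.0, 150.0])

# Indirect Fick: flow = VO2 / (arteriovenous O2 content difference).
qs = vo2_predictions / (ca_o2 - cv_o2)     # systemic flow, L/min
qp = vo2_predictions / (cpv_o2 - cpa_o2)   # pulmonary flow, L/min
print("Qp range: %.2f-%.2f L/min" % (qp.min(), qp.max()))
print("Qp/Qs range: %.2f-%.2f" % ((qp / qs).min(), (qp / qs).max()))
```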
do AMARAL, Regiane Cristina; SCABAR, Luiz Felipe; SLATER, Betzabeth; FRAZÃO, Paulo
2014-01-01
Objective To compare estimates of food behavior related to oral health obtained through a self-report measure and 24 hour dietary recalls (R24h). Method We applied three R24h and one self-report measure in 87 adolescents. The estimates for eleven food items were compared at individual and group levels. Results No significant differences in mean values were found for ice cream, vegetables and biscuits without filling. For the remaining items, the values reported by the adolescents were higher than the values estimated by R24h. The percentage of adolescents who reported intake frequency of 1 or more times/day was higher than the value obtained through R24h for all food items except soft drinks. The highest values of crude agreement between the instruments, individually, were found in the biscuits without filling (75.9%) and ice cream (72.4%). Conclusion The results suggest that adolescents tend to report a degree of exposure to the food items larger than what they actually experience in their daily lives. PMID:25466475
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
E.G. McPherson
2007-01-01
Benefit-based tree valuation provides alternative estimates of the fair and reasonable value of trees while illustrating the relative contribution of different benefit types. This study compared estimates of tree value obtained using cost- and benefit-based approaches. The cost-based approach used the Council of Landscape and Tree Appraisers trunk formula method, and...
Alternative Strategies for Pricing Home Work Time.
ERIC Educational Resources Information Center
Zick, Cathleen D.; Bryant, W. Keith
1983-01-01
Discusses techniques for measuring the value of home work time. Estimates obtained using the reservation wage technique are contrasted with market alternative estimates derived with the same data set. Findings suggest that the market alternative cost method understates the true value of a woman's home time to the household. (JOW)
Estimation of reference intervals from small samples: an example using canine plasma creatinine.
Geffré, A; Braun, J P; Trumel, C; Concordet, D
2009-12-01
According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
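One of the compared small-sample methods, mean ± 2 SD on Box-Cox-transformed values back-transformed to the original scale, can be sketched as follows; the 27 "reference" values are simulated:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(6)
# Hypothetical skewed plasma creatinine values for 27 reference dogs.
x = rng.lognormal(mean=4.4, sigma=0.25, size=27)

# Parametric limits (mean +/- 2 SD) after Box-Cox transformation,
# back-transformed to the original scale.
y, lam = stats.boxcox(x)
m, s = y.mean(), y.std(ddof=1)
lo, hi = inv_boxcox(np.array([m - 2 * s, m + 2 * s]), lam)
print(f"estimated reference interval: {lo:.1f}-{hi:.1f}")
```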
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
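A small sketch of the sensitivity the report describes: the closed-form normal-normal stress-strength reliability versus a Monte Carlo estimate when the strength distribution is in fact mildly non-normal (here a Weibull with the same mean). All parameter values are illustrative:

```python
import numpy as np
from math import gamma
from scipy.stats import norm

mu_s, sd_s = 60.0, 5.0    # stress (assumed normal)
mu_S, sd_S = 100.0, 8.0   # strength
# Closed form for normal stress and strength: R = P(strength > stress).
r_normal = norm.cdf((mu_S - mu_s) / np.hypot(sd_S, sd_s))

# Monte Carlo with a Weibull strength of the same mean (tail differs).
rng = np.random.default_rng(7)
n = 1_000_000
stress = rng.normal(mu_s, sd_s, n)
k = 15.0                           # Weibull shape: mild left tail
scale = mu_S / gamma(1 + 1 / k)    # match the mean only
strength = scale * rng.weibull(k, n)
print("R assuming normal strength:", round(r_normal, 6))
print("R with Weibull strength:   ", (strength > stress).mean())
```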
Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components
NASA Technical Reports Server (NTRS)
Reyes, A. L.; Campbell, J. E.
1976-01-01
The heat resistance of Bacillus subtilis var. niger was measured from 85 to 125 C at moisture levels from % RH ≤ 0.001 to 100. Curves are presented which characterize thermal destruction using thermal death times defined as F values at a given combination of the three moisture and temperature conditions. The times required at 100 C for reductions of 99.99% of the initial population were estimated for the three moisture conditions. The linear model (from which estimates of D are obtained) was satisfactory for estimating thermal death times (% RH ≤ 0.07) in the plate count range. Estimates based on observed thermal death times and D values for % RH = 100 diverged, so that D values generally gave a more conservative estimate over the temperature range 90 to 125 C. Estimates of Z_F and Z_L ranged from 32.1 to 58.3 C for % RH ≤ 0.07 and 100. A Z_D = 30.0 was obtained for data observed at % RH ≤ 0.07.
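How a D value (time per decade of kill) and a thermal death time follow from survivor counts under the standard log-linear model; the counts below are hypothetical:

```python
import numpy as np

# Hypothetical survivor counts of B. subtilis spores held at 100 C.
t = np.array([0, 5, 10, 15, 20])                  # minutes
n = np.array([1e6, 2.4e5, 5.8e4, 1.3e4, 3.1e3])   # viable count

# D value: minutes per 1-log reduction, from the slope of log10(N) vs t.
slope, _ = np.polyfit(t, np.log10(n), 1)
D = -1.0 / slope
print(f"D = {D:.2f} min; time for a 99.99% (4-log) reduction ~ {4 * D:.1f} min")
```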
A method for calibrating pH meters using standard solutions with low electrical conductivity
NASA Astrophysics Data System (ADS)
Rodionov, A. K.
2011-07-01
A procedure for obtaining standard solutions with low electrical conductivity that reproduce pH values both in acid and alkali regions is proposed. Estimates of the maximal possible error of reproducing the pH values of these solutions are obtained.
Inverse gas chromatographic determination of solubility parameters of excipients.
Adamska, Katarzyna; Voelkel, Adam
2005-11-04
The principal aim of this work was the application of inverse gas chromatography (IGC) to the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (χ1,2∞) and then the solubility parameter (δ2), the corrected solubility parameter (δT) and its components (δd, δp, δh), using different procedures. The influence of different values of the test solutes' solubility parameter (δ1) on the calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, following Guillet and co-workers' procedure, are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter value of the test solutes influences, though not significantly, the values of the solubility parameter of the excipients.
A straightforward frequency-estimation technique for GPS carrier-phase time transfer.
Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen
2006-09-01
Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 × 10^-16.
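The essence of the new technique, a per-batch frequency fit followed by averaging, in a few lines; batch length, boundary jumps, and noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
# Three 1-day batches of phase estimates (ns vs hours), each with an
# arbitrary day-boundary jump but the same underlying frequency.
freq_ns_per_h = 0.036                 # ~1e-14 fractional frequency
batches = []
for offset in (0.0, 1.7, -0.9):       # boundary discontinuities
    t = np.arange(0, 24, 0.5)
    x = offset + freq_ns_per_h * t + rng.normal(0, 0.05, t.size)
    batches.append((t, x))

# New technique: fit a frequency (slope) per batch, then average;
# no need to estimate and remove the discontinuities first.
freqs = [np.polyfit(t, x, 1)[0] for t, x in batches]
print("mean frequency: %.4f ns/h (true %.4f)" % (np.mean(freqs), freq_ns_per_h))
```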
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.
Sonawane, A U; Shirva, V K; Pradhan, A S
2010-02-01
Skin entrance doses (SEDs) were estimated by carrying out measurements of air kerma from 101 X-ray machines installed in 45 major and selected hospitals in the country, using a silicon detector-based dose Test-O-Meter. A total of 1209 air kerma measurements of diagnostic projections for adults were analysed for seven types of common diagnostic examinations, viz. chest (AP, PA, LAT), lumbar spine (AP, LAT), thoracic spine (AP, LAT), abdomen (AP), pelvis (AP), hip joints (AP) and skull (PA, LAT), for different film-screen combinations. The estimated diagnostic reference levels (DRLs) (third-quartile values of the SEDs) were compared with guidance levels/DRLs published in IAEA-BSS Safety Series No. 115 (1996), by the HPA (NRPB) (2000 and 2005), UK, by the CRCPD/CDRH (USA) and the European Commission, and with other national values. The DRLs obtained in this study are comparable with the values published by the IAEA-BSS-115 (1996), HPA (NRPB) (2000 and 2005) UK, EC and CRCPD/CDRH, USA, including values obtained in previous studies in India.
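The DRL computation itself is a third-quartile calculation over the measured dose distribution for each examination type; a sketch with simulated skin entrance doses:

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical skin entrance doses (mGy) for one projection across rooms.
sed = rng.lognormal(mean=0.3, sigma=0.5, size=200)

# The DRL is taken as the third quartile (75th percentile) of the doses.
drl = np.percentile(sed, 75)
print(f"illustrative DRL for this projection: {drl:.2f} mGy")
```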
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
Estimating the Value of Life, Injury, and Travel Time Saved Using a Stated Preference Framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2016-06-01
The incidence of fatality over the period 2010-2014 from automobile accidents in North Cyprus is 2.75 times greater than the average for the EU. With the prospect of North Cyprus entering the EU, many investments will need to be undertaken to improve road safety in order to reach EU benchmarks. The objective of this study is to provide local estimates of the value of a statistical life and injury along with the value of time savings. These are among the parameter values needed for the evaluation of the change in the expected incidence of automotive accidents and time savings brought about by such projects. In this study we conducted a stated choice experiment to identify the preferences and tradeoffs of automobile drivers in North Cyprus for improved travel times, travel costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers choose. These estimates were used to assess the individuals' willingness to pay (WTP) to avoid fatalities and injuries and to save travel time. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of injury (VI) prevented, and the value per hour of travel time saved. The estimates for the VSL range from €315,293 to €1,117,856 and the estimates of VI from € 5,603 to € 28,186. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. Copyright © 2016 Elsevier Ltd. All rights reserved.
A new method of differential structural analysis of gamma-family basic parameters
NASA Technical Reports Server (NTRS)
Melkumian, L. G.; Ter-Antonian, S. V.; Smorodin, Y. A.
1985-01-01
The maximum likelihood method is used for the first time to reconstruct the parameters of electron-photon cascades registered on X-ray films. The method permits a structural analysis of the darkening spots of a gamma-quanta family independently of the degree to which the gamma quanta overlap, and yields the maximum admissible accuracy in estimating the energies of the gamma quanta composing a family. The parameter estimation accuracy depends only weakly on the values of the parameters themselves and exceeds, by an order of magnitude, the accuracies obtained by integral methods.
Debris flow risk mapping on medium scale and estimation of prospective economic losses
NASA Astrophysics Data System (ADS)
Blahut, Jan; Sterlacchini, Simone
2010-05-01
Delimitation of potential zones affected by debris flow hazard, mapping of areas at risk, and estimation of future economic damage provide important information for spatial planners and local administrators in all countries endangered by this type of phenomenon. This study presents a medium-scale (1:25 000 - 1:50 000) analysis applied in the Consortium of Mountain Municipalities of Valtellina di Tirano (Italian Alps, Lombardy Region). In this area, a debris flow hazard map was coupled with information about the elements at risk to obtain monetary values of prospective damage. Two available hazard maps were obtained from GIS medium-scale modelling. Probability estimates of debris flow occurrence were calculated using existing susceptibility maps and two sets of aerial images. Values were assigned to the elements at risk according to the official information on housing costs and land value from the Territorial Agency of the Lombardy Region. In the first risk map, vulnerability values were assumed to be 1. The second risk map uses three classes of vulnerability values qualitatively estimated according to the possible debris flow propagation. Risk curves summarizing the possible economic losses were calculated. Finally, these maps of economic risk were compared with maps derived from a qualitative evaluation of the values of the elements at risk.
Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm
NASA Astrophysics Data System (ADS)
Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd
2018-04-01
Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of soil's resistance to electrical flow. For a particular site, usually only limited N value data are available. In contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity value. Yet, no existing method is able to interpret resistivity data for estimation of N value. Thus, the aim is to develop a method for estimating N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination R² and the mean absolute error MAE. Analysis of the results found that this method can estimate N value (best R² = 0.85 and best MAE = 0.54), given that the constraint Δl̄_ref is satisfied. The results suggest that the ANN-PSO method can be used to estimate N value with good accuracy.
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal area based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting data by logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
Damping factor estimation using spin wave attenuation in permalloy film
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manago, Takashi, E-mail: manago@fukuoka-u.ac.jp; Yamanoi, Kazuto; Kasai, Shinya
2015-05-07
Damping factor of a Permalloy (Py) thin film is estimated by using magnetostatic spin wave propagation. The attenuation lengths are obtained from the dependence of the transmission intensity on the antenna distance, and decrease with increasing magnetic field. The relationship between the attenuation length, damping factor, and external magnetic field is derived theoretically, and the damping factor was determined to be 0.0063 by fitting the magnetic field dependence of the attenuation length using the derived equation. The obtained value is in good agreement with the general value for Py. Thus, this method of estimating the damping factor from spin wave attenuation can be a useful tool for ferromagnetic thin films.
Martínez-Martínez, Víctor; Baladrón, Carlos; Gomez-Gil, Jaime; Ruiz-Ruiz, Gonzalo; Navas-Gracia, Luis M; Aguiar, Javier M; Carro, Belén
2012-10-17
This paper presents a system based on an Artificial Neural Network (ANN) for estimating and predicting environmental variables related to tobacco drying processes. This system has been validated with temperature and relative humidity data obtained from a real tobacco dryer with a Wireless Sensor Network (WSN). A fitting ANN was used to estimate temperature and relative humidity in different locations inside the tobacco dryer and to predict them with different time horizons. An error under 2% can be achieved when estimating temperature as a function of temperature and relative humidity in other locations. Moreover, an error around 1.5 times lower than that obtained with an interpolation method can be achieved when predicting the temperature inside the tobacco mass as a function of its present and past values with time horizons over 150 minutes. These results show that the tobacco drying process can be improved taking into account the predicted future value of the monitored variables and the estimated actual value of other variables using a fitting ANN as proposed.
Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.
Borah, Deva K; Voelz, David G
2007-08-10
The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.
Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model.
Tang, Jiechen; Zhou, Chao; Yuan, Xinyu; Sriboonchitta, Songsak
2015-01-01
This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.
Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model
Tang, Jiechen; Zhou, Chao; Yuan, Xinyu; Sriboonchitta, Songsak
2015-01-01
This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels. PMID:26351652
Saatchi, Mahdi; McClure, Mathew C; McKay, Stephanie D; Rolf, Megan M; Kim, JaeWoo; Decker, Jared E; Taxis, Tasia M; Chapple, Richard H; Ramey, Holly R; Northcutt, Sally L; Bauck, Stewart; Woodward, Brent; Dekkers, Jack C M; Fernando, Rohan L; Schnabel, Robert D; Garrick, Dorian J; Taylor, Jeremy F
2011-11-28
Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and quantify corresponding accuracies of prediction. Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals within the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating in younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. These results suggest that genomic estimates of genetic merit can be produced in beef cattle at a young age but the recurrent inclusion of genotyped sires in retraining analyses will be necessary to routinely produce for the industry the direct genomic values with the highest accuracy.
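A generic SNP-BLUP-style sketch of deriving marker effects and direct genomic values by ridge regression; the genotypes and "deregressed EBV" response are simulated, and the shrinkage parameter is arbitrary (the paper's actual analysis used weighted deregressed EBVs and cross-validation by K-means clusters):

```python
import numpy as np

rng = np.random.default_rng(10)
n_animals, n_snps = 500, 2000
X = rng.binomial(2, 0.3, (n_animals, n_snps)).astype(float)  # SNP genotypes 0/1/2
beta_true = rng.normal(0, 0.05, n_snps)
y = X @ beta_true + rng.normal(0, 1.0, n_animals)  # stand-in deregressed EBVs

# Ridge-regression marker effects; lambda plays the role of the ratio of
# residual variance to per-marker genetic variance.
lam = 1000.0
Xc = X - X.mean(axis=0)                            # center genotypes
yc = y - y.mean()
beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_snps), Xc.T @ yc)

# Direct genomic values and a crude in-sample accuracy check.
dgv = Xc @ beta
print("correlation(DGV, response):", round(np.corrcoef(dgv, yc)[0, 1], 2))
```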
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
NASA Astrophysics Data System (ADS)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models the zero outcomes of the dependent variable, while the second part, a truncated negative binomial model, models the positive (non-negative integer) outcomes. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE), and to derive the corresponding test statistic for the censored hurdle negative binomial model. The model is applied to the number of neonatorum tetanus cases in Indonesia, count data that contain zero values in some observations and varying positive values in others. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
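For readers unfamiliar with the two-part structure, the following minimal sketch writes down an uncensored hurdle negative binomial log-likelihood and maximizes it with scipy; the right-censoring terms studied in the paper are omitted, and the covariates and toy data are hypothetical.

```python
# Sketch of an (uncensored) hurdle negative binomial log-likelihood;
# the paper's model additionally adds right-censoring contributions.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def negloglik(params, X, y):
    k = X.shape[1]
    b_zero, b_count = params[:k], params[k:2 * k]
    alpha = np.exp(params[-1])             # overdispersion, kept positive
    pi = expit(X @ b_zero)                 # P(y > 0), logit hurdle part
    mu = np.exp(X @ b_count)               # truncated-NB mean parameter
    r = 1.0 / alpha
    # NB2 log-pmf and its value at zero (needed for truncation).
    lp = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    lp0 = r * np.log(r / (r + mu))
    zero = y == 0
    ll = np.sum(np.log1p(-pi[zero]))       # hurdle: P(y = 0) = 1 - pi
    ll += np.sum(np.log(pi[~zero]) + lp[~zero]
                 - np.log1p(-np.exp(lp0[~zero])))
    return -ll

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(1.2, size=500) * rng.binomial(1, 0.6, size=500)  # toy counts
res = minimize(negloglik, np.zeros(5), args=(X, y), method="BFGS")
print(res.x)
```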
Missing-value estimation using linear and non-linear regression with Bayesian gene selection.
Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R
2003-11-22
Data from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. Owing to various reasons, there are frequently missing values. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule. We propose Bayesian variable selection to obtain genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods based on the normalized root-mean-square error. The appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab).
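A minimal sketch of the estimation rule (not the Bayesian selection step) is shown below: given a set of already-selected predictor genes, the missing entry is predicted by linear least squares solved through a QR factorization, as the abstract suggests. The matrix sizes and gene indices are made up for illustration.

```python
# Sketch of the linear-regression estimation rule with QR-based least
# squares, assuming predictor genes were already chosen by the Bayesian
# selection step; indices and data are hypothetical.
import numpy as np

def impute_linear(expr, target_gene, predictor_genes, missing_cond):
    """Predict the missing entry expr[target_gene, missing_cond] from the
    complete conditions using the selected predictor genes."""
    obs = [c for c in range(expr.shape[1]) if c != missing_cond]
    A = np.column_stack([np.ones(len(obs)),
                         expr[np.ix_(predictor_genes, obs)].T])
    b = expr[target_gene, obs]
    Q, R = np.linalg.qr(A)                 # QR factorization for stable LS
    beta = np.linalg.solve(R, Q.T @ b)     # solve R beta = Q^T b
    x_new = np.concatenate(([1.0], expr[predictor_genes, missing_cond]))
    return x_new @ beta

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 12))          # genes x conditions toy matrix
print(impute_linear(expr, target_gene=3, predictor_genes=[10, 25, 40],
                    missing_cond=5))
```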
Correlation dimension and phase space contraction via extreme value theory
NASA Astrophysics Data System (ADS)
Faranda, Davide; Vaienti, Sandro
2018-04-01
We show how to obtain theoretical and numerical estimates of the correlation dimension and phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension, and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which in dimensions 1 and 2 is closely related to the positive Lyapunov exponent, and in higher dimensions is related to the metric entropy. We call this quantity the Dynamical Extremal Index. Numerical estimates are straightforward to obtain, as they imply just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Hénon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
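A numerical sketch of point (i) for a map with a known answer: for the fully chaotic logistic map, exceedances of the observable -log(distance to a reference point) are approximately exponential, and the inverse of the fitted scale estimates the correlation dimension (here expected to be close to 1). The threshold and trajectory length are illustrative, and no care is taken to exclude temporally close recurrences.

```python
# EVT estimate of the correlation dimension D2 for the logistic map:
# exceedances of -log(distance) are asymptotically exponential with
# scale 1/D2. Parameters are illustrative.
import numpy as np

def trajectory(n, x0=0.3):
    x = np.empty(n); x[0] = x0
    for i in range(1, n):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])   # fully chaotic regime
    return x

x = trajectory(200_000)
ref = x[1000]                                  # reference point on attractor
d = np.abs(np.delete(x, 1000) - ref)           # drop the self-match
g = -np.log(d + 1e-300)                        # EVT observable
u = np.quantile(g, 0.99)                       # high threshold
exc = g[g > u] - u
scale = exc.mean()                             # MLE of exponential scale
print(f"D2 estimate: {1.0 / scale:.3f}")       # expect roughly 1 here
```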
Estimating the value of life and injury for pedestrians using a stated preference framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2017-09-01
The incidence of pedestrian death over the period 2010 to 2014 per 100,000 in North Cyprus is about 2.5 times that of the EU, with 10.5 times more pedestrian road injuries than deaths. With the prospect of North Cyprus entering the EU, many investments need to be undertaken to improve road safety in order to reach EU benchmarks. We conducted a stated choice experiment to identify the preferences and tradeoffs of pedestrians in North Cyprus for improved walking times, pedestrian costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers chose. These were used to estimate individuals' willingness to pay (WTP) to save walking time and to avoid pedestrian fatalities and injuries. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of an injury (VI) prevented, and the value per hour of walking time saved. The estimate of the VSL was €699,434 and the estimate of the VI was €20,077. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. The estimated value of time to pedestrians is €7.20 per person-hour. The ratio of deaths to injuries is much higher for pedestrians than for road accidents generally, and this is consistent with the higher estimated WTP to avoid a pedestrian accident than to avoid a car accident. The value of time of €7.20 is quite high relative to the wages earned. The findings provide a set of information on the value of risk reduction (VRR) for fatalities and injuries and on the value of pedestrian time that is critical for conducting ex ante appraisals of investments to improve pedestrian safety. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna
2014-09-15
Purpose: The authors' objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with (99m)Tc-hydrazinonicotinamide-Tyr(3)-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate (99m)Tc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for (131)I, (177)Lu, and (90)Y assuming the same biological half-lives as the (99m)Tc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for (99m)Tc, (131)I, (177)Lu, and (90)Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose-volume histograms, with the minimum dose covering 90% of the volume (D90) agreeing within ±3% on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement, since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.
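The voxel S value technique referred to above is, at its core, a 3D convolution of the time-integrated activity map with a radionuclide-specific dose kernel. The sketch below shows that structure only; the kernel values are placeholders, not published S values.

```python
# Voxel S value structure: dose = TIA map (x) voxel S kernel.
# The kernel below is a made-up, isotropically decaying placeholder.
import numpy as np
from scipy.signal import fftconvolve

tia = np.zeros((64, 64, 64))               # time-integrated activity, MBq*s
tia[30:34, 30:34, 30:34] = 5.0             # toy "tumor" region

g = np.indices((9, 9, 9)) - 4              # voxel offsets of the kernel
r2 = (g ** 2).sum(axis=0).astype(float)
kernel = 1e-6 / (1.0 + r2)                 # Gy per MBq*s, placeholder values

dose = fftconvolve(tia, kernel, mode="same")
print(f"max voxel dose: {dose.max():.3e} Gy (toy units)")
```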
Objectivity and validity of EMG method in estimating anaerobic threshold.
Kang, S-K; Kim, J; Kwon, M; Eom, H
2014-08-01
The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option in estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.
Energy potential of the modified excess sludge
NASA Astrophysics Data System (ADS)
Zawieja, Iwona
2017-11-01
On the basis of the SCOD value of excess sludge it is possible to estimate the amount of energy potentially obtainable from the methane fermentation process. Based on a literature review, it has been estimated that 3.48 kWh of energy can be obtained from 1 kg of SCOD. Taking into account the above methane-to-energy ratio (i.e., 10 kWh/1 Nm3 CH4), it is possible to determine the volume of methane obtainable from the tested sludge. Determination of the potential energy of sludge is necessary for the use of biogas as a fuel for cogeneration power generators and for ensuring the stability of this type of system. Therefore, the aim of the study was to determine the energy potential of excess sludge subjected to thermal and chemical disintegration. In the case of thermal disintegration, the test was conducted at a low temperature of 80°C. The reagent used for the chemical modification was peracetic acid, which has strong oxidizing properties in an aqueous medium. The time of chemical modification was 6 hours, and the applied dose of the reagent was 1.0 ml CH3COOOH/L of sludge. Subjecting the sludge to disintegration by the tested methods increased the SCOD value of the modified sludge, indicating improved biodegradability along with a concomitant increase in energy potential. The experimental biogas production obtained from the disintegrated sludge confirmed that the potential intensity of its production can be estimated. In the case of chemical disintegration, an SCOD value of 2576 mg O2/L was obtained for a dose of 1.0 ml CH3COOOH/L; for this dose the pH value was 6.85. In the case of thermal disintegration, the maximum SCOD value was 2246 mg O2/L, obtained at 80°C with a preparation time of 6 h. It was estimated that, for the selected parameters of thermal and chemical disintegration, the potential energy for a model digester with an active volume of 5 L was 0.193 and 0.118 kWh, respectively.
ERIC Educational Resources Information Center
Rutherford, W. J.; Diemer, Gary A.; Scott, Eric D.
2011-01-01
Bioelectrical impedance analysis (BIA) is a widely used method for estimating body composition, yet issues concerning its validity persist in the literature. The purpose of this study was to validate percentage of body fat (BF) values estimated from BIA and skinfold (SF) measurements against those obtained from hydrodensitometry (HD). Percent BF values measured…
Amann, Rupert P; Chapman, Phillip L
2009-01-01
We retrospectively mined and modeled data to answer 3 questions. 1) Relative to an estimate based on approximately 20 semen samples, how imprecise is an estimate of an individual's total sperm per ejaculate (TSperm) based on 1 sample? 2) What is the impact of abstinence interval on TSperm and TSperm/h? 3) How many samples are needed to provide a meaningful estimate of an individual's mean TSperm or TSperm/h? Data were for 18-20 consecutive masturbation samples from each of 48 semen donors. Modeling exploited the gamma distribution of values for TSperm and a unique approach to project to future samples. Answers: 1) Within-individual coefficients of variation were similar for TSperm and TSperm/h and ranged from 17% to 51%, averaging approximately 34%. TSperm or TSperm/h in any individual sample from a given donor was between -20% and +20% of the mean value in 48% of the 18-20 samples per individual. 2) For a majority of individuals, TSperm increased in a nearly linear manner through approximately 72 hours of abstinence. TSperm and TSperm/h after 18-36 hours' abstinence are high. To obtain meaningful values for diagnostic purposes and maximize distinction of individuals with relatively low or high sperm production, the requested abstinence should be 42-54 hours, with an upper limit of 64 hours. For individuals producing few sperm, 7 days or more of abstinence might be appropriate to obtain sperm for insemination. 3) At least 3 samples from a hypothetical future subject are recommended for most applications. Assuming 60 hours' abstinence, 80% confidence limits for TSperm/h for 1, 3, or 6 samples would be 70%-163%, 80%-130%, or 85%-120% of the mean for observed values. In only approximately 50% of cases would TSperm/h for a single sample be within -16% and +30% of the true mean value for that subject. Pooling values for TSperm in samples obtained after 18-36 or 72-168 hours' abstinence with values for TSperm obtained after 42-64 hours is inappropriate. Reliance on TSperm from a single sample per subject is unwise.
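The gamma-based projection can be reproduced in outline as follows: if single-sample values around a donor's mean follow a gamma distribution with a given CV, the mean of n future samples is again gamma distributed, and percentile limits follow directly. The sketch uses the 34% average CV quoted above but will not reproduce the paper's exact (asymmetric) limits, which rest on the full donor-specific model.

```python
# Gamma-model sketch of 80% confidence limits for the mean of n future
# samples, for a unit-mean distribution with CV = 34% (illustrative only).
import numpy as np
from scipy import stats

cv = 0.34
k = 1.0 / cv ** 2                          # gamma shape for unit mean
for n in (1, 3, 6):
    # Mean of n iid Gamma(k, theta) values is Gamma(n*k, theta/n).
    lo, hi = stats.gamma.ppf([0.10, 0.90], a=n * k, scale=1.0 / (n * k))
    print(f"n={n}: 80% limits = {lo*100:.0f}%-{hi*100:.0f}% of true mean")
```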
An evaluation of a bioelectrical impedance analyser for the estimation of body fat content.
Maughan, R J
1993-01-01
Measurement of body composition is an important part of any assessment of health or fitness. Hydrostatic weighing is generally accepted as the most reliable method for the measurement of body fat content, but is inconvenient. Electrical impedance analysers have recently been proposed as an alternative to the measurement of skinfold thickness. Both these latter methods are convenient, but give values based on estimates obtained from population studies. This study compared values of body fat content obtained by hydrostatic weighing, skinfold thickness measurement and electrical impedance on 50 (28 women, 22 men) healthy volunteers. Mean(s.e.m.) values obtained by the three methods were: hydrostatic weighing, 20.5(1.2)%; skinfold thickness, 21.8(1.0)%; impedance, 20.8(0.9)%. The results indicate that the correlation between the skinfold method and hydrostatic weighing (0.931) is somewhat higher than that between the impedance method and hydrostatic weighing (0.830). This is, perhaps, not surprising given the fact that the impedance method is based on an estimate of total body water which is then used to calculate body fat content. The skinfold method gives an estimate of body density, and the assumptions involved in the conversion from body density to body fat content are the same for both methods. PMID:8457817
Bonetto, Rita Dominga; Ladaga, Juan Luis; Ponz, Ezequiel
2006-04-01
Scanning electron microscopy (SEM) is widely used in surface studies, and continuous efforts are carried out in the search for estimators of different surface characteristics. By using the variogram, we developed two such estimators to characterize surface roughness from the SEM image texture. One estimator is related to the crossover between the fractal region at low scale and the periodic region at high scale, whereas the other characterizes the periodic region. In this work, a full study of these estimators and of the fractal dimension in two dimensions (2D) and three dimensions (3D) was carried out for emery papers. We show that the fractal dimension obtained with only one image is good enough to characterize the surface roughness, because its behavior is similar to that obtained with 3D height data. We also show that the estimator that indicates the crossover is related to the minimum cell size in 2D and to the average particle size in 3D. The other estimator has different values for the three studied emery papers in 2D but does not have a clear meaning there, and its values are similar for the studied samples in 3D; nevertheless, it indicates the tendency to form compound cells. The fractal dimension values from the variogram and from an area-versus-step log-log graph were studied with 3D data. The two methods yield different values, corresponding to different information from the samples.
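To make the variogram route concrete, the sketch below computes a one-directional image variogram and reads a fractal dimension from the slope of its low-scale log-log region, via 2*gamma(s) ~ s^(2H) and D = 3 - H; the synthetic Brownian-like texture (H = 0.5, D = 2.5) is only a stand-in for SEM images of emery paper.

```python
# 1D-averaged image variogram and fractal dimension from its log-log slope.
import numpy as np

def variogram(img, max_step):
    steps = np.arange(1, max_step + 1)
    v = np.array([np.mean((img[:, s:] - img[:, :-s]) ** 2) / 2.0
                  for s in steps])
    return steps, v

rng = np.random.default_rng(0)
img = np.cumsum(rng.normal(size=(256, 256)), axis=1)   # Brownian-like rows
steps, v = variogram(img, max_step=16)
H = 0.5 * np.polyfit(np.log(steps), np.log(v), 1)[0]   # Hurst exponent
print(f"H = {H:.2f}, fractal dimension D = {3 - H:.2f}")  # expect ~2.5
```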
Energy and maximum norm estimates for nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Olsson, Pelle; Oliger, Joseph
1994-01-01
We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools for achieving the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
NASA Astrophysics Data System (ADS)
Carcione, José M.; Gei, Davide
2004-05-01
We estimate the concentration of gas hydrate at the Mallik 2L-38 research site using P- and S-wave velocities obtained from well logging and vertical seismic profiles (VSP). The theoretical velocities are obtained from a generalization of Gassmann's modulus to three phases (rock frame, gas hydrate and fluid). The dry-rock moduli are estimated from the log profiles, in sections where the rock is assumed to be fully saturated with water. We obtain hydrate concentrations up to 75%, average values of 37% and 21% from the VSP P- and S-wave velocities, respectively, and 60% and 57% from the sonic-log P- and S-wave velocities, respectively. The above averages are similar to estimations obtained from hydrate dissociation modeling and Archie methods. The estimations based on the P-wave velocities are more reliable than those based on the S-wave velocities.
NASA Astrophysics Data System (ADS)
Dupuis, Hélène; Weill, Alain; Katsaros, Kristina; Taylor, Peter K.
1995-10-01
Heat flux estimates obtained using the inertial dissipation method and the profile method applied to radiosonde soundings are assessed, with emphasis on the parameterization of the roughness lengths for temperature and specific humidity. Results from the inertial dissipation method show a decrease of the temperature and humidity roughness lengths with increasing neutral wind speed, in agreement with previous studies. The sensible heat flux estimates were obtained using the temperature estimated from the speed of sound determined by a sonic anemometer. This method seems very attractive for estimating heat fluxes over the ocean; however, allowance must be made in the inertial dissipation method for non-neutral stratification. The SOFIA/ASTEX and SEMAPHORE results show that, in unstable stratification, a term due to the transport terms in the turbulent kinetic energy budget has to be included in order to determine the friction velocity with better accuracy. Using the profile method with radiosonde data, the roughness length values showed large scatter. A reliable estimate of the temperature roughness length could not be obtained. The humidity roughness length values were compatible with those found using the inertial dissipation method.
NASA Astrophysics Data System (ADS)
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus braunii sp. by the least-squares method. The Monod equation is a non-linear equation that can be transformed into a linear form and solved by the linear least-squares regression method. Meanwhile, the Gauss-Newton method is an alternative method for solving the non-linear least-squares problem, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter values estimated by the non-linear least-squares method are more accurate than those from the linear least-squares method, since the SSE of the non-linear least-squares method is smaller.
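The two fitting routes compare directly in a few lines, sketched below on synthetic data: the linearized (double-reciprocal) fit recovers mu_max and Ks from a straight line in (1/S, 1/mu), while scipy's curve_fit performs the non-linear least-squares fit. Parameter values and noise levels are invented.

```python
# Linearized vs non-linear least-squares fits of the Monod model
# mu(S) = mu_max * S / (Ks + S); data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

rng = np.random.default_rng(2)
S = np.linspace(0.5, 10, 25)
mu = monod(S, 0.8, 1.5) + rng.normal(0, 0.01, S.size)

# Linearized form: 1/mu = (Ks/mu_max) * (1/S) + 1/mu_max.
slope, intercept = np.polyfit(1 / S, 1 / mu, 1)
mu_max_lin, Ks_lin = 1 / intercept, slope / intercept

# Non-linear least squares (Gauss-Newton/Levenberg-Marquardt type).
(mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[0.5, 1.0])

print(f"linear:     mu_max={mu_max_lin:.3f}, Ks={Ks_lin:.3f}")
print(f"non-linear: mu_max={mu_max_nl:.3f}, Ks={Ks_nl:.3f}")
```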
Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca
2018-05-08
Values of mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective, so clinical laboratories must provide results that are as accurate as possible. Measurement uncertainty makes it possible to ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (using a certified reference material). Next, we combined them with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but data from internal and external quality control schemes were used to estimate the uncertainty related to the bias. The estimated expanded uncertainties for single laboratory validation and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar, which would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
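The top-down combination itself is a one-line root-sum-of-squares, sketched below with illustrative (not the study's) relative uncertainty components and a coverage factor of k = 2.

```python
# Top-down uncertainty combination: components in quadrature, then expanded.
# The component values are illustrative placeholders.
import math

u_imprecision = 4.5   # %, long-term internal QC (intermediate imprecision)
u_bias        = 2.8   # %, certified reference material / EQA
u_calibrator  = 2.0   # %, calibrator-assigned values

u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_calibrator**2)
U_expanded = 2.0 * u_combined          # coverage factor k = 2, ~95% level
print(f"combined u = {u_combined:.1f}%, expanded U = {U_expanded:.1f}%")
```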
Ultrasonic tracking of shear waves using a particle filter.
Ingle, Atul N; Ma, Chi; Varghese, Tomy
2015-11-01
This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques.
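A minimal bootstrap particle filter conveys the idea: time-to-peak observations are treated as noisy measurements of a hidden, slowly varying state, and the weighted particle mean gives the minimum mean squared error estimate. The random-walk state model and noise levels below are illustrative, not the paper's exact hidden Markov model.

```python
# Minimal bootstrap particle filter for denoising a noisy time-to-peak
# (TTP) sequence; model choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, n_particles = 100, 2000
true_ttp = np.cumsum(np.full(n, 0.05))            # smooth underlying curve
obs = true_ttp + rng.normal(0, 0.15, n)           # noisy measurements

sigma_q, sigma_r = 0.05, 0.15                     # process / measurement std
particles = rng.normal(obs[0], sigma_r, n_particles)
est = np.empty(n)
for t in range(n):
    if t > 0:                                     # propagate: random walk
        particles += rng.normal(0, sigma_q, n_particles)
    w = np.exp(-0.5 * ((obs[t] - particles) / sigma_r) ** 2)
    w /= w.sum()                                  # normalized importance weights
    est[t] = np.dot(w, particles)                 # MMSE state estimate
    idx = rng.choice(n_particles, n_particles, p=w)   # multinomial resampling
    particles = particles[idx]

print(f"RMSE raw: {np.sqrt(np.mean((obs - true_ttp)**2)):.3f}, "
      f"filtered: {np.sqrt(np.mean((est - true_ttp)**2)):.3f}")
```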
NASA Technical Reports Server (NTRS)
Thadani, S. G.
1977-01-01
The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of the affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values, and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.
Phase Retrieval on Undersampled Data from the Thermal Infrared Sensor (TIRS)
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Mentzell, Eric
2011-01-01
Phase retrieval was applied to under-sampled data from a thermal infrared imaging system to estimate defocus across the field of view (FOV). We compare the values estimated by phase retrieval to those obtained using an independent technique.
The estimation of probable maximum precipitation: the case of Catalonia.
Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel
2008-12-01
A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km(2) spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
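A Hershfield-type enveloping frequency-factor computation, which this approach resembles, can be sketched as follows on synthetic annual-maximum series: each station yields a standardized K from its largest observation, the envelope of K over all stations is taken, and station PMP is mean + K_env * std. The data and parameters below are placeholders, not the Catalonia series.

```python
# Enveloping frequency-factor (Hershfield-type) PMP sketch on synthetic
# annual daily-maximum rainfall series; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
stations = [rng.gumbel(60, 20, size=40) for _ in range(145)]  # maxima, mm

def station_K(x):
    x_max = x.max()
    rest = np.delete(x, x.argmax())        # stats excluding the largest value
    return (x_max - rest.mean()) / rest.std(ddof=1)

K_env = max(station_K(x) for x in stations)          # enveloping factor
pmp = [x.mean() + K_env * x.std(ddof=1) for x in stations]
print(f"enveloping K = {K_env:.2f}, PMP range = "
      f"{min(pmp):.0f}-{max(pmp):.0f} mm")
```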
Stable boundary conditions and difference schemes for Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Dutt, P.
1985-01-01
The Navier-Stokes equations can be viewed as an incompletely elliptic perturbation of the Euler equations. By using the entropy function for the Euler equations as a measure of energy for the Navier-Stokes equations, it is possible to obtain nonlinear energy estimates for the mixed initial-boundary value problem. These estimates are used to derive boundary conditions which guarantee L2 boundedness even when the Reynolds number tends to infinity. Finally, we propose a new difference scheme for modelling the Navier-Stokes equations in multidimensions, for which it is possible to obtain discrete energy estimates exactly analogous to those obtained for the differential equation.
NASA Astrophysics Data System (ADS)
Lugo, J. M.; Oliva, A. I.
2017-02-01
The thermal effusivity of gold, aluminum, and copper thin films of nanometric thickness (20 nm to 200 nm) was investigated as a function of film thickness. The metallic thin films were deposited onto glass substrates by thermal evaporation, and the thermal effusivity was estimated from experimental parameters such as the specific heat, thermal conductivity, and thermal diffusivity values obtained at room conditions. The specific heat, thermal conductivity, and thermal diffusivity values of the metallic thin films were determined with a methodology based on the behavior of the thermal profiles of the films when electrical pulses of a few microseconds are applied at room conditions. For all the investigated materials, the thermal effusivity decreases with decreasing thickness. The thermal effusivity values estimated by the presented methodology are consistent with other reported values obtained under vacuum conditions with more elaborate methodologies.
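For reference, the effusivity estimate from the measured quantities is a one-line relation, e = k/sqrt(alpha) (equivalently sqrt(k*rho*c_p)); the sketch below checks it against handbook-order bulk gold values, which are assumptions, not the films' measured values.

```python
# Effusivity from thermal conductivity k and diffusivity alpha.
import math

k = 317.0        # W m^-1 K^-1, thermal conductivity (bulk gold, assumed)
alpha = 1.27e-4  # m^2 s^-1, thermal diffusivity (bulk gold, assumed)
e = k / math.sqrt(alpha)
print(f"effusivity e = {e:.0f} W s^0.5 m^-2 K^-1")   # ~2.8e4 for bulk gold
```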
Iima, Mami; Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori
2018-01-01
We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0-2500 s/mm2 with one number of excitations [NEX]) and five b-values (0-2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at b value of 0 sec/mm2 [ADC0] and kurtosis [K]) parameters were estimated from IVIM and Kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions. PMID:29494639
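The kurtosis part of the fit is compact enough to sketch: the signal model S(b) = S0*exp(-b*ADC0 + (b*ADC0)^2*K/6) is fitted over a 0-2500 s/mm2 range of b-values with curve_fit; the synthetic signal and parameter values are illustrative only.

```python
# Kurtosis-model fit for ADC0 and K on synthetic diffusion-weighted signal.
import numpy as np
from scipy.optimize import curve_fit

def kurtosis_model(b, S0, adc0, K):
    return S0 * np.exp(-b * adc0 + (b * adc0) ** 2 * K / 6.0)

b = np.linspace(0, 2500, 16)                       # s/mm^2, 16 b-values
rng = np.random.default_rng(5)
sig = kurtosis_model(b, 1.0, 1.2e-3, 0.9) * (1 + rng.normal(0, 0.01, b.size))

p, _ = curve_fit(kurtosis_model, b, sig, p0=[1.0, 1e-3, 1.0])
print(f"ADC0 = {p[1]:.2e} mm^2/s, K = {p[2]:.2f}")
```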
NASA Astrophysics Data System (ADS)
Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.
2016-10-01
The fluid physics of liquid draining inside a tank is easily studied using numerical simulation. However, numerical simulation is expensive when the draining involves a multi-phase problem. Since an accurate numerical simulation can only be obtained if its errors are properly estimated, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox, well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium, and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained in this study, showing that the grid convergence error has been progressively reduced. The fine grid has a GCI value below 1%. The extrapolated value from Richardson extrapolation is in the range of the GCI obtained.
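For concreteness, the standard three-grid GCI computation (observed order, fine-grid GCI with safety factor 1.25, and Richardson-extrapolated value) is sketched below with invented solution values; a same-signed, shrinking difference between grid pairs corresponds to the monotonic convergence reported above.

```python
# Three-grid Grid Convergence Index computation; solution values f and the
# refinement ratio are illustrative placeholders.
import math

f_fine, f_med, f_coarse = 1.002, 1.010, 1.040   # e.g. a drained-volume metric
r = 2.0                                          # grid refinement ratio

p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
eps = abs((f_med - f_fine) / f_fine)             # relative error, fine pair
gci_fine = 1.25 * eps / (r ** p - 1.0)           # safety factor Fs = 1.25
f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)  # Richardson extrapolation

print(f"observed order p = {p:.2f}, GCI_fine = {100*gci_fine:.2f}%, "
      f"extrapolated value = {f_exact:.4f}")
```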
Zozaya, Néboa; Martínez-Galdeano, Lucía; Alcalá, Bleric; Armario-Hita, Jose Carlos; Carmona, Concepción; Carrascosa, Jose Manuel; Herranz, Pedro; Lamas, María Jesús; Trapero-Bertran, Marta; Hidalgo-Vega, Álvaro
2018-06-01
Multi-criteria decision analysis (MCDA) is a tool that systematically considers multiple factors relevant to health decision-making. The aim of this study was to use an MCDA to assess the value of dupilumab for severe atopic dermatitis compared with secukinumab for moderate to severe plaque psoriasis in Spain. Following the EVIDEM (Evidence and Value: Impact on DEcision Making) methodology, the estimated value of both interventions was obtained by means of an additive linear model that combined the individual weighting (between 1 and 5) of each criterion with the individual scoring of each intervention in each criterion. Dupilumab was evaluated against placebo, while secukinumab was evaluated against placebo, etanercept and ustekinumab. A retest was performed to assess the reproducibility of weights, scores and value estimates. The overall MCDA value estimate for dupilumab versus placebo was 0.51 ± 0.14. This value was higher than those obtained for secukinumab: 0.48 ± 0.15 versus placebo, 0.45 ± 0.15 versus etanercept and 0.39 ± 0.18 versus ustekinumab. The highest-value contribution was reported by the patients' group, followed by the clinical professionals and the decision makers. A fundamental element that explained the difference in the scoring between pathologies was the availability of therapeutic alternatives. The retest confirmed the consistency and replicability of the analysis. Under this methodology, and assuming similar economic costs per patient for both treatments, the results indicated that the overall value estimated of dupilumab for severe atopic dermatitis was similar to, or slightly higher than, that of secukinumab for moderate to severe plaque psoriasis.
View Estimation Based on Value System
NASA Astrophysics Data System (ADS)
Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru
Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates the model of his/her own estimated view while imitating the behavior observed from the caregiver, based on minimizing the estimation error of the reward during the behavior. From this view, this paper presents a method for acquiring such a capability based on a value system in which values are obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver is discussed.
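A loose sketch of the shared update rule may help: both the state-value parameters and the view-estimation parameters are nudged by the same TD error. The linear function approximation, features, and learning rates below are our own illustrative choices, not the paper's architecture.

```python
# Illustrative TD(0) update driving both a state-value model and a view
# model with the same error signal; all modeling choices are placeholders.
import numpy as np

def td_step(w_value, w_view, phi_s, phi_s_next, r, alpha=0.1, gamma=0.95):
    v, v_next = w_value @ phi_s, w_value @ phi_s_next
    td_error = r + gamma * v_next - v          # estimation error of state value
    w_value += alpha * td_error * phi_s        # value-function update
    w_view  += alpha * td_error * phi_s        # view-model update, same signal
    return td_error

rng = np.random.default_rng(6)
w_value, w_view = np.zeros(8), np.zeros(8)
for _ in range(100):
    phi_s, phi_s_next = rng.random(8), rng.random(8)   # toy state features
    td_step(w_value, w_view, phi_s, phi_s_next, r=rng.random())
```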
Varrone, Andrea; Gulyás, Balázs; Takano, Akihiro; Stabin, Michael G; Jonsson, Cathrine; Halldin, Christer
2012-02-01
[(18)F]FE-PE2I is a promising dopamine transporter (DAT) radioligand. In nonhuman primates, we examined the accuracy of simplified quantification methods and the estimates of radiation dose of [(18)F]FE-PE2I. In the quantification study, binding potential (BP(ND)) values previously reported in three rhesus monkeys using kinetic and graphical analyses of [(18)F]FE-PE2I were used for comparison. BP(ND) using the cerebellum as reference region was obtained with four reference tissue methods applied to the [(18)F]FE-PE2I data, and the results were compared with the kinetic and graphical analyses. In the whole-body study, estimates of absorbed radiation were obtained in two cynomolgus monkeys. All reference tissue methods provided BP(ND) values within 5% of the values obtained with the kinetic and graphical analyses. The shortest imaging time for stable BP(ND) estimation was 54 min. The average effective dose of [(18)F]FE-PE2I was 0.021 mSv/MBq, similar to 2-deoxy-2-[(18)F]fluoro-d-glucose. The results in nonhuman primates suggest that [(18)F]FE-PE2I is suitable for accurate and stable DAT quantification, and its radiation dose estimates would allow a maximal administered radioactivity of 476 MBq in human subjects. Copyright © 2012 Elsevier Inc. All rights reserved.
Gilardelli, Carlo; Orlando, Francesca; Movedi, Ermes; Confalonieri, Roberto
2018-03-29
Digital hemispherical photography (DHP) has been widely used to estimate leaf area index (LAI) in forestry. Despite the advancement in the processing of hemispherical images with dedicated tools, several steps are still manual and thus easily affected by the user's experience and sensibility. The purpose of this study was to quantify the impact of the user's subjectivity on DHP LAI estimates for broad-leaved woody canopies using the software Can-Eye. Following the ISO 5725 protocol, we quantified the repeatability and reproducibility of the method, thus defining its precision for a wide range of broad-leaved canopies markedly differing in their structure. To get a complete evaluation of the method's accuracy, we also quantified its trueness using artificial canopy images with known canopy cover. Moreover, the effect of the segmentation method was analysed. The best results for precision (restrained limits of repeatability and reproducibility) were obtained for high LAI values (>5), with limits corresponding to a variation of 22% in the estimated LAI values. Poorer results were obtained for medium and low LAI values, with a variation of the estimated LAI values that exceeded 40%. Regardless of the LAI range explored, satisfactory results were achieved for trees in row-structured plantations (limits almost equal to 30% of the estimated LAI). Satisfactory results were achieved for trueness, regardless of the canopy structure. The paired t-test revealed that the effect of the segmentation method on LAI estimates was significant. Despite a non-negligible user effect, the accuracy metrics for DHP are consistent with those determined for other indirect methods for LAI estimation, confirming the overall reliability of DHP in broad-leaved woody canopies. PMID:29596376
Strategies for Efficient Computation of the Expected Value of Partial Perfect Information
Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.
2014-01-01
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
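The regression idea behind the spline-based estimator can be sketched in a few lines: net benefit is simulated once per parameter draw, its conditional expectation given the parameter of interest is estimated by a smooth fit (a cubic polynomial here stands in for restricted cubic splines), and EVPPI is the mean of the fitted per-decision maximum minus the maximum of the means. The toy two-decision model below is hypothetical.

```python
# Single-parameter EVPPI by regressing simulated net benefit on the
# parameter of interest; the decision model is a toy.
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
theta = rng.normal(0.6, 0.1, n)                   # parameter of interest
noise = rng.normal(0, 2.0, (n, 2))                # all remaining uncertainty
nb = np.column_stack([10 * theta, 5 + 2 * theta]) + noise  # NB per decision

# Conditional expectation E[NB_d | theta] via a smooth least-squares fit.
g = np.column_stack([np.polyval(np.polyfit(theta, nb[:, d], 3), theta)
                     for d in range(2)])
evppi = g.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"EVPPI estimate: {evppi:.3f}")
```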
Measurement of Blood Pressure Using an Arterial Pulsimeter Equipped with a Hall Device
Lee, Sang-Suk; Nam, Dong-Hyun; Hong, You-Sik; Lee, Woo-Beom; Son, Il-Ho; Kim, Keun-Ho; Choi, Jong-Gu
2011-01-01
To measure precise blood pressure (BP) and pulse rate without using a cuff, we have developed an arterial pulsimeter consisting of a small, portable apparatus incorporating a Hall device. Regression analysis of the pulse wave measured during testing of the arterial pulsimeter was conducted using two equations of the BP algorithm. The estimated values of BP obtained by the cuffless arterial pulsimeter over 5 s were compared with values obtained using electronic or liquid mercury BP meters. The standard deviations between the estimated and measured values for systolic and diastolic BP were 8.3 and 4.9, respectively, which are close to the range of values of the BP International Standard. Detailed analysis of the pulse wave measured by the cuffless radial artery pulsimeter by detecting changes in the magnetic field can be used to develop a new diagnostic algorithm for BP, which can be applied to new medical apparatus such as the radial artery pulsimeter. PMID:22319381
Structure of the Large Magellanic Cloud from near infrared magnitudes of red clump stars
NASA Astrophysics Data System (ADS)
Subramanian, S.; Subramaniam, A.
2013-04-01
Context. The structural parameters of the disk of the Large Magellanic Cloud (LMC) are estimated. Aims: We used the JH photometric data of red clump (RC) stars from the Magellanic Cloud Point Source Catalog (MCPSC) obtained from the InfraRed Survey Facility (IRSF) to estimate the structural parameters of the LMC disk, such as the inclination, i, and the position angle of the line of nodes (PAlon), φ. Methods: The observed LMC region is divided into several sub-regions, and stars in each region are cross-identified with the optically identified RC stars to obtain their near infrared magnitudes. The peak values of the H magnitude and (J - H) colour of the observed RC distribution are obtained by fitting a profile to the distributions and by taking the average magnitude and colour of the RC stars in the bin with the largest number of stars. The dereddened peak H0 magnitude of the RC stars in each sub-region is then obtained from the peak values of the H magnitude and (J - H) colour of the observed RC distribution. The right ascension (RA), declination (Dec), and relative distance from the centre of each sub-region are converted into x, y, and z Cartesian coordinates. A weighted least-squares plane-fitting method is applied to these x, y, z data to estimate the structural parameters of the LMC disk. Results: An intrinsic (J - H)0 colour of 0.40 ± 0.03 mag in the Simultaneous three-colour InfraRed Imager for Unbiased Survey (SIRIUS) IRSF filter system is estimated for the RC stars in the LMC, and a reddening map based on the (J - H) colour of the RC stars is presented. When the peaks of the RC distribution were identified by averaging, an inclination of 25°.7 ± 1°.6 and a PAlon = 141°.5 ± 4°.5 were obtained. We estimate a distance modulus μ = 18.47 ± 0.1 mag to the LMC. Extra-planar features that lie both in front of and behind the fitted plane are identified; they match the optically identified extra-planar features. The bar of the LMC is found to be part of the disk within 500 pc. Conclusions: The estimates of the structural parameters are found to be independent of the photometric bands used for the analysis. The radial variation of the structural parameters is also studied. We find that the inner disk, within ~3°.0, is less inclined and has a larger value of PAlon compared to the outer disk. Our estimates are compared with the literature values, and the possible reasons for the small discrepancies found are discussed.
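The plane-fitting step reduces to a small weighted least-squares problem; the sketch below fits z = ax + by + c to synthetic sub-region coordinates and converts the coefficients into an inclination via tan i = sqrt(a^2 + b^2). The position-angle line is indicative only, since the PAlon convention depends on the adopted axes.

```python
# Weighted least-squares plane fit and conversion to inclination; the
# (x, y, z) sub-region data and weights are synthetic.
import numpy as np

rng = np.random.default_rng(8)
n = 200
x, y = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
z = 0.30 * x - 0.25 * y + rng.normal(0, 0.05, n)   # toy tilted disk, kpc
w = 1.0 / (0.05 + 0.02 * rng.random(n)) ** 2       # weights from z errors

A = np.column_stack([x, y, np.ones(n)])
sw = np.sqrt(w)
a, b, c = np.linalg.lstsq(sw[:, None] * A, sw * z, rcond=None)[0]

inc = np.degrees(np.arctan(np.hypot(a, b)))        # disk inclination
pa = np.degrees(np.arctan2(b, a))                  # line-of-nodes angle,
print(f"i = {inc:.1f} deg, PA-related angle = {pa:.1f} deg")  # up to convention
```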
NASA Astrophysics Data System (ADS)
Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime
2018-04-01
The brittleness indices (BI) of gas-shales are computed by using their effective mechanical properties obtained from micromechanical self-consistent modeling, with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called 'pay zone'. The obtained BI are plotted in lambda-rho versus mu-rho (λρ-μρ) and Young's modulus versus Poisson's ratio (E-ν) ternary diagrams, along with the elastic properties estimated from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows delimiting regions where the BI values indicate the best reservoir targets, i.e., the regions with the highest shale-gas exploitation potential. A range of values for the elastic properties and brittleness indices is thereby obtained that can be used as a data source to support the well placement procedure.
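A Rickman-style normalization is one common way to turn (E, ν) pairs into a brittleness index, sketched below; the normalization bounds and input values are placeholders, and the paper's micromechanics-based BI may differ in detail.

```python
# Rickman-style brittleness index from Young's modulus and Poisson's ratio;
# bounds and inputs are illustrative placeholders.
import numpy as np

def brittleness_index(E, nu, E_min=10.0, E_max=80.0,
                      nu_min=0.15, nu_max=0.40):
    bi_E = (E - E_min) / (E_max - E_min)           # stiffer -> more brittle
    bi_nu = (nu_max - nu) / (nu_max - nu_min)      # lower ratio -> more brittle
    return 50.0 * (bi_E + bi_nu)                   # percent scale

E = np.array([35.0, 55.0, 35.0])                   # GPa, e.g. from log estimates
nu = np.array([0.30, 0.20, 0.22])
print(brittleness_index(E, nu))                    # higher = more brittle region
```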
Theoretical and observational assessments of flare efficiencies.
Leahey, D M; Preston, K; Strosher, M
2001-12-01
Flaring of waste gases is a common practice in the processing of hydrocarbon (HC) materials. It is assumed that flaring achieves complete combustion with relatively innocuous byproducts such as CO2 and H2O. However, flaring is rarely successful in the attainment of complete combustion, because entrainment of air into the region of combusting gases restricts flame sizes to less than optimum values. The resulting flames are too small to dissipate the amount of heat associated with 100% combustion efficiency. Equations were employed to estimate flame lengths, areas, and volumes as functions of flare stack exit velocity, stoichiometric mixing ratio, and wind speed. Heats released as part of the combustion process were then estimated from a knowledge of the flame dimensions together with an assumed flame temperature of 1200 K. Combustion efficiencies were subsequently obtained by taking the ratio of estimated actual heat release values to those associated with 100% complete combustion. Results of the calculations showed that combustion efficiencies decreased rapidly as wind speed increased from 1 to 6 m/sec. As wind speeds increased beyond 6 m/sec, combustion efficiencies tended to level off at values between 10 and 15%. Propane and ethane tend to burn more efficiently than do methane or hydrogen sulfide because of their lower stoichiometric mixing ratios. Results of theoretical predictions were compared to nine values of local combustion efficiencies obtained as part of an observational study into flaring activity conducted by the Alberta Research Council (ARC). All values were obtained during wind speed conditions of less than 4 m/sec. There was generally good agreement between predicted and observed values. The mean and standard deviation of observed combustion efficiencies were 68 +/- 7%. Comparable predicted values were 69 +/- 7%.
The Calibration of the Corneal Light Reflex to Estimate the Degree of an Angle of Deviation.
Tengtrisorn, Supaporn; Tangkijwongpaisarn, Sitthi; Burachokvivat, Somporn
2015-12-01
To measure the conversion factor for the size of an angle of deviation from clinical photographs of the corneal light reflex. In this cross-sectional study, 19 normal subjects with 20/20 visual acuity were photographed with a digital camera while staring at targets placed five prism diopters (PD) apart from one another on a screen. The subjects were tested at distances of 1 meter (m) and 4 m from the screen. Measurement of the corneal light reflex displacement for each fixed target was obtained from the photographs. The calibration of the corneal light reflex displacement in millimeters (mm) against the angle of deviation in PD was then analyzed with repeated-measures linear regression analysis. At 1 m, values of 0.047 mm/PD and 0.058 mm/PD were obtained as the conversion factors from reflex displacement to deviated angle for the nasal and temporal sides, respectively. At 4 m, the values were 0.050 mm/PD and 0.064 mm/PD for the nasal and temporal sides, respectively. There were significant differences between the values obtained at the different distances, regardless of nasal or temporal side. Conversion factors are presented for estimating the strabismic angle at different distances and gazes. For clinical practice, the use of photographs to estimate the strabismic angle should employ different values for different distances and strabismus types.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimates of the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are also obtained. Both symmetric and asymmetric loss functions are considered for the Bayesian computations. Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results are specialized to lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
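Given MCMC draws from the posterior, Bayes estimates under symmetric and asymmetric loss reduce to simple sample functionals; a sketch assuming squared-error and LINEX loss (the paper's exact asymmetric loss may differ):

    import numpy as np

    def bayes_estimates(theta_samples, a=1.0):
        """Bayes estimators from posterior MCMC samples:
        squared-error loss  -> posterior mean;
        LINEX loss, shape a -> -(1/a) * log E[exp(-a * theta)]."""
        theta = np.asarray(theta_samples)
        se = theta.mean()
        linex = -np.log(np.mean(np.exp(-a * theta))) / a
        return se, linex

    rng = np.random.default_rng(0)
    print(bayes_estimates(rng.gamma(2.0, 1.5, size=10_000)))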
Puente, Gabriela F; Bonetto, Fabián J
2005-05-01
We used the temporal evolution of the bubble radius in single-bubble sonoluminescence to estimate the water liquid-vapor accommodation coefficient. The rapid changes in the bubble radius that occur during the bubble collapse and rebounds are a function of the actual value of the accommodation coefficient. We combined bubble radius measurements obtained from two different experimental techniques with a robust parameter estimation strategy, and found that for water at room temperature the mass accommodation coefficient lies in the confidence interval [0.217, 0.329].
A cross-country Exchange Market Pressure (EMP) dataset.
Desai, Mohit; Patnaik, Ila; Felman, Joshua; Shah, Ajay
2017-06-01
The data presented in this article are related to the research article titled "An exchange market pressure measure for cross country analysis" (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as a percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of the ρ's. Using the standard errors of the estimates of the ρ's, we obtain one-sigma intervals around mean estimates of the EMP values. These values are also reported in the dataset.
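A sketch of how an EMP value and its one-sigma band might be assembled from the dataset's components; the sign convention for intervention used here is an assumption (see Patnaik et al. for the exact definition):

    def emp(pct_change_exchange_rate, intervention_bn_usd, rho):
        """Exchange Market Pressure (% change in exchange rate): observed
        change plus the counterfactual change the intervention prevented,
        converted via rho (% change per $1 bn of intervention)."""
        return pct_change_exchange_rate + rho * intervention_bn_usd

    def emp_interval(pct_change, intervention_bn_usd, rho, rho_se):
        """One-sigma band for EMP from the standard error of rho."""
        lo = emp(pct_change, intervention_bn_usd, rho - rho_se)
        hi = emp(pct_change, intervention_bn_usd, rho + rho_se)
        return min(lo, hi), max(lo, hi)

    print(emp_interval(-1.2, 2.0, 0.8, 0.15))   # hypothetical month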
Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A
2016-10-26
Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome, followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a confidence-interval construction strategy to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
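A minimal sketch of the Val-MI ordering (resample first, impute training and test parts separately); single mean imputation stands in for a full multiple-imputation procedure, and the synthetic data, logistic model and AUC are illustrative only:

    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def val_mi_auc(X, y, n_boot=100, seed=0):
        """Val-MI: draw a bootstrap training set, impute train and out-of-bag
        test parts separately, then estimate out-of-sample AUC."""
        rng = np.random.default_rng(seed)
        n, aucs = len(y), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag = test part
            if len(np.unique(y[oob])) < 2:
                continue
            imp_tr = SimpleImputer().fit(X[idx])          # imputer fitted on train only
            imp_te = SimpleImputer().fit(X[oob])          # test imputed separately (Val-MI)
            model = LogisticRegression(max_iter=1000).fit(imp_tr.transform(X[idx]), y[idx])
            p = model.predict_proba(imp_te.transform(X[oob]))[:, 1]
            aucs.append(roc_auc_score(y[oob], p))
        return float(np.mean(aucs))

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
    X[rng.random(X.shape) < 0.1] = np.nan                 # 10% missing at random
    print(val_mi_auc(X, y))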
Ultrasonic tracking of shear waves using a particle filter
Ingle, Atul N.; Ma, Chi; Varghese, Tomy
2015-01-01
Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity (SWV) in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer ones, an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method based on least-squares fitting. Estimates of area obtained from the particle-filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom, with image quality comparable to existing techniques. PMID:26520761
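A minimal bootstrap particle filter for denoising a time-to-peak sequence, assuming a random-walk hidden state and Gaussian observation noise; the paper's exact state and noise models may differ:

    import numpy as np

    def particle_filter(obs, n_particles=500, q=0.05, r=0.5, seed=0):
        """Bootstrap particle filter: random-walk state x_t = x_{t-1} + N(0, q^2),
        observation y_t = x_t + N(0, r^2); returns posterior-mean estimates."""
        rng = np.random.default_rng(seed)
        particles = np.full(n_particles, obs[0]) + rng.normal(0, r, n_particles)
        means = []
        for y in obs:
            particles += rng.normal(0, q, n_particles)        # propagate
            w = np.exp(-0.5 * ((y - particles) / r) ** 2)      # Gaussian likelihood
            w = (w + 1e-12) / (w + 1e-12).sum()                # normalize safely
            means.append(np.sum(w * particles))                # MMSE estimate
            particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        return np.array(means)

    t = np.linspace(0, 1, 50)
    truth = 2.0 * t                                            # time-to-peak vs distance
    noisy = truth + np.random.default_rng(1).normal(0, 0.5, t.size)
    print(np.abs(particle_filter(noisy) - truth).mean())       # denoised error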
Replica and extreme-value analysis of the Jarzynski free-energy estimator
NASA Astrophysics Data System (ADS)
Palassini, Matteo; Ritort, Felix
2008-03-01
We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes well Monte Carlo data for the expectation value of the estimator, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
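The Jarzynski estimator itself is one line; the sketch below demonstrates its N-dependent bias for a Gaussian work distribution, for which the exact free-energy difference is known (Delta F = mu_W - beta*sigma^2/2 in kT units):

    import numpy as np

    def jarzynski_estimate(work, beta=1.0):
        """Free-energy estimate from nonequilibrium work samples:
        Delta F = -(1/beta) * log( mean( exp(-beta * W) ) )."""
        return -np.log(np.mean(np.exp(-beta * work))) / beta

    rng = np.random.default_rng(0)
    mu, sigma = 5.0, 2.0                      # mean work and fluctuation (kT units)
    dF_exact = mu - sigma**2 / 2.0            # exact result for Gaussian work
    for N in (1, 10, 100, 10_000):
        est = np.mean([jarzynski_estimate(rng.normal(mu, sigma, N))
                       for _ in range(2000)])
        print(N, est - dF_exact)              # positive bias, shrinking with N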
Worldwide Ocean Optics Database (WOOD)
2001-09-30
The user can obtain values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for properties including diffuse attenuation, beam attenuation, and scattering. The database shall be easy to use, Internet accessible, and frequently updated.
The Least-Squares Estimation of Latent Trait Variables.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi
This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by the maximum likelihood method…
Zhu, Hui; Yang, Ri-Fang; Yun, Liu-Hong; Jiang, Yu; Li, Jin
2009-09-01
This paper establishes a reversed-phase ion-pair chromatography (RP-IPC) method for universal estimation of the octanol/water partition coefficients (logP) of a wide range of structurally diverse compounds, including acidic, basic, neutral and amphoteric species. The retention factors corresponding to 100% water (logk(w)) were derived from the linear part of the logk'/phi relationship, using at least four isocratic logk' values at different organic compositions. The logk(w) parameters obtained were close to the corresponding logP values obtained with the standard "shake flask" methods; the mean deviation for the test drugs is 0.31. RP-IPC with trifluoroacetic acid as a non-classical ion-pair agent can be applied to determine the logP values of a variety of drug-like molecules with increased accuracy.
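The extrapolation to 100% water is a linear fit of isocratic log k' values against the organic fraction phi; a sketch with hypothetical retention data:

    import numpy as np

    def log_kw(phi, log_k):
        """Extrapolate retention to phi = 0 (100% water): fit the linear part
        of log k' = log k_w - S*phi and return the intercept log k_w."""
        slope, intercept = np.polyfit(phi, log_k, 1)
        return intercept

    # Hypothetical isocratic measurements at four organic fractions
    phi   = np.array([0.30, 0.40, 0.50, 0.60])
    log_k = np.array([1.95, 1.52, 1.11, 0.68])
    print(log_kw(phi, log_k))   # logP surrogate for this compound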
Tan, Yanliang; Ishikawa, Tetsuo; Janik, Miroslaw; Tokonami, Shinji; Hosoda, Masahiro; Sorimachi, Atsuyuki; Kearfott, Kimberlee
2015-12-01
The accident at the Fukushima Daiichi Nuclear Power Plant (FDNPP) in Japan resulted in significant releases of fission products. While substantial data exist concerning outdoor air radioactivity following the accident, the resulting indoor radioactivity cannot be assessed without a proper method for estimating the ratio of indoor to outdoor airborne radioactivity, termed the airborne sheltering factor (ASF). Lacking a meaningful value of the ASF, it is difficult to assess the inhalation doses to residents and evacuees even when outdoor radionuclide concentrations are available. A simple model was developed, and the key parameters needed to estimate the ASF were obtained by fitting selected indoor and outdoor airborne radioactivity measurements made at a single location following the accident. Using the new model with values of the air exchange rate, interior air volume, and inner surface area of the dwellings, the ASF can be estimated for a variety of dwelling types. Assessment of the inhalation dose to individuals then follows from the value of the ASF, the person's indoor occupancy factor, and the measured outdoor radioactivity concentration.
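A one-compartment indoor-air balance conveys the flavor of such a model; the parameterization below (ventilation exchange plus deposition to inner surfaces) is a generic assumption, not the paper's exact formulation:

    def airborne_sheltering_factor(air_exchange_per_h, deposition_velocity_m_per_h,
                                   inner_surface_m2, volume_m3):
        """Steady-state indoor/outdoor activity ratio for a single zone:
        lambda_v * C_out = (lambda_v + v_d * A / V) * C_in."""
        lam = air_exchange_per_h
        loss = deposition_velocity_m_per_h * inner_surface_m2 / volume_m3
        return lam / (lam + loss)

    # Hypothetical dwelling: 0.5 1/h exchange, 0.1 m/h deposition, 300 m2, 250 m3
    print(airborne_sheltering_factor(0.5, 0.1, 300.0, 250.0))   # ~0.81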
NASA Technical Reports Server (NTRS)
Shemansky, D. E.; Hall, D. T.; Ajello, J. M.
1985-01-01
The cross section sigma R1(2p) for excitation of H Ly-alpha emission produced by electron impact on H2 is reexamined. A more accurate estimate for sigma R1(2p) is obtained based on Born approximation estimates of the H2 Rydberg system cross sections using measured relative excitation functions. The obtained value is (8.18 ± 1.2) x 10^-18 sq cm at 100 eV, a factor of 0.69 below the value universally applied to cross section measurements over the past decade. Cross sections for the H2 Rydberg systems, fixed in magnitude by the Born approximation, have also been obtained using experimentally determined excitation functions. Accurate analytic expressions for these cross sections allow the direct calculation of rate coefficients.
Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method
NASA Astrophysics Data System (ADS)
Sukono; Riaman; Lesmana, E.; Wulandari, R.; Napitupulu, H.; Supian, S.
2018-01-01
Risk models need to be estimated by the insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured. This is intended to prevent losses in the future. In this paper, we discuss the estimation of claim risk models and motor vehicle insurance premiums using a Bayesian approach. The frequency of claims is assumed to follow a Poisson distribution, while the claim amounts are assumed to follow a Gamma distribution. The parameters of the claim-frequency and claim-amount distributions are estimated using Bayesian methods. Furthermore, the estimated distributions of claim frequency and amount are used to estimate the aggregate risk model as well as its mean and variance. The estimated mean and variance of the aggregate risk are then used to predict the premium to be charged to the insured. The analysis shows that the frequency of claims follows a Poisson distribution with parameter value λ = 5.827, while the claim amounts follow a Gamma distribution with parameter values p = 7.922 and θ = 1.414. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00, respectively. The predicted pure premium to be charged to the insured is IDR 2,722,290.82. The predicted aggregate claims and premiums can be used as a reference for the insurance company's decision-making in the management of reserves and premiums for motor vehicle insurance.
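With the fitted distributions, the aggregate-claim moments follow from standard compound Poisson-Gamma formulas; a sketch using the reported parameters (the monetary scaling of the Gamma parameters is not stated above, and the standard-deviation premium loading is an assumption, so the printed values are illustrative):

    import math

    def aggregate_moments(lam, p, theta):
        """Compound Poisson(lam)-Gamma(p, theta) aggregate claims S:
        E[S] = lam * E[X], Var[S] = lam * E[X^2], X ~ Gamma(shape=p, scale=theta)."""
        ex = p * theta
        ex2 = p * (p + 1) * theta**2
        return lam * ex, lam * ex2

    mean_s, var_s = aggregate_moments(5.827, 7.922, 1.414)
    z = 1.0   # assumed loading for a standard-deviation premium principle
    print(mean_s, var_s, mean_s + z * math.sqrt(var_s))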
Migheli, Francesca; Stoccoro, Andrea; Coppedè, Fabio; Wan Omar, Wan Adnan; Failli, Alessandra; Consolini, Rita; Seccia, Massimo; Spisni, Roberto; Miccoli, Paolo; Mathers, John C.; Migliore, Lucia
2013-01-01
There is increasing interest in the development of cost-effective techniques for the quantification of DNA methylation biomarkers. We analyzed 90 samples of surgically resected colorectal cancer tissues for APC and CDKN2A promoter methylation using methylation sensitive-high resolution melting (MS-HRM) and pyrosequencing. MS-HRM is a less expensive technique than pyrosequencing but is usually more limited because it gives a range of methylation estimates rather than a single value. Here, we developed a method for deriving single estimates, rather than a range, of methylation using MS-HRM and compared the values obtained in this way with those obtained using the gold-standard quantitative method of pyrosequencing. We derived an interpolation curve using standards of known methylated/unmethylated ratio (0%, 12.5%, 25%, 50%, 75%, and 100% methylation) to obtain the best estimate of the extent of methylation for each of our samples. We observed similar profiles of methylation and a high correlation coefficient between the two techniques. Overall, our new approach allows MS-HRM to be used as a quantitative assay, providing results comparable with those obtained by pyrosequencing. PMID:23326336
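Deriving a single estimate amounts to inverting a calibration curve built from the known standards; a sketch with hypothetical MS-HRM signal values for the six standards:

    import numpy as np

    # Known methylation standards (%) and their (hypothetical) MS-HRM signals
    standards = np.array([0.0, 12.5, 25.0, 50.0, 75.0, 100.0])
    signal    = np.array([0.02, 0.11, 0.24, 0.47, 0.73, 1.00])  # must be monotonic

    def methylation_percent(sample_signal):
        """Interpolate a single methylation estimate from the standard curve."""
        return np.interp(sample_signal, signal, standards)

    print(methylation_percent(0.35))   # ~37% methylation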
NASA Astrophysics Data System (ADS)
Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya
2011-06-01
A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori. Secondly, through analysis of field data - consisting of over 2500 records from partially-penetrating slug tests in a heterogeneous, highly conductive aquifer - we present some general findings that have applicability to slug testing. In particular, we find that aquifer hydraulic conductivity estimates obtained from larger slug heights tend to be lower on average (presumably due to non-linear wellbore losses) and tend to be less variable (presumably due to averaging over larger support volumes), supporting the notion that using the smallest slug heights that produce measurable water level changes is an important strategy when mapping aquifer heterogeneity. Finally, we present results specific to characterization of the aquifer at the Boise Hydrogeophysical Research Site. Specifically, we note that (1) K estimates obtained using a range of different slug heights give similar results, generally within ±20%; (2) correlations between estimated K profiles with depth at closely-spaced wells suggest that K values obtained from slug tests are representative of actual aquifer heterogeneity and not overly affected by near-well media disturbance (i.e., "skin"); (3) geostatistical analysis of the K values obtained indicates reasonable correlation lengths for sediments of this type; and (4) overall, the K values obtained do not appear to correlate well with porosity data from previous studies.
An Improved Shock Model for Bare and Covered Explosives
NASA Astrophysics Data System (ADS)
Scholtes, Gert; Bouma, Richard
2017-06-01
TNO developed a toolbox to estimate the probability of a violent event on a ship or other platform when the munition bunker is hit by, e.g., a bullet or fragment from a missile attack. To obtain the proper statistical output, several million calculations are needed for a reliable estimate. Because millions of different scenarios have to be calculated, hydrocode calculations cannot be used for this type of application; a fast, sufficiently accurate engineering solution is needed. At present the Haskins and Cook model is used for this purpose. To obtain a better estimate for covered explosives and munitions, TNO has developed a new model which combines the shock wave model at high pressure, as described by Haskins and Cook, with the expanding shock wave model of Green. This combined model gives a better fit to the experimental values for explosive response calculations, using the same critical energy fluence values for covered as well as for bare explosives. In this paper the theory is explained and results of the calculations for several bare and covered explosives are presented, compared with experimental values from the literature for Composition B, Composition B-3 and PBX-9404.
Biological half-life of gaseous elemental iodine deposited onto rice grains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchida, S.; Muramatsu, Y.; Sumiya, M.
In order to obtain the biological half-life (Tb) of iodine deposited on rough rice grains, rice plants of four different growing stages--heading, milky, dough, and yellow ripe--were exposed to elemental gaseous iodine. After the exposure, the rough rice samples were collected at different intervals and analyzed for iodine to estimate the value of Tb. The average value of Tb obtained by the experiments at the dough and yellow ripe stages was about 200 d. This value is considerably larger than those for pasture grass and leafy vegetables.
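The biological half-life follows from an exponential fit to the retained activity over time; a minimal sketch with synthetic sampling data:

    import numpy as np

    def biological_half_life(t_days, activity):
        """Fit ln(A) = ln(A0) - k*t and return Tb = ln(2)/k (days)."""
        k = -np.polyfit(t_days, np.log(activity), 1)[0]
        return np.log(2) / k

    t = np.array([0.0, 10.0, 20.0, 40.0, 60.0])
    a = 100.0 * np.exp(-np.log(2) * t / 200.0)   # synthetic data with Tb = 200 d
    print(biological_half_life(t, a))             # ~200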
Nebuya, Satoru; Koike, Tomotaka; Imai, Hiroshi; Iwashita, Yoshiaki; Brown, Brian H; Soma, Kazui
2015-06-01
This paper reports on the results of a study which compares lung density values obtained from electrical impedance tomography (EIT), clinical diagnosis and CT values (HU) within a region of interest in the lung. The purpose was to assess the clinical use of lung density estimation using EIT data. In 11 patients supported by a mechanical ventilator, the consistency of regional lung density measurements as estimated by EIT was validated to assess the feasibility of its use in intensive care medicine. There were significant differences in regional lung densities recorded in the supine position between normal lungs and diseased lungs associated with pneumonia, atelectasis and pleural effusion (normal: 240 ± 71.7 kg m(-3); pneumonia: 306 ± 38.6 kg m(-3); atelectasis: 497 ± 130 kg m(-3); pleural effusion: 467 ± 113 kg m(-3); Steel-Dwass test, p < 0.05). In addition, in order to compare lung density with CT image pixels, the image resolution of CT images, which was originally 512 × 512 pixels, was changed to 16 × 16 pixels to match that of the EIT images. The results of CT and EIT images from five patients in an intensive care unit showed a correlation coefficient of 0.66 ± 0.13 between the CT values (HU) and the lung density values (kg m(-3)) obtained from EIT. These results indicate that it may be possible to obtain a quantitative value for regional lung density using EIT.
NASA Astrophysics Data System (ADS)
Calla, O. P. N.; Mathur, Shubhra; Gadri, Kishan Lal; Jangid, Monika
2016-12-01
In the present paper, permittivity maps of the equatorial lunar surface are generated using brightness temperature (TB) data obtained from the Microwave Radiometer (MRM) of Chang'e-1 and physical temperature (TP) data obtained from the Diviner instrument of the Lunar Reconnaissance Orbiter (LRO). Permittivity mapping is not carried out above 60° latitude towards the lunar poles due to large anomalies in the physical temperature obtained from Diviner. The microwave frequencies used to generate these maps are 3 GHz, 7.8 GHz, 19.35 GHz and 37 GHz, and permittivity values are simulated using TB values at these four frequencies. A weighted average of the physical temperature obtained from Diviner is used to compute the permittivity at each microwave frequency. Longer microwave wavelengths probe deeper layers of the lunar surface than shorter wavelengths. Initially, microwave emissivity is estimated using TB values from the MRM and physical temperature (TP) from Diviner. From the estimated emissivity, the real part of the permittivity (ε) is calculated using the Fresnel equations, and permittivity maps of the equatorial lunar surface are generated. The simulated permittivity values are normalized with respect to density for easy comparison with the permittivity values of Apollo samples as well as with those of the Terrestrial Analogue of Lunar Soil (TALS) JSC-1A. A lower value of the dielectric constant (ε') indicates that the corresponding lunar surface is smooth and does not have rough, rocky terrain. A future lunar mission could therefore use these data to select a suitable landing site. The results of this paper will serve as input to future exploration of the lunar surface.
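For normal incidence, the Fresnel inversion from emissivity to the real permittivity has a closed form; a sketch assuming normal incidence and a lossless dielectric (simplifications relative to the paper's procedure):

    import math

    def permittivity_from_tb(tb, tp):
        """Invert e = TB/TP via the normal-incidence Fresnel relation
        e = 1 - |(sqrt(eps) - 1) / (sqrt(eps) + 1)|^2, which gives
        sqrt(eps) = (1 + sqrt(r)) / (1 - sqrt(r)) with reflectivity r = 1 - e."""
        e = tb / tp
        r = 1.0 - e
        s = math.sqrt(r)
        return ((1.0 + s) / (1.0 - s)) ** 2

    print(permittivity_from_tb(tb=220.0, tp=250.0))  # hypothetical TB, TP in K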
NASA Astrophysics Data System (ADS)
De Ridder, Maaike; De Haulleville, Thalès; Kearsley, Elizabeth; Van den Bulcke, Jan; Van Acker, Joris; Beeckman, Hans
2014-05-01
It is commonly acknowledged that allometric equations for aboveground biomass and carbon stock estimates are improved significantly if density is included as a variable. However, little attention is given to this variable in terms of exact, measured values and density profiles from pith to bark. Most published case studies obtain density values from literature sources or databases, thereby using large ranges of density values and possibly causing significant errors in carbon stock estimates. The use of a single fixed value for density is also not recommended if carbon stock increments are estimated. Therefore, our objective is to measure and analyze a large number of tree species occurring in two Biosphere Reserves (Luki and Yangambi). Nevertheless, the diversity of tree species in these tropical forests is too high to perform this kind of detailed analysis on all tree species (> 200/ha). Therefore, we focus on the most frequently encountered tree species with high abundance (trees/ha) and dominance (basal area/ha). Increment cores were scanned with a helical X-ray protocol to obtain density profiles from pith to bark. In this way, we aim to divide the tree species into separate groups by distinct type of density profile. If, e.g., slopes in density values from pith to bark remain stable over larger samples of one tree species, this slope could also be used to correct for errors in carbon (increment) estimates caused by density values from simplified density measurements or from the literature. In summary, this is most likely the first study in the Congo Basin that focuses on density patterns in order to assess their influence on carbon stocks and on differences in carbon stocking based on species composition (density profiles ~ temperament of tree species).
Cost Estimating in the Department of Defense and Areas for Improvement
2010-09-01
in reducing the overall projected cost to the agency without impairing essential functions or characteristics, provided that it does not involve a... so that the government can obtain the "Best Value" on both its competitive source selections and noncompetitive acquisitions. The reader will come away with a...
NASA Astrophysics Data System (ADS)
Pankow, James F.
Gas-particle partitioning is examined using a partitioning constant Kp = (F/TSP)/A, where F (ng m-3) and A (ng m-3) are the particulate-associated and gas-phase concentrations, respectively, and TSP is the total suspended particulate matter level (μg m-3). Compound-dependent values of Kp depend on temperature (T) according to log Kp = mp/T + bp. Limitations in data quality can cause errors in estimates of mp and bp obtained by simple linear regression (SLR). However, within a group of similar compounds, the bp values will be similar. By pooling data, an improved set of mp and a single bp can be obtained by common y-intercept regression (CYIR). SLR estimates of mp and bp for polycyclic aromatic hydrocarbons (PAHs) sorbing to urban Osaka particulate matter are available (Yamasaki et al., 1982, Envir. Sci. Technol. 16, 189-194), as are CYIR estimates for the same particulate matter (Pankow, 1991, Atmospheric Environment 25A, 2229-2239). In this work, a comparison was conducted of the ability of these two sets of mp and bp to predict A/F ratios for PAHs based on measured T and TSP values for data obtained in other urban locations, specifically: (1) in and near the Baltimore Harbor Tunnel by Benner (1988, Ph.D. thesis, University of Maryland) and Benner et al. (1989, Envir. Sci. Technol. 23, 1269-1278); and (2) in Chicago by Cotham (1990, Ph.D. thesis, University of South Carolina). In general, the CYIR estimates of mp and bp obtained for Osaka particulate matter were found to be at least as reliable, and for some compounds more reliable, than their SLR counterparts in predicting gas-particle ratios for PAHs. This result provides further evidence of the utility of the CYIR approach in quantitating the dependence of log Kp values on 1/T.
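A sketch of common y-intercept regression: each compound keeps its own slope mp against 1/T while all compounds share a single intercept bp, estimated jointly by least squares (data below are hypothetical):

    import numpy as np

    def cyir(inv_T, log_kp, compound_ids):
        """Fit log Kp = m_i / T + b with per-compound slopes m_i and one
        common intercept b. Returns (slopes dict, common intercept)."""
        compounds = sorted(set(compound_ids))
        X = np.zeros((len(log_kp), len(compounds) + 1))
        for row, (x, cid) in enumerate(zip(inv_T, compound_ids)):
            X[row, compounds.index(cid)] = x   # slope column for this compound
            X[row, -1] = 1.0                   # shared intercept column
        coef, *_ = np.linalg.lstsq(X, np.asarray(log_kp), rcond=None)
        return dict(zip(compounds, coef[:-1])), coef[-1]

    # Hypothetical data for two PAHs over a range of temperatures
    invT = [1/280, 1/290, 1/300, 1/280, 1/290, 1/300]
    logk = [-2.1, -2.4, -2.7, -1.5, -1.9, -2.2]
    ids  = ['phen', 'phen', 'phen', 'pyr', 'pyr', 'pyr']
    print(cyir(invT, logk, ids))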
NASA Astrophysics Data System (ADS)
Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.
2011-10-01
Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
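Once a lognormal size distribution has been reconstructed, the surface area concentration follows from the Hatch-Choate relation, assuming spherical particles; the inversion step that produces CMD and GSD is omitted here:

    import math

    def surface_area_concentration(number_conc_cm3, cmd_um, gsd):
        """Surface area (um^2/cm^3) of a lognormal aerosol via Hatch-Choate:
        S = N * pi * CMD^2 * exp(2 * ln(GSD)^2), spheres assumed."""
        return number_conc_cm3 * math.pi * cmd_um**2 * math.exp(2 * math.log(gsd)**2)

    # Hypothetical reconstruction: 1e4 particles/cm^3, CMD 0.1 um, GSD 1.8
    print(surface_area_concentration(1e4, 0.1, 1.8))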
NASA Astrophysics Data System (ADS)
Asfahani, Jamal
2017-08-01
An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in southern Syria. The method is applied to the available logs of the Kodana well in southern Syria. The K value obtained by this technique is reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained from the pumping test carried out at the Kodana well. The proposed well logging methodology therefore seems promising and could be applied in basaltic environments for estimating the hydraulic conductivity parameter. However, more detailed research is still required before the technique can be considered fully established in basaltic environments.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
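For a one-compartment model the back analysis from non-compartmental variables has a closed form; a sketch of that case (the spreadsheet's two-compartment conversion requires numerical optimization and is not shown):

    import math

    def one_compartment_from_nca(dose, auc, t_half):
        """Convert non-compartmental variables to one-compartment parameters:
        CL = Dose/AUC, k = ln(2)/t_half, V = CL/k."""
        cl = dose / auc
        k = math.log(2) / t_half
        return {'CL': cl, 'k': k, 'V': cl / k}

    # Hypothetical IV bolus: 100 mg dose, AUC 50 mg*h/L, terminal half-life 4 h
    print(one_compartment_from_nca(100.0, 50.0, 4.0))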
Reexamination of optimal quantum state estimation of pure states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, A.; Hashimoto, T.; Horibe, M.
2005-09-15
A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVM's) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition for POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.
Demand for health care in Denmark: results of a national sample survey using contingent valuation.
Gyldmark, M; Morrison, G C
2001-10-01
In this paper we use willingness to pay (WTP) to elicit values for private insurance covering treatment for four different health problems. By way of obtaining these values, we test the viability of the contingent valuation method (CVM) and econometric techniques, respectively, as means of eliciting and analysing values from the general public. WTP responses from a Danish national sample survey, which was designed in accordance with existing guidelines, are analysed in terms of consistency and validity checks. Large numbers of zero responses are common in WTP studies, and are found here; therefore, the Heckman selectivity model and log-transformed OLS are employed. The selectivity model is rejected, but test results indicate that the lognormal model yields efficient and unbiased estimates. The results give confidence in the WTP estimates obtained and, more generally, in CVM as a means of valuing publicly provided goods and in econometrics as a tool for analysing WTP results containing many zero responses.
NASA Technical Reports Server (NTRS)
Hefner, J. N.; Bushnell, D. M.
1980-01-01
The state of the art for the application of linear stability theory and the e to the nth power method for transition prediction and laminar flow control design is summarized, with analyses of previously published low-disturbance, swept-wing data presented. For any set of transition data with similar stream disturbance levels and spectra, the e to the nth power method for estimating the beginning of transition works reasonably well; however, the value of n can vary significantly, depending upon variations in the disturbance field or receptivity. Where disturbance levels are high, the values of n are appreciably below the usual average value of 9 to 10 obtained for relatively low disturbance levels. It is recommended that the design of laminar flow control systems be based on conservative estimates of n and that, in considering the values of n obtained from different analytical approaches or investigations, the designer explore the various assumptions which entered into the analyses.
Geoelectric Hazard Maps for the Continental United States
NASA Technical Reports Server (NTRS)
Love, Jeffrey J.; Pulkkinen, Antti; Bedrosian, Paul A.; Jonas, Seth; Kelbert, Anna; Rigler, Joshua E.; Finn, Carol A.; Balch, Christopher C.; Rutledge, Robert; Waggle, Richard M.
2016-01-01
In support of a multiagency project for assessing induction hazards, we present maps of extreme-value geoelectric amplitudes over about half of the continental United States. These maps are constructed using a parameterization of induction: estimates of Earth surface impedance, obtained at discrete geographic sites from magnetotelluric survey data, are convolved with latitude-dependent statistical maps of extreme-value geomagnetic activity, obtained from decades of magnetic observatory data. Geoelectric amplitudes are estimated for geomagnetic waveforms having 240 s sinusoidal period and amplitudes over 10 min that exceed a once-per-century threshold. As a result of the combination of geographic differences in geomagnetic activity and Earth surface impedance, once-per-century geoelectric amplitudes span more than 2 orders of magnitude and are an intricate function of location. For north-south induction, once-per-century geoelectric amplitudes across large parts of the United States have a median value of 0.26 V/km; for east-west geomagnetic variation the median value is 0.23 V/km. At some locations, once-per-century geoelectric amplitudes exceed 3 V/km.
Efficient bootstrap estimates for tail statistics
NASA Astrophysics Data System (ADS)
Breivik, Øyvind; Aarnes, Ole Johan
2017-03-01
Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
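A sketch of the subset idea for the bootstrap distribution of the sample maximum: the number of resampled points landing in the top k is Binomial(n, k/n), and only those points can affect the maximum, so drawing from the top subset reproduces the full bootstrap at a fraction of the cost (the construction here is illustrative, not the paper's exact algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.gumbel(size=100_000))        # e.g. annual-maximum-like sample
    n, k, B = x.size, 1_000, 2_000
    top = x[-k:]

    # Expensive: full non-parametric bootstrap of the sample maximum
    full = [rng.choice(x, n).max() for _ in range(200)]

    # Cheap: draw how many points fall in the top-k, then sample only from there
    m = rng.binomial(n, k / n, size=B)
    sub = [rng.choice(top, mi).max() if mi else x[-k - 1] for mi in m]  # mi=0 is rare

    print(np.percentile(full, [5, 95]), np.percentile(sub, [5, 95]))   # nearly identical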
Assessing the Non-Timber Value of Forests: A Revealed-Preference, Hedonic Model
Riccardo Scarpa; Joseph Buongiorno; Jiin-Shyang Hseu; Karen Lee Abt
2000-01-01
Based on revealed preference theory, the value of non-timber goods and services obtained by forest owners, private or public, should be at least equal to the difference between the value of what they could have cut had they tried to maximize timber revenues, and of what they actually cut. This definition was applied to estimate the non-timber value (NTV) of...
Estimation of leaf area index using WorldView-2 and Aster satellite image: a case study from Turkey.
Günlü, Alkan; Keleş, Sedat; Ercanlı, İlker; Şenyurt, Muammer
2017-10-04
The objective of this study is to estimate the leaf area index (LAI) of a forest ecosystem using two different satellite images, WorldView-2 and Aster. For this purpose, 108 sample plots were taken from pure Crimean pine forest stands of the Yenice Forest Management Planning Unit in the Ilgaz Forest Management Enterprise, Turkey. Hemispherical photographs of each sample plot were taken with a fish-eye camera to determine the LAI. These photographs were analyzed with the help of the Hemisfer Hemiview software program, and thus the LAI of each sample plot was estimated. Furthermore, multiple regression analysis was used to model the statistical relationships between the LAI values and the spectral reflectance values of the bands and some vegetation indices (VIs) obtained from the satellite images. The results show that the high-resolution WorldView-2 satellite image is better than the medium-resolution Aster satellite image at predicting the LAI. The results obtained using the VIs were also better than those using the bands when the LAI value was predicted from satellite images.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
In this paper, neurodynamic programming-based output feedback boundary control of distributed parameter systems governed by uncertain coupled semilinear parabolic partial differential equations (PDEs) under Neumann or Dirichlet boundary control conditions is introduced. First, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the original PDE domain and the optimal control policy is derived using the value functional as the solution of the HJB equation. Subsequently, a novel observer is developed to estimate the system states given the uncertain nonlinearity in the PDE dynamics and the measured outputs. Consequently, the suboptimal boundary control policy is obtained by forward-in-time estimation of the value functional using a neural network (NN)-based online approximator and the estimated state vector obtained from the NN observer. Novel adaptive tuning laws in continuous time are proposed for learning the value functional online to satisfy the HJB equation along system trajectories while ensuring closed-loop stability. Local uniform ultimate boundedness of the closed-loop system is verified using Lyapunov theory. The performance of the proposed controller is verified via simulation on an unstable coupled diffusion-reaction process.
Albedo estimation using near infrared photography at Glaciar Norte of Citlaltepetl Volcano (Mexico).
NASA Astrophysics Data System (ADS)
Ontiveros, Guillermo; Delgado-Granados, Hugo
2015-04-01
In this work we show preliminary results of applying the methodology proposed by Corripio (2004) for albedo estimation of a glacial surface using oblique photography. The analysis was performed for the Glaciar Norte of Citlaltepetl volcano (Mexico), using images obtained with a digital camera modified to capture the portion of the near-infrared spectrum starting at 950 nm, together with a digital elevation model with a 2 m grid. The main goal was to obtain a picture of the spatial distribution of albedo on the glacier, in order to look for morphological evidence of its influence on the glacier energy balance. The results show a spatial distribution with comparatively higher albedo values in the lower parts of the glacier than in the higher parts. The differing values may correspond to different metamorphism of snow/ice at different heights due to the effects of lower slope. Corripio, J. G. (2004). Snow surface albedo estimation using terrestrial photography. International Journal of Remote Sensing, 25(24), 5705-5729.
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. The proposed technique is shown to be viable and to perform satisfactorily, using a conservative value for the field size and crop statistics at the small political subdivision level, when the estimated stratum variances are compared to those obtained using the LANDSAT data.
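The stratum variances feed directly into an optimum (Neyman) allocation; a sketch with hypothetical stratum sizes and standard-deviation estimates:

    import numpy as np

    def neyman_allocation(n_total, stratum_sizes, stratum_sds):
        """Optimum sample allocation: n_h proportional to N_h * S_h."""
        w = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
        return np.rint(n_total * w / w.sum()).astype(int)

    # Hypothetical: three wheat strata with sizes N_h and sd estimates S_h
    print(neyman_allocation(100, [500, 300, 200], [12.0, 20.0, 8.0]))  # [44 44 12]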
Environmental degradation and remediation: is economics part of the problem?
Dore, Mohammed H I; Burton, Ian
2003-01-01
It is argued that standard environmental economics and 'ecological economics' have the same fundamentals of valuation in terms of money, based on a demand curve derived from utility maximization. But this approach leads to three different measures of value. An invariant measure of value exists only if the consumer has 'homothetic preferences'. In order to obtain a numerical estimate of value, specific functional forms are necessary, but typically these estimates do not converge. This is because the underlying economic model is not structurally stable. According to neoclassical economics, any environmental remediation can be justified only in terms of increases in consumer satisfaction, balancing marginal gains against marginal costs. It is not surprising that the optimal policy obtained from this approach suggests only small reductions in greenhouse gases. We show that a unidimensional metric of consumer's utility measured in dollar terms can only trivialize the problem of global climate change.
Form of prior for constrained thermodynamic processes with uncertainty
NASA Astrophysics Data System (ADS)
Aneja, Preety; Johal, Ramandeep S.
2015-05-01
We consider quasi-static thermodynamic processes with constraints, but with additional uncertainty about the control parameters. Motivated by inductive reasoning, we assign a prior distribution that provides a rational guess about likely values of the uncertain parameters. The priors are derived explicitly for both entropy-conserving and energy-conserving processes. The proposed form is useful when the constraint equation cannot be treated analytically. The inference is performed using spin-1/2 systems as models for heat reservoirs. Analytical results are derived in the high-temperature limit. Agreement beyond linear response is found between the estimates of thermal quantities and their optimal values obtained from extremum principles. We also seek an intuitive interpretation for the prior and the estimated value of temperature obtained from it. We find that the prior over temperature becomes uniform over the quantity kept conserved in the process.
Composite Multilinearity, Epistemic Uncertainty and Risk Achievement Worth
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Borgonovo; C. L. Smith
2012-10-01
Risk Achievement Worth (RAW) is one of the most widely utilized importance measures. RAW is defined as the ratio of the risk metric value attained when a component has failed to the base case value of the risk metric. Traditionally, both the numerator and denominator are point estimates. Relevant literature has shown that inclusion of epistemic uncertainty (i) induces notable variability in the point estimate ranking and (ii) causes the expected value of the risk metric to differ from its nominal value. We obtain the conditions under which equality holds between the nominal and expected values of a reliability risk metric. Among these conditions, separability and state-of-knowledge independence emerge. We then study how the presence of epistemic uncertainty affects RAW and the associated ranking. We propose an extension of RAW (called ERAW) which allows one to obtain a ranking robust to epistemic uncertainty. We discuss the properties of ERAW and the conditions under which it coincides with RAW. We apply our findings to a probabilistic risk assessment model developed for the safety analysis of NASA lunar space missions.
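A sketch of the point-estimate RAW, plus an epistemic variant that averages the risk metric over sampled parameters before taking the ratio; the toy risk function and the averaging scheme are illustrative, not the paper's ERAW definition:

    import numpy as np

    def risk(q):
        """Toy multilinear risk metric: common component q[0] in series with
        a two-train parallel pair, R = q0 + (1 - q0) * q1 * q2."""
        return q[0] + (1 - q[0]) * q[1] * q[2]

    def raw(q, i):
        """Risk Achievement Worth: risk with component i failed / base risk."""
        q_failed = list(q)
        q_failed[i] = 1.0
        return risk(q_failed) / risk(q)

    print(raw([0.01, 0.05, 0.05], 0))             # point-estimate RAW

    # Epistemic variant: average numerator and denominator over sampled q's
    rng = np.random.default_rng(0)
    qs = rng.lognormal(mean=np.log([0.01, 0.05, 0.05]), sigma=0.5, size=(10_000, 3))
    qs = np.clip(qs, 0, 1)
    num = np.mean([risk([1.0, q1, q2]) for _, q1, q2 in qs])
    den = np.mean([risk(q) for q in qs])
    print(num / den)                               # ranking robust to uncertainty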
Evaluation of site effects in Loja basin (southern Ecuador)
NASA Astrophysics Data System (ADS)
Guartán, J.; Navarro, M.; Soto, J.
2013-05-01
Site effect assessment based on subsurface ground conditions is often crucial for estimating urban seismic hazard. In order to evaluate the site effects in the intra-mountain basin of Loja (southern Ecuador), a geological and geomorphological survey and ambient noise measurements were carried out. A classification of shallow geologic materials was performed through geological cartography and the use of geotechnical data and geophysical surveys. Seven lithological formations were analyzed, in terms of both composition and thickness of the existing materials. The shear-wave velocity structure in the center of the basin, composed of alluvial materials, was evaluated by means of inversion of Rayleigh wave dispersion data obtained from vertical-component array records of ambient noise. The VS30 structure was estimated and an average value of 346 m s-1 was obtained. This value agrees with the results obtained from SPT N-values (306-368 m s-1). Short-period ambient noise observations were performed at 72 sites on a 500 m × 500 m grid. The horizontal-to-vertical spectral ratio (HVSR) method was applied in order to determine a ground predominant period distribution map. This map reveals an irregular distribution of predominant period values, ranging from 0.1 to 1.0 s, in accordance with the heterogeneity of the basin. Lower period values are found in the harder Quillollaco formation, while higher values are predominantly obtained in the alluvial formation. These results will be used in the evaluation of ground dynamic properties and will be included in the seismic microzoning of the Loja basin. Keywords: Landform classification, Ambient noise, SPAC method, Rayleigh waves, Shear velocity profile, Ground predominant period.
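The HVSR computation itself is compact; a sketch on synthetic three-component noise using Welch spectra, with a resonance filter on the horizontal components standing in for a real site response:

    import numpy as np
    from scipy import signal

    fs = 100.0                                     # sampling rate, Hz
    rng = np.random.default_rng(0)
    z = rng.normal(size=60_000)                    # vertical component
    b, a = signal.iirpeak(2.0, Q=5.0, fs=fs)       # synthetic 2 Hz site resonance
    north = signal.lfilter(b, a, rng.normal(size=60_000)) + rng.normal(size=60_000)
    east = signal.lfilter(b, a, rng.normal(size=60_000)) + rng.normal(size=60_000)

    f, pz = signal.welch(z, fs, nperseg=4096)
    _, pn = signal.welch(north, fs, nperseg=4096)
    _, pe = signal.welch(east, fs, nperseg=4096)
    hvsr = np.sqrt((pn + pe) / 2.0 / pz)           # H/V spectral ratio

    band = (f > 0.5) & (f < 20.0)
    f0 = f[band][np.argmax(hvsr[band])]
    print(f0, 1.0 / f0)                            # predominant frequency and period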
Extreme Mean and Its Applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.
1979-01-01
Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
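For the standard normal, the extreme mean (the mean above the p-th quantile) has a closed form that is easy to check by simulation; a minimal sketch:

    import numpy as np
    from scipy.stats import norm

    def extreme_mean(p):
        """Mean of a standard normal truncated to its upper (1 - p) tail:
        E[X | X > z_p] = phi(z_p) / (1 - p)."""
        z = norm.ppf(p)
        return norm.pdf(z) / (1.0 - p)

    p = 0.99
    x = np.random.default_rng(0).normal(size=2_000_000)
    print(extreme_mean(p), x[x > norm.ppf(p)].mean())   # should agree closely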
Rohani, S Alireza; Ghomashchi, Soroush; Agrawal, Sumit K; Ladak, Hanif M
2017-03-01
Finite-element models of the tympanic membrane are sensitive to the Young's modulus of the pars tensa. The aim of this work is to estimate the Young's modulus under a different experimental paradigm than currently used on the human tympanic membrane. These additional values could potentially be used by the auditory biomechanics community for building consensus. The Young's modulus of the human pars tensa was estimated through inverse finite-element modelling of an in-situ pressurization experiment. The experiments were performed on three specimens with a custom-built pressurization unit at a quasi-static pressure of 500 Pa. The shape of each tympanic membrane before and after pressurization was recorded using a Fourier transform profilometer. The samples were also imaged using micro-computed tomography to create sample-specific finite-element models. For each sample, the Young's modulus was then estimated by numerically optimizing its value in the finite-element model so simulated pressurized shapes matched experimental data. The estimated Young's modulus values were 2.2 MPa, 2.4 MPa and 2.0 MPa, and are similar to estimates obtained using in-situ single-point indentation testing. The estimates were obtained under the assumptions that the pars tensa is linearly elastic, uniform, isotropic with a thickness of 110 μm, and the estimates are limited to quasi-static loading. Estimates of pars tensa Young's modulus are sensitive to its thickness and inclusion of the manubrial fold. However, they do not appear to be sensitive to optimization initialization, height measurement error, pars flaccida Young's modulus, and tympanic membrane element type (shell versus solid).
[Value of the tritium test for determining the fat content in the body of rats].
Pisarchuk, K L
1990-01-01
An indirect method for estimation of the fat percentage in the animal organism, the tritium test, was studied in male laboratory rats aged 4 and 12 months. Results obtained from the tritium test and from direct chemical analysis were compared. With age, the mean absolute error of the tritium test increased (from 1 to 8%) relative to the actual values of water and fat percentage in the organism obtained by direct chemical analysis. The data indicate the limited reliability of the tritium test and the need for additional investigations to obtain adequate data.
NASA Astrophysics Data System (ADS)
Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu
2015-12-01
Solar radiation values are recorded by automatic weather stations using a device called a pyranometer, which logs all incident radiation; these data are very useful for experimental work and for the development of solar devices. In addition, complete observational data are needed for modeling and designing solar radiation system applications. Unfortunately, gaps in the solar radiation record occur frequently due to several technical problems, mainly contributed by the monitoring device. To counter this, missing values are estimated in an effort to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, namely linear, spline, cubic, and nearest-neighbor interpolation, for dealing with missing values in hourly solar radiation data. As an extension, it investigates the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
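The piecewise techniques compared above are available directly in NumPy/SciPy; the sketch below removes values from a synthetic hourly series and scores two of the imputers (Bezier and Said-Ball variants are not in standard libraries and are omitted):

    import numpy as np
    from scipy.interpolate import CubicSpline

    rng = np.random.default_rng(0)
    t = np.arange(24 * 30)                                   # 30 days, hourly
    radiation = 800 * np.sin(np.pi * (t % 24) / 24)          # synthetic daily curve
    missing = rng.random(t.size) < 0.05                      # knock out 5% of values
    obs_t, obs_v = t[~missing], radiation[~missing]

    linear = np.interp(t[missing], obs_t, obs_v)             # piecewise linear
    cubic = CubicSpline(obs_t, obs_v)(t[missing])            # cubic spline

    for name, est in [('linear', linear), ('cubic', cubic)]:
        rmse = np.sqrt(np.mean((est - radiation[missing]) ** 2))
        print(name, rmse)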
Uncertainty estimation of water levels for the Mitch flood event in Tegucigalpa
NASA Astrophysics Data System (ADS)
Fuentes Andino, D. C.; Halldin, S.; Lundin, L.; Xu, C.
2012-12-01
Hurricane Mitch in 1998 caused a devastating flood in Tegucigalpa, the capital city of Honduras. Simulation of elevated water surfaces provides a good way to understand the hydraulic mechanism of large flood events. In this study the one-dimensional HEC-RAS model for steady flow conditions, together with the two-dimensional Lisflood-fp model, was used to estimate the water level of the Mitch event in the river reaches at Tegucigalpa. Parameter uncertainty of the models was investigated using the generalized likelihood uncertainty estimation (GLUE) framework. Because of the extremely large magnitude of the Mitch flood, no hydrometric measurements were taken during the event. However, post-event indirect measurements of discharge and observed water levels were obtained in previous works by JICA and USGS. To overcome the lack of direct hydrometric measurements, uncertainty in the discharge was estimated. Both models constrained the value of channel roughness well, though more dispersion resulted for the floodplain value. Analysis of the data interaction showed a tradeoff between discharge at the outlet and floodplain roughness for the 1D model. The estimated discharge range at the outlet of the study area encompassed the value indirectly estimated by JICA; however, the indirect method used by the USGS overestimated the value. If behavioral parameter sets can reproduce water surface levels well for past events such as Mitch, more reliable predictions for future events can be expected. The results of this research provide guidelines for modeling past floods when no direct data were measured during the event, and for predicting future large events taking uncertainty into account. The resulting range of the uncertain flood extent will be useful for decision makers.
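A toy GLUE loop conveys the mechanics: sample roughness parameters, run a simplified water-level model (here a Manning normal-depth formula for a wide rectangular channel), keep behavioral parameter sets, and form uncertainty bounds; all numbers are illustrative, not the Tegucigalpa setup:

    import numpy as np

    def water_level(n_manning, discharge, width=80.0, slope=0.002):
        """Normal-depth stage for a wide rectangular channel (Manning):
        h = (n * Q / (B * sqrt(S)))^(3/5)."""
        return (n_manning * discharge / (width * np.sqrt(slope))) ** 0.6

    rng = np.random.default_rng(0)
    obs = water_level(0.045, 1500.0) + rng.normal(0, 0.05, 10)  # synthetic flood marks

    n_samp = rng.uniform(0.02, 0.10, 20_000)                    # prior on roughness
    sim = water_level(n_samp, 1500.0)
    sse = ((sim[:, None] - obs) ** 2).sum(axis=1)
    like = np.exp(-sse / sse.min())                             # informal likelihood
    behavioral = like > 0.3                                     # GLUE acceptance threshold

    lo, hi = np.percentile(sim[behavioral], [5, 95])
    print(behavioral.sum(), lo, hi)                             # bounds on water level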
Kawaguchi, Hiroyuki; Hashimoto, Hideki; Matsuda, Shinya
2012-09-22
The casemix-based payment system has been adopted in many countries, although it often needs complementary adjustment to take account of each hospital's unique production structure, such as teaching and research duties and non-profit motives. It has been challenging to numerically evaluate the impact of such structural heterogeneity on production separately from production inefficiency. The current study adopted stochastic frontier analysis and proposed a method to assess unique components of hospital production structures using a fixed-effect variable. There were two stages of analysis in this study. In the first stage, we estimated the efficiency score from the hospital production function using a true fixed-effect model (TFEM) in stochastic frontier analysis. The use of a TFEM allowed us to differentiate the unobserved heterogeneity of individual hospitals as hospital-specific fixed effects. In the second stage, we regressed the obtained fixed-effect variable on structural components of the hospitals to test whether the variable was explicitly related to the characteristics and local disadvantages of the hospitals. In the first analysis, the estimated efficiency score was approximately 0.6. The mean value of the fixed-effect estimator was 0.784, the standard deviation was 0.137, and the range was between 0.437 and 1.212. The second-stage regression confirmed that the value of the fixed effect was significantly correlated with advanced technology and local conditions of the sample hospitals. The obtained fixed-effect estimator may reflect hospitals' unique production structures, taking production inefficiency into account. The values of the fixed-effect estimators can be used as evaluation tools to improve fairness in the reimbursement system for various functions of hospitals based on casemix classification.
NASA Astrophysics Data System (ADS)
Okamoto, R. J.; Clayton, E. H.; Bayly, P. V.
2011-10-01
Magnetic resonance elastography (MRE) is used to quantify the viscoelastic shear modulus, G*, of human and animal tissues. Previously, values of G* determined by MRE have been compared to values from mechanical tests performed at lower frequencies. In this study, a novel dynamic shear test (DST) was used to measure G* of a tissue-mimicking material at higher frequencies for direct comparison to MRE. A closed-form solution, including inertial effects, was used to extract G* values from DST data obtained between 20 and 200 Hz. MRE was performed using cylindrical 'phantoms' of the same material in an overlapping frequency range of 100-400 Hz. Axial vibrations of a central rod caused radially propagating shear waves in the phantom. Displacement fields were fit to a viscoelastic form of Navier's equation using a total least-squares approach to obtain local estimates of G*. DST estimates of the storage G' (Re[G*]) and loss modulus G'' (Im[G*]) for the tissue-mimicking material increased with frequency from 0.86 to 0.97 kPa (20-200 Hz, n = 16), while MRE estimates of G' increased from 1.06 to 1.15 kPa (100-400 Hz, n = 6). The loss factor (Im[G*]/Re[G*]) also increased with frequency for both test methods: 0.06-0.14 (20-200 Hz, DST) and 0.11-0.23 (100-400 Hz, MRE). Close agreement between MRE and DST results at overlapping frequencies indicates that G* can be locally estimated with MRE over a wide frequency range. Low signal-to-noise ratio, long shear wavelengths and boundary effects were found to increase residual fitting error, reinforcing the use of an error metric to assess confidence in local parameter estimates obtained by MRE.
Bedewi, Mohamed Abdelmohsen; Abodonya, Ahmed; Kotb, Mamdouh; Kamal, Sanaa; Mahmoud, Gehan; Aldossari, Khaled; Alqabbani, Abdullah; Swify, Sherine
2018-03-01
The objective of this study is to estimate reference values for the lower limb peripheral nerves in adults. The demographics and physical characteristics of 69 adult healthy volunteers were evaluated and recorded. The estimated reference values and their correlations with age, weight, height, and body mass index (BMI) were evaluated. The cross-sectional area (CSA) reference values were obtained at 5 predetermined sites for 3 important lower limb peripheral nerves. Our CSA values correlated significantly with age, weight, and BMI. The normal reference values for each nerve were as follows: tibial nerve at the popliteal fossa 19 ± 6.9 mm², tibial nerve at the level of the medial malleolus 12.7 ± 4.5 mm², common peroneal nerve at the popliteal fossa 9.5 ± 4 mm², common peroneal nerve at the fibular head 8.9 ± 3.2 mm², and sural nerve 3.5 ± 1.4 mm². The reference values for the lower limb peripheral nerves were identified. These values could be used for the future management of peripheral nerve disorders.
Hevesi, Joseph A.; Istok, Jonathan D.; Flint, Alan L.
1992-01-01
Values of average annual precipitation (AAP) are desired for hydrologic studies within a watershed containing Yucca Mountain, Nevada, a potential site for a high-level nuclear-waste repository. Reliable values of AAP are not yet available for most areas within this watershed because of a sparsity of precipitation measurements and the need to obtain measurements over a sufficient length of time. To estimate AAP over the entire watershed, historical precipitation data and station elevations were obtained from a network of 62 stations in southern Nevada and southeastern California. Multivariate geostatistics (cokriging) was selected as an estimation method because of a significant (p = 0.05) correlation of r = 0.75 between the natural log of AAP and station elevation. A sample direct variogram for the transformed variable, TAAP = ln[(AAP) × 1000], was fitted with an isotropic, spherical model defined by a small nugget value of 5000, a range of 190 000 ft, and a sill value equal to the sample variance of 163 151. Elevations for 1531 additional locations were obtained from topographic maps to improve the accuracy of cokriged estimates. A sample direct variogram for elevation was fitted with an isotropic model consisting of a nugget value of 5500 and three nested transition structures: a Gaussian structure with a range of 61 000 ft, a spherical structure with a range of 70 000 ft, and a quasi-stationary, linear structure. The use of an isotropic, stationary model for elevation was considered valid within a sliding-neighborhood radius of 120 000 ft. The problem of fitting a positive-definite, nonlinear model of coregionalization to an inconsistent sample cross variogram for TAAP and elevation was solved by a modified use of the Cauchy-Schwarz inequality. A selected cross-variogram model consisted of two nested structures: a Gaussian structure with a range of 61 000 ft and a spherical structure with a range of 190 000 ft. Cross validation was used for model selection and for comparing the geostatistical model with six alternate estimation methods. Multivariate geostatistics provided the best cross-validation results.
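The fitted spherical model has a simple closed form, shown below with the nugget, range, and sill quoted above for the TAAP direct variogram; the function itself is the standard textbook definition, not code from the study.

import numpy as np

def spherical_variogram(h, nugget, sill, a):
    # Isotropic spherical model: gamma(0) = 0, rises to the sill at range a.
    h = np.asarray(h, dtype=float)
    c = sill - nugget  # partial sill
    g = nugget + c * (1.5 * (h / a) - 0.5 * (h / a) ** 3)
    return np.where(h == 0.0, 0.0, np.where(h >= a, sill, g))

# Parameters quoted above for the TAAP direct variogram (distances in feet):
lags = np.array([0.0, 50_000.0, 190_000.0, 250_000.0])
print(spherical_variogram(lags, nugget=5000.0, sill=163_151.0, a=190_000.0))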
Estimating sediment discharge: Appendix D
Gray, John R.; Simões, Francisco J. M.
2008-01-01
Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with different types of bed-load samplers may not be comparable (Gray et al. 1991; Childers 1999; Edwards and Glysson 1999). The total suspended solids (TSS) analytical method tends to produce concentration data from open-channel flows that are biased low with respect to their paired suspended-sediment concentration values, particularly when sand-size material composes more than about a quarter of the material in suspension. Instantaneous sediment-discharge values based on TSS data may differ from the more reliable product of suspended-sediment concentration values and the same water-discharge data by an order of magnitude (Gray et al. 2000; Bent et al. 2001; Glysson et al. 2000, 2001). An assessment of data comparability and reliability is an important first step in the estimation of sediment discharges. There are two approaches to obtaining values describing sediment loads in streams. One is based on direct measurement of the quantities of interest, and the other on relations developed between hydraulic parameters and sediment-transport potential. In the next sections, the most common techniques for both approaches are briefly addressed.
Uses and capabilities of electronic capacitance instruments for estimating standing herbage
P. O. Currie; M. J. Morris; D. L. Neal
1973-01-01
An electronic capacitance meter was used to estimate herbage yield from sown ranges in western USA. On an area in Arizona where the grass stand had been sown broadcast, an r² of 0.47 was obtained between the meter value and the oven-dry weight estimate. Excluding those plots with very large amounts of standing dead organic matter (OM), or very succulent plants...
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately, deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images, it is sometimes necessary to first enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, that the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and, since the observed data also assume finite values because of the low photon counts, that the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
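The per-pixel MAP rule described above reduces to maximizing the prior mass times the Poisson likelihood over the finite set of candidate values. The sketch below illustrates this with synthetic low-count data; for simplicity the prior is taken as known rather than estimated by the paper's constrained least-squares step, which is the assumption to note.

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

values = np.array([0, 2, 5, 10])          # finite set of candidate true values
prior = np.array([0.5, 0.2, 0.2, 0.1])    # prior pmf (assumed known here)
true = rng.choice(values, size=(64, 64), p=prior)
noisy = rng.poisson(true)                 # low-photon-count observation

# MAP per pixel: argmax over v of log p(v) + log Poisson(observed | v).
logpost = poisson.logpmf(noisy[..., None], values[None, None, :]) + np.log(prior)
restored = values[np.argmax(logpost, axis=-1)]
print("fraction of pixels restored exactly:", np.mean(restored == true))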
Isotherm, kinetic, and thermodynamic study of ciprofloxacin sorption on sediments.
Mutavdžić Pavlović, Dragana; Ćurković, Lidija; Grčić, Ivana; Šimić, Iva; Župan, Josip
2017-04-01
In this study, the equilibrium isotherms, kinetics and thermodynamics of ciprofloxacin sorption on seven sediments in a batch process were examined. The effects of contact time, initial ciprofloxacin concentration, temperature and ionic strength on the sorption process were studied. The Kd parameter from the linear sorption model was determined by linear regression analysis, while the Freundlich and Dubinin-Radushkevich (D-R) sorption models were applied to describe the equilibrium isotherms by linear and nonlinear methods. The estimated Kd values varied from 171 to 37,347 mL/g. The obtained values of E (the free energy estimated from the D-R isotherm model) were between 3.51 and 8.64 kJ/mol, which indicated a physical nature of ciprofloxacin sorption on the studied sediments. According to the n values obtained from the Freundlich isotherm model as a measure of sorption intensity (from 0.69 to 1.442), ciprofloxacin sorption on the sediments ranges from poor to moderately difficult. Kinetics data were best fitted by the pseudo-second-order model (R² > 0.999). Thermodynamic parameters including the Gibbs free energy (ΔG°), enthalpy (ΔH°) and entropy (ΔS°) were calculated to estimate the nature of ciprofloxacin sorption. The results suggested that sorption on the sediments was a spontaneous exothermic process.
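The linear-method fit of the Freundlich isotherm mentioned above works on the log-transformed form log qe = log KF + (1/n) log Ce; a minimal sketch with synthetic data follows (the KF and n values below are placeholders, not the study's):

import numpy as np

rng = np.random.default_rng(2)

Ce = np.linspace(0.05, 5.0, 12)                 # equilibrium conc., mg/L
KF_true, n_true = 250.0, 1.1                    # illustrative only
qe = KF_true * Ce ** (1.0 / n_true) * rng.lognormal(0.0, 0.05, Ce.size)

# Linearized Freundlich fit by least squares on the log-log form.
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
print(f"KF = {10 ** intercept:.1f}, n = {1.0 / slope:.2f}")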
NASA Astrophysics Data System (ADS)
Murakami, Hiroki; Watanabe, Tsuneo; Fukuoka, Daisuke; Terabayashi, Nobuo; Hara, Takeshi; Muramatsu, Chisako; Fujita, Hiroshi
2016-04-01
The word "Locomotive syndrome" has been proposed to describe the state of requiring care by musculoskeletal disorders and its high-risk condition. Reduction of the knee extension strength is cited as one of the risk factors, and the accurate measurement of the strength is needed for the evaluation. The measurement of knee extension strength using a dynamometer is one of the most direct and quantitative methods. This study aims to develop a system for measuring the knee extension strength using the ultrasound images of the rectus femoris muscles obtained with non-invasive ultrasonic diagnostic equipment. First, we extract the muscle area from the ultrasound images and determine the image features, such as the thickness of the muscle. We combine these features and physical features, such as the patient's height, and build a regression model of the knee extension strength from training data. We have developed a system for estimating the knee extension strength by applying the regression model to the features obtained from test data. Using the test data of 168 cases, correlation coefficient value between the measured values and estimated values was 0.82. This result suggests that this system can estimate knee extension strength with high accuracy.
NASA Astrophysics Data System (ADS)
Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens
2018-02-01
Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
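The common-factor correction described above amounts to multiplying the weighted least-squares uncertainty by the square root of the reduced chi-squared (often called the Birge ratio) when it exceeds one. A minimal sketch for the constant-model case, with illustrative numbers rather than the actual CODATA values:

import numpy as np

# Illustrative measured values y with quoted uncertainties u (arbitrary units).
y = np.array([6.62606957, 6.62607004, 6.62607015])
u = np.array([0.00000029, 0.00000081, 0.00000069])

w = 1.0 / u ** 2
est = np.sum(w * y) / np.sum(w)          # WLS estimate of the constant
u_est = np.sqrt(1.0 / np.sum(w))         # uncertainty assuming consistency

chi2 = np.sum(w * (y - est) ** 2)        # consistency check
dof = y.size - 1
scale = np.sqrt(max(chi2 / dof, 1.0))    # rescale only when chi2/dof > 1
print(f"estimate = {est:.8f}, corrected uncertainty = {scale * u_est:.8f}")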
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna
2013-11-15
Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to about several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software. The results were statistically treated to estimate the significance of the difference in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano or parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and Rietveld refinement employed by a series of software (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that despite the fact that the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). It was found that there was a good correlation in size only for crystallites smaller than 50–60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods. • The obtained results were subject to statistical treatment. • Results obtained with Bragg–Brentano and parallel-beam geometries were compared. • The influence of the conditions of XRD pattern acquisition on the results was estimated. • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
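For reference, direct use of the Scherrer equation mentioned above takes the form D = K·λ/(β·cos θ); a minimal sketch follows (the Cu K-alpha wavelength and shape factor K = 0.9 are assumed, and instrumental broadening is taken as already subtracted):

import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    # D = K * lambda / (beta * cos(theta)), with beta in radians.
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Example: peak at 2-theta = 38.2 deg with FWHM = 0.25 deg.
print(f"D = {scherrer_size(0.25, 38.2):.1f} nm")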
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.
Park, Haejun; Rangwala, Ali S; Dembsey, Nicholas A
2009-08-30
A method to estimate thermal and kinetic parameters of Pittsburgh seam coal subject to thermal runaway is presented using the standard ASTM E 2021 hot surface ignition test apparatus. The parameters include the thermal conductivity (k), the activation energy (E), and the coupled term (QA) of the heat of reaction (Q) and the pre-exponential factor (A), which are required, but rarely known, input values for determining the thermal runaway propensity of a dust material. Four different dust layer thicknesses, 6.4, 12.7, 19.1 and 25.4 mm, are tested, and among them a single steady-state temperature profile of the 12.7 mm thick dust layer is used to estimate k, E and QA. k is calculated by equating the heat flux from the hot surface layer and the heat loss rate on the boundary, assuming negligible heat generation in the coal dust layer at a low hot surface temperature. E and QA are calculated by optimizing a numerically estimated steady-state dust layer temperature distribution against the experimentally obtained temperature profile of the 12.7 mm thick dust layer. The two unknowns, E and QA, are reduced to one from the correlation of E and QA obtained at the criticality of thermal runaway. The estimated k is 0.1 W/(m K), matching the previously reported value. E ranges from 61.7 to 83.1 kJ/mol, and the corresponding QA ranges from 1.7 × 10⁹ to 4.8 × 10¹¹ J/(kg s). The mean values of E (72.4 kJ/mol) and QA (2.8 × 10¹⁰ J/(kg s)) are used to predict the critical hot surface temperatures for the other thicknesses, and good agreement is observed between predicted and measured values. Also, the estimated E and QA ranges match the corresponding ranges calculated from the multiple tests method and values reported in previous research.
A dye-binding assay for measurement of the binding of Cu(II) to proteins.
Wilkinson-White, Lorna E; Easterbrook-Smith, Simon B
2008-10-01
We analysed the theory of the coupled equilibria between a metal ion, a metal ion-binding dye and a metal ion-binding protein in order to develop a procedure for estimating the apparent affinity constant of a metal ion:protein complex. This can be done by analysing measurements of the change in the concentration of the metal ion:dye complex with variation in the concentration of either the metal ion or the protein. Using experimentally determined values for the affinity constant of Cu(II) for the dye 2-(5-bromo-2-pyridylazo)-5-(N-propyl-N-sulfopropylamino)aniline (5-Br-PSAA), this procedure was used to estimate the apparent affinity constants for formation of Cu(II):transthyretin, yielding values which were in agreement with literature values. An apparent affinity constant for Cu(II) binding to alpha-synuclein of approximately 1 × 10⁹ M⁻¹ was obtained from measurements of tyrosine fluorescence quenching by Cu(II). This value was in good agreement with that obtained using 5-Br-PSAA. Our analysis and data therefore show that measurement of the changes in the equilibria between Cu(II) and 5-Br-PSAA caused by Cu(II)-binding proteins provides a general procedure for estimating the affinities of proteins for Cu(II).
Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.
King, Leandra; Wakeley, John
2016-09-01
We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and it has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
Quantitative estimation of film forming polymer-plasticizer interactions by the Lorentz-Lorenz Law.
Dredán, J; Zelkó, R; Dávid, A Z; Antal, I
2006-03-09
Molar refraction, like refractive index, has many uses. Beyond confirming the identity and purity of a compound and aiding the determination of molecular structure and molecular weight, molar refraction is also used in other estimation schemes, such as for critical properties, surface tension, solubility parameter, molecular polarizability, and dipole moment. In the present study, molar refraction values of polymer dispersions were determined for the quantitative estimation of film-forming polymer-plasticizer interactions. The extent of interaction between the polymer and the plasticizer can be inferred from the calculated molar refraction values of film-forming polymer dispersions containing the plasticizer.
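For reference, the Lorentz-Lorenz law named in the title relates the molar refraction R_m to the refractive index n, the molar mass M and the density ρ:

\[
R_m = \frac{n^{2}-1}{n^{2}+2}\,\frac{M}{\rho}
\]

For a mixture, molar refraction is commonly treated as additive over the components, R_mix = Σ_i x_i R_i; deviations of measured values from this additive estimate are what make molar refraction usable as an interaction measure. (The additivity assumption is a standard approximation, not a detail stated in the abstract.)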
NASA Technical Reports Server (NTRS)
Berg, Wesley; Chase, Robert
1992-01-01
Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution describing the positive rainfall values with a spike at zero describing the observations indicating no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution by using a maximum-likelihood method. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ raingage data are performed.
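The mean of the mixed distribution described above is the rain probability times the lognormal mean, p·exp(μ + σ²/2). A minimal sketch of the maximum-likelihood fit for one grid bin, with synthetic rain rates:

import numpy as np

rng = np.random.default_rng(4)

# Synthetic instantaneous rain rates for one 1-degree bin: a spike at zero
# plus lognormal positive values.
rain = np.where(rng.random(5000) < 0.7, 0.0, rng.lognormal(0.5, 1.0, 5000))

positive = rain[rain > 0.0]
p = positive.size / rain.size          # estimated probability of rain
mu = np.mean(np.log(positive))         # lognormal ML estimates
sigma2 = np.var(np.log(positive))

mean_rain = p * np.exp(mu + 0.5 * sigma2)   # mean of the mixed distribution
print(f"p = {p:.2f}, mixed-distribution mean = {mean_rain:.2f}")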
DEKF system for crowding estimation by a multiple-model approach
NASA Astrophysics Data System (ADS)
Cravino, F.; Dellucca, M.; Tesei, A.
1994-03-01
A distributed extended Kalman filter (DEKF) network devoted to real-time crowding estimation for surveillance in complex scenes is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Feature values are associated by virtual sensors with the estimated number of people using nonlinear models obtained in an off-line training phase. Different models are used, depending on the positions and dimensions of the crowded subareas detected in each image.
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
Summary: We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. Rank consistency of, and prediction/estimation performance bounds for, the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
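The adaptively soft-thresholded SVD that solves the penalized problem shrinks each singular value by an amount that decreases with its size. The sketch below uses the common weight choice w_i = d_i^(-γ) (an assumption for illustration; the paper's framework allows other decreasing weights):

import numpy as np

def adaptive_svt(Y, lam, gamma=2.0):
    # Shrink singular values d_i by lam * d_i**(-gamma): small (noise)
    # singular values are thresholded to zero, large ones barely move.
    U, d, Vt = np.linalg.svd(Y, full_matrices=False)
    d_shrunk = np.maximum(d - lam * d ** (-gamma), 0.0)
    return (U * d_shrunk) @ Vt, d_shrunk

rng = np.random.default_rng(5)
L = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))  # rank-3 signal
Y = L + 0.5 * rng.normal(size=(50, 40))                  # noisy observation

Lhat, d_shrunk = adaptive_svt(Y, lam=400.0)
print("estimated rank:", int(np.sum(d_shrunk > 0)))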
Volatile organic compound (VOC) emissions during malting and beer manufacture
NASA Astrophysics Data System (ADS)
Gibson, Nigel B.; Costigan, Gavin T.; Swannell, Richard P. J.; Woodfield, Michael J.
Estimates have been made of the amounts of volatile organic compounds (VOCs) released during different stages of beer manufacture. The estimates are based on recent measurements and plant specification data supplied by manufacturers. Data were obtained for three main manufacturing processes (malting, wort processing and fermentation) for three commercial beer types. Some data on the speciation of the emitted compounds have been obtained. Based on these measurements, an estimate of the total unabated VOC emission from the U.K. brewing industry was calculated as 3.5 kt a⁻¹, over 95% of which was generated during barley malting. This value does not include any correction for air pollution control.
Boundary assessment under uncertainty: A case study
Pawlowsky, V.; Olea, R.A.; Davis, J.C.
1993-01-01
Estimating certain attributes within a geological body whose exact boundary is not known presents problems because of the lack of information. Estimation may result in values that are inadmissible from a geological point of view, especially with attributes which necessarily must be zero outside the boundary, such as the thickness of the oil column outside a reservoir. A simple but effective way to define the boundary is to use indicator kriging in two steps, the first for the purpose of extrapolating control points outside the body, the second to obtain a weighting function which expresses the uncertainty attached to estimations obtained in the boundary region. © 1993 International Association for Mathematical Geology.
Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G
2014-07-02
Diallel crossing methods provide information regarding the performance of genitors between themselves and in their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limited. One option regarding the number of parents involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the linear mixed model method to estimate, for the variable resistance to foliar fungal diseases, components of general and specific combining ability in a circulant diallel with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in classifying genitors by their combining abilities relative to complete diallels. The number of crosses in which each genitor composes the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.
Neural-Network Approach to Hyperspectral Data Analysis for Volcanic Ash Clouds Monitoring
NASA Astrophysics Data System (ADS)
Piscini, Alessandro; Ventress, Lucy; Carboni, Elisa; Grainger, Roy Gordon; Del Frate, Fabio
2015-11-01
In this study, three artificial neural networks (ANNs) were implemented in order to emulate a retrieval model and to estimate the ash aerosol optical depth (AOD), particle effective radius (reff) and cloud height of a volcanic eruption using hyperspectral remotely sensed data. The ANNs were trained using a selection of Infrared Atmospheric Sounding Interferometer (IASI) channels in the thermal infrared (TIR) as inputs, and the corresponding ash parameters obtained from the Oxford retrievals as target outputs. The retrieval is demonstrated for the eruption of the Eyjafjallajökull volcano (Iceland) that occurred in 2010. The validation yielded root mean square error (RMSE) values between neural network outputs and targets lower than the standard deviation (STD) of the corresponding target outputs, thereby demonstrating the feasibility of estimating volcanic ash parameters with an ANN approach, and its importance in near-real-time monitoring activities owing to its fast application. High accuracy was achieved for the reff and cloud height estimation, while accuracy decreased when applying the NN approach to AOD estimation, in particular for values not well characterized during the NN training phase.
Measurement of Refractive Indices of CdSiP2 at Temperatures from 90 to 450 K (Postprint)
2018-01-05
A Sellmeier equation was obtained, for the first time to our knowledge, over the temperature range 90 to 450 K. The index values were used to calculate the crystal...
Modelling of the UV Index on vertical and 40° tilted planes for different orientations.
Serrano, D; Marín, M J; Utrillas, M P; Tena, F; Martínez-Lozano, J A
2012-02-01
In this study, estimated data of the UV Index on vertical planes are presented for the latitude of Valencia, Spain. For that purpose, the UVER values on vertical planes were generated by means of four different geometrical models: (a) isotropic, (b) Perez, (c) Gueymard, and (d) Muneer, based on values of the global horizontal UVER and the diffuse horizontal UVER measured experimentally. The UVER values obtained by any of the models overestimate the experimental values for all orientations, with the exception of the Perez model for the East plane. The results show values of the MAD (Mean Absolute Deviation) statistic between 10% and 25%, with the Perez model yielding the lowest MAD at all levels. As for the RMSD (Root Mean Square Deviation) statistic, the results show values between 17% and 32%, and again the Perez model provides the best results in all vertical planes. The difference between the estimated UV Index and the experimental UV Index, for vertical and 40° tilted planes, was also calculated. 40° is an angle close to the latitude of Burjassot, Valencia (39.5°), which, according to various studies, is the optimum angle for capturing maximum radiation on tilted planes. We conclude that the models provide a good estimate of the UV Index, as they coincide with, or differ by one unit from, the experimental values in 99% of cases, and this is valid for all orientations. Finally, we examined the relation between the UV Index on vertical and 40° tilted planes, both experimental and estimated by the Perez model, and the experimental UV Index on a horizontal plane at 12 GMT. Based on the results, we can conclude that it is possible to estimate with good approximation the UV Index on vertical and 40° tilted planes in different directions on the basis of the experimental horizontal UVI value, thus justifying the interest of this study. This journal is © The Royal Society of Chemistry and Owner Societies 2012
Age Estimation of Infants Through Metric Analysis of Developing Anterior Deciduous Teeth.
Viciano, Joan; De Luca, Stefano; Irurita, Javier; Alemán, Inmaculada
2018-01-01
This study provides regression equations for estimation of age of infants from the dimensions of their developing deciduous teeth. The sample comprises 97 individuals of known sex and age (62 boys, 35 girls), aged between 2 days and 1,081 days. The age-estimation equations were obtained for the sexes combined, as well as for each sex separately, thus including "sex" as an independent variable. The values of the correlations and determination coefficients obtained for each regression equation indicate good fits for most of the equations obtained. The "sex" factor was statistically significant when included as an independent variable in seven of the regression equations. However, the "sex" factor provided an advantage for age estimation in only three of the equations, compared to those that did not include "sex" as a factor. These data suggest that the ages of infants can be accurately estimated from measurements of their developing deciduous teeth. © 2017 American Academy of Forensic Sciences.
Radiation exposure in interventional radiology
NASA Astrophysics Data System (ADS)
Pinto, N. G. V.; Braz, D.; Vallim, M. A.; Filho, L. G. P.; Azevedo, F. S.; Barroso, R. C.; Lopes, R. T.
2007-09-01
The aim of this study is to evaluate dose values in patients and staff involved in some interventional radiology procedures. Doses have been measured using thermoluminescent dosemeters for single procedures (such as renal and cerebral arteriography, transjugular intrahepatic portosystemic shunt (TIPS) and chemoembolization). The magnitude of the doses received by the hands of interventional radiologists has been studied. Dose levels were evaluated at three points for patients (eye, thyroid and gonads). The dose-area product (DAP) was also investigated using a Diamentor (PTW-M2). The dose to the extremities was estimated for a professional who generally performed one TIPS, two chemoembolizations, two cerebral arteriographies and two renal arteriographies in a week. The estimated annual radiation dose was converted to effective dose as suggested by the 453-MS/Brazil norm. The annual dose values were 137.25 mSv for doctors, 40.27 mSv for nurses and 51.95 mSv for auxiliary doctors, all below the established limit. The maximum doses obtained for patients were 6.91, 10.92 and 15.34 mGy close to the eye, thyroid and gonads, respectively. The DAP values were evaluated for patients in the same interventional radiology procedures. The dose and DAP values obtained are in agreement with values encountered in the literature.
Variability of the Bering Sea Circulation in the Period 1992-2010
2012-06-09
massive sources of data (satellite altimetry, Argo drifters) may improve the accuracy of these estimates in the near future. Large-scale... Combining these data with in situ observations of temperature, salinity and subsurface currents allowed obtaining increasingly accurate estimates ... al. (2006) estimated the Kamchatka Current transport of 24 Sv (1 Sv = 10⁶ m³/s), a value significantly higher than previous estimates of
Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients
Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong
2015-01-01
♦ Objectives: To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Methods: Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ Results: The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Conclusions: Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. PMID:26293839
The effect of land cover change to the biomass value in the forest region of West Java province
NASA Astrophysics Data System (ADS)
Rahayu, M. I.; Waryono, T.; Rokhmatullah; Shidiq, I. P. A.
2018-05-01
With climate change an issue of public concern, information on carbon stock availability plays an important role in describing the condition of forest ecosystems in the context of sustainable forest management. This study aims to identify land cover change over two decades (1996-2016) in the forest region of West Java Province and to estimate the value of forest carbon stocks there using remote sensing imagery. The land cover change information was obtained by visually interpreting Landsat images, while the estimation of the carbon stock value was performed using the NDVI (Normalized Difference Vegetation Index) transformation extracted from the Landsat images. The biomass value is calculated with existing allometric equations. The results of this study show that the forest area in the forest region of West Java Province has decreased from year to year, and the estimated value of forest carbon stock in the region has likewise decreased from year to year.
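The NDVI transformation used above is computed per pixel from the red and near-infrared reflectance bands; the biomass step then maps NDVI through a regression calibrated by allometric equations. In the sketch below the NDVI formula is standard, but the NDVI-to-biomass coefficients and the 0.47 carbon fraction are illustrative placeholders, not the study's equations.

import numpy as np

def ndvi(red, nir):
    # Normalized Difference Vegetation Index from reflectance bands.
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-12)

def biomass_t_per_ha(v, a=150.0, b=2.0):
    # Placeholder power-law NDVI-to-biomass regression (illustrative only).
    return a * np.clip(v, 0.0, 1.0) ** b

red = np.array([0.08, 0.10, 0.25])
nir = np.array([0.45, 0.40, 0.30])
v = ndvi(red, nir)
carbon = 0.47 * biomass_t_per_ha(v)  # commonly used carbon fraction (assumed)
print(np.round(v, 3), np.round(carbon, 1))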
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to variance estimation for wheat in the U.S. Great Plains and is evaluated on the basis of the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities, both for specific test conditions and for commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given, along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data, and more reliable, than those obtained in previous analyses. (author)
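The analytic form referred to above is the Type I (Gumbel) extreme-value distribution; fitting it to per-flight maxima and reading off exceedance probabilities can be sketched as follows (synthetic maxima and an illustrative threshold, not data from the report):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic per-flight maximum gust-load increments (in g units).
maxima = stats.gumbel_r.rvs(loc=0.6, scale=0.15, size=500, random_state=rng)

loc, scale = stats.gumbel_r.fit(maxima)        # maximum-likelihood fit
p_exceed = stats.gumbel_r.sf(1.2, loc, scale)  # P(flight max > 1.2 g)
print(f"loc = {loc:.3f}, scale = {scale:.3f}, "
      f"P(max > 1.2 g) = {p_exceed:.4f}, return period = {1/p_exceed:.0f} flights")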
Variability of dental cone beam CT grey values for density estimations
Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K
2013-01-01
Objective: The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods: A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg/cm³, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. The correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results: Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion: Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge: The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility of grey value calibration was thoroughly investigated. PMID:23255537
NASA Technical Reports Server (NTRS)
Wiesnet, D. R.; Mcginnis, D. F., Jr.; Matson, M. (Principal Investigator)
1978-01-01
The author has identified the following significant results. Estimates of soil moisture were obtained from visible, near-IR, gamma ray and microwave data. Attempts using GOES thermal-IR were unsuccessful because of its coarse resolution (8 km). Microwave data were the most effective for soil moisture estimation, with and without vegetative cover. Gamma rays provided only one value for the test site, produced from many data points obtained from overlapping 150 meter diameter circles. Even though the resulting averaged value was near the averaged field moisture value, this method suffers from atmospheric contaminants, the need to fly at low altitudes, and the necessity of prior calibration of a given site. Visible and near-IR relationships are present for bare fields but appear to be limited to soil moisture levels between 5 and 20%. The densely vegetated alfalfa fields correlated with near-IR reflectance only; soil moisture values from wheat fields showed no relation to either visible or near-IR MSS data.
Geoelectric hazard maps for the continental United States
NASA Astrophysics Data System (ADS)
Love, Jeffrey J.; Pulkkinen, Antti; Bedrosian, Paul A.; Jonas, Seth; Kelbert, Anna; Rigler, E. Joshua; Finn, Carol A.; Balch, Christopher C.; Rutledge, Robert; Waggel, Richard M.; Sabata, Andrew T.; Kozyra, Janet U.; Black, Carrie E.
2016-09-01
In support of a multiagency project for assessing induction hazards, we present maps of extreme-value geoelectric amplitudes over about half of the continental United States. These maps are constructed using a parameterization of induction: estimates of Earth surface impedance, obtained at discrete geographic sites from magnetotelluric survey data, are convolved with latitude-dependent statistical maps of extreme-value geomagnetic activity, obtained from decades of magnetic observatory data. Geoelectric amplitudes are estimated for geomagnetic waveforms having 240 s sinusoidal period and amplitudes over 10 min that exceed a once-per-century threshold. As a result of the combination of geographic differences in geomagnetic activity and Earth surface impedance, once-per-century geoelectric amplitudes span more than 2 orders of magnitude and are an intricate function of location. For north-south induction, once-per-century geoelectric amplitudes across large parts of the United States have a median value of 0.26 V/km; for east-west geomagnetic variation the median value is 0.23 V/km. At some locations, once-per-century geoelectric amplitudes exceed 3 V/km.
Estimation of soil hydraulic properties with microwave techniques
NASA Technical Reports Server (NTRS)
Oneill, P. E.; Gurney, R. J.; Camillo, P. J.
1985-01-01
Useful quantitative information about soil properties may be obtained by calibrating energy and moisture balance models with remotely sensed data. A soil physics model solves heat and moisture flux equations in the soil profile and is driven by the surface energy balance. Model generated surface temperature and soil moisture and temperature profiles are then used in a microwave emission model to predict the soil brightness temperature. The model hydraulic parameters are varied until the predicted temperatures agree with the remotely sensed values. This method is used to estimate values for saturated hydraulic conductivity, saturated matrix potential, and a soil texture parameter. The conductivity agreed well with a value measured with an infiltration ring and the other parameters agreed with values in the literature.
Rossi, Carla
2013-06-01
The size of the illicit drug market is an important indicator for assessing the impact on society of a significant part of the illegal economy and for evaluating drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods based on indirect measures and on data from various sources, such as administrative data sets and surveys. The combined use of several methodologies and data sets makes it possible to reduce the biases and inaccuracies of estimates obtained on the basis of each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods. Several data sets have been used, both administrative and survey data sets. First, the retail dealer prevalence was estimated on the basis of administrative data, then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost were estimated, allowing the size of the drug market to be estimated. The estimates were obtained using both a supply-side and a demand-side approach and were compared. These results are in turn used to estimate the interception rate for the different substances, in terms of the value of the substance seized with respect to the total value of the substance to be sold at retail prices.
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-12-01
We have previously proposed a method for estimating intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry), and we investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and perform experiments with turbid phantoms, as well as Monte Carlo simulation experiments, to investigate the influence of tissue scattering on SDF imaging. In the estimation method, we use modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel value profile along a line perpendicular to the blood vessel running direction in an SDF image, and correct the AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe that obtains three-band images by switching multicolor light-emitting diodes, and imaged turbid phantoms composed of agar powder, fat emulsion, and bovine blood-filled glass tubes. As a result, we found that increased scattering by the phantom body lowered the AECs. The experimental results showed that the use of suitable AEC values led to more accurate SO_2 estimation. We also confirmed the validity of the proposed correction method for improving the accuracy of the SO_2 estimation.
Wang, Frank; Pan, Kuang-Tse; Chu, Sung-Yu; Chan, Kun-Ming; Chou, Hong-Shiue; Wu, Ting-Jung; Lee, Wei-Chen
2011-04-01
An accurate preoperative estimate of the graft weight is vital to avoid small-for-size syndrome in the recipient and ensure donor safety after adult living donor liver transplantation (LDLT). Here we describe a simple method for estimating the graft volume (GV) that uses the maximal right portal vein diameter (RPVD) and the maximal left portal vein diameter (LPVD). Between June 2004 and December 2009, 175 consecutive donors undergoing right hepatectomy for LDLT were retrospectively reviewed. The GV was determined with 3 estimation methods: (1) the radiological graft volume (RGV) estimated by computed tomography (CT) volumetry; (2) the computed tomography-calculated graft volume (CGV-CT), which was obtained by multiplying the standard liver volume (SLV) by the RGV percentage with respect to the total liver volume derived from CT; and (3) the portal vein diameter ratio-calculated graft volume (CGV-PVDR), which was obtained by multiplying the SLV by the portal vein diameter ratio [PVDR; i.e., PVDR = RPVD²/(RPVD² + LPVD²)]. These values were compared to the actual graft weight (AGW), which was measured intraoperatively. The mean AGW was 633.63 ± 107.51 g, whereas the mean RGV, CGV-CT, and CGV-PVDR values were 747.83 ± 138.59, 698.21 ± 94.81, and 685.20 ± 90.88 cm³, respectively. All 3 estimation methods tended to overestimate the AGW (P < 0.001). The actual graft-to-recipient body weight ratio (GRWR) was 1.00% ± 0.19%, and the GRWRs calculated on the basis of the RGV, CGV-CT, and CGV-PVDR values were 1.19% ± 0.25%, 1.11% ± 0.22%, and 1.09% ± 0.21%, respectively. Overall, the CGV-PVDR values better correlated with the AGW and GRWR values according to Lin's concordance correlation coefficient and the Landis and Koch benchmark. In conclusion, the PVDR method is a simple estimation method that accurately predicts GVs and GRWRs in adult LDLT. Copyright © 2011 American Association for the Study of Liver Diseases.
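The PVDR estimate reduces to simple arithmetic once the SLV and the two diameters are known; a minimal sketch follows (the SLV and diameters below are illustrative, and graft weight is taken as numerically equal to graft volume, i.e., unit density is assumed):

def graft_volume_pvdr(slv_ml, rpvd_mm, lpvd_mm):
    # CGV-PVDR = SLV * PVDR, with PVDR = RPVD^2 / (RPVD^2 + LPVD^2).
    pvdr = rpvd_mm ** 2 / (rpvd_mm ** 2 + lpvd_mm ** 2)
    return slv_ml * pvdr

gv = graft_volume_pvdr(slv_ml=1200.0, rpvd_mm=9.0, lpvd_mm=7.0)
grwr = gv / (60.0 * 1000.0) * 100.0  # GRWR for a 60 kg recipient, in percent
print(f"estimated graft volume = {gv:.0f} mL, GRWR = {grwr:.2f}%")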
Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.
Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller
2015-01-01
An approach is proposed for estimating the uncertainty in measurement that considers the individual sources related to the different steps of the method under evaluation, as well as the uncertainties estimated from the validation data, for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance, and it has been accredited under the ISO/IEC 17025 norm since 2010. The estimate of the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least-squares regression, repeatability, intermediate precision, the correction factor of the analytical curve, sample mass, and the standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The estimate of uncertainty in measurement obtained in the present work reached a value of 13.39%, which complies with European Regulation EC 836/2011. This figure represents a very realistic estimate of routine conditions, since it fairly encompasses the dispersion obtained from the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on the calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of the uncertainty in measurement.
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
NASA Astrophysics Data System (ADS)
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require the assumption of constant Z over time; instead, the Z values in n consecutive years are assumed constant, and the Z values in different sets of n consecutive years are estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of the Z value and the trend of Z. The most appropriate value of n can differ depending on the effects of different factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis, as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if either contains error, but the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
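A hedged sketch of the windowed idea as described: Z is assumed constant within each block of n consecutive years and is estimated from the decline of each cohort's age-based CPUE across those years. The CPUE matrix below is synthetic and the estimator is a simple cohort log-ratio, not necessarily the paper's exact formulation.

```python
import numpy as np

# Synthetic age-based CPUE (years x ages) generated with a true Z of 0.4,
# lognormal recruitment per cohort and small observation noise.
rng = np.random.default_rng(0)
n_years, n_ages, true_z = 12, 8, 0.4
recruit = 1000.0 * rng.lognormal(0.0, 0.3, size=n_years + n_ages)
cpue = np.empty((n_years, n_ages))
for t in range(n_years):
    for a in range(n_ages):
        cohort = t - a + n_ages - 1          # non-negative cohort index
        cpue[t, a] = recruit[cohort] * np.exp(-true_z * a) * rng.lognormal(0.0, 0.05)

def z_in_window(cpue, start, n):
    """Mean ln-ratio of CPUE between successive ages of the same cohort."""
    ratios = [np.log(cpue[t, a] / cpue[t + 1, a + 1])
              for t in range(start, start + n - 1)
              for a in range(cpue.shape[1] - 1)]
    return float(np.mean(ratios))

print([round(z_in_window(cpue, s, 4), 2) for s in (0, 4, 8)])  # each ~0.4
```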
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections as demonstrated in this work.
USDA-ARS?s Scientific Manuscript database
Knowledge of the extent of the symptoms of a plant disease, generally referred to as severity, is key to both fundamental and applied aspects of plant pathology. Most commonly, severity is obtained visually and the accuracy of each estimate (closeness to the actual value) by individual raters is par...
Research on dynamic creep strain and settlement prediction under the subway vibration loading.
Luo, Junhui; Miao, Linchang
2016-01-01
This research aims to explore the dynamic characteristics and settlement prediction of soft soil. Accordingly, a dynamic shear modulus formula considering the vibration frequency was utilized, and a dynamic triaxial test was conducted to verify the validity of the formula. Subsequently, the formula was applied to the dynamic creep strain function, and the factors influencing the improved dynamic creep strain curve of soft soil were analyzed. Meanwhile, the variation law of dynamic stress with sampling depth was obtained through finite element simulation of the subway foundation. Furthermore, the improved dynamic creep strain curve of the soil layer was determined based on the dynamic stress. Thereafter, the long-term settlement under subway vibration loading could be estimated in accordance with the relevant norms. The results revealed that the dynamic shear modulus formula is straightforward and practical in terms of its application to the vibration frequency. The values predicted using the improved dynamic creep strain formula were close to the experimental values, whilst the estimated settlement was close to the measured values obtained in the field test.
A model for prediction of color change after tooth bleaching based on CIELAB color space
NASA Astrophysics Data System (ADS)
Herrera, Luis J.; Santana, Janiley; Yebra, Ana; Rivas, María. José; Pulgar, Rosa; Pérez, María. M.
2017-08-01
An experimental study aiming to develop a model based on the CIELAB color space for prediction of color change after a tooth bleaching procedure is presented. Multivariate linear regression models were obtained to predict the post-bleaching L*, a*, b* and W* values using the pre-bleaching L*, a* and b* values. Moreover, univariate linear regression models were obtained to predict the variation in chroma (C*), hue angle (h°) and W*. The results demonstrated that it is possible to estimate color change when using a carbamide peroxide tooth-bleaching system. The models obtained can be applied in the clinic to predict the color change after bleaching.
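A hedged sketch of the model form the abstract describes, a multivariate linear regression mapping pre-bleaching CIELAB values to post-bleaching ones; the training data and coefficients below are synthetic stand-ins, not the study's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: 40 teeth with pre-bleaching L*, a*, b* and
# simulated post-bleaching values; the study's fitted coefficients are not
# given in the abstract, so this fit is illustrative only.
rng = np.random.default_rng(0)
pre = rng.uniform([60.0, -2.0, 10.0], [85.0, 6.0, 35.0], size=(40, 3))
post = pre @ np.diag([0.9, 0.7, 0.6]) + [8.0, -1.0, -4.0] + rng.normal(0.0, 1.0, (40, 3))

model = LinearRegression().fit(pre, post)
print(model.predict([[72.0, 2.5, 24.0]]))  # predicted post-bleaching L*, a*, b*
```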
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT; Wald, 1945), in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
Synoptic scale wind field properties from the SEASAT SASS
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.; Sylvester, W. B.; Salfi, R. E.
1984-01-01
Dealiased SEASAT-A Scatterometer System (SASS) vector winds obtained during the Gulf of Alaska SEASAT Experiment (GOASEX) program are processed to obtain superobservations centered on a one degree by one degree grid. The results provide values for the combined effects of mesoscale variability and communication noise on the individual SASS winds. These superobservation winds are then processed further to obtain estimates of synoptic scale vector wind and wind stress fields, the horizontal divergence of the wind, the curl of the wind stress, and the vertical velocity at 200 m above the sea surface, each with appropriate standard deviations of the estimates for each grid point value. These fields also explain the concentration of water vapor, liquid water and precipitation found by means of the Scanning Multichannel Microwave Radiometer (SMMR) at fronts and occlusions in terms of strong warm, moist air advection in the warm air sector accompanied by convergence in the friction layer. Their quality is far superior to that of analyses based on conventional data, which are shown to yield many inconsistencies.
Tran, Hanh T M; Stephenson, Steven L; Tullis, Jason A
2015-01-01
The conventional method used to assess growth of the plasmodium of the slime mold Physarum polycephalum in solid culture is to measure the extent of plasmodial expansion from the point of inoculation by using a ruler. However, plasmodial growth is usually rather irregular, so the values obtained are not especially accurate. Similar challenges exist in quantifying the growth of a fungal mycelium. In this paper, we describe a method that uses geographic information system software to obtain highly accurate estimates of plasmodial growth over time. This approach calculates the plasmodial area from images obtained at particular intervals following inoculation. In addition, the correlation between plasmodial area and its dry cell weight was determined. This correlation can be used for biomass estimation without the need to terminate the cultures in question. The method described herein is simple but effective and could also be used for growth measurements of other microorganisms such as fungi on solid media.
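A minimal sketch of the image-based idea (not the authors' GIS workflow): given a segmented binary image, the plasmodial area is the foreground pixel count times the ground area of one pixel, and biomass follows from a fitted area-to-dry-weight correlation. The mask, pixel size and regression coefficients below are placeholders.

```python
import numpy as np

def plasmodial_area_cm2(binary_mask: np.ndarray, cm2_per_pixel: float) -> float:
    """Area = number of foreground pixels x area of one pixel."""
    return float(binary_mask.sum()) * cm2_per_pixel

mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:80] = True                # toy "plasmodium" blob
area = plasmodial_area_cm2(mask, 0.001)  # assume 0.001 cm^2 per pixel
dry_weight_mg = 0.8 * area + 0.1         # hypothetical linear calibration
print(area, round(dry_weight_mg, 2))
```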
NASA Astrophysics Data System (ADS)
Chang, Liang Cheng; Tsai, Jui pin; Chen, Yu Wen; Way Hwang, Chein; Chung Cheng, Ching; Chiang, Chung Jung
2014-05-01
For sustainable management, accurate estimation of recharge can provide critical information. The accuracy of the estimation is highly related to the uncertainty of the specific yield (Sy). Because the Sy value is traditionally obtained by a multi-well pumping test, the available Sy values are usually limited due to high installation cost, and this scarcity of Sy data may cause high uncertainty in recharge estimation. Because gravity is a function of material mass and the inverse square of distance, gravity measurements can help obtain the mass variation of a shallow groundwater system. Thus, groundwater level observation data and gravity measurements are used for the calibration of Sy in a groundwater model. The calibration procedure includes four steps. First, gravity variations at three groundwater-monitoring wells, Si-jhou, Tu-ku and Ke-cuo, were observed in May, August and November 2012. To isolate the gravity signal caused by groundwater variation, this study filters the noise from other sources, such as ocean tide and land subsidence, out of the collected data; the refined, noise-free data are termed gravity residuals. Second, this study develops a groundwater model using MODFLOW 2005 to simulate the water mass variation of the groundwater system. Third, the Newton gravity integral is used to simulate the gravity variation caused by the simulated water mass variation during each of the observation periods. Fourth, the ratio of the gravity variation between the two data sets (observed gravity residuals and simulated gravities) is compared, and the value of Sy is continuously modified until the gravity variation ratios of the two data sets are the same. The Sy value of Si-jhou is 0.216, which was obtained by a multi-well pumping test; this Sy value was assigned to the simulation model. The simulation results show that the simulated gravity fits the observed gravity residuals well without parameter calibration, indicating that the proposed approach is correct and reasonable. At Tu-ku and Ke-cuo, the ratios of the gravity variation between observed gravity residuals and simulated gravities are approximately 1.8 and 50, respectively, so the Sy values of these two stations were modified to 1.8 and 50 times their original values. These modified Sy values were assigned to the groundwater model, and after the parameter re-assignment the simulated gravities matched the gravity residuals at these two stations. In conclusion, the results show that the proposed approach has the potential to identify Sy without installing wells. Therefore, it can be used to increase the spatial density of Sy estimates and to support recharge estimation with low uncertainty.
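A hedged sketch of the Newton gravity integral step (the third step above), approximated as a sum of point-mass contributions; the cell geometry and water-mass changes are invented for illustration.

```python
import numpy as np

# Gravity change at a station from simulated water-mass changes in model
# cells, approximated as point masses. z axis points up; cells below the
# station pull downward, increasing measured g.
G = 6.674e-11  # m^3 kg^-1 s^-2

def delta_g(station, cell_centers, delta_mass):
    """Downward gravity change (m/s^2) at the station from cell mass changes."""
    r = cell_centers - station
    dist = np.linalg.norm(r, axis=1)
    return G * np.sum(delta_mass * (-r[:, 2]) / dist ** 3)

station = np.array([0.0, 0.0, 0.0])
cells = np.array([[0.0, 0.0, -10.0], [5.0, 0.0, -20.0], [0.0, 5.0, -30.0]])
dmass = np.array([2.0e6, 1.5e6, 1.0e6])  # kg of added water per cell
print(delta_g(station, cells, dmass) * 1e8, "microGal")  # 1 microGal = 1e-8 m/s^2
```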
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to verify the integrity of the Bees Algorithm; the experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routine in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters considerably more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance in all cases, while the Genetic Algorithm and LSQNONLIN performed differently from case to case. The performance of LSQNONLIN depends strongly on the initial guess values; given suitable initial guesses, it can estimate the sFADE parameters more accurately than the Genetic Algorithm. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
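A minimal sketch of a Bees Algorithm search of the kind named above (scout bees, best-site selection, shrinking neighbourhood search); the objective is a stand-in quadratic misfit, not the sFADE forward model, and all parameter ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
true = np.array([1.4, 0.2])                 # e.g. alpha and a dispersion term

def misfit(p):
    """Placeholder for the model-vs-data least-squares error."""
    return float(np.sum((p - true) ** 2))

lo, hi = np.array([1.0, 0.0]), np.array([2.0, 1.0])
scouts = rng.uniform(lo, hi, size=(30, 2))
ngh = 0.1 * (hi - lo)                       # initial neighbourhood size
for _ in range(100):
    scouts = scouts[np.argsort([misfit(p) for p in scouts])]
    new = []
    for site in scouts[:5]:                 # best sites get recruited bees
        bees = np.clip(site + rng.uniform(-ngh, ngh, size=(10, 2)), lo, hi)
        new.append(min(np.vstack([bees, [site]]), key=misfit))
    new.extend(rng.uniform(lo, hi, size=(25, 2)))  # remaining scouts search randomly
    scouts = np.array(new)
    ngh *= 0.98                             # shrink neighbourhoods gradually
print(scouts[np.argmin([misfit(p) for p in scouts])])  # ~ [1.4, 0.2]
```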
IR spectroscopy as a source of data on bond strengths
NASA Astrophysics Data System (ADS)
Finkelshtein, E. I.; Shamsiev, R. S.
2018-02-01
The aim of this work is the estimation of double bond strengths, namely C=O bonds in ketones and aldehydes and C=C bonds in various compounds. When these bonds are broken, one or both of the fragments formed are carbenes, for which experimental data on the enthalpies of formation (ΔHf298) are scarce. Thus, for the estimation of ΔHf298 of the corresponding carbenes, empirical equations were proposed based on different approximations. In addition, quantum chemical calculations of the ΔHf298 values of carbenes were performed, and the data obtained were compared with experimental values and the results of earlier calculations. Equations for the calculation of the C=O bond strengths of different ketones and aldehydes from the corresponding stretching frequencies ν(C=O) were derived. Using the proposed equations, the strengths of the C=O bonds of 25 ketones and 12 conjugated aldehydes, as well as the C=C bonds of 13 hydrocarbons and 7 conjugated aldehydes, were estimated for the first time. Linear correlations of C=C and C=O bond strengths with the bond lengths were established, and equations permitting the estimation of the double bond strengths and lengths with acceptable accuracy were obtained. Also, the strength of the central C=C bond of stilbene was calculated for the first time. The strengths of the double bonds obtained may be regarded as accurate to within ±10-15 kJ/mol.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
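A hedged sketch of the canonical form described above: sampled parameter values receive weights p_i ∝ exp(-β E_i), with the sensitivity factor β fixed by the expectation-value constraint on the error function. The parameter samples, error values and constraint below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
params = rng.uniform(1.0, 1.2, size=500)                # e.g. sound-speed ratio
errors = (params - 1.08) ** 2 + 0.01 * rng.random(500)  # stand-in error function

def canonical_weights(errors, e_target):
    """Weights p_i ~ exp(-beta*E_i) with beta set so E_p[error] = e_target."""
    lo, hi = 0.0, 1.0e6
    for _ in range(100):                                # bisection on beta
        beta = 0.5 * (lo + hi)
        w = np.exp(-beta * (errors - errors.min()))
        w /= w.sum()
        if (w * errors).sum() > e_target:
            lo = beta                                   # sharpen the weighting
        else:
            hi = beta
    return w

w = canonical_weights(errors, e_target=0.005)
print("weighted (posterior-like) parameter mean:", (w * params).sum())
```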
Blow-up for a three dimensional Keller-Segel model with consumption of chemoattractant
NASA Astrophysics Data System (ADS)
Jiang, Jie; Wu, Hao; Zheng, Songmu
2018-04-01
We investigate blow-up properties for the initial-boundary value problem of a Keller-Segel model with consumption of chemoattractant when the spatial dimension is three. Through a kinetic reformulation of the Keller-Segel system, we first derive some higher-order estimates and obtain certain blow-up criteria for the local classical solutions. These blow-up criteria generalize the results in [4,5] from the whole space R^3 to the case of a bounded smooth domain Ω ⊂ R^3. A lower global blow-up estimate on ‖n‖_{L^∞(Ω)} is also obtained based on our higher-order estimates. Moreover, we prove local non-degeneracy for blow-up points.
Value of Information Analysis for Time-lapse Seismic Data by Simulation-Regression
NASA Astrophysics Data System (ADS)
Dutta, G.; Mukerji, T.; Eidsvik, J.
2016-12-01
A novel method to estimate the Value of Information (VOI) of time-lapse seismic data in the context of reservoir development is proposed. VOI is a decision-analytic metric quantifying the incremental value that would be created by collecting information prior to making a decision under uncertainty. The VOI has to be computed before collecting the information and can be used to justify its collection. Previous work on estimating the VOI of geophysical data has involved explicit approximation of the posterior distribution of reservoir properties given the data and then evaluation of the prospect values under that posterior distribution. Here, we propose to directly estimate the prospect values given the data by building a statistical relationship between them using regression. Various regression techniques, such as Partial Least Squares Regression (PLSR), Multivariate Adaptive Regression Splines (MARS) and k-Nearest Neighbors (k-NN), are used to estimate the VOI, and the results are compared. For a univariate Gaussian case, the VOI obtained from simulation-regression has been shown to be close to the analytical solution. Estimating VOI by simulation-regression is much less computationally expensive, since the posterior distribution of reservoir properties given each possible dataset need not be modeled and the prospect values need not be evaluated for each such posterior distribution. This method is also flexible, since it does not require a rigid model specification of the posterior but rather fits conditional expectations non-parametrically from samples of values and data.
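A hedged sketch of VOI by simulation-regression for a binary develop/walk-away decision, with k-NN (one of the regressions named above) standing in for the regression step; all distributions and economics below are invented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n = 5000
reservoir = rng.normal(0.0, 1.0, size=n)                 # latent property
value = 100.0 * reservoir - 20.0                         # prospect value if developed
data = reservoir[:, None] + rng.normal(0.0, 0.5, size=(n, 1))  # noisy time-lapse attribute

# Without data: one decision for the prior mean. With data: regress value on
# data and decide per simulated dataset; VOI is the expected improvement.
prior_value = max(value.mean(), 0.0)
cond_mean = KNeighborsRegressor(n_neighbors=50).fit(data, value).predict(data)
posterior_value = np.maximum(cond_mean, 0.0).mean()
print("VOI estimate:", posterior_value - prior_value)
```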
Willcock, Simon; Phillips, Oliver L.; Platts, Philip J.; Balmford, Andrew; Burgess, Neil D.; Lovett, Jon C.; Ahrends, Antje; Bayliss, Julian; Doggart, Nike; Doody, Kathryn; Fanning, Eibleis; Green, Jonathan; Hall, Jaclyn; Howell, Kim L.; Marchant, Rob; Marshall, Andrew R.; Mbilinyi, Boniface; Munishi, Pantaleon K. T.; Owen, Nisha; Swetnam, Ruth D.; Topp-Jorgensen, Elmer J.; Lewis, Simon L.
2012-01-01
Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data deficiencies, default global carbon storage values for given land cover types such as 'lowland tropical forest' are often used, termed 'Tier 1 type' analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore, uncertainty assessments are rarely provided, leading to estimates of land cover change carbon fluxes of unknown precision, which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC 'Tier 2' reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92–6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced for a relatively low investment. PMID:23024764
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Vasilenko, Eugene; Volkova, Elena; Kukharsky, Alexander
2017-04-01
A model of water and heat exchange between a vegetation-covered territory and the atmosphere (LSM, Land Surface Model) has been developed for the vegetation season to calculate soil water content, evapotranspiration, infiltration of water into the soil, vertical latent and sensible heat fluxes and other water and heat balance components, as well as soil surface and vegetation cover temperatures and depth distributions of moisture and temperature. The LSM is suited for utilizing satellite-derived estimates of precipitation, land surface temperature, vegetation characteristics and soil surface humidity for each pixel. Vegetation and meteorological characteristics, which serve as the model parameters and input variables, respectively, have been estimated from ground observations and from thematic processing of measurement data from the scanning radiometers AVHRR/NOAA, SEVIRI/Meteosat-9, -10 (MSG-2, -3) and MSU-MR/Meteor-M № 2. Values of soil surface humidity have been calculated from remote sensing data of the scatterometers ASCAT/MetOp-A, -B. The case study has been carried out for part of the agricultural Central Black Earth Region of European Russia, with an area of 227,300 km2, located in the forest-steppe zone, for the 2012-2015 vegetation seasons. The main objectives of the study have been: - to build estimates of precipitation, land surface temperatures (LST) and vegetation characteristics from MSU-MR measurement data using refined technologies (including algorithms and programs) for thematic processing of satellite information, developed and tested on AVHRR and SEVIRI data; all technologies have been adapted to the area of interest; - to investigate the possibility of utilizing the satellite-derived estimates listed above in the LSM, including verification of the obtained estimates and development of a procedure for inputting them into the model. From the AVHRR data, estimates have been built of precipitation; three types of LST, namely land skin temperature Tsg, air temperature at the level of the vegetation cover (taken as the vegetation temperature) Ta, and effective radiation temperature Ts.eff; land surface emissivity E; normalized difference vegetation index NDVI; vegetation cover fraction B; and leaf area index LAI. The SEVIRI-based retrievals have included precipitation, the LSTs Tls and Ta, E at daytime and nighttime, LAI (daily), and B. From the MSU-MR data, values of all the same characteristics as from the AVHRR data have been retrieved. The MSU-MR-based daily and monthly precipitation sums have been calculated using the previously developed and modified Multi-Threshold Method (MTM), intended for round-the-clock cloud detection and cloud-type identification as well as for delineation of precipitation zones and determination of instantaneous maximum rainfall intensities for each pixel; the transition from assessing rainfall intensity to estimating daily values is a key element of the MTM. Measurement data from three IR MSU-MR channels (3.8, 11 and 12 μm), as well as their differences, have been used in the MTM as predictors. The correctness of the MSU-MR-derived rainfall estimates has been checked by comparison with analogous AVHRR- and SEVIRI-based retrievals and with precipitation amounts measured at the agricultural meteorological stations of the study region. The probability of correctly determining rainfall zones from the MSU-MR data, matched against the actual ones, has been 75-85%, as it has been for the AVHRR and SEVIRI data.
The time behaviors of satellite-derived and ground-measured daily and monthly precipitation sums, for the vegetation season and the year respectively, have been in good agreement with each other, although the former have been smoother than the latter. Discrepancies have existed for a number of local maxima for which satellite-derived precipitation estimates have been less than the ground-measured values; this may be due to the different spatial scales of the areal satellite-derived and point ground-based estimates. Some spatial displacement of the satellite-determined rainfall maxima and minima relative to the ground-based data can be explained by the discrepancy between the cloud location on satellite images and in reality at high satellite sighting angles and considerable cloud-top altitudes. The reliability of the MSU-MR-derived rainfall estimates at each time step obtained using the MTM has been verified by comparing their values, determined from the MSU-MR, AVHRR and SEVIRI measurements and distributed over the study area, with similar estimates obtained by interpolation of ground observation data. The MSU-MR-derived estimates of the temperatures Tsg, Ts.eff and Ta have been obtained using a computational algorithm developed on the basis of the MTM and tested on AVHRR and SEVIRI data for the region under investigation. Since the MSU-MR instrument is similar to the AVHRR radiometer, the methods developed for estimating Tsg, Ts.eff and Ta from AVHRR data could be easily transferred to the MSU-MR data. Comparison of the ground-measured and MSU-MR-, AVHRR- and SEVIRI-derived LSTs has shown that the differences between all the estimates for the vast majority of observation terms have not exceeded the RMSE of these quantities derived from the AVHRR data. A similar conclusion has also been drawn from the time behavior of the MSU-MR-derived LAI for the vegetation season. Satellite-based estimates of precipitation, LST, LAI and B have been utilized in the model with the help of specially developed procedures that replace the values determined from observations at agricultural meteorological stations with their satellite-derived counterparts, taking into account the spatial heterogeneity of their fields. The adequacy of such replacement has been confirmed by the results of comparing modeled and ground-measured values of soil moisture content W and evapotranspiration Ev. Discrepancies between the modeled and ground-measured values of W and Ev have been in the range of 10-15 and 20-25%, respectively, which may be considered an acceptable result. The resulting products of the model calculations using satellite data have been spatial fields of W, Ev, vertical sensible and latent heat fluxes, and other water and heat regime characteristics for the region of interest over the 2012-2015 vegetation seasons. Thus, the possibility has been shown of utilizing MSU-MR/Meteor-M № 2 data jointly with those of other satellites in the LSM to calculate characteristics of water and heat regimes for the area under consideration. Besides, the first trial estimates of soil surface moisture from ASCAT scatterometer data for the study region have been obtained for the 2014-2015 vegetation seasons; they have been compared with modeling results, obtained using ground-based and satellite data, for several agricultural meteorological stations of the region, and specific requirements for the obtained information have been formulated.
To date, estimates of surface moisture built from ASCAT data can be used for selecting the model soil parameter values and the initial soil moisture conditions for the vegetation season.
Thakwalakwa, Chrissie M; Kuusipalo, Heli M; Maleta, Kenneth M; Phuka, John C; Ashorn, Per; Cheung, Yin Bun
2012-07-01
This study aimed to compare the nutritional intake values among 15-month-old rural Malawian children obtained by weighed food record (WFR) with those obtained by modified 24-hour recall (mod 24-HR), and to develop an algorithm for adjusting mod 24-HR values so as to predict mean intake based on WFRs. The study participants were 169 15-month-old children who participated in a clinical trial. Food consumption on one day was observed and weighed (established criterion) by a research assistant to provide the estimates of energy and nutrient intakes. On the following day, another research assistant, blinded to the direct observation, conducted the structured interactive 24-hour recall (24-HR) interview (test method). Paired t-tests and scatter plots were used to compare the intake values of the two methods. The structured interactive 24-HR method tended to overestimate energy and nutrient intakes (each P < 0.001). The regression-through-the-origin method was used to develop adjustment algorithms. The results showed that multiplying the mean energy, protein, fat, iron, zinc and vitamin A intake estimates based on the test method by 0.86, 0.80, 0.68, 0.69, 0.72 and 0.76, respectively, provides an approximation of the mean values based on WFRs. © 2011 Blackwell Publishing Ltd.
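A direct application of the adjustment algorithm reported above: mean intakes from the interactive 24-hour recall are scaled toward the weighed food record. The adjustment factors are the paper's; the recall means below are invented for illustration.

```python
# Paper's adjustment factors (test-method mean x factor ~ WFR-based mean).
factors = {"energy": 0.86, "protein": 0.80, "fat": 0.68,
           "iron": 0.69, "zinc": 0.72, "vitamin_a": 0.76}
# Hypothetical mean intakes estimated by the 24-hour recall method.
recall_means = {"energy": 3200.0, "protein": 28.0, "fat": 30.0,
                "iron": 6.5, "zinc": 4.0, "vitamin_a": 350.0}
adjusted = {nutrient: factors[nutrient] * recall_means[nutrient]
            for nutrient in factors}
print(adjusted)
```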
Poblete-Echeverría, Carlos; Fuentes, Sigfredo; Ortega-Farias, Samuel; Gonzalez-Talice, Jaime; Yuri, Jose Antonio
2015-01-28
Leaf area index (LAI) is one of the key biophysical variables required for crop modeling. Direct LAI measurements are time consuming and difficult to obtain for experimental and commercial fruit orchards. Devices used to estimate LAI have shown considerable errors when compared to ground-truth or destructive measurements, requiring tedious site-specific calibrations. The objective of this study was to test the performance of a modified digital cover photography method to estimate LAI in apple trees using conventional digital photography and instantaneous measurements of incident radiation (Io) and transmitted radiation (I) through the canopy. The leaf area of 40 single apple trees was measured destructively to obtain the real leaf area index (LAI_D), which was compared with the LAI estimated by the proposed digital photography method (LAI_M). Results showed that LAI_M was able to estimate LAI_D with an error of 25% using a constant light extinction coefficient (k = 0.68). However, when k was estimated using an exponential function based on the fraction of foliage cover (f_f) derived from the images, the error was reduced to 18%. Furthermore, when measurements of light intercepted by the canopy (Ic) were used as a proxy value for k, the method presented an error of only 9%. These results show that using a proxy k value estimated from Ic helped to increase the accuracy of LAI estimates from digital cover images for apple trees with different canopy sizes and under field conditions.
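A minimal sketch of the role the extinction coefficient k plays here, via a Beer-Lambert gap-fraction inversion, LAI = -ln(I/Io)/k. The paper's full cover-photography formulation is richer; the f_f-based k below is a placeholder expression, not the authors' fitted function.

```python
import math

def lai_from_transmittance(i_transmitted, i_incident, k):
    """Invert Beer-Lambert: I/Io = exp(-k * LAI)."""
    return -math.log(i_transmitted / i_incident) / k

k_const = 0.68                           # constant-k case from the abstract
k_from_ff = 0.60 * math.exp(0.2 * 0.45)  # hypothetical f_f-based estimate
print(lai_from_transmittance(150.0, 1800.0, k_const))    # ~3.7
print(lai_from_transmittance(150.0, 1800.0, k_from_ff))
```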
Moench, A.F.; Garabedian, Stephen P.; LeBlanc, Denis R.
2000-01-01
An aquifer test conducted in a sand and gravel, glacial outwash deposit on Cape Cod, Massachusetts was analyzed by means of a model for flow to a partially penetrating well in a homogeneous, anisotropic unconfined aquifer. The model is designed to account for all significant mechanisms expected to influence drawdown in observation piezometers and in the pumped well. In addition to the usual fluid-flow and storage processes, additional processes include effects of storage in the pumped well, storage in observation piezometers, effects of skin at the pumped-well screen, and effects of drainage from the zone above the water table. The aquifer was pumped at a rate of 320 gallons per minute for 72 hours and drawdown measurements were made in the pumped well and in 20 piezometers located at various distances from the pumped well and depths below the land surface. To facilitate the analysis, an automatic parameter estimation algorithm was used to obtain relevant unconfined aquifer parameters, including the saturated thickness and a set of empirical parameters that relate to gradual drainage from the unsaturated zone. Drainage from the unsaturated zone is treated in this paper as a finite series of exponential terms, each of which contains one empirical parameter that is to be determined. It was necessary to account for effects of gradual drainage from the unsaturated zone to obtain satisfactory agreement between measured and simulated drawdown, particularly in piezometers located near the water table. The commonly used assumption of instantaneous drainage from the unsaturated zone gives rise to large discrepancies between measured and predicted drawdown in the intermediate-time range and can result in inaccurate estimates of aquifer parameters when automatic parameter estimation procedures are used. The values of the estimated hydraulic parameters are consistent with estimates from prior studies and from what is known about the aquifer at the site. Effects of heterogeneity at the site were small, as measured drawdowns in all piezometers and wells were very close to the simulated values for a homogeneous porous medium. The estimated values are: specific yield, 0.26; saturated thickness, 170 feet; horizontal hydraulic conductivity, 0.23 feet per minute; vertical hydraulic conductivity, 0.14 feet per minute; and specific storage, 1.3×10^-5 per foot. It was found that drawdown in only a few piezometers strategically located at depth near the pumped well yielded parameter estimates close to the estimates obtained for the entire data set analyzed simultaneously. If the influence of gradual drainage from the unsaturated zone is not taken into account, specific yield is significantly underestimated even in these deep-seated piezometers. This helps to explain the low values of specific yield often reported for granular aquifers in the literature. If either the entire data set or only the drawdown in selected deep-seated piezometers was used, it was found unnecessary to conduct the test for the full 72 hours to obtain accurate estimates of the hydraulic parameters. For some piezometer groups, practically identical results would be obtained for an aquifer test conducted for only 8 hours. Drawdowns measured in the pumped well and piezometers at distant locations were diagnostic only of aquifer transmissivity.
An Improved Wavefront Control Algorithm for Large Space Telescopes
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Basinger, Scott A.; Redding, David C.
2008-01-01
Wavefront sensing and control is required throughout the mission lifecycle of large space telescopes such as the James Webb Space Telescope (JWST). When an optic of such a telescope is controlled with both surface-deforming and rigid-body actuators, the sensitivity matrix, obtained from the exit pupil wavefront vector divided by the corresponding actuator command value, can sometimes become singular due to differences in actuator types and in actuator command values. In this paper, we propose a simple approach for preventing a sensitivity matrix from becoming singular. We also introduce a new "minimum-wavefront and optimal control compensator". It uses an optimal control gain matrix obtained by feeding back the actuator commands, along with the measured or estimated wavefront phase information, to the estimator, thus eliminating the actuator modes that are not observable in the wavefront sensing process.
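A generic illustration of the conditioning problem described above, not necessarily the authors' exact scheme: one common remedy rescales each sensitivity-matrix column (one per actuator) to unit norm before pseudo-inversion, then undoes the scaling on the resulting commands. The actuator scales here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.hstack([rng.normal(size=(200, 3)) * 1e3,     # rigid-body actuators
               rng.normal(size=(200, 6)) * 1e-4])   # surface-deforming actuators
scale = np.linalg.norm(S, axis=0)
S_bal = S / scale                                   # column-balanced matrix
wavefront = rng.normal(size=200)                    # measured wavefront vector
commands = (np.linalg.pinv(S_bal) @ wavefront) / scale  # back to physical units
print(f"condition number: {np.linalg.cond(S):.1e} -> {np.linalg.cond(S_bal):.1e}")
```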
Maximum Rate of Growth of Enstrophy in Solutions of the Fractional Burgers Equation
NASA Astrophysics Data System (ADS)
Yun, Dongfang; Protas, Bartosz
2018-02-01
This investigation is a part of a research program aiming to characterize the extreme behavior possible in hydrodynamic models by analyzing the maximum growth of certain fundamental quantities. We consider here the rate of growth of the classical and fractional enstrophy in the fractional Burgers equation in the subcritical and supercritical regimes. Since solutions to this equation exhibit, respectively, globally well-posed behavior and finite-time blowup in these two regimes, this makes it a useful model to study the maximum instantaneous growth of enstrophy possible in these two distinct situations. First, we obtain estimates on the rates of growth and then show that these estimates are sharp up to numerical prefactors. This is done by numerically solving suitably defined constrained maximization problems and then demonstrating that for different values of the fractional dissipation exponent the obtained maximizers saturate the upper bounds in the estimates as the enstrophy increases. We conclude that the power-law dependence of the enstrophy rate of growth on the fractional dissipation exponent has the same global form in the subcritical, critical and parts of the supercritical regime. This indicates that the maximum enstrophy rate of growth changes smoothly as global well-posedness is lost when the fractional dissipation exponent attains supercritical values. In addition, nontrivial behavior is revealed for the maximum rate of growth of the fractional enstrophy obtained for small values of the fractional dissipation exponents. We also characterize the structure of the maximizers in different cases.
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained by fitting transient-storage models (TSMs) to experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to two case studies and compare our results to output obtained from a more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
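A hedged sketch of the Monte Carlo idea behind such a tool: sample TSM parameters from uniform ranges, run a forward model, and keep the samples that best reproduce the observed tracer curve. The forward model below is a stand-in toy function, not OTIS.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_model(t, a, alpha):
    """Placeholder breakthrough curve (stand-in for the OTIS forward run)."""
    return np.exp(-(np.log(t) - a) ** 2) * (1.0 + alpha * t)

t = np.linspace(0.1, 10.0, 50)
observed = forward_model(t, 1.0, 0.05) + rng.normal(0.0, 0.01, t.size)

a_s = rng.uniform(0.5, 1.5, 2000)                   # sampled parameter 1
alpha_s = rng.uniform(0.0, 0.1, 2000)               # sampled parameter 2
rmse = np.array([np.sqrt(np.mean((forward_model(t, a, al) - observed) ** 2))
                 for a, al in zip(a_s, alpha_s)])
behavioral = rmse < np.percentile(rmse, 10)         # retain best 10% of samples
print("plausible a range:", a_s[behavioral].min(), a_s[behavioral].max())
```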
Mbuthia, Jackson Mwenda; Rewe, Thomas Odiwuor; Kahi, Alexander Kigunzu
2015-02-01
This study estimated economic values for production traits (dressing percentage (DP), %; live weight for growers (LWg), kg; live weight for sows (LWs), kg) and functional traits (feed intake for growers (FEEDg); feed intake for sows (FEEDs); preweaning survival rate (PrSR), %; postweaning survival rate (PoSR), %; sow survival rate (SoSR), %; total number of piglets born (TNB); and farrowing interval (FI), days) under different smallholder pig production systems in Kenya. Economic values were estimated considering two production circumstances: fixed herd and fixed feed. Under the fixed-herd scenario, economic values were estimated assuming a situation where the herd cannot be increased due to constraints other than feed resources. The fixed-feed input scenario assumed that the herd size is restricted by the limitation of available feed resources. In addition to the traditional profit model, a risk-rated bio-economic model was used to derive risk-rated economic values. This model accounted for imperfect knowledge concerning the risk attitude of farmers and the variance of input and output prices. The positive economic values obtained for the traits DP, LWg, LWs, PoSR, PrSR, SoSR and TNB indicate that targeting them in improvement would positively impact profitability in pig breeding programmes. Under the fixed-feed basis, the risk-rated economic values for DP, LWg, LWs and SoSR were similar to those obtained under the fixed-herd situation. Accounting for risks in the economic values did not yield errors greater than ±50% in any of the production systems or bases of evaluation, meaning there would be relatively little effect on the real genetic gain of a selection index. Therefore, both traditional and risk-rated models can be satisfactorily used to predict profitability in pig breeding programmes.
Application of the coherent anomaly method to percolation
NASA Astrophysics Data System (ADS)
Takayasu, Misako; Takayasu, Hideki
1988-03-01
Applying the coherent anomaly method (CAM) to site percolation problems, we estimate the percolation threshold pc and critical exponents. We obtain pc=0.589, β=0.140, γ=2.426 on the two-dimensional square lattice. These values are in good agreement with the values already known. We also investigate higher-dimensional cases by this method.
Noninvasive microwave ablation zone radii estimation using x-ray CT image analysis.
Weiss, Noam; Goldberg, S Nahum; Nissenbaum, Yitzhak; Sosna, Jacob; Azhari, Haim
2016-08-01
The aims of this study were to noninvasively and automatically estimate both the radius of the ablated liver tissue and the radius encircling the treated zone, which also defines where the tissue is definitely untreated, during a microwave (MW) thermal ablation procedure. Fourteen ex vivo fresh bovine liver specimens were ablated at 40 W using a 14 G microwave antenna, for durations of 3, 6, 8, and 10 min. The tissues were scanned every 5 s during the ablation using an x-ray CT scanner. In order to estimate the radius of the ablation zone, the acquired images were transformed into a polar presentation by displaying the Hounsfield units (HU) as a function of angle and radius. From this polar presentation, the average HU radial profile was analyzed at each time point and the ablation zone radius was estimated. In addition, textural analysis was applied to the original CT images; the proposed algorithm identified high-entropy regions and estimated the treated zone radius per time point. The estimated ablated zone radii as a function of treatment duration were compared, by means of the correlation coefficient and root mean square error (RMSE), to gross pathology measurements taken immediately post-treatment from similarly ablated tissue. Both the estimated ablation radii and the treated zone radii demonstrated strong correlation with the measured gross pathology values (R² ≥ 0.89 and R² ≥ 0.86, respectively). The automated ablation radii estimation had an average discrepancy of less than 1 mm (RMSE = 0.65 mm) from the gross pathology measured values, while the treated zone radii showed a slight overestimation of approximately 1.5 mm (RMSE = 1.6 mm). Noninvasive monitoring of MW ablation using x-ray CT and image analysis is feasible. Automatic estimations of the ablation zone radius and of the radius encompassing the treated zone that highly correlate with actual measured ablation values can be obtained. This technique can therefore potentially be used to obtain real-time monitoring and improve the clinical outcome.
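A hedged sketch of the polar-presentation step: resample a CT slice onto (angle, radius) coordinates around the antenna, average HU over angle, and take the ablation radius where the mean radial profile crosses a threshold. The synthetic slice, HU levels and threshold are for illustration only.

```python
import numpy as np

def mean_radial_profile(img, center, r_max, n_theta=360):
    """Mean HU as a function of radius around a center (nearest-pixel sampling)."""
    radii = np.arange(r_max)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas)
    x = (center[0] + rr * np.cos(tt)).astype(int)
    y = (center[1] + rr * np.sin(tt)).astype(int)
    return img[y, x].mean(axis=0)

img = np.full((201, 201), 60.0)                          # untreated tissue ~60 HU
yy, xx = np.ogrid[:201, :201]
img[(yy - 100) ** 2 + (xx - 100) ** 2 < 30 ** 2] = 20.0  # ablated core: lower HU
profile = mean_radial_profile(img, (100, 100), 90)
radius_px = int(np.argmax(profile > 40.0))               # first radius above threshold
print("estimated ablation radius:", radius_px, "pixels")
```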
A testable model of earthquake probability based on changes in mean event size
NASA Astrophysics Data System (ADS)
Imoto, Masajiro
2003-02-01
We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing the changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The background distribution is symmetric, with its center corresponding to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends toward decreasing b values. The difference in the distributions between the two groups was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in a range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
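For orientation, the traditional mono-exponential back-extrapolation that the paper improves upon can be sketched as follows (the 60-180 s fit window and the unit handling are illustrative assumptions, not the paper's optimized choices):

```python
import numpy as np

def plasma_volume_backextrap(t, conc, dose_mg, fit_window=(60.0, 180.0)):
    """Classic mono-exponential back-extrapolation: fit log-concentration
    over `fit_window` seconds post-injection, extrapolate to t = 0, and
    compute plasma volume as dose / C0."""
    t, conc = np.asarray(t, float), np.asarray(conc, float)
    mask = (t >= fit_window[0]) & (t <= fit_window[1])
    slope, intercept = np.polyfit(t[mask], np.log(conc[mask]), 1)
    c0 = np.exp(intercept)   # back-extrapolated concentration at injection
    return dose_mg / c0      # plasma volume, in units of dose/concentration
```

The paper's proposed method keeps this dose/C0 structure but replaces the naive extrapolation with one tuned to the simulated early-phase kinetics.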
NASA Astrophysics Data System (ADS)
Raju, Subramanian; Saibaba, Saroja
2016-09-01
The enthalpy of formation Δ°H_f is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, being a difficult one to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating Δ°H_f^L of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely electronegativity (φ^L) and bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting surface tension, compressibility, and molar volume of a metallic liquid with bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for φ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of Δ°H_f^L for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of mixing enthalpies of liquid alloys.
Development of a patient-specific dosimetry estimation system in nuclear medicine examination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, H. H.; Dong, S. L.; Yang, H. J.
2011-07-01
The purpose of this study is to develop a patient-specific dosimetry estimation system in nuclear medicine examination using a SimSET-based Monte Carlo code. We added a dose deposition routine to store the deposited energy of the photons during their flights in SimSET and developed a user-friendly interface for reading PET and CT images. Dose calculated on the ORNL phantom was used to validate the accuracy of this system. The S values for 99mTc, 18F and 131I obtained by the system were compared to those from the MCNP4C code and OLINDA. The ratios of S values computed by this system to those obtained with OLINDA for various organs ranged from 0.93 to 1.18, which is comparable to that obtained from the MCNP4C code (0.94 to 1.20). The average ratios of S value were 0.99±0.04, 1.03±0.05, and 1.00±0.07 for the isotopes 131I, 18F, and 99mTc, respectively. The simulation time of SimSET was two times faster than MCNP4C's for various isotopes. A 3D dose calculation was also performed on a patient data set with PET/CT examination using this system. Results from the patient data showed that the estimated S values using this system differed slightly from those of OLINDA for the ORNL phantom. In conclusion, this system can generate patient-specific dose distributions and display the isodose curves on top of the anatomic structure through a friendly graphic user interface. It may also provide a useful tool to establish an appropriate dose-reduction strategy for patients in nuclear medicine environments. (authors)
A Secure Trust Establishment Scheme for Wireless Sensor Networks
Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob
2014-01-01
Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471
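A toy illustration of blending an aggregated misbehavior history with current status (the Beta-expectation form and the alpha/beta constants are illustrative assumptions, not the authors' formulas):

```python
def update_trust(prev_trust, good, bad, agg_misbehavior, alpha=0.8, beta=2.0):
    """Toy trust update: a current-status term from observed behaviour is
    blended with the previous trust value, and an aggregated-misbehaviour
    term penalises nodes with a long misbehaviour history, so an on-off
    attacker cannot recover full trust by behaving well intermittently."""
    agg_misbehavior += bad                       # misbehaviour accumulates, never resets
    current = (good + 1.0) / (good + bad + 2.0)  # Beta-expectation of current status
    trust = alpha * prev_trust + (1 - alpha) * current
    penalty = 1.0 / (1.0 + beta * agg_misbehavior / max(good + bad, 1))
    return trust * penalty, agg_misbehavior
```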
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
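To make the f(q) = 0 formulation concrete, a minimal sketch under the common simple-body anomaly form g(x) = A [z²/(x² + z²)]^q (the paper's exact expressions and N-value definition may differ):

```python
import numpy as np
from scipy.optimize import brentq

def normalized_anomaly(x, z, q):
    """Normalized gravity anomaly g(x)/g(0) = [z^2/(x^2+z^2)]^q for the
    simple-body family (q = 0.5 semi-infinite vertical cylinder,
    1.0 infinite horizontal cylinder, 1.5 sphere)."""
    return (z**2 / (x**2 + z**2)) ** q

def solve_shape_factor(x1, r1, x2, r2):
    """Find q from two normalized residuals r1 = g(x1)/g(0) and
    r2 = g(x2)/g(0) by eliminating the depth z and solving f(q) = 0
    with a bracketing root finder (residuals must lie in (0, 1))."""
    def f(q):
        # invert r = (1 + (x/z)^2)^(-q) for z at each point; equal z => f(q) = 0
        z1 = x1 / np.sqrt(r1 ** (-1.0 / q) - 1.0)
        z2 = x2 / np.sqrt(r2 ** (-1.0 / q) - 1.0)
        return z1 - z2
    return brentq(f, 0.1, 2.5)
```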
Sim, K S; Norhisham, S
2016-11-01
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of SNR value based on NLLSR method is compared with the three existing methods of nearest neighbourhood, first-order interpolation and the combination of both nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method is able to produce better estimation accuracy as compared to the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method is able to produce approximately less than 1% of SNR error difference as compared to the other three existing methods.
Puncher, M; Zhang, W; Harrison, J D; Wakeford, R
2017-06-26
Assessments of risk to a specific population group resulting from internal exposure to a particular radionuclide can be used to assess the reliability of the appropriate International Commission on Radiological Protection (ICRP) dose coefficients used as a radiation protection device for the specified exposure pathway. An estimate of the uncertainty on the associated risk is important for informing judgments on reliability; a derived uncertainty factor, UF, is an estimate of the 95% probable geometric difference between the best risk estimate and the nominal risk and is a useful tool for making this assessment. This paper describes the application of parameter uncertainty analysis to quantify uncertainties resulting from internal exposures to radioiodine by members of the public, specifically 1-, 10- and 20-year-old females from the population of England and Wales. Best estimates of thyroid cancer incidence risk (lifetime attributable risk) are calculated for ingestion or inhalation of 129I and 131I, accounting for uncertainties in biokinetic model and cancer risk model parameter values. These estimates are compared with the equivalent ICRP derived nominal age-, sex- and population-averaged estimates of excess thyroid cancer incidence to obtain UFs. Derived UF values for ingestion or inhalation of 131I for 1-, 10- and 20-year-olds are around 28, 12 and 6, respectively, when compared with ICRP Publication 103 nominal values, and 9, 7 and 14, respectively, when compared with ICRP Publication 60 values. Broadly similar results were obtained for 129I. The uncertainties on risk estimates are largely determined by uncertainties on risk model parameters rather than uncertainties on biokinetic model parameters. An examination of the sensitivity of the results to the risk models and populations used in the calculations shows variations in the central estimates of risk of a factor of around 2-3. It is assumed that the direct proportionality of excess thyroid cancer risk and dose observed at low to moderate acute doses and incorporated in the risk models also applies to very small doses received at very low dose rates; the uncertainty in this assumption is considerable, but largely unquantifiable. The UF values illustrate the need for an informed approach to the use of ICRP dose and risk coefficients.
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Mcmaster, K. M.
1981-01-01
The methodology presented is a derivation of the utility-owned solar electric systems model. The net present value of the system is determined by consideration of all financial benefits and costs, including a specified return on investment. Life cycle costs, life cycle revenues, and residual system values are obtained. Break-even values of system parameters are estimated by setting the net present value to zero.
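A minimal sketch of the break-even idea (all figures and the flat-rate cash-flow structure are hypothetical; the model itself accounts for a fuller set of benefits and costs):

```python
import numpy as np
from scipy.optimize import brentq

def npv(energy_price, capex, annual_kwh, om_cost, rate=0.08, years=20):
    """Net present value of a utility-owned solar electric system:
    discounted revenues minus O&M costs, less the initial investment."""
    cash = [(energy_price * annual_kwh - om_cost) / (1 + rate) ** t
            for t in range(1, years + 1)]
    return sum(cash) - capex

# Break-even energy price ($/kWh): the value that drives NPV to zero.
breakeven = brentq(lambda p: npv(p, capex=2.5e6, annual_kwh=1.8e6,
                                 om_cost=4.0e4), 0.0, 5.0)
print(breakeven)
```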
Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.
Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian
2015-11-01
This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
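A compact sketch of the ingredient-estimation LP (the nutrient matrix, the 100 g basis, and the absolute-deviation objective are illustrative; the authors' formulation also exploits label ingredient-order information):

```python
import numpy as np
from scipy.optimize import linprog

def estimate_ingredients(N, label):
    """Least-absolute-deviation LP: find non-negative ingredient grams x
    (summing to 100 g) whose implied nutrient profile N @ x best matches
    the label, via slack variables s >= |N @ x - label|.
    N[i, j] = amount of nutrient i per gram of ingredient j (from a food
    composition database); label = nutrients per 100 g from the NFL panel."""
    m, n = N.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])         # minimize sum of slacks
    A_ub = np.block([[N, -np.eye(m)], [-N, -np.eye(m)]])  # N x - s <= label, -N x - s <= -label
    b_ub = np.concatenate([label, -label])
    A_eq = np.concatenate([np.ones(n), np.zeros(m)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[100.0],
                  bounds=[(0, None)] * (n + m))
    return res.x[:n]   # grams of each ingredient; sum the sugar rows for added sugar
```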
Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor
2013-01-01
In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missing values are typically lower than inbreeding coefficients estimated by discarding the missing values. Accounting for missing values by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing
2016-12-01
The effective management and utilization of the resources and ecological environment of coastal wetlands require high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by a low-altitude UAV in the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover parameters in the multi-spectral images were then estimated by an NDVI index model, and the accuracy was tested against visible images as references. Results showed that vegetation covers of S. alterniflora in the image area were mainly at the medium-high level (40%-60%) and high level (60%-80%). The root mean square error (RMSE) between the NDVI model estimation values and the true values was 0.06, while the coefficient of determination R² was 0.92, indicating good consistency between the estimated and true values.
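A minimal sketch of an NDVI-based fractional-cover estimate via the common dimidiate pixel model (the soil and full-vegetation endmember values are scene-dependent assumptions, not the study's calibrated values):

```python
import numpy as np

def ndvi_from_bands(nir, red):
    """NDVI from the UAV near-infrared and red channels."""
    return (nir - red) / (nir + red + 1e-9)

def fractional_vegetation_cover(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """Dimidiate pixel model: linear mixing between a bare-soil and a
    full-vegetation endmember, clipped to [0, 1]."""
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)
```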
Crowdsourcing urban air temperatures through smartphone battery temperatures in São Paulo, Brazil
NASA Astrophysics Data System (ADS)
Droste, Arjan; Pape, Jan-Jaap; Overeem, Aart; Leijnse, Hidde; Steeneveld, Gert-Jan; Van Delden, Aarnout; Uijlenhoet, Remko
2017-04-01
Crowdsourcing as a method to obtain and apply vast datasets is rapidly becoming prominent in meteorology, especially for urban areas where traditional measurements are scarce. Earlier studies showed that smartphone battery temperature readings allow for estimating the daily and city-wide air temperature via a straightforward heat transfer model. This study advances these model estimations by studying spatially and temporally smaller scales. The accuracy of temperature retrievals as a function of the number of battery readings is also studied. An extensive dataset of over 10 million battery temperature readings is available for São Paulo (Brazil), for estimating hourly and daily air temperatures. The air temperature estimates are validated with air temperature measurements from a WMO station, an Urban Fluxnet site, and crowdsourced data from 7 hobby meteorologists' private weather stations. On a daily basis temperature estimates are good, and we show they improve by optimizing model parameters for neighbourhood scales as categorized in Local Climate Zones. Temperature differences between Local Climate Zones can be distinguished from smartphone battery temperatures. When validating the model for hourly temperature estimates, initial results are poor, but are vastly improved by using a diurnally varying parameter function in the heat transfer model rather than one fixed value for the entire day. The obtained results show the potential of large crowdsourced datasets in meteorological studies, and the value of smartphones as a measuring platform when routine observations are lacking.
Damughatla, Anirudh R; Raterman, Brian; Sharkey-Toppen, Travis; Jin, Ning; Simonetti, Orlando P; White, Richard D; Kolipaka, Arunark
2015-01-01
To determine the correlation in abdominal aortic stiffness obtained using magnetic resonance elastography (MRE) (μ_MRE) and MRI-based pulse wave velocity (PWV) shear stiffness (μ_PWV) estimates in normal volunteers of varying age, and also to determine the correlation between μ_MRE and μ_PWV. In vivo aortic MRE and MRI were performed on 21 healthy volunteers with ages ranging from 18 to 65 years to obtain wave and velocity data along the long axis of the abdominal aorta. The MRE wave images were analyzed to obtain mean stiffness, and the phase contrast images were analyzed to obtain PWV measurements and indirectly estimate stiffness values from the Moens-Korteweg equation. Both μ_MRE and μ_PWV measurements increased with age, demonstrating linear correlations with R² values of 0.81 and 0.67, respectively. A significant difference (P ≤ 0.001) in mean μ_MRE and μ_PWV between young and old healthy volunteers was also observed. Furthermore, a poor linear correlation with an R² value of 0.43 was determined between μ_MRE and μ_PWV in the initial pool of volunteers. The results of this study indicate linear correlations between μ_MRE and μ_PWV with normal aging of the abdominal aorta. Significant differences in mean μ_MRE and μ_PWV between young and old healthy volunteers were observed.
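For reference, the PWV-to-stiffness conversion via the Moens-Korteweg equation can be written as follows (thin-walled tube and incompressible-wall assumptions; whether the paper applies exactly this shear conversion μ ≈ E/3 is an assumption here):

```latex
\mathrm{PWV} = \sqrt{\frac{E\,h}{2\,\rho\,R}}
\qquad\Rightarrow\qquad
\mu_{\mathrm{PWV}} \approx \frac{E}{3} = \frac{2\,\rho\,R\,\mathrm{PWV}^{2}}{3\,h}
```

where E is the wall Young's modulus, h the wall thickness, R the lumen radius, and ρ the blood density.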
The set of commercially available chemical substances in commerce that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, World Meteorological Organization, or Environmental Protection Agency, there may be hundreds of additional chemicals that may also have significant GWP. Evaluation of various approaches to estimate radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data is used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed values of RE in this study. Deviations of
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, yields, for most of the monitored station parameters, accuracy comparable to the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of the cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was, however, not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series, or its mitigation by fixing the SRP parameters to pre-defined values.
Choice of Reference Serum Creatinine in Defining AKI
Siew, Edward D.; Matheny, Michael E.
2015-01-01
Background/Aims The study of acute kidney injury (AKI) has expanded with the increasing availability of electronic health records and the use of standardized definitions. Understanding the impact of AKI between settings is limited by heterogeneity in the selection of reference creatinine to anchor the definition of AKI. In this mini-review, we discuss different approaches used to select reference creatinine and their relative merits and limitations. Methods We reviewed the literature to obtain representative examples of published baseline creatinine definitions when pre-hospital data were not available, as well as literature evaluating estimation of baseline renal function, using PubMed and reference back-tracing within known works. Results 1) Prehospital creatinine values are useful in determining reference creatinine, and in high-risk populations, the mean outpatient serum creatinine value 7-365 days before hospitalization closely approximates nephrology adjudication, 2) in patients without pre-hospital data, the eGFR 75 approach does not reliably estimate true AKI incidence in most at-risk populations, 3) using the lowest inpatient serum creatinine may be reasonable, especially in those with preserved kidney function, but may overestimate AKI incidence and severity and miss community-acquired AKI that does not fully resolve, 4) using more specific definitions of AKI (e.g. KDIGO Stage 2 and 3) may help to reduce the effects of misclassification when using surrogate values, and 5) leveraging available clinical data may help refine the estimate of reference creatinine. Conclusions Choosing reference creatinine for AKI calculation is important for AKI classification and study interpretation. We recommend obtaining data on pre-hospital kidney function, wherever possible. In studies where surrogate estimates are used, transparency in how they are applied and discussion that informs the reader of potential biases should be provided. Further work to refine the estimation of reference creatinine is needed. PMID:26332325
Falling weight deflectometer for estimating subgrade resilient moduli.
DOT National Transportation Integrated Search
2003-12-01
Subgrade soil characterization expressed in terms of resilient modulus, MR, has become crucial for pavement design. For : new pavement design, MR values are generally obtained by conducting repeated load triaxial tests on reconstituted/undisturbed : ...
Armah, Seth M
2016-06-01
The fractional zinc absorption values used in the current Dietary Reference Intakes (DRIs) for zinc were based on data from published studies. However, the inhibitory effect of phytate was underestimated because of the low phytate content of the diets in the studies used. The objective of this study was to estimate the fractional absorption of dietary zinc from the US diet by using 2 published algorithms. Nutrient intake data were obtained from the NHANES 2009-2010 and the corresponding Food Patterns Equivalents Database. Data were analyzed with the use of R software by taking into account the complex survey design. The International Zinc Nutrition Consultative Group (IZiNCG; Brown et al. Food Nutr Bull 2004;25:S99-203) and Miller et al. (Br J Nutr 2013;109:695-700) models were used to estimate zinc absorption. Geometric means (95% CIs) of zinc absorption for all subjects were 30.1% (29.9%, 30.2%) or 31.3% (30.9%, 31.6%) with the use of the IZiNCG model and Miller et al. model, respectively. For men, women, and adolescents, absorption values obtained in this study with the use of the 2 models were 27.2%, 31.4%, and 30.1%, respectively, for the IZiNCG model and 28.0%, 33.0%, and 31.6%, respectively, for the Miller et al. model, compared with the 41%, 48%, and 40%, respectively, used in the current DRIs. For preadolescents, estimated absorption values (31.1% and 32.8% for the IZiNCG model and Miller et al. model, respectively) compare well with the conservative estimate of 30% used in the DRIs. When the new estimates of zinc absorption were applied to the current DRI values for men and women, the results suggest that the Estimated Average Requirement (EAR) and RDA for these groups need to be increased by nearly one-half of the current values in order to meet their requirements for absorbed zinc. These data suggest that zinc absorption is overestimated for men, women, and adolescents in the current DRI. Upward adjustments of the DRI for these groups are recommended.
Economics evaluation for on-site pyrolysis of kraft lignin to value-added chemicals.
Farag, Sherif; Chaouki, Jamal
2015-01-01
This work is part of a series of investigations on pyrolysis of lignin. After obtaining the necessary information regarding the quantity and quality of the obtained products, a first-step economic evaluation of converting lignin into chemicals was essential. To accomplish this aim, a pyrolysis plant with a 50 t/d capacity was designed, and the total capital investment and operating costs were estimated. Next, the minimum selling price of the obtained dry oil was calculated, and the effect of crucial variables on the estimated price was examined. The key result indicates that the estimated selling price would not compete with the price of chemicals that are fossil-fuel based, primarily because of the high cost of the feedstock. To overcome this challenge, different scenarios for reducing the selling price of the obtained oil, and thereby making it competitive with fossil-fuel-based chemicals, were discussed.
Betowski, Don; Bevington, Charles; Allison, Thomas C
2016-01-19
Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry are useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data is used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
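A schematic of a Pinnock-type RE estimate from computed spectra (function and parameter names are illustrative, unit conversions are omitted, and the tabulated forcing-per-unit-cross-section curve F(ν) must be supplied; the scale factors reflect the study's finding that scaling frequencies and intensities improves agreement):

```python
import numpy as np

def radiative_efficiency(freqs_cm, intensities, pinnock_nu, pinnock_f,
                         freq_scale=0.96, int_scale=1.0):
    """Pinnock-type estimate: each computed vibrational band contributes
    its (scaled) integrated intensity times the instantaneous radiative
    forcing per unit cross-section F(nu), interpolated from the
    tabulated Pinnock curve at the (scaled) band position."""
    nu = freq_scale * np.asarray(freqs_cm)             # scaled harmonic frequencies
    f_at_band = np.interp(nu, pinnock_nu, pinnock_f)   # forcing function at each band
    return int_scale * np.sum(np.asarray(intensities) * f_at_band)
```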
Royalty, Anne Beeson
2008-01-01
In recent years the cost of health insurance has been increasing much faster than wages. In the face of these rising costs, many employers will have to make difficult decisions about whether to cut back health benefits or to compensate workers with lower wages or lower wage growth. In this paper, we ask the question, "Which do workers value more -- one additional dollar's worth of health benefits or one more dollar in their pockets?" Using a new approach to obtaining estimates of insured workers' marginal valuation of health benefits this paper estimates how much, on average, employees value the marginal dollar paid by employers for their workers' health insurance. We find that insured workers value the marginal health premium dollar at significantly less than the marginal wage dollar. However, workers value insurance generosity very highly. The marginal dollar spent on health insurance that adds an additional dollar's worth of observable dimensions of plan generosity, such as lower deductibles or coverage of additional services, is valued at significantly more than one dollar.
NASA Astrophysics Data System (ADS)
Yoshida, Keisuke; Saito, Tatsuhiko; Urata, Yumi; Asano, Youichi; Hasegawa, Akira
2017-12-01
In this study, we investigated temporal variations in stress drop and b-value in the earthquake swarm that occurred at the Yamagata-Fukushima border, NE Japan, after the 2011 Tohoku-Oki earthquake. In this swarm, frictional strengths were estimated to have changed with time due to fluid diffusion. We first estimated the source spectra for 1,800 earthquakes with 2.0 ≤ MJMA < 3.0, by correcting the site-amplification and attenuation effects determined using both S waves and coda waves. We then determined the corner frequency assuming the omega-square model and estimated the stress drop for 1,693 earthquakes. We found that the estimated stress drops tended to have values of 1-4 MPa and that stress drops significantly changed with time. In particular, the estimated stress drops were very small at the beginning, and increased with time for 50 days. Similar temporal changes were obtained for the b-value; the b-value was very high (b ≈ 2) at the beginning, and decreased with time, becoming approximately constant (b ≈ 1) after 50 days. Patterns of temporal changes in stress drop and b-value were similar to the patterns for frictional strength and earthquake occurrence rate, suggesting that the change in frictional strength due to migrating fluid not only triggered the swarm activity but also affected earthquake and seismicity characteristics. The estimated high Q⁻¹ value, as well as the hypocenter migration, supports the presence of fluid and its role in the generation and physical characteristics of the swarm.
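For scale, the omega-square (Brune-type) corner-frequency-to-stress-drop conversion is sketched below (the shear-wave speed and the example moment/corner frequency are assumed values, not taken from the study):

```python
import numpy as np

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
    """Brune omega-square model: source radius r = 2.34*beta/(2*pi*fc),
    stress drop = 7*M0/(16*r^3). M0 in N*m, fc in Hz, beta in m/s -> Pa."""
    r = 2.34 * beta_ms / (2.0 * np.pi * fc_hz)
    return 7.0 * m0_nm / (16.0 * r ** 3)

# e.g. a ~Mw 2.5 event (M0 ~ 7e12 N*m) with fc = 10 Hz gives ~1.4 MPa,
# inside the 1-4 MPa range reported above.
print(brune_stress_drop(7e12, 10.0) / 1e6)
```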
Sensitivity to experimental data of pollutant site mean concentration in stormwater runoff.
Mourad, M; Bertrand-Krajewski, J L; Chebbo, G
2005-01-01
Urban wet weather discharges are known to be a great source of pollutants for receiving waters, whose protection requires the estimation of long-term discharged pollutant loads. Pollutant loads can be estimated by multiplying a site mean concentration (SMC) by the total runoff volume during a given period of time. The estimation of the SMC value as a weighted mean value, with event runoff volumes as weights, is affected by uncertainties due to the variability of event mean concentrations and to the number of events used. This study, carried out on 13 catchments, gives orders of magnitude of these uncertainties and shows the limitations of usual practices using few measured events.
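The SMC computation itself is a one-liner; a sketch (array names are illustrative):

```python
import numpy as np

def site_mean_concentration(emc, volume):
    """SMC: event-runoff-volume-weighted mean of event mean concentrations."""
    emc, volume = np.asarray(emc), np.asarray(volume)
    return np.sum(emc * volume) / np.sum(volume)

# Long-term load over a period = SMC * total runoff volume in that period.
```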
Extrinsic local regression on manifold-valued data
Lin, Lizhen; St Thomas, Brian; Zhu, Hongtu; Dunson, David B.
2017-01-01
We propose an extrinsic regression framework for modeling data with manifold valued responses and Euclidean predictors. Regression with manifold responses has wide applications in shape analysis, neuroscience, medical imaging and many other areas. Our approach embeds the manifold where the responses lie onto a higher dimensional Euclidean space, obtains a local regression estimate in that space, and then projects this estimate back onto the image of the manifold. Outside the regression setting both intrinsic and extrinsic approaches have been proposed for modeling i.i.d manifold-valued data. However, to our knowledge our work is the first to take an extrinsic approach to the regression problem. The proposed extrinsic regression framework is general, computationally efficient and theoretically appealing. Asymptotic distributions and convergence rates of the extrinsic regression estimates are derived and a large class of examples are considered indicating the wide applicability of our approach. PMID:29225385
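A minimal instance for responses on the unit sphere, where the extrinsic projection step is simply normalization (the Gaussian kernel and bandwidth are illustrative choices):

```python
import numpy as np

def extrinsic_sphere_regression(X, Y, x0, bandwidth=1.0):
    """Extrinsic local (Nadaraya-Watson) regression for unit-sphere
    responses: kernel-average the embedded responses in R^3, then
    project the average back onto the sphere (the extrinsic projection).
    X: (n, d) Euclidean predictors; Y: (n, 3) unit vectors; x0: (d,)."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    avg = (w[:, None] * Y).sum(axis=0) / w.sum()   # local mean in ambient space
    return avg / np.linalg.norm(avg)               # nearest point on the manifold
```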
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) the RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and threshold radius used to construct the RP.
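An empirical counterpart for an AR(1) process is easy to simulate (the DET proxy below, the probability of a consecutive recurrence given a recurrence, is a crude simplification of the full minimum-line-length definition):

```python
import numpy as np

def recurrence_measures(x, eps):
    """Empirical REC and a crude DET proxy for the recurrence plot of a
    scalar series with embedding dimension 1."""
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    rec = R.mean()
    # pairs (i, j) and (i+1, j+1) both recurrent -> diagonal of length >= 2
    diag2 = (R[:-1, :-1] * R[1:, 1:]).mean()
    det = diag2 / rec if rec > 0 else np.nan
    return rec, det

# AR(1) example: x_t = 0.7 * x_{t-1} + white noise
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, x.size):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(recurrence_measures(x, eps=0.5))
```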
Analyzing Small Signal Stability of Power System based on Online Data by Use of SMES
NASA Astrophysics Data System (ADS)
Ishikawa, Hiroyuki; Shirai, Yasuyuki; Nitta, Tanzo; Shibata, Katsuhiko
The purpose of this study is to estimate eigen-values and eigen-vectors of a power system from on-line data in order to evaluate the power system stability. Power system responses due to small power modulations of known pattern from SMES (Superconducting Magnetic Energy Storage) were analyzed, and the transfer functions between the power modulation and the power oscillations of generators were obtained. Eigen-values and eigen-vectors were estimated from the transfer functions. Experiments were carried out by use of a model SMES and the Advanced Power System Analyzer (APSA), an analogue-type power system simulator of Kansai Electric Power Company Inc., Japan. Changes in system condition were observed through the estimated eigen-values and eigen-vectors. Results agreed well with the recent report and digital simulation. This method gives a new application for SMES, which will be installed for improving electric power quality.
NASA Astrophysics Data System (ADS)
Greffrath, Fabian; Prieler, Robert; Telle, Rainer
2014-11-01
A new method for the experimental estimation of radiant heat emittance at high temperatures has been developed which involves aero-acoustic levitation of samples, laser heating and contactless temperature measurement. Radiant heat emittance values are determined from the time-dependent development of the sample temperature, which requires analysis of both the radiant and convective heat transfer towards the surroundings by means of fluid dynamics calculations. First results for the emittance of a corundum sample obtained with this method are presented in this article and are found to be in good agreement with literature values.
Mapping apparent stress and energy radiation over fault zones of major earthquakes
McGarr, A.; Fletcher, Joe B.
2002-01-01
Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.
The color temperature of (2060) Chiron: A warm and small nucleus
NASA Technical Reports Server (NTRS)
Campins, H.; Telesco, C. M.; Osip, D. J.; Rieke, G. H.; Rieke, M. J.; Schulz, B.
1994-01-01
We present three sets of thermal-infrared observations of (2060) Chiron, obtained in 1991, 1993, and 1994. These observations allow the first estimates of the color temperature of Chiron as well as refined estimates of the radius and albedo of its nucleus. 10/20 micrometer color temperatures of 126 (+11/−6) K and 137 (+14/−9) K are obtained from the 1993 and 1994 observations, respectively. These temperatures are consistent with the Standard Thermal Model (STM; Lebofsky & Spencer, 1989), but significantly higher than those predicted by the Isothermal Latitude Model. Our estimates of Chiron's radius based on the STM are in agreement with each other, with the observations of Lebofsky et al. (1984), and with recent occultation results (Buie et al., 1993). We obtained values for the radius of 74 +/- 11 km in 1991, 88 +/- 10 and 104 +/- 10 km in 1993, and 94 +/- 6 and 91 +/- 13 km in 1994.
Estimating forest biomass and volume using airborne laser data
NASA Technical Reports Server (NTRS)
Nelson, Ross; Krabill, William; Tonelli, John
1988-01-01
An airborne pulsed laser system was used to obtain canopy height data over a southern pine forest in Georgia in order to predict ground-measured forest biomass and timber volume. Although biomass and volume estimates obtained from the laser data were variable when compared with the corresponding ground measurements site by site, the present models are found to predict mean total tree volume within 2.6 percent of the ground value, and mean biomass within 2.0 percent. The results indicate that species stratification did not consistently improve regression relationships for four southern pine species.
Weather adjustment using seemingly unrelated regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noll, T.A.
1995-05-01
Seemingly unrelated regression (SUR) is a system estimation technique that accounts for time-contemporaneous correlation between individual equations within a system of equations. SUR is suited to weather adjustment estimations when the estimation is: (1) composed of a system of equations and (2) the system of equations represents either different weather stations, different sales sectors or a combination of different weather stations and different sales sectors. SUR utilizes the cross-equation error values to develop more accurate estimates of the system coefficients than are obtained using ordinary least-squares (OLS) estimation. SUR estimates can be generated using a variety of statistical software packages, including MicroTSP and SAS.
Optimal estimation of the optomechanical coupling strength
NASA Astrophysics Data System (ADS)
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jen, M; Yan, F; Tseng, Y
2015-06-15
Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties of tagging efficiency in pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare resultant CBF values with those calibrated by using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The DeltaM map was calculated by averaging the subtraction of tag/control pairs in pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required for obtaining a stable DeltaM value. Setting the DeltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in a FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that reliable estimation of tagging efficiency could be obtained with a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
NASA Astrophysics Data System (ADS)
Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong
2016-07-01
Since the main power source of a hybrid electric vehicle (HEV) is the power battery, the predicted performance of the power battery, especially the state-of-charge (SOC) estimation, has attracted great attention in the area of HEVs. However, SOC estimates are often not sufficiently precise, and the running performance of the HEV suffers as a result. A variable structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under a fixed driving condition, is presented. First, the general lower-order battery equivalent circuit model (GLM), which includes a column accumulation model, an open circuit voltage model and the SOC output model, is established, and the off-line and online model parameters are calculated with hybrid pulse power characteristics (HPPC) test data. Next, a VSEKF estimation method for SOC, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, which are determined according to the different values of open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, faster convergence and more accurate simulation results are obtained using the VSEKF method for the running performance of the HEV. The error rate of SOC estimation with the VSEKF method falls in the range of 5% to 10%, compared with the range of 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for the lithium-ion battery cell and the lithium-ion battery pack obtained using the VSEKF method is significantly improved compared with the Ah integration method and the EKF method. The VSEKF method for SOC estimation in the lithium-ion pack of an HEV can be widely used in practical driving conditions.
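A stripped-down sketch of blending Ah integration with an EKF-style OCV correction (the slope-based weight w stands in for the paper's OCV-derived coefficients, which are not reproduced here; ocv and docv_dsoc are user-supplied battery-model functions):

```python
def soc_step(soc, i_amp, v_meas, dt, capacity_ah, ocv, docv_dsoc,
             p, q=1e-7, r=1e-3):
    """One blended update: Ah-integration prediction plus an EKF-style
    OCV-based correction. The adaptive weight w (here tied to the local
    OCV slope) downweights the correction on the flat part of the OCV
    curve, mimicking the variable-structure idea.
    i_amp: current in A (discharge positive); p: scalar error covariance."""
    soc_pred = soc - i_amp * dt / (3600.0 * capacity_ah)  # coulomb counting
    p += q
    h = docv_dsoc(soc_pred)                # measurement Jacobian dOCV/dSOC
    k = p * h / (h * p * h + r)            # Kalman gain
    w = min(1.0, abs(h) / 0.5)             # trust OCV only where it is informative
    soc_new = soc_pred + w * k * (v_meas - ocv(soc_pred))
    p = (1.0 - w * k * h) * p
    return soc_new, p
```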
41 CFR 102-75.1275 - Does a requesting agency have to pay for excess real property?
Code of Federal Regulations, 2010 CFR
2010-07-01
... agency have to pay for excess real property? Yes. GSA is required by law to obtain full fair market value... interest, will promptly provide each interested landholding agency with an estimate of fair market value... amended by Pub. L. 92-432. (b) Wildlife Conservation under Pub. L. 80-537. (c) Federal Correctional...
Effect of wear on the burst strength of l-80 steel casing
NASA Astrophysics Data System (ADS)
Irawan, S.; Bharadwaj, A. M.; Temesgen, B.; Karuppanan, S.; Abdullah, M. Z. B.
2015-12-01
Casing wear has recently become one of the areas of research interest in the oil and gas industry, especially in extended reach well drilling. The burst strength of a worn-out casing is one of the most significantly affected mechanical properties, and yet an area where little research has been done. The most commonly used equations to calculate the resulting burst strength after wear are the Barlow, initial yield burst, full yield burst and rupture burst equations. The objective of this study was to estimate casing burst strength after wear through Finite Element Analysis (FEA). It included calculation and comparison of the different theoretical burst pressures with the simulation results, along with the effect of different wear shapes on L-80 casing material. The von Mises stress was used in the estimation of the burst pressure. The results obtained show that the casing burst strength decreases as the wear percentage increases. Moreover, the burst strength of the casing obtained from the FEA is higher than the theoretical burst strength values. Casing with crescent-shaped wear gives the highest burst strength value when simulated under nonlinear analysis.
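For orientation, the Barlow relation P = 2·S·t/D with a wear-reduced wall is sketched below (the casing dimensions are example values, and the API 0.875 wall-tolerance factor used in practice is omitted):

```python
def barlow_burst_psi(yield_psi, wall_in, od_in, wear_frac=0.0):
    """Barlow burst pressure P = 2*S*t/D, with the wall thickness reduced
    by the wear fraction to approximate a uniformly worn casing."""
    t = wall_in * (1.0 - wear_frac)
    return 2.0 * yield_psi * t / od_in

# L-80 example: 80 ksi yield, 9 5/8 in OD, 0.472 in wall, 20% wear
print(barlow_burst_psi(80_000, 0.472, 9.625, 0.20))  # ~6280 psi vs ~7850 psi unworn
```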
Joint groupwise registration and ADC estimation in the liver using a B-value weighted metric.
Sanz-Estébanez, Santiago; Rabanillo-Viloria, Iñaki; Royuela-Del-Val, Javier; Aja-Fernández, Santiago; Alberola-López, Carlos
2018-02-01
The purpose of this work is to develop a groupwise elastic multimodal registration algorithm for robust ADC estimation in the liver on multiple breath-hold diffusion-weighted images. We introduce a joint formulation to simultaneously solve both the registration and the estimation problems. In order to avoid non-reliable transformations and undesirable noise amplification, we have included appropriate smoothness constraints for both problems. Our metric incorporates the ADC estimation residuals, which are inversely weighted according to the signal content in each diffusion-weighted image. Results show that the joint formulation provides a statistically significant improvement in the accuracy of the ADC estimates. Reproducibility has also been measured on real data in terms of the distribution of ADC differences obtained from different b-value subsets. The proposed algorithm is able to effectively deal with both the presence of motion and the geometric distortions, increasing accuracy and reproducibility in diffusion parameter estimation.
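For context, the underlying single-voxel model is S(b) = S0·exp(−b·ADC); a signal-weighted log-linear fit is sketched below (the weighting only loosely mirrors the paper's signal-content weighting, and the numbers are synthetic):

```python
import numpy as np

def fit_adc(signals, bvals):
    """Weighted linear least squares on log(S) = log(S0) - b*ADC,
    weighting each sample by its signal magnitude (high-b images are
    noisier in the log domain)."""
    s = np.asarray(signals, float)
    b = np.asarray(bvals, float)
    A = np.stack([np.ones_like(b), -b], axis=1)
    sw = s[:, None]                          # sqrt-weight = signal magnitude
    coef, *_ = np.linalg.lstsq(sw * A, s * np.log(s), rcond=None)
    log_s0, adc = coef
    return np.exp(log_s0), adc

print(fit_adc([1000, 600, 370], [0, 400, 800]))  # ADC ~ 1.25e-3 mm^2/s
```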
Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system
NASA Astrophysics Data System (ADS)
Wang, Daobin; Yuan, Lihua; Lei, Jingli; wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye
2017-12-01
In this paper, we focus on analysis of the preamble-based joint estimation for channel and laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the noise impact on the estimation accuracy, we proposed an estimation method based on inter-frame averaging. This method averages the cross-correlation function of real-valued pilots within multiple FBMC frames. The laser-frequency offset is estimated according to the phase of this average. After correcting LFO, the final channel response is also acquired by averaging channel estimation results within multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically using different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
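A bare-bones sketch of the inter-frame averaging idea (assumes a quasi-flat channel across the pilot block and known real-valued pilots; the paper's preamble structure and IMI handling are not reproduced):

```python
import numpy as np

def estimate_lfo(rx_pilots, tx_pilots, spacing):
    """Average, over frames, the lag-`spacing` correlation of the
    pilot-demodulated samples; the frequency offset is read from the
    phase of the accumulated sum (averaging over frames suppresses noise).
    rx_pilots/tx_pilots: lists of per-frame pilot sample arrays."""
    acc = 0.0 + 0.0j
    for r, t in zip(rx_pilots, tx_pilots):
        d = r / t                      # remove the known pilot modulation
        acc += np.sum(d[spacing:] * np.conj(d[:-spacing]))
    return np.angle(acc) / (2 * np.pi * spacing)   # offset in cycles/sample
```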
Global estimate of net annual carbon flow to phenylpropanoid metabolism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walton, A.B.; Norman, E.G.; Turpin, D.H.
1993-05-01
The steady increase in the concentration of CO2 in the atmosphere is the focus of renewed interest in the global carbon cycle. Current research is centered upon modeling the effects of the increasing CO2 concentrations, and thus global warming, on global plant homeostasis. It has been estimated that the annual net primary production (NPP) values for terrestrial and oceanic biomes are 59.9 and 35 Pg C yr⁻¹, respectively (Melillo et al., 1990). Based on these NPP values, we have estimated the annual C flow to phenylpropanoid metabolism. In our calculation, lignin was used as a surrogate for phenylpropanoid compounds, as lignin is the second most abundant plant polymer. This approach means that our estimate defines the lower limit of C flow to phenylpropanoid metabolism. Each biome was considered separately to determine the percent of the NPP which was directed to the biosynthesis of leaves, stems/branches, and roots. From published values of the lignin content of these organs, the total amount of C directed to the biosynthesis of lignin in each biome was determined. This was used to obtain a global value. Implications of these estimates will be discussed with reference to plant carbon and nitrogen metabolism.
Technical note: Design flood under hydrological uncertainty
NASA Astrophysics Data System (ADS)
Botto, Anna; Ganora, Daniele; Claps, Pierluigi; Laio, Francesco
2017-07-01
Planning and verification of hydraulic infrastructures require a design estimate of hydrologic variables, usually provided by frequency analysis, and neglecting hydrologic uncertainty. However, when hydrologic uncertainty is accounted for, the design flood value for a specific return period is no longer a unique value, but is represented by a distribution of values. As a consequence, the design flood is no longer univocally defined, making the design process undetermined. The Uncertainty Compliant Design Flood Estimation (UNCODE) procedure is a novel approach that, starting from a range of possible design flood estimates obtained in uncertain conditions, converges to a single design value. This is obtained through a cost-benefit criterion with additional constraints that is numerically solved in a simulation framework. This paper contributes to promoting a practical use of the UNCODE procedure without resorting to numerical computation. A modified procedure is proposed by using a correction coefficient that modifies the standard (i.e., uncertainty-free) design value on the basis of sample length and return period only. The procedure is robust and parsimonious, as it does not require additional parameters with respect to the traditional uncertainty-free analysis. Simple equations to compute the correction term are provided for a number of probability distributions commonly used to represent the flood frequency curve. The UNCODE procedure, when coupled with this simple correction factor, provides a robust way to manage the hydrologic uncertainty and to go beyond the use of traditional safety factors. With all the other parameters being equal, an increase in the sample length reduces the correction factor, and thus the construction costs, while still keeping the same safety level.
Jackson, R. D.; Moran, M.S.; Gay, L.W.; Raymond, L.H.
1987-01-01
Airborne measurements of reflected solar and emitted thermal radiation were combined with ground-based measurements of incoming solar radiation, air temperature, wind speed, and vapor pressure to calculate instantaneous evaporation (LE) rates using a form of the Penman equation. Estimates of evaporation over cotton, wheat, and alfalfa fields were obtained on 5 days during a one-year period. A Bowen ratio apparatus, employed simultaneously, provided ground-based measurements of evaporation. Comparison of the airborne and ground techniques showed good agreement, with the greatest difference being about 12% for the instantaneous values. Estimates of daily (24 h) evaporation were made from the instantaneous data. On three of the five days, the difference between the two techniques was less than 8%, with the greatest difference being 25%. The results demonstrate that airborne remote sensing techniques can be used to obtain spatially distributed values of evaporation over agricultural fields.
Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong
2006-09-01
Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress by estimating the SPAD value of the canola from canopy reflectance sensed with the three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration between the multi-spectral measurements and the crop nitrogen levels measured with a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
Evaluation of Rotor Structural and Aerodynamic Loads using Measured Blade Properties
NASA Technical Reports Server (NTRS)
Jung, Sung N.; You, Young-Hyun; Lau, Benton H.; Johnson, Wayne; Lim, Joon W.
2012-01-01
The structural properties of the Higher-harmonic Aeroacoustic Rotor Test (HART I) blades have been measured using the original set of blades tested in the wind tunnel in 1994. A comprehensive rotor dynamics analysis is performed to address the effect of the measured blade properties on the airloads, blade motions, and structural loads of the rotor. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties are correlated against the estimated values obtained initially by the manufacturer of the blades. The previously estimated blade properties showed consistently higher stiffnesses, up to 30% for flap bending in the blade inboard root section. The measured offset between the center of gravity and the elastic axis is larger by about 5% chord length compared with the estimated value. The comprehensive rotor dynamics analysis was carried out using the measured blade property set for the HART I rotor with and without HHC (Higher Harmonic Control) pitch inputs. A significant improvement in the predicted blade motions and structural loads is obtained with the measured blade properties.
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong
2017-05-01
Infrared small target tracking plays an important role in applications including military reconnaissance, early warning, and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. First, the SVD-based step takes advantage of the target's global information to obtain a background estimate of the infrared image; the dim target is then enhanced by subtracting the continuously updated background estimate from the original image. Second, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF is adopted to preserve the edges and suppress the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameter for a small target. Finally, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy compared with several state-of-the-art algorithms.
Estimation and analysis of interannual variations in tropical oceanic rainfall using data from SSM/I
NASA Technical Reports Server (NTRS)
Berg, Wesley
1992-01-01
Rainfall over tropical ocean regions, particularly in the tropical Pacific, is estimated using Special Sensor Microwave/Imager (SSM/I) data. Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-Matrix algorithm. Comparisons with other satellite techniques are made to validate the SSM/I results for the tropical Pacific. The correlation coefficients are relatively high for the three data sets investigated, especially for the annual case.
Moments of inertia of relativistic magnetized stars
NASA Astrophysics Data System (ADS)
Konno, K.
2001-06-01
We consider principal moments of inertia of axisymmetric, magnetically deformed stars in the context of general relativity. The general expression for the moment of inertia with respect to the symmetric axis is obtained. The numerical estimates are derived for several polytropic stellar models. We find that the values of the principal moments of inertia are modified by a factor of 2 at most from Newtonian estimates.
Effect of pregnancy on the genetic evaluation of dairy cattle.
Pereira, R J; Santana, M L; Bignardi, A B; Verneque, R S; El Faro, L; Albuquerque, L G
2011-09-26
We investigated the effect of stage of pregnancy on estimates of breeding values for milk yield and milk persistency in Gyr and Holstein dairy cattle in Brazil. Test-day milk yield records were analyzed using random regression models with or without the effect of pregnancy. Models were compared using residual variances, heritabilities, rank correlations of estimated breeding values of bulls and cows, and number of nonpregnant cows in the top 200 for milk yield and milk persistency. The estimates of residual variance and heritabilities obtained with the models with or without the effect of pregnancy were similar for the two breeds. Inclusion of the effect of pregnancy in genetic evaluation models for these populations did not affect the ranking of cows and sires based on their predicted breeding values for 305-day cumulative milk yield. In contrast, when we examined persistency of milk yield, lack of adjustment for the effect of pregnancy overestimated breeding values of nonpregnant cows and cows with a long days open period and underestimated breeding values of cows with a short days open period. We recommend that models include the effect of days of pregnancy for estimation of adjustment factors for the effect of pregnancy in genetic evaluations of Dairy Gyr and Holstein cattle.
Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.
2015-01-01
In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times: first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values obtained with a Respitrace system, which was considered the reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we also estimated the tidal volume from the same sounds using SE. The estimation performance of the BFD and SE methods was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (the NRMSE was at least a factor of two smaller). The smallest NRMSE, 15.877% ± 9.246% (mean ± standard deviation), was obtained with the BFD and the exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
NASA Astrophysics Data System (ADS)
Franco, Renato A. M.; Hernandez, Fernando B. T.; Teixeira, Antonio H. C.
2014-10-01
Water productivity (WP) of various classes of soil use in watersheds was estimated using the SAFER (Simple Algorithm For Evapotranspiration Retrieving) algorithm together with the Monteith equation for biomass production (BIO). Monteith's equation is used to quantify the absorbed photosynthetically active radiation (APAR), and actual evapotranspiration (ET) was estimated with the SAFER algorithm. The objective of the research is to analyze the spatio-temporal water productivity in watersheds with different uses and soil occupation during the period from 1996 to 2010, under drought conditions, using the Monteith model to estimate BIO and the SAFER model for ET. Results indicated an increase of 153.2% in ET during the period 1997-2010, showing that the irrigated areas were responsible for this increase in ET values. In September 2000, the image for day of year (DOY) 210 showed high values of BIO, with an average of 80.67 kg ha^-1 d^-1. In 2010 (DOY 177), the mean value of BIO was 62.90 kg ha^-1 d^-1, with a maximum value of 227.5 kg ha^-1 d^-1 in an irrigated area. The highest incremental values of BIO were found in the irrigated areas, mirroring the ET pattern, since BIO and ET are related. The maximum water productivity (WP) value occurred in June 2001, at 3.08 kg m^-3; the second highest value was in 2010 (DOY 177), with a value of 2.97 kg m^-3. Irrigated agriculture shows the highest WP values, with a maximum of 6.7 kg m^-3. The lowest WP was obtained for DOY 267, because of the dry-season condition of low soil moisture.
Piñero, David P.; Camps, Vicente J.; Ramón, María L.; Mateo, Verónica; Pérez-Cambrodí, Rafael J.
2015-01-01
AIM To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). METHODS Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the IOL power implanted (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). RESULTS PIOLReal was not significantly different than PIOLadj and Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed lower mean difference (-0.07 D) and limits of agreement (of 1.47 and -1.61 D) when compared to PIOLReal than the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. CONCLUSION Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors. PMID:26085998
Piñero, David P; Camps, Vicente J; Ramón, María L; Mateo, Verónica; Pérez-Cambrodí, Rafael J
2015-01-01
To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the IOL power implanted (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). PIOLReal was not significantly different than PIOLadj and Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed lower mean difference (-0.07 D) and limits of agreement (of 1.47 and -1.61 D) when compared to PIOLReal than the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors.
Financial market dynamics: superdiffusive or not?
NASA Astrophysics Data System (ADS)
Devi, Sandhya
2017-08-01
The behavior of stock market returns over periods of 1-60 d has been investigated for the S&P 500 and Nasdaq within the framework of nonextensive Tsallis statistics. Even for such long terms, the distributions of the returns are non-Gaussian. They have fat tails, indicating that stock returns do not follow a random walk model. In this work, a good fit to a Tsallis q-Gaussian distribution is obtained for the distributions of all the returns using the method of maximum likelihood estimation. For all the regions of data considered, the values of the scaling parameter q, estimated from 1 d returns, lie in the range 1.4-1.65. The estimated inverse mean square deviations (beta) show a power law behavior in time with exponent values between -0.91 and -1.1, indicating normal to mildly subdiffusive behavior. Quite often, the dynamics of market return distributions is modelled by a Fokker-Planck (FP) equation either with a linear drift and a nonlinear diffusion term or with just a nonlinear diffusion term. Both of these cases support a q-Gaussian distribution as a solution. The distributions obtained from the currently estimated parameters are compared with the solutions of the FP equations. For a negligible drift term, the inverse mean square deviations (beta_FP) from the FP model follow a power law with exponent values between -1.25 and -1.48, indicating superdiffusion. When the drift term is non-negligible, the corresponding beta_FP do not follow a power law and become stationary after certain characteristic times that depend on the values of the drift parameter and q. Neither of these behaviors is supported by the results of the empirical fit.
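A sketch of the MLE fit to a q-Gaussian, assuming the standard normalization for 1 < q < 3; variable names and starting values are illustrative, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def q_gaussian_logpdf(x, q, beta):
    """log pdf of the Tsallis q-Gaussian for 1 < q < 3 (heavy-tailed case)."""
    # Normalization constant C_q for 1 < q < 3 (standard in the Tsallis literature).
    cq = (np.sqrt(np.pi) * gamma((3.0 - q) / (2.0 * (q - 1.0)))
          / (np.sqrt(q - 1.0) * gamma(1.0 / (q - 1.0))))
    return (-np.log(cq) + 0.5 * np.log(beta)
            + (1.0 / (1.0 - q)) * np.log1p((q - 1.0) * beta * x**2))

def fit_q_gaussian(returns):
    """MLE of (q, beta) for de-meaned returns."""
    x = returns - returns.mean()
    nll = lambda p: -np.sum(q_gaussian_logpdf(x, p[0], p[1]))
    res = minimize(nll, x0=[1.5, 1.0 / x.var()],
                   bounds=[(1.01, 2.99), (1e-8, None)])
    return res.x  # (q_hat, beta_hat)

# q_hat, beta_hat = fit_q_gaussian(daily_log_returns)
```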
1982-04-23
[Equation residue lost in extraction; surviving fragments: "monolayer", B = 4.01 x 10^? cm, A = 0.128, ... = o/s.] The data of Rehfeld (17) for the adsorption of sodium dodecyl sulfate have also been ... Estimates of Aerosol OT and sodium dodecyl sulfate saturation adsorption at the interface can be made when the ... of the oil-water system and the ... of the Aerosol OT. For sodium dodecyl sulfate, a value of 37.6 Å² would be obtained, slightly lower than the value of 43.9 Å² obtained at the air surfactant ...
NASA Astrophysics Data System (ADS)
Bárdossy, András; Pegram, Geoffrey
2017-01-01
The use of radar measurements for the space-time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this paper we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations, which are important for design. The purpose of the paper is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed gauged daily amounts are interpolated to unsampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short-duration extremes. Additionally, a statistical procedure not based on a day-by-day matching correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected, leaving a small number of L days of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these L day maxima is first interpolated using a kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest-neighbour procedure. The daily sums are then disaggregated using the relative values of the biggest L radar-based days. Of course, the timings of radar and gauge maxima can differ, so the method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense (10 km spacing) set of 45 pluviometers recording in the same 6-year period. This valuable set of data was obtained from each of 37 selected radar pixels [1 km square in plan] which contained a pluviometer not masked out by the radar footprint. The pluviometer data were also aggregated to daily totals, for the same purpose. The extremes obtained using the disaggregation methods were compared to the observed extremes in a cross-validation procedure. The unusual and novel goal was not to reproduce precipitation matched in space and time, but to obtain frequency distributions of the point extremes, which we found to be stable.
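The core disaggregation step (gauge depth, radar timing) can be sketched in a few lines; the function name and array shapes are illustrative, with 96 intervals of 15 min per day:

```python
import numpy as np

def disaggregate_daily(gauge_daily_mm, radar_subdaily_mm):
    """Disaggregate an interpolated daily gauge total to 15-min values using
    the radar's sub-daily pattern at the same pixel.

    gauge_daily_mm    : scalar daily total at the (possibly ungauged) point
    radar_subdaily_mm : (96,) radar depths for the 96 15-min intervals of the day
    """
    total = radar_subdaily_mm.sum()
    if total <= 0.0:                        # radar saw nothing: no basis to split
        return np.zeros_like(radar_subdaily_mm)
    weights = radar_subdaily_mm / total     # radar supplies the timing...
    return gauge_daily_mm * weights         # ...the gauge supplies the depth

# Sub-hourly maxima for extreme-value analysis:
# vals = disaggregate_daily(42.0, radar_day); peak_15min = vals.max()
```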
Axial-vector form factors of the nucleon from lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Rajan; Jang, Yong-Chull; Lin, Huey-Wen
2017-12-04
In this paper, we present results for the form factors of the isovector axial vector current in the nucleon state using large-scale simulations of lattice QCD. The calculations were done using eight ensembles of gauge configurations generated by the MILC collaboration using the HISQ action with 2+1+1 dynamical flavors. These ensembles span three lattice spacings a ≈ 0.06, 0.09, and 0.12 fm and light-quark masses corresponding to the pion masses M_π ≈ 135, 225, and 310 MeV. High-statistics estimates allow us to quantify systematic uncertainties in the extraction of G_A(Q^2) and the induced pseudoscalar form factor G_P(Q^2). We perform a simultaneous extrapolation in the lattice spacing, lattice volume, and light-quark masses of the axial charge radius r_A data to obtain physical estimates. Using the dipole ansatz to fit the Q^2 behavior we obtain r_A|dipole = 0.49(3) fm, which corresponds to M_A = 1.39(9) GeV and is consistent with M_A = 1.35(17) GeV obtained by the MiniBooNE collaboration. The estimate obtained using the z-expansion is r_A|z-expansion = 0.46(6) fm, and the combined result is r_A|combined = 0.48(4) fm. Analysis of the induced pseudoscalar form factor G_P(Q^2) yields low estimates for g*_P and g_πNN compared to their phenomenological values. To understand these, we analyze the partially conserved axial current (PCAC) relation by also calculating the pseudoscalar form factor. We find that these low values are due to large deviations in the PCAC relation between the three form factors, and in the pion-pole dominance hypothesis.
NASA Astrophysics Data System (ADS)
Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric
2018-06-01
Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer-scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
Frick, K D; Keuffel, E L; Bowman, R J
2001-07-01
Untreated trichiasis can lead to corneal opacity. Surgery to prevent the eyelashes from rubbing against the cornea is available, but many individuals with trichiasis never undergo the operation. This study estimates the cost of illness of untreated trichiasis and the willingness to pay for surgery and compares them with the actual cost of providing surgery. The cost of illness estimate is based on trichiasis patient demographics. Data on the implicit price of obtaining surgery and surgical utilization in a matched pair randomized trial are used to infer individual willingness to pay for trichiasis surgery. Patients in the study paid nothing out-of-pocket for surgery; the price of obtaining surgery is the value of the individual's time needed for travel and surgery plus the price of public transportation. The cost of producing surgery was calculated from project records. All monetary figures are reported in 1998 US dollars. The average cost of untreated trichiasis, or the net present value of life-time lost economic productivity, was $89. Individuals facing a lower cost were more likely to undergo an operation; the inferred average willingness to pay was $1.43 (SD 0.244). Surgery cost $6.13 to provide, including $0.86 for transportation to the village. Whether the value of trichiasis surgery exceeds the cost in The Gambia depends on how the value is measured. Individuals are willing to use only limited resources to obtain surgery even though lifetime economic productivity may increase substantially. All three economic measures can be used to inform policy.
NASA Astrophysics Data System (ADS)
Namulema, Mary Jude
2016-04-01
This study examined the relevance of economic valuation of wetlands in Uganda. A case study was done on the Kiyanja-Kaku wetland in Lwengo District in Central Uganda using a semi-structured survey. Three objectives were examined: (i) to identify wetland ecosystem services in Uganda; (ii) to identify the economic valuation methods appropriate for wetlands in Uganda; (iii) to value the clean water obtained from the Kiyanja-Kaku wetland. The wetland ecosystem services were identified as provisioning, regulating, habitat, and cultural and amenity services. The community had knowledge of 17 of the 22 services given by TEEB (2010). The economic valuation methods identified were the market price, efficiency price, travel cost, contingent valuation, hedonic pricing, production function, and benefit transfer methods. These are appropriate for valuation of wetlands in Uganda, but only three methods, i.e. market price, contingent valuation, and productivity methods, have been applied by researchers in Uganda so far. The economic value of clean water from the Kiyanja-Kaku wetland to the nearby community was established by using the price the National Water and Sewerage Corporation charges for clean water in Uganda to obtain the low value, and the market price of water from the survey to obtain the high value. The estimated economic value of the clean water service for a household ranges from UGX 612,174 to 4,054,733 (US$ 168.0-1,095.0). The estimated economic value of the clean water service from the Kiyanja-Kaku wetland to the entire community ranges from UGX 2,732,133,000 to 18,096,274,000 (US$ 775,228.0-4,885,994.0).
NASA Astrophysics Data System (ADS)
Khwaja, Tariq S.; Mazhar, Mohsin Ali; Niazi, Haris Khan; Reza, Syed Azer
2017-06-01
In this paper, we present the design of a proposed optical rangefinder to determine the distance of a semi-reflective target from the sensor module. The sensor module deploys a simple Tunable Focus Lens (TFL), a Laser Source (LS) with a Gaussian beam profile, and a digital beam profiler/imager. We show that, owing to the nature of existing measurement methodologies, previous attempts to use a simple TFL to estimate target distance mostly deliver "one-shot" distance estimates instead of obtaining and using a larger dataset, which can significantly reduce the effect of a few largely incorrect individual data points on the final distance estimate. Averaging over a measurement dataset also smooths out errors in individual data points, effectively low-pass filtering unexpectedly large measurement offsets. In this paper, we show that a simple setup deploying an LS, a TFL, and a beam profiler or imager delivers an entire measurement dataset, thus mitigating the accuracy limitations associated with "one-shot" measurement techniques. In the proposed technique, a Gaussian beam from the LS passes through the TFL; tuning the focal length of the TFL alters the spot size of the beam at the imager plane. Recording the spot radius at the beam profiler for each unique setting of the TFL provides a measurement dataset from which a significantly improved estimate of the target distance is obtained, as opposed to relying on a single measurement. We show that an iterative least-squares curve fit on the recorded data allows us to estimate the distances of remote objects very precisely. Using basic ray-optics approximations, we also obtain an initial seed value for the distance estimate and subsequently refine this value through iterative residual reduction in the least-squares sense. In our experiments, we use a MEMS-based Digital Micromirror Device (DMD) as the beam imager/profiler, as it delivers an accurate estimate of a Gaussian beam profile. The proposed method, its working, and the distance estimation methodology are discussed in detail. As a proof of concept, we back our claims with initial experimental results.
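A hedged sketch of the fitting idea, using complex-q Gaussian beam propagation through a thin lens; the wavelength, input waist, and the assumption that the waist sits at the TFL plane are illustrative, not the paper's values:

```python
import numpy as np
from scipy.optimize import least_squares

LAM = 633e-9   # assumed laser wavelength (m)
W0  = 0.5e-3   # assumed beam waist radius at the TFL (m)

def spot_radius(f, z):
    """Spot radius a distance z after a thin lens of focal length f,
    via the complex beam parameter q (waist assumed at the lens plane)."""
    zr = np.pi * W0**2 / LAM                  # Rayleigh range of the input beam
    q1 = 1.0 / (1.0 / (1j * zr) - 1.0 / f)    # thin-lens transformation of q
    qz = q1 + z                               # free-space propagation to the imager
    return np.sqrt(-LAM / (np.pi * np.imag(1.0 / qz)))

def estimate_distance(focal_lengths, measured_radii, z_seed):
    """Least-squares fit of the unknown distance z to the whole (f, w) dataset,
    starting from a ray-optics seed value, as in the scheme described above."""
    f = np.asarray(focal_lengths)
    w = np.asarray(measured_radii)
    res = least_squares(lambda z: spot_radius(f, z[0]) - w, x0=[z_seed])
    return res.x[0]
```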
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution follow a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution well. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation in a computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated by the computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies involving a few bacterial cells. In particular, the procedure presented in this study enables the development of an inactivation model at the single-cell level that can estimate the variability in the number of surviving bacteria during the bacterial death process.
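A sketch of the likelihood ratio (G) test of Poisson-ness with the rate fixed at its MLE; the function name and the no-pooling simplification are ours, not the paper's:

```python
import numpy as np
from scipy.stats import poisson, chi2

def poisson_lr_test(counts):
    """Likelihood-ratio (G) test of 'counts ~ Poisson(lam)', lam fixed at its MLE.
    Categories are not pooled, so use with reasonably large samples."""
    counts = np.asarray(counts)
    lam_hat = counts.mean()                       # Poisson MLE of the mean
    values, observed = np.unique(counts, return_counts=True)
    expected = counts.size * poisson.pmf(values, lam_hat)
    g = 2.0 * np.sum(observed * np.log(observed / expected))
    dof = max(len(values) - 2, 1)                 # categories - 1 - one fitted param
    return g, chi2.sf(g, dof)

# counts = np.array([1, 0, 2, 1, 1, 0, 3, 2, 1, 0, 1, 2])
# g, p = poisson_lr_test(counts)   # large p: no evidence against the Poisson model
```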
2014-01-01
Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
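For reference, the traditional mono-exponential back-extrapolation that the proposed method improves on can be sketched as follows; the fit window, units, and names are illustrative assumptions:

```python
import numpy as np

def plasma_volume_backextrap(t_min, conc_mg_per_l, dose_mg, fit_window=(2.0, 5.0)):
    """Traditional mono-exponential back-extrapolation: fit ln C = ln C0 - k*t
    over an early post-injection window and take PV = dose / C0."""
    t = np.asarray(t_min)
    c = np.asarray(conc_mg_per_l)
    m = (t >= fit_window[0]) & (t <= fit_window[1])
    slope, intercept = np.polyfit(t[m], np.log(c[m]), 1)
    c0 = np.exp(intercept)          # back-extrapolated concentration at t = 0
    return dose_mg / c0             # plasma volume in litres

# pv = plasma_volume_backextrap(t_samples, icg_conc, dose_mg=25.0)
```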
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Ito, Tetsuya; Fukawa, Kazuo; Kamikawa, Mai; Nikaidou, Satoshi; Taniguchi, Masaaki; Arakawa, Aisaku; Tanaka, Genki; Mikawa, Satoshi; Furukawa, Tsutomu; Hirose, Kensuke
2018-01-01
Daily feed intake (DFI) is an important consideration for improving feed efficiency, but measurements using electronic feeder systems contain many missing and incorrect values. Therefore, we evaluated three methods for correcting missing DFI data (quadratic, orthogonal polynomial, and locally weighted (Loess) regression equations) and assessed the effects of these missing values on the genetic parameters and the estimated breeding values (EBV) for feeding traits. DFI records were obtained from 1622 Duroc pigs, comprising 902 individuals without missing DFI and 720 individuals with missing DFI. The Loess equation was the most suitable method for correcting the missing DFI values in 5-50% randomly deleted datasets among the three equations. Both variance components and heritability for the average DFI (ADFI) did not change because of the missing DFI proportion and Loess correction. In terms of rank correlation and information criteria, Loess correction improved the accuracy of EBV for ADFI compared to randomly deleted cases. These findings indicate that the Loess equation is useful for correcting missing DFI values for individual pigs and that the correction of missing DFI values could be effective for the estimation of breeding values and genetic improvement using EBV for feeding traits.
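A sketch of Loess-based imputation of missing DFI records for one animal, assuming a statsmodels version whose lowess supports evaluation at new points (xvals); the smoothing span is an arbitrary choice, not the paper's:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def impute_dfi_loess(day, dfi, frac=0.3):
    """Fill missing daily feed intake (DFI) values for one animal with a
    Loess fit to its observed records. `frac` is an assumed smoothing span."""
    day = np.asarray(day, dtype=float)
    out = np.asarray(dfi, dtype=float).copy()
    missing = np.isnan(out)
    if missing.any():
        # Fit on observed days, evaluate the smooth curve at the missing days.
        out[missing] = lowess(out[~missing], day[~missing],
                              frac=frac, xvals=day[missing])
    return out

# Example with days 3 and 9 unobserved:
# dfi = [2.1, 2.3, np.nan, 2.4, 2.6, 2.5, 2.7, 2.8, np.nan, 2.9]
# filled = impute_dfi_loess(np.arange(1, 11), dfi)
```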
A radiographic method to estimate lung volume and its use in small mammals.
Canals, Mauricio; Olivares, Ricardo; Rosenmann, Mario
2005-01-01
In this paper we develop a method to estimate lung volume using chest x-rays of small mammals. We applied this method to assess the lung volume of several rodents. We showed that a good estimator of the lung volume is V*L = 0.496 x VRX ≈ (1/2) x VRX, where VRX is a measurement obtained from the x-ray that represents the volume of a rectangular box containing the lungs and mediastinum organs. The proposed formula may be interpreted as the volume of an ellipsoid formed by both lungs joined at their bases. When that relationship was used to estimate lung volume, values similar to those expected from the allometric relationship were found in four rodents. In two others, M. musculus and R. norvegicus, lung volume was similar to reported data, although values were lower than expected.
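The estimator reduces to one multiplication once the x-ray box dimensions are read off; a tiny sketch with invented dimensions:

```python
def lung_volume_ml(length_cm, width_cm, depth_cm):
    """V*_L = 0.496 * V_RX, with V_RX the volume of the rectangular box that
    contains the lungs and mediastinum, measured on the radiographic views."""
    return 0.496 * length_cm * width_cm * depth_cm   # cm^3 == mL

# Invented box dimensions for a small rodent:
print(lung_volume_ml(4.0, 3.0, 2.0))   # ~11.9 mL
```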
Finding and estimating chemical property data for environmental assessment.
Boethling, Robert S; Howard, Philip H; Meylan, William M
2004-10-01
The ability to predict the behavior of a chemical substance in a biological or environmental system largely depends on knowledge of the physicochemical properties and reactivity of that substance. We focus here on properties, with the objective of providing practical guidance for finding measured values and using estimation methods when necessary. Because currently available computer software often makes it more convenient to estimate than to retrieve measured values, we try to discourage irrational exuberance for these tools by including comprehensive lists of Internet and hard-copy data resources. Guidance for assessors is presented in the form of a process to obtain data that includes establishment of chemical identity, identification of data sources, assessment of accuracy and reliability, substructure searching for analogs when experimental data are unavailable, and estimation from chemical structure. Regarding property estimation, we cover estimation from close structural analogs in addition to broadly applicable methods requiring only the chemical structure. For the latter, we list and briefly discuss the most widely used methods. Concluding thoughts are offered concerning appropriate directions for future work on estimation methods, again with an emphasis on practical applications.
Cvetković, V.; Niedermann, S.; Pejović, V.; Amthauer, G.; Boev, B.; Bosch, F.; Aničin, I.; Henning, W. F.
2016-01-01
This paper focuses on constraining the erosion rate in the area of the Allchar Sb-As-Tl-Au deposit (Macedonia), which contains the largest known reserves of lorandite (TlAsS2), the mineral essential for the LORandite EXperiment (LOREX) aimed at determining the long-term solar neutrino flux. Because the erosion history of the Allchar area is crucial for the success of LOREX, we applied terrestrial in situ cosmogenic nuclides, both radioactive (26Al and 36Cl) and stable (3He and 21Ne), in quartz, dolomite/calcite, sanidine, and diopside. The results show that the values obtained from 26Al, 36Cl, and 21Ne agree for around 85% of the entire sample collection, with resulting erosion rates varying from several tens of m/Ma to ~165 m/Ma. The samples from four locations (L-8 CD, L1b/R, L1c/R, and L-4/ADR) give erosion rates between 300 and 400 m/Ma. Although these localities reveal remarkably higher values, which may be explained by burial events that occurred in part of Allchar, the erosion rate estimates lie mostly in the range between 50 and 100 m/Ma. This range further enables us to estimate the vertical erosion rate values for the two main ore bodies, Crven Dol and Centralni Deo. We also estimate that the lower and upper limits of the average paleo-depth for the ore body Centralni Deo from 4.3 Ma to the present are 250-290 and 750-790 m, respectively, whereas the upper limit of the paleo-depth for the ore body Crven Dol over the same geological age is 860 m. The estimated paleo-depth values allow estimating the relative contributions of 205Pb derived from pp-neutrinos and fast cosmic-ray muons, respectively, an important prerequisite for the LOREX experiment. PMID:27587984
NASA Technical Reports Server (NTRS)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity, and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of σm = 3.1 K and σs = 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
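A generic random-walk Metropolis sampler of the kind used here can be sketched in a few lines; the log-posterior, step sizes, and names are placeholders, not the study's implementation:

```python
import numpy as np

def metropolis(logpost, theta0, step, n_iter=20000, seed=0):
    """Random-walk Metropolis sampler. `logpost(theta)` returns the log
    posterior, e.g. the negative misfit between simulated and observed Tb
    statistics plus any prior terms."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain   # MAP ~ highest-logpost sample; spread ~ posterior uncertainty
```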
Donald, William A.; Leib, Ryan D.; O'Brien, Jeremy T.; Bush, Matthew F.; Williams, Evan R.
2008-01-01
In solution, half-cell potentials are measured relative to those of other half cells, thereby establishing a ladder of thermochemical values that are referenced to the standard hydrogen electrode (SHE), which is arbitrarily assigned a value of exactly 0 V. Although there has been considerable interest in, and efforts toward, establishing an absolute electrochemical half-cell potential in solution, there is no general consensus regarding the best approach to obtain this value. Here, ion-electron recombination energies resulting from electron capture by gas-phase nanodrops containing individual [M(NH3)6]3+, M = Ru, Co, Os, Cr, and Ir, and Cu2+ ions are obtained from the number of water molecules that are lost from the reduced precursors. These experimental data combined with nanodrop solvation energies estimated from Born theory and solution-phase entropies estimated from limited experimental data provide absolute reduction energies for these redox couples in bulk aqueous solution. A key advantage of this approach is that solvent effects well past two solvent shells, that are difficult to model accurately, are included in these experimental measurements. By evaluating these data relative to known solution-phase reduction potentials, an absolute value for the SHE of 4.2 ± 0.4 V versus a free electron is obtained. Although not achieved here, the uncertainty of this method could potentially be reduced to below 0.1 V, making this an attractive method for establishing an absolute electrochemical scale that bridges solution and gas-phase redox chemistry. PMID:18288835
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions, the prior and the posterior, and the posterior distribution is influenced by the selection of the prior. Jeffreys' prior is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information to produce the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. The parameter estimates of β and Σ are obtained as the expected values of their marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions whose values are difficult to determine analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using a Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
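A sketch of the Gibbs sampler under the Jeffreys prior, using the standard conditionals (matrix normal for B given Sigma, inverse Wishart for Sigma given B); burn-in length and names are illustrative, not the authors' code:

```python
import numpy as np
from scipy.stats import invwishart

def gibbs_mvreg(Y, X, n_iter=5000, burn=500, seed=0):
    """Gibbs sampler for Y = X B + E, rows of E ~ N(0, Sigma), under the
    Jeffreys prior p(B, Sigma) ∝ |Sigma|^(-(m+1)/2)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    m = Y.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    B_hat = XtX_inv @ X.T @ Y             # least-squares / ML point estimate
    B = B_hat.copy()
    draws_B, draws_S = [], []
    for it in range(n_iter):
        # Sigma | B, Y  ~  Inverse-Wishart(n, (Y - XB)'(Y - XB))
        resid = Y - X @ B
        Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
        # vec(B) | Sigma, Y ~ N(vec(B_hat), Sigma ⊗ (X'X)^-1), columns stacked
        vec_b = rng.multivariate_normal(B_hat.ravel(order="F"),
                                        np.kron(Sigma, XtX_inv))
        B = vec_b.reshape(k, m, order="F")
        if it >= burn:
            draws_B.append(B)
            draws_S.append(Sigma)
    # Point estimates: expected values of the marginal posteriors.
    return np.mean(draws_B, axis=0), np.mean(draws_S, axis=0)
```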
Methods for estimating heterocyclic amine concentrations in cooked meats in the US diet.
Keating, G A; Bogen, K T
2001-01-01
Heterocyclic amines (HAs) are formed in numerous cooked foods commonly consumed in the diet. A method was developed to estimate dietary HA levels using HA concentrations in experimentally cooked meats reported in the literature and meat consumption data obtained from a national dietary survey. Cooking variables (meat internal temperature and weight loss, surface temperature and time) were used to develop relationships for estimating total HA concentrations in six meat types. Concentrations of five individual HAs were estimated for specific meat type/cooking method combinations based on linear regression of total and individual HA values obtained from the literature. Using these relationships, total and individual HA concentrations were estimated for 21 meat type/cooking method combinations at four meat doneness levels. Reported consumption of the 21 meat type/cooking method combinations was obtained from a national dietary survey, and the age-specific daily HA intake was calculated using the estimated HA concentrations (ng/g) and reported meat intakes. Estimated mean daily total HA intakes for children (to age 15 years) and adults (30+ years) were 11 and 7.0 ng/kg/day, respectively, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise approximately 65% of each intake. Pan-fried meats were the largest source of HAs in the diet and chicken the largest source of HAs among the different meat types.
Taimouri, Vahid; Afacan, Onur; Perez-Rossello, Jeannette M.; Callahan, Michael J.; Mulkern, Robert V.; Warfield, Simon K.; Freiman, Moti
2015-01-01
Purpose: To evaluate the effect of the spatially constrained incoherent motion (SCIM) method on improving the precision and robustness of fast and slow diffusion parameter estimates from diffusion-weighted MRI in the liver and spleen, in comparison to the independent voxel-wise intravoxel incoherent motion (IVIM) model. Methods: We collected diffusion-weighted MRI (DW-MRI) data from 29 subjects (5 healthy subjects and 24 patients with Crohn's disease in the ileum). We evaluated the robustness of the parameter estimates against different combinations of b-values (i.e., 4 b-values and 7 b-values) by comparing the variance of the estimates obtained with the SCIM and the independent voxel-wise IVIM model. We also evaluated the improvement in the precision of the parameter estimates by comparing the coefficient of variation (CV) of the SCIM parameter estimates to that of the IVIM. Results: The SCIM method was more robust than IVIM (up to 70% in liver and spleen) for different combinations of b-values. Also, the CV values of the parameter estimates obtained using the SCIM method were significantly lower than those from repeated acquisition and signal averaging estimated using IVIM, especially for the fast diffusion parameter in the liver (CV_IVIM = 46.61 ± 11.22, CV_SCIM = 16.85 ± 2.160, p < 0.001) and spleen (CV_IVIM = 95.15 ± 19.82, CV_SCIM = 52.55 ± 1.91, p < 0.001). Conclusions: The SCIM method characterizes fast and slow diffusion more precisely than independent voxel-wise IVIM model fitting in the liver and spleen. PMID:25832079
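For contrast with SCIM, the independent voxel-wise IVIM baseline is a bi-exponential fit per voxel; a sketch with typical (assumed) parameter bounds:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM model: S(b)/S0 = f*exp(-b*D*) + (1 - f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

def fit_ivim_voxel(bvals, signal):
    """Independent voxel-wise IVIM fit (the baseline SCIM improves on).
    Units: b in s/mm^2, D and D* in mm^2/s; bounds are common literature ranges."""
    s0 = signal[bvals == 0].mean()
    popt, _ = curve_fit(ivim, bvals, signal / s0,
                        p0=[0.1, 0.01, 0.001],
                        bounds=([0.0, 0.001, 1e-5], [0.5, 0.5, 0.004]))
    return dict(zip(["f", "D_star", "D"], popt))

# bvals = np.array([0, 50, 100, 200, 400, 600, 800]); signal measured per voxel
```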
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presented a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (R_e) and the charge transfer resistance (R_ct) as the key parameters for state estimation. Then, from the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (R_X) and healthy residual value (R_L) of LIBs based on the MSET state estimation, and detected anomaly states from the residual values (R_X and R_L) using the SPRT anomaly detection. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detective method (TDM). PMID:24587703
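The SPRT stage can be sketched as a cumulative log-likelihood ratio walk between "healthy" and "shifted-mean" hypotheses on the MSET residuals; the Gaussian mean-shift form and the error rates below are textbook choices, not necessarily the paper's:

```python
import numpy as np

def sprt_anomaly(residuals, sigma, shift=1.0, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on a residual stream.
    Returns +1 (anomaly), -1 (healthy), or 0 (undecided). `shift` is the
    hypothesized mean offset in multiples of sigma (an assumption)."""
    a = np.log(beta / (1.0 - alpha))      # lower boundary: accept 'healthy'
    b = np.log((1.0 - beta) / alpha)      # upper boundary: declare anomaly
    mu1 = shift * sigma
    llr = 0.0
    for r in residuals:
        # LLR increment for N(mu1, sigma^2) vs N(0, sigma^2)
        llr += (r * mu1 - 0.5 * mu1**2) / sigma**2
        if llr >= b:
            return +1
        if llr <= a:
            return -1
    return 0
```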
Geodetic and Astrometric Measurements with Very-Long-Baseline Interferometry. Ph.D. Thesis - MIT
NASA Technical Reports Server (NTRS)
Robertson, D. S.
1975-01-01
The use of very-long-baseline interferometry (VLBI) observations for the estimation of geodetic and astrometric parameters is discussed. Analytic models for the dependence of delay and delay rate on these parameters are developed and used for parameter estimation by the method of weighted least squares. Results are presented from approximately 15,000 delay and delay-rate observations, obtained in a series of nineteen VLBI experiments involving a total of five stations on two continents. The closure of baseline triangles is investigated and found to be consistent with the scatter of the various baseline-component results. Estimates are made of the wobble of the earth's pole and of the irregularities in the earth's rotation rate. Estimates are also made of the precession constant and of the vertical Love number, for which a value of 0.55 ± 0.05 was obtained.
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, that approximates the cumulative distribution function of the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
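A sketch of the approach described above: minimize the Anderson-Darling statistic over the three Weibull parameters with Powell's method. Starting values and the penalty for invalid parameters are our choices:

```python
import numpy as np
from scipy.optimize import minimize

def weibull3_cdf(x, loc, scale, shape):
    z = np.clip((x - loc) / scale, 0.0, None)
    return 1.0 - np.exp(-z**shape)

def ad_statistic(params, x):
    """Anderson-Darling A^2 between the EDF of sorted data x and the Weibull CDF."""
    loc, scale, shape = params
    if scale <= 0.0 or shape <= 0.0 or loc >= x[0]:
        return 1e12                        # penalty keeps Powell in a valid region
    f = np.clip(weibull3_cdf(x, loc, scale, shape), 1e-12, 1.0 - 1e-12)
    n = x.size
    i = np.arange(1, n + 1)
    return -n - np.mean((2*i - 1) * (np.log(f) + np.log(1.0 - f[::-1])))

def fit_weibull3_ad(data):
    """Three-parameter Weibull fit by minimizing A^2 with Powell's method."""
    x = np.sort(np.asarray(data, dtype=float))
    x0 = [x[0] - 0.1 * x.std(), x.std(), 1.5]   # ad hoc starting values
    return minimize(ad_statistic, x0, args=(x,), method="Powell").x
```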
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
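A simplified stand-in for such an automatic procedure: scan candidate thresholds, fit a generalized Pareto distribution (GPD) to the exceedances, and keep the threshold with the smallest Anderson-Darling statistic (the paper's test-based rule is more refined):

```python
import numpy as np
from scipy.stats import genpareto

def ad_statistic(excesses, c, scale):
    """Anderson-Darling A^2 between sorted excesses and a fitted GPD."""
    x = np.sort(excesses)
    n = x.size
    f = np.clip(genpareto.cdf(x, c, loc=0.0, scale=scale), 1e-12, 1.0 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2*i - 1) * (np.log(f) + np.log(1.0 - f[::-1])))

def select_threshold(series, candidates, min_exceedances=30):
    """Return the (threshold, A^2) pair whose exceedances the GPD describes best."""
    best = None
    for u in candidates:
        exc = series[series > u] - u
        if exc.size < min_exceedances:      # too few peaks: skip this candidate
            continue
        c, _, scale = genpareto.fit(exc, floc=0.0)
        a2 = ad_statistic(exc, c, scale)
        if best is None or a2 < best[1]:
            best = (u, a2)
    return best
```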
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates, by Monte Carlo simulation, the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher than in the between-level collinearity condition. Bayesian estimation was robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results than medium ICC conditions.
Concentration of Ra-226 in Malaysian Drinking and Bottled Mineral Water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amin, Y. B. Mohd; Jemangin, M. H.; Mahat, R. H.
2010-07-07
The concentration of the radionuclide ²²⁶Ra was determined in drinking water taken from various sources. It was found that the concentration varies from non-detectable (ND) to a highest value of 0.30 Bq per liter. The concentration was found to be high in mineral water compared with surface water such as domestic pipe water. Some of these values exceed the regulations of the EPA (Environmental Protection Agency) of America. The activity concentrations obtained are compared with data from other countries. The estimated annual effective doses from drinking the water are determined; the values obtained range from 0.02 mSv to about 0.06 mSv per year.
Chenel, Marylore; Bouzom, François; Cazade, Fanny; Ogungbenro, Kayode; Aarons, Leon; Mentré, France
2008-01-01
Purpose To compare results of population PK analyses obtained with a full empirical design (FD) and an optimal sparse design (MD) in a Drug-Drug Interaction (DDI) study aiming to evaluate the potential CYP3A4 inhibitory effect of a drug in development, SX, on a reference substrate, midazolam (MDZ). A secondary aim was to evaluate the interaction of SX on MDZ in the in vivo study. Methods To compare designs, real data were analysed by population PK modelling using either FD or MD with NONMEM FOCEI for SX and with NONMEM FOCEI and MONOLIX SAEM for MDZ. When applicable, a Wald test was performed to compare model parameter estimates, such as apparent clearance (CL/F), across designs. To conclude on the potential interaction of SX on MDZ PK, a paired Student test was applied to compare the individual PK parameters (i.e. log(AUC) and log(Cmax)) obtained either by a non-compartmental approach (NCA) using FD or from empirical Bayes estimates (EBE) obtained after fitting the model separately on each treatment group using either FD or MD. Results For SX, whatever the design, CL/F was well estimated and no statistical differences were found between CL/F values estimated with FD (CL/F = 8.2 L/h) and MD (CL/F = 8.2 L/h). For MDZ, only MONOLIX was able to estimate CL/F and to provide its standard error of estimation with MD. With MONOLIX, whatever the design and the administration setting, MDZ CL/F was well estimated and there were no statistical differences between CL/F values estimated with FD (72 L/h and 40 L/h for MDZ alone and for MDZ with SX, respectively) and MD (77 L/h and 45 L/h for MDZ alone and for MDZ with SX, respectively). Whatever the approach, NCA or population PK modelling, and, for the latter approach, whatever the design, MD or FD, comparison tests showed a statistical difference (p<0.0001) between individual MDZ log(AUC) obtained after MDZ administration alone and co-administered with SX. Regarding Cmax, there was a statistical difference (p<0.05) between individual MDZ log(Cmax) obtained under the two administration settings in all cases, except with the sparse design with MONOLIX; however, the effect on Cmax was small. Finally, SX was shown to be a moderate CYP3A4 inhibitor, which at therapeutic doses increased MDZ exposure by a factor of 2 on average and barely affected Cmax. Conclusion The optimal sparse design enabled the estimation of CL/F of a CYP3A4 substrate and inhibitor when co-administered together and showed the interaction, leading to the same conclusion as the full empirical design. PMID:19130187
A kinetic estimate of the free aldehyde content of aldoses
NASA Technical Reports Server (NTRS)
Dworkin, J. P.; Miller, S. L.; Bada, J. L. (Principal Investigator)
2000-01-01
The relative free aldehyde content of eight hexoses and four pentoses has been estimated within about 10% from the rate constants for their reaction with urazole (1,2,4-triazole-3,5-dione). These values of the percent free aldehyde are in agreement with those estimated from CD measurements, but are more accurate. The relative free aldehyde contents for the aldoses were then correlated to various literature NMR measurements to obtain the absolute values. This procedure was also done for three deoxyaldoses, which react much more rapidly than can be accounted for by the free aldehyde content. This difference in reactivity between aldoses and deoxyaldoses is due to the inductive effect of the H versus the OH on C-2'. This may help explain why deoxyribonucleosides hydrolyze much more rapidly than ribonucleosides.
Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.
NASA Astrophysics Data System (ADS)
Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.
2006-01-01
This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems, the gravity-darkening exponent (GDE) of the Roche-lobe-filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which can influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is very good agreement between the empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system the estimated value of the gravity-darkening exponent is greater than, and for UX Her, TW And and XZ Pup smaller than, the corresponding theoretical predictions; but for all these systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis generally shows that, once the previously estimated mass ratios of the components of some of the analysed systems are corrected, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered a consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of the GDE given in Paper I and in the present study indicate that in light-curve analysis one can apply the recent theoretical predictions of the GDE with high confidence for stars with both convective and radiative envelopes.
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naive data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
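For flavor, the sketch below implements a plain paired bootstrap of the AUC ratio from one-sample-per-subject data; the abstract does not spell out the two phases of the authors' algorithm, so this is a simplified stand-in for the family of sampling-based approaches compared, with all names illustrative.

```python
import numpy as np

def bootstrap_tissue_plasma_ratio(times, plasma, tissue, n_boot=2000, seed=0):
    """plasma[k] and tissue[k]: NumPy arrays of the single paired samples of all
    subjects assigned to time point k. Returns mean and SD of the AUC ratio."""
    rng = np.random.default_rng(seed)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        cp, ct = [], []
        for k in range(len(times)):
            idx = rng.integers(0, len(plasma[k]), size=len(plasma[k]))
            cp.append(plasma[k][idx].mean())   # resample subjects, keeping pairs
            ct.append(tissue[k][idx].mean())
        ratios[b] = np.trapz(ct, times) / np.trapz(cp, times)
    return ratios.mean(), ratios.std(ddof=1)
```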
Determination of tidal h Love number parameters in the diurnal band using an extensive VLBI data set
NASA Technical Reports Server (NTRS)
Mitrovica, J. X.; Davis, J. L.; Mathews, P. M.; Shapiro, I. I.
1994-01-01
We use over a decade of geodetic Very Long Baseline Interferometry (VLBI) data to estimate parameters in a resonance expansion of the frequency dependence of the tidal h₂ Love number within the diurnal band. The resonance is associated with the retrograde free core nutation (RFCN). We obtain a value for the real part of the resonance strength of (−0.27 ± 0.03) × 10⁻³; a value of −0.19 × 10⁻³ is predicted theoretically. Uncertainties in the VLBI estimates of the body tide radial displacement amplitudes are approximately 0.5 mm (1.1 mm for the K1 frequency), but they do not yield sufficiently small Love number uncertainties for placing useful constraints on the frequency of the RFCN, given the much smaller uncertainties obtained from independent analyses using nutation or gravimetric data. We also consider the imaginary part of the tidal h₂ Love number. The estimated imaginary part of the resonance strength is (0.00 ± 0.02) × 10⁻³. The estimated imaginary part of the nonresonant component of the Love number implies a phase angle in the diurnal tidal response of the Earth of 0.7° ± 0.5° (lag).
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
2007-01-01
and frequency transfer (TWSTFT) were performed along three transatlantic links over the 6-month period 29 January – 31 July 2006. The GPSCPFT and... TWSTFT results were subtracted in order to estimate the combined uncertainty of the methods. The frequency values obtained from GPSCPFT and TWSTFT ...values were equal to or less than the frequency-stability values σy(GPSCPFT) − y(TWSTFT)(τ) (or TheoBR(τ)) computed for the corresponding averaging
Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew
2012-01-01
The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
An empirical method to estimate shear wave velocity of soils in the New Madrid seismic zone
Wei, B.-Z.; Pezeshk, S.; Chang, T.-S.; Hall, K.H.; Liu, Huaibao P.
1996-01-01
In this study, a set of charts is developed to estimate the shear wave velocity of soils in the New Madrid seismic zone (NMSZ) from standard penetration test (SPT) N values and soil depths. Laboratory dynamic tests on soil samples collected from the NMSZ showed that the shear wave velocity of soils is related to the void ratio and the effective confining pressure applied to the soils. The void ratio can be estimated from the SPT N value, and the effective confining pressure depends on the depth of the soil; therefore, the shear wave velocity can be estimated from the SPT N value and the soil depth. To make the methodology practical, two corrections must be made. First, field SPT N values must be adjusted to a unified SPT N value to account for the effects of overburden pressure and equipment. Second, the effect of the water table on the effective overburden pressure must be considered. To verify the methodology, shear wave velocities at five sites in the NMSZ were estimated and compared with those obtained from field measurements. The comparison shows that our approach and the field tests are consistent, with an error of less than 15%. Thus, the method developed in this study is useful for dynamic studies and practical designs in the NMSZ region. Copyright © 1996 Elsevier Science Limited.
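A hedged sketch of the two corrections is given below; the overburden correction follows the common Liao and Whitman form, while the final power law merely stands in for the paper's charts, so the coefficients a and b are illustrative placeholders rather than NMSZ-calibrated values.

```python
import numpy as np

PA = 100.0  # atmospheric pressure, kPa

def effective_stress_kpa(depth_m, gwt_depth_m, gamma=19.0, gamma_w=9.81):
    """Vertical effective stress with a water-table correction (unit weights in kN/m^3)."""
    total = gamma * depth_m
    u = max(0.0, depth_m - gwt_depth_m) * gamma_w   # pore pressure below the water table
    return total - u

def vs_from_spt(n_field, depth_m, gwt_depth_m, a=100.0, b=0.3):
    """Shear wave velocity (m/s) from an SPT N value and depth.
    CN follows Liao & Whitman (1986); Vs = a * N1^b is an illustrative stand-in
    for the paper's charts -- a and b are NOT values from the paper."""
    sigma_v = effective_stress_kpa(depth_m, gwt_depth_m)
    cn = min(np.sqrt(PA / sigma_v), 1.7)            # capped, as is common practice
    n1 = cn * n_field                               # unified (overburden-corrected) N
    return a * n1 ** b
```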
On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos
2015-01-01
For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
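A minimal sketch of the triggering logic, assuming the 2×2 position block sits in the upper-left corner of the covariance matrix P; the threshold constants are illustrative, not taken from the paper.

```python
import numpy as np

def drms(P_pos):
    """Distance root-mean-squared error from the 2x2 position covariance block."""
    return float(np.sqrt(np.trace(P_pos)))

def need_measurement(P, dist_to_ref, base_thresh=0.5, gain=0.05):
    """Event trigger: request an external fix when the DRMS exceeds an adaptive threshold.
    The threshold grows with the distance to the reference location (far away ->
    tolerate more error); base_thresh and gain are illustrative tuning constants."""
    threshold = base_thresh + gain * dist_to_ref
    return drms(P[:2, :2]) > threshold
```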
NASA Astrophysics Data System (ADS)
Lee, H.; Sheen, D.; Kim, S.
2013-12-01
The b-value in the Gutenberg-Richter relation is an important parameter, widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using the Monte Carlo method. One is the Least-Squares method (LSM), which minimizes the observation error. The others are based on the Maximum Likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) for interval magnitudes and a finite maximum magnitude. A synthetic parent population of a million events, with magnitudes from 2.0 to 7.0 at an interval of 0.1, was generated for the Monte Carlo simulation. Samples, whose number was increased from 25 to 1000, were extracted from the parent population randomly. The resampling procedure was applied 1000 times with different random seed numbers. The mean and the standard deviation of the b-value were estimated for each sample group that has the same number of samples. As expected, the more samples were used, the more stable the b-value obtained. However, for a small number of events, the LSM generally gave a low b-value with a large standard deviation, while the MLMs gave more accurate and stable values. It was found that Utsu's (1965) method gives the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude can be critical for estimating the correct b-value with Utsu's (1965) and Page's (1968) methods if magnitudes are binned into intervals. Therefore, we applied Utsu's (1965) method to estimate the b-value using two instrumental earthquake catalogs covering events around the southern part of the Korean Peninsula from 1978 to 2011. With a careful choice of the minimum magnitude, the b-values of the earthquake catalogs of the Korea Meteorological Administration and of Kim (2012) are estimated to be 0.72 and 0.74, respectively.
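A short sketch of Utsu's maximum-likelihood estimator, with the half-bin correction for binned magnitudes and Aki's standard error added here as an assumption for context:

```python
import numpy as np

def b_value_utsu(mags, m_min, dm=0.1):
    """Maximum-likelihood b-value (Utsu, 1965) for magnitudes binned at interval dm.
    The half-bin correction (m_min - dm/2) matters when magnitudes are binned,
    echoing the paper's point that the choice of minimum magnitude is critical."""
    m = np.asarray(mags, float)
    m = m[m >= m_min]
    b = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
    se = b / np.sqrt(len(m))   # Aki (1965) standard error, an added assumption
    return b, se
```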
Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review
NASA Technical Reports Server (NTRS)
Tripp, John S.
1999-01-01
This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.
[New non-volumetric method for estimating peroperative blood loss].
Tachoires, D; Mourot, F; Gillardeau, G
1979-01-01
The authors have developed a new method for the estimation of peroperative blood loss by measurement of the haematocrit of a fluid obtained by diluting the blood from swabs in a known volume of isotonic saline solution. This value, referred to a nomogram, may be used to assess the volume of blood impregnating the compresses, in relation to the pre-operative or present haematocrit of the patient, by direct reading. The precision of the method is discussed. The results obtained justified its routine application in surgery in children, in patients with cardiac failure, and in all cases requiring precise compensation of peroperative blood loss.
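The abstract gives no formula, but a plausible mass-balance reading of the nomogram, assuming red cells are conserved when the swabs are rinsed in a known saline volume, is sketched below.

```python
def blood_loss_ml(v_saline_ml, hct_rinse, hct_patient):
    """Estimate the blood volume in the swabs from the haematocrit of the rinse fluid.
    Mass balance on red cells (an assumed reading of the method, not the paper's nomogram):
        v_blood * hct_patient = (v_saline + v_blood) * hct_rinse
    """
    return v_saline_ml * hct_rinse / (hct_patient - hct_rinse)

print(blood_loss_ml(1000.0, 0.03, 0.36))  # ~91 ml for a 3% rinse haematocrit
```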
[Monetary value of the human costs of road traffic injuries in Spain].
Martínez Pérez, Jorge Eduardo; Sánchez Martínez, Fernando Ignacio; Abellán Perpiñán, José María; Pinto Prades, José Luis
2015-09-01
Cost-benefit analyses in the field of road safety compute human costs as a key component of total costs. The present article presents two studies promoted by the Directorate-General for Traffic aimed at obtaining official values for the costs associated with fatal and non-fatal traffic injuries in Spain. We combined the contingent valuation approach and the (modified) standard gamble technique in two surveys administered to large representative samples (n1=2,020, n2=2,000) of the Spanish population. The monetary value of preventing a fatality was estimated to be 1.4 million euros. Values of 219,000 and 6,100 euros were obtained for minor and severe non-fatal injuries, respectively. These figures are comparable to those observed in neighboring countries. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
The role of remotely-sensed evapotranspiration data in watershed water resources management
NASA Astrophysics Data System (ADS)
Shuster, W.; Carroll, M.; Zhang, Y.
2006-12-01
Evapotranspiration (ET) is an important component of the watershed hydrologic cycle and a key factor to consider in water resource planning. Partly due to the loss of evaporation pans from the national network in the 1980s because of budget cuts, ET values are not available in many locations in the US, and practitioners often have to rely on climatically averaged regional estimates instead. Several new approaches have been developed for estimating ET via remote sensing. In this study we employ one established approach that allows us to derive ET estimates at 1-km² resolution on the basis of AVHRR brightness temperature. By applying this method to southwestern Ohio we obtain ET estimates for a 2-km², partially suburban watershed near Cincinnati, OH. Along with precipitation and surface discharge measurements, these remotely-sensed ET estimates form the basis for determining both long- and short-term water budgets for this watershed. These ET estimates are next compared with regional climatic values on a seasonal basis to examine the potential differences that can be introduced into our conceptualization of the watershed processes by considering area-specific ET values. We then discuss implications of this work for more widespread application to watershed management imperatives (e.g., stream ecological health).
Chen, Jiangang; Hou, Gary Y.; Marquet, Fabrice; Han, Yang; Camarena, Francisco
2015-01-01
Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of Harmonic Motion Imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied to the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n=5) and in vitro canine livers (n=3) were tested, as well as HIFU lesions in in vitro canine livers (n=5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R²=0.976) with the independently obtained values reported by the manufacturer, with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was 0.32±0.03 dB/cm/MHz, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58±0.06 dB/cm/MHz) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation. PMID:26371501
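The regression step reduces to fitting a line to log displacement versus depth; the sketch below does this with NumPy, showing the Np-to-dB conversion, with any geometric or system factors simplified away as an assumption of this illustration.

```python
import numpy as np

def attenuation_db_cm_mhz(depths_cm, displacements, freq_mhz):
    """Fit ln(displacement) vs depth; the negative slope (Np/cm) gives attenuation.
    One-way/two-way propagation and force-to-displacement proportionality factors
    are simplified away here, so treat the constant as an assumption of this sketch."""
    slope, _ = np.polyfit(depths_cm, np.log(displacements), 1)
    return -slope * 8.686 / freq_mhz   # Np/cm -> dB/cm, then per MHz
```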
Summary of fuel economy performance
DOT National Transportation Integrated Search
2009-03-30
This report contains estimated fleet production numbers and CAFE figures obtained from pre-model year (source 1) and mid-model year (source 2) documents assembled prior to or during the model year. The actual mpg values reported to EPA at the end of ...
Summary of fuel economy performance
DOT National Transportation Integrated Search
2010-04-20
This report contains estimated fleet production numbers and CAFE figures obtained from pre-model year (source 1) and mid-model year (source 2) documents assembled prior to or during the model year. The actual mpg values reported to EPA at the end of ...
Summary of fuel economy performance
DOT National Transportation Integrated Search
2009-12-09
This report contains estimated fleet production numbers and CAFE figures obtained from pre-model year (source 1) and mid-model year (source 2) documents assembled prior to or during the model year. The actual mpg values reported to EPA at the end of ...
NASA Astrophysics Data System (ADS)
Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian
2014-06-01
Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. Resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
Blood flow estimation in gastroscopic true-color images
NASA Astrophysics Data System (ADS)
Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans
1995-05-01
The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by calculating approximately the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve the luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
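The abstract does not give the Kubelka-Munk coefficients, so the sketch below uses the common endoscopic haemoglobin index, IHb = 32·log₂(R/G), as a stand-in surrogate for the per-pixel concentration estimate; treat it as illustrative rather than the authors' model.

```python
import numpy as np

def hemoglobin_index(rgb):
    """Per-pixel haemoglobin surrogate from a true-colour image (H x W x 3, float).
    IHb = 32 * log2(R / G) is a standard endoscopic index, used here in place of
    the paper's Kubelka-Munk-based estimate, whose coefficients are not given."""
    r = np.clip(rgb[..., 0], 1e-6, None)
    g = np.clip(rgb[..., 1], 1e-6, None)
    return 32.0 * np.log2(r / g)
```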
The evaluation of maximum horizontal in-situ stress using the wellbore imagers data
NASA Astrophysics Data System (ADS)
Dubinya, N. V.; Ezhov, K. A.
2016-12-01
Well drilling provides a number of possibilities to improve our knowledge of the stress state of the upper layers of the Earth's crust. The data obtained from drilling, well logging, core experiments and special tests are used to evaluate the directions and magnitudes of the principal stresses. Although the values of the vertical stress and the minimum horizontal stress may be decently estimated, the maximum horizontal stress remains a major problem. In this study a new method to estimate this value is proposed. The suggested approach is based on the concept of hydraulically conductive and non-conductive fractures near a wellbore (Barton, Zoback and Moos, 1995). It was stated that all fractures whose properties may be acquired from well logging data can be divided into two groups with regard to hydraulic conductivity. The fracture properties and the in-situ stress state are related via the Mohr diagram. This approach was later used by Ito and Zoback (2000) to estimate the magnitude of the maximum horizontal stress from temperature profiles. In the current study, ultrasonic and resistivity borehole imaging are used to estimate the magnitude of the maximum horizontal stress rather precisely. After proper interpretation, one is able to obtain the orientation and hydraulic conductivity of each fracture appearing in the images. If proper profiles of the vertical and minimum horizontal stresses are known, all fractures may be analyzed on the Mohr diagram. Altering the maximum horizontal stress profile then makes it possible to adjust it so that the conductive fractures on the Mohr diagram fit the data from the imagers' interpretation. The precision of the suggested approach was evaluated for several oil production wells in Siberia with decent wellbore stability models. The difference between the maximum horizontal stress estimated by the suggested approach and the values obtained from drilling reports did not exceed 0.5 MPa. Thus the proposed approach may be used to evaluate the maximum horizontal stress using wellbore imager data. References: Barton, C.A., Zoback, M.D., Moos, D. Fluid flow along potentially active faults in crystalline rock. Geology, 1995. Ito, T., Zoback, M.D. Fracture permeability and in situ stress to 7 km depth in the KTB Scientific Drillhole. Geophysical Research Letters, 2000.
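The core geometric step — resolving a candidate stress tensor onto each imaged fracture plane and checking whether the fracture plots above the Coulomb friction line on the Mohr diagram — can be sketched as follows; the principal-axis frame, friction coefficient and all inputs are assumptions of this illustration, not values from the study.

```python
import numpy as np

def critically_stressed(sH, sh, sv, pp, normal, mu=0.6):
    """Check whether a fracture is critically stressed (hence likely conductive).
    Stress tensor is principal-axis aligned: x -> sH, y -> sh, z -> sv (assumed frame).
    normal: unit normal of the fracture plane in that frame; pp: pore pressure."""
    S = np.diag([sH, sh, sv])
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    t = S @ n                                 # traction on the plane
    sn = float(n @ t)                         # normal stress
    tau = float(np.linalg.norm(t - sn * n))   # shear stress
    return tau >= mu * (sn - pp)              # Coulomb criterion on the Mohr diagram
```

Sweeping sH and keeping the value for which the predicted conductive set best matches the image interpretation mirrors the adjustment step described above.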
Merritt, Michael L.
2004-01-01
Aquifers are subjected to mechanical stresses from natural, non-anthropogenic, processes such as pressure loading or mechanical forcing of the aquifer by ocean tides, earth tides, and pressure fluctuations in the atmosphere. The resulting head fluctuations are evident even in deep confined aquifers. The present study was conducted for the purpose of reviewing the research that has been done on the use of these phenomena for estimating the values of aquifer properties, and determining which of the analytical techniques might be useful for estimating hydraulic properties in the dissolved-carbonate hydrologic environment of southern Florida. Fifteen techniques are discussed in this report, of which four were applied.An analytical solution for head oscillations in a well near enough to the ocean to be influenced by ocean tides was applied to data from monitor zones in a well near Naples, Florida. The solution assumes a completely non-leaky confining unit of infinite extent. Resulting values of transmissivity are in general agreement with the results of aquifer performance tests performed by the South Florida Water Management District. There seems to be an inconsistency between results of the amplitude ratio analysis and independent estimates of loading efficiency. A more general analytical solution that takes leakage through the confining layer into account yielded estimates that were lower than those obtained using the non-leaky method, and closer to the South Florida Water Management District estimates. A numerical model with a cross-sectional grid design was applied to explore additional aspects of the problem.A relation between specific storage and the head oscillation observed in a well provided estimates of specific storage that were considered reasonable. Porosity estimates based on the specific storage estimates were consistent with values obtained from measurements on core samples. Methods are described for determining aquifer diffusivity by comparing the time-varying drawdown in an open well with periodic pressure-head oscillations in the aquifer, but the applicability of such methods might be limited in studies of the Floridan aquifer system.
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically of the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas the kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to well-known pdfs such as the Gaussian, the Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely that some artificial characteristics will appear in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained, compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and the joint TUBITAK 114E092 and AS CR14/001 projects.
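A minimal KDE sketch, assuming SciPy's Gaussian KDE and a hypothetical file of TEC estimates; the default bandwidth rule and the file name are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

tec = np.loadtxt("tec_samples.txt")      # hypothetical file of TEC estimates (TECU)
kde = gaussian_kde(tec)                  # bandwidth via Scott's rule by default
grid = np.linspace(tec.min(), tec.max(), 512)
pdf = kde(grid)                          # non-parametric pdf estimate on the grid

print("mean     :", tec.mean())
print("variance :", tec.var(ddof=1))
print("kurtosis :", kurtosis(tec))       # excess kurtosis; storms vs quiet days differ here
```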
NASA Technical Reports Server (NTRS)
Suit, William T.
1989-01-01
Estimates of longitudinal stability and control parameters for the Space Shuttle were determined by applying a maximum likelihood parameter estimation technique to Challenger flight test data. The pitching-moment derivatives Cmα (at different angles of attack) and Cmδe (at different elevator deflections) and the normal-force derivative Czα (at different angles of attack) describe 90 percent of the response to longitudinal inputs during Space Shuttle Challenger flights, with Cmδe being the dominant parameter. The values of Czα were found to be input dependent for these tests. However, when Czα was set at the preflight predictions, the values determined for Cmδe changed less than 10 percent from the values obtained when Czα was estimated as well. The preflight predictions for Czα and Cmα are acceptable values, while the values of Czδe should be about 30 percent less negative than the preflight predictions near Mach 1, and 10 percent less negative otherwise.
Probabilities and statistics for backscatter estimates obtained by a scatterometer
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.
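A hedged sketch of such a retrieval: maximize a Gaussian likelihood whose variance is a known function of the expected backscatter. The harmonic model function and the Kp noise parameterization below are illustrative placeholders for a real, tabulated model function, and the commented fit line assumes hypothetical observation arrays obs and az.

```python
import numpy as np
from scipy.optimize import minimize

def model_sigma0(u, chi, azimuths):
    """Placeholder geophysical model function: expected sigma0 for wind speed u (m/s)
    and direction chi (rad) seen at the given antenna azimuths. A real retrieval
    would use a tabulated model; this harmonic form is illustrative only."""
    a0, a1, a2 = 0.01 * u**1.6, 0.3, 0.2
    return a0 * (1 + a1 * np.cos(chi - azimuths) + a2 * np.cos(2 * (chi - azimuths)))

def neg_log_likelihood(params, sigma0_obs, azimuths, kp=0.1):
    u, chi = params
    if u <= 0:
        return np.inf
    mu = model_sigma0(u, chi, azimuths)
    var = (kp * mu) ** 2                 # variance a known function of the expected value
    return 0.5 * np.sum((sigma0_obs - mu) ** 2 / var + np.log(var))

# fit = minimize(neg_log_likelihood, x0=[8.0, 0.5], args=(obs, az), method="Nelder-Mead")
```

Dropping the slowly varying np.log(var) term from the objective is one common modification of the plain maximum likelihood criterion.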
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1991-01-01
Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.
Nuclear half-lives for α-radioactivity of elements with 100 ≤ Z ≤ 130
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, P. Roy; Samanta, C.; Physics Department, Gottwald Science Center, University of Richmond, Richmond, VA 23173
2008-11-15
Theoretical estimates for the half-lives of about 1700 isotopes of heavy elements with 100 ≤ Z ≤ 130 are tabulated using theoretical Q-values. The quantum mechanical tunneling probabilities are calculated within a WKB framework using microscopic nuclear potentials. The microscopic nucleus-nucleus potentials are obtained by folding the densities of the interacting nuclei with a density-dependent M3Y effective nucleon-nucleon interaction. The α-decay half-lives calculated in this formalism using the experimental Q-values were found to be in good agreement over a wide range of experimental data spanning about 20 orders of magnitude. The theoretical Q-values used for the present calculations are extracted from three different mass estimates, viz. Myers-Swiatecki, Muntian-Hofmann-Patyk-Sobiczewski, and Koura-Tachibana-Uno-Yamada.
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
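The simulation recipe lends itself to a compact sketch: draw Poisson counts around a mean tied to the observed pollutant series, refit the model many times, and count rejections. Covariate control (splines for time trends, weather terms) is omitted for brevity, and all parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def simulated_power(pollutant, mean_count, beta1, n_sims=2000, alpha=0.05, seed=0):
    """Power for a Poisson time-series association, estimated by simulation.
    pollutant: observed daily exposure series (NumPy array) driving the true
    association beta1 (log rate ratio per SD of exposure)."""
    rng = np.random.default_rng(seed)
    x = (pollutant - pollutant.mean()) / pollutant.std()
    X = sm.add_constant(x)
    hits = 0
    for _ in range(n_sims):
        lam = mean_count * np.exp(beta1 * x)               # true daily means
        y = rng.poisson(lam)                               # simulated outcome counts
        res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        hits += res.pvalues[1] < alpha                     # slope test rejection
    return hits / n_sims
```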
Mondlane, Gracinda; Ureba, Ana; Gubanski, Michael; Lind, Pehr A; Siegbahn, Albert
2018-05-01
Gastric cancer (GC) radiotherapy involves irradiation of large tumour volumes located in the proximities of critical structures. The advantageous dose distributions produced by scanned-proton beams could reduce the irradiated volumes of the organs at risk (OARs). However, treatment-induced side-effects may still appear. The aim of this study was to estimate the normal tissue complication probability (NTCP) following proton therapy of GC, compared to photon radiotherapy. Eight GC patients, previously treated with volumetric-modulated arc therapy (VMAT), were retrospectively planned with scanned proton beams carried out with the single-field uniform-dose (SFUD) method. A beam-specific planning target volume was used for spot positioning and a clinical target volume (CTV) based robust optimisation was performed considering setup- and range-uncertainties. The dosimetric and NTCP values obtained with the VMAT and SFUD plans were compared. With SFUD, lower or similar dose-volume values were obtained for OARs, compared to VMAT. NTCP values of 0% were determined with the VMAT and SFUD plans for all OARs (p>0.05), except for the left kidney (p<0.05), for which lower toxicity was estimated with SFUD. The NTCP reduction, determined for the left kidney with SFUD, can be of clinical relevance for preserving renal function after radiotherapy of GC. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Lateral control system design for VTOL landing on a DD963 in high sea states. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bodson, M.
1982-01-01
The problem of designing lateral control systems for the safe landing of VTOL aircraft on small ships is addressed. A ship model is derived. The issues of estimation and prediction of ship motions are discussed, using optimal linear estimation techniques. The roll motion is the most important of the lateral motions, and it is found that it can be predicted for up to 10 seconds in perfect conditions. The automatic landing of the VTOL aircraft is considered, and a lateral controller, defined as a ship motion tracker, is designed using optimal control techniques. The tradeoffs between the tracking errors and the control authority are obtained. The important couplings between the lateral motions and controls are demonstrated, and it is shown that the adverse couplings between the sway and the roll motion at the landing pad are significant constraints in the tracking of the lateral ship motions. The robustness of the control system, including the optimal estimator, is studied using singular-value analysis. Through a robustification procedure, a robust control system is obtained, and the usefulness of singular values for defining stability margins that take into account general types of unstructured modelling errors is demonstrated. The minimal destabilizing perturbations indicated by the singular-value analysis are interpreted and related to the multivariable Nyquist diagrams.
Yoshioka, Sumie; Aso, Yukio; Kojima, Shigeo
2003-06-01
To examine whether the glass transition temperature (Tg) of freeze-dried formulations containing polymer excipients can be accurately predicted by molecular dynamics simulation using software currently available on the market. Molecular dynamics simulations were carried out for isomaltodecaose, a fragment of dextran, and alpha-glucose, the repeat unit of dextran, in the presence or absence of water molecules. Estimated values of Tg were compared with experimental values obtained by differential scanning calorimetry (DSC). Isothermal-isobaric molecular dynamics simulations (NPTMD) and isothermal molecular dynamics simulations at constant volume (NVTMD) were carried out using the software package DISCOVER (Materials Studio) with the Polymer Consortium Force Field. The mean-squared displacement and the radial distribution function were calculated. NVTMD using the density values obtained by NPTMD provided the diffusivity of glucose-ring oxygen and water oxygen in amorphous alpha-glucose and isomaltodecaose, which exhibited a discontinuity in temperature dependence due to the glass transition. Tg was estimated to be approximately 400 K and 500 K for pure amorphous alpha-glucose and isomaltodecaose, respectively; in the presence of one water molecule per glucose unit, Tg was 340 K and 360 K, respectively. Estimated Tg values were higher than the experimentally determined values because of the very fast cooling rates in the simulations. However, the decrease in Tg on hydration and the increase in Tg associated with larger fragment size could be demonstrated. The results indicate that molecular dynamics simulation is a useful method for investigating the effects of hydration and molecular weight on the Tg of lyophilized formulations containing polymer excipients, although the relationship between cooling rates and Tg must first be elucidated to predict Tg values observed by DSC measurement.
Tietze, Anna; Mouridsen, Kim; Mikkelsen, Irene Klærke
2015-06-01
Accurate quantification of hemodynamic parameters using dynamic contrast enhanced (DCE) MRI requires a measurement of tissue T1 prior to contrast injection. We evaluate (i) T1 estimation using the variable flip angle (VFA) and saturation recovery (SR) techniques and (ii) whether accurate estimation of DCE parameters outperforms a time-saving approach with a predefined T1 value when differentiating high- from low-grade gliomas. The accuracy and precision of T1 measurements, acquired by VFA and SR, were investigated by computer simulations and in glioma patients using an equivalence test (p > 0.05 showing significant difference). The permeability measure Ktrans, cerebral blood flow (CBF), and blood volume, Vp, were calculated in 42 glioma patients, using a fixed T1 of 1500 ms or an individual T1 measurement using SR. The areas under the receiver operating characteristic curves (AUCs) were used as measures of accuracy in differentiating tumor grade. The T1 values obtained by VFA showed larger variation than those obtained using SR, both in the digital phantom and in the human data (p > 0.05). Although a fixed T1 introduced a bias into the DCE calculation, this had only minor impact on the accuracy of differentiating high-grade from low-grade gliomas (AUCfix = 0.906 and AUCind = 0.884 for Ktrans; AUCfix = 0.863 and AUCind = 0.856 for Vp; p for AUC comparison > 0.05). T1 measurements by VFA were less precise, and the SR method is preferable when accurate parameter estimation is required. Semiquantitative DCE values, based on predefined T1 values, were sufficient to perform tumor grading in our study.
NASA Astrophysics Data System (ADS)
Gribovszki, Zoltán
2018-05-01
Methods that use diurnal groundwater-level fluctuations are commonly used for shallow water-table environments to estimate evapotranspiration (ET) and recharge. The key element needed to obtain reliable estimates is the specific yield (Sy), a soil-water storage parameter that depends on unsaturated soil-moisture and water-table fluxes, among others. Soil-moisture profile measurement down to the water table, along with water-table-depth measurements, can provide a good opportunity to calculate Sy values even on a sub-daily scale. These values were compared with Sy estimates derived by traditional techniques, and it was found that slug-test-based Sy values gave the most similar results in a sandy soil environment. Therefore, slug-test methods, which are relatively cheap and require little time, were most suited to estimate Sy using diurnal fluctuations. The reason for this is that the timeframe of the slug-test measurement is very similar to the dynamic of the diurnal signal. The dynamic characteristic of Sy was also analyzed on a sub-daily scale (depending mostly on the speed of drainage from the soil profile) and a remarkable difference was found in Sy with respect to the rate of change of the water table. When comparing constant and sub-daily (dynamic) Sy values for ET estimation, the sub-daily Sy application yielded higher correlation, but only a slightly smaller deviation from the control ET method, compared with the usage of constant Sy.
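As a concrete illustration of how Sy enters such estimates, the sketch below applies White's (1932) classic diurnal method — named here as the best-known member of the family the paper discusses, not necessarily the exact variant used — to one day of hourly water-table records.

```python
import numpy as np

def et_white(heads, sy, dt_hours=1.0):
    """Groundwater ET for one day via White's (1932) diurnal water-table method.
    heads: water-table elevations (m) at dt_hours spacing over one day.
    r: recovery rate from the 00:00-04:00 slope (ET is assumed negligible at night);
    s: net head decline over the day. Returns ET in m/day."""
    h = np.asarray(heads, float)
    t = np.arange(len(h)) * dt_hours
    night = t <= 4.0
    r = np.polyfit(t[night], h[night], 1)[0]   # m/hour, usually > 0 (recovery)
    s = h[0] - h[-1]                           # net daily decline (m)
    return sy * (24.0 * r + s)
```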
Validity of Three-Dimensional Photonic Scanning Technique for Estimating Percent Body Fat.
Shitara, K; Kanehisa, H; Fukunaga, T; Yanai, T; Kawakami, Y
2013-01-01
Three-dimensional photonic scanning (3DPS) was recently developed to measure dimensions of the human body surface. The purpose of this study was to explore the validity of body volume measured by 3DPS for estimating percent body fat (%fat). Design, setting, participants, and measurement: Body volumes were determined by 3DPS in 52 women and corrected for residual lung volume. The %fat was estimated from body density and compared with the corresponding reference value determined by dual-energy X-ray absorptiometry (DXA). No significant difference was found between the mean values of %fat obtained by 3DPS (22.2 ± 7.6%) and DXA (23.5 ± 4.9%). The root mean square error of %fat between 3DPS and the reference technique was 6.0%. For each body segment, there was a significant positive correlation between 3DPS and DXA values, although the corresponding value for the head was slightly larger by 3DPS than by DXA. Residual lung volume was negatively correlated with the estimated error in %fat. The body volume determined with 3DPS is potentially useful for estimating %fat. A possible strategy for enhancing the measurement accuracy of %fat might be to refine the protocol for preparing the subject's hair prior to scanning and to improve the accuracy of the residual lung volume measurement.
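Densitometric %fat estimates of this kind typically convert body density to %fat with a two-compartment equation such as Siri's; a minimal sketch (the specific conversion equation used in the study is an assumption here) is:

```python
def percent_fat_siri(mass_kg, body_volume_l, residual_lung_volume_l):
    """Body density from mass and lung-volume-corrected 3DPS volume,
    converted to %fat with the Siri (1961) equation."""
    density = mass_kg / (body_volume_l - residual_lung_volume_l)  # kg/L
    return 495.0 / density - 450.0

# e.g. percent_fat_siri(60.0, 58.9, 1.2) -> ~26 %fat (illustrative numbers)
```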
Sayed, Mohammed E; Porwal, Amit; Al-Faraj, Nida A; Bajonaid, Amal M; Sumayli, Hassan A
2017-07-01
Several techniques and methods have been proposed to estimate the anterior teeth dimensions in edentulous patients. However, this procedure remains challenging, especially when preextraction records are not available. Therefore, the purpose of this study was to evaluate some of the existing extraoral and intraoral methods for estimation of anterior tooth dimensions and to propose a novel method for estimation of central incisor width (CIW) and length (CIL) for the Saudi population. Extraoral and intraoral measurements were recorded for a total of 236 subjects. Descriptive statistical analysis and Pearson's correlation tests were performed. Association was evaluated between combined anterior teeth width (CATW) and interalar width (IAW), intercommissural width (ICoW) and interhamular notch distance (IHND) plus 10 mm. Evaluation of the linear relationship of central incisor length (CIL) with facial height (FH) and of CIW with bizygomatic width (BZW) was also performed. Significant correlation was found between the CATW and both ICoW and IAW (p-values <0.0001); however, no correlation was found relative to IHND plus 10 mm (p-value = 0.456). Further, no correlation was found between FH and right CIL or between BZW and right CIW (p-values = 0.255 and 0.822, respectively). The means of CIL, CIW, incisive papilla-fovea palatina (IP-FP) distance, and IHND were used to estimate the central incisor dimensions: CIL = IP-FP distance/4.45, CIW = IHND/4.49. It was concluded that the ICoW and IAW measurements are the only predictable methods to estimate the initial reference value for CATW. An intraoral approach was proposed for estimation of CIW and CIL for the given population. Based on the results of the study, ICoW and IAW measurements can be useful in estimating the initial reference value for CATW, while the proposed novel approach using specific palatal dimensions can be used for estimating the width and length of central incisors. These methods are crucial to obtain esthetic treatment results within the parameters of the given population.
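Expressed directly as code, the proposed intraoral estimators from the text are simply:

```python
def central_incisor_dimensions(ip_fp_mm, ihnd_mm):
    """CIL and CIW (mm) from the IP-FP distance and the interhamular
    notch distance, per the divisors reported in the study."""
    cil = ip_fp_mm / 4.45  # central incisor length
    ciw = ihnd_mm / 4.49   # central incisor width
    return cil, ciw

# e.g. central_incisor_dimensions(44.5, 38.2) -> (10.0, ~8.5) (illustrative)
```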
Stepanenko, Valeriy F; Hoshi, Masaharu; Bailiff, Ian K; Ivannikov, Alexander I; Toyoda, Shin; Yamamoto, Masayoshi; Simon, Steven L; Matsuo, Masatsugu; Kawano, Noriyuki; Zhumadilov, Zhaxybay; Sasaki, Masao S; Rosenson, Rafail I; Apsalikov, Kazbek N
2006-02-01
The paper is an analytical overview of the main results presented at the 3rd Dosimetry Workshop in Hiroshima (9-11 March 2005), where different aspects of dose reconstruction around the Semipalatinsk nuclear test site (SNTS) were discussed and summarized. The results of the international intercomparison of the retrospective luminescence dosimetry (RLD) method for Dolon' village (Kazakhstan) were presented at the Workshop, and good agreement between the dose estimates of laboratories from 6 countries (Japan, Russia, USA, Germany, Finland and UK) was noted. The accumulated dose values in brick at a common depth of 10 mm, obtained independently by all participating laboratories, were in good agreement for all four brick samples from Dolon' village, Kazakhstan, with the average value of the local gamma dose due to fallout (near the sampling locations) being about 220 mGy (background dose has been subtracted). Furthermore, using a conversion factor of about 2 to obtain the free-in-air dose, a local dose value of approximately 440 mGy is obtained, which supports the results of external dose calculations for Dolon': recently published soil contamination data, archive information and new models were used to refine the dose calculations, and the external dose in air for Dolon' village was estimated to be about 500 mGy. The results of electron spin resonance (ESR) dosimetry with tooth enamel have demonstrated notable progress in the application of ESR dosimetry to the problems of dose reconstruction around the Semipalatinsk nuclear test site. At present, dose estimates by the ESR method have become more consistent with calculated values and with retrospective luminescence dosimetry data, but differences between ESR dose estimates and RLD/calculation data were noted. For example, the mean ESR dose for eligible tooth samples from Dolon' village was estimated to be about 140 mGy (above background dose), which is less than the dose values obtained by RLD and calculations. A possible explanation of the differences between ESR and RLD/calculated doses is the following: for interpretation of ESR data, the "shielding and behaviour" factors for the investigated persons should be taken into account. An "upper level" of about 0.28 for the combined "shielding and behaviour" dose-reduction factor for inhabitants of Dolon' village was obtained by comparing the individual ESR tooth enamel dose estimates with the calculated mean dose for this settlement. Biological dosimetry data related to the settlements near the SNTS were also presented at the Workshop. A higher incidence of unstable chromosome aberrations, micronuclei in lymphocytes, nuclear abnormalities of thyroid follicular cells, and T-cell receptor mutations in peripheral blood was found for exposed areas (Dolon', Sarjal) in comparison with unexposed ones (Kokpekty). A significantly greater frequency of stable translocations (from analyses of chromosome aberrations in lymphocytes by the FISH technique) was demonstrated for Dolon' village in comparison with Chekoman (an unexposed village). The elevated level of stable translocations in Dolon' corresponds to a dose of about 180 mSv, which is close to the results of ESR dosimetry for this village. The importance of investigating specific morphological types of thyroid nodules for thyroid dosimetry studies was pointed out.
In general, the 3rd Dosimetry Workshop demonstrated remarkable progress in developing internationally common approaches for retrospective dose estimation around the SNTS and in understanding the tasks for future joint work in this direction. In the framework of a special session, the problems of developing a database and registry to support epidemiological studies around the SNTS were discussed. The results of an investigation of the psychological consequences of the nuclear tests, as expressed in verbal behaviour, were presented at this session as well.
Armando García-Miranda, L; Contreras, I; Estrada, J A
2014-04-01
To determine reference values for full blood count parameters in a population of children 8 to 12 years old, living at an altitude of 2760 m above sea level. Our sample consisted of 102 individuals on whom a full blood count was performed. The parameters included: total number of red blood cells, platelets and white cells, and a differential count (millions/μl and %) of neutrophils, lymphocytes, monocytes, eosinophils and basophils. Additionally, we obtained values for hemoglobin, hematocrit, mean corpuscular volume, mean corpuscular hemoglobin, mean corpuscular hemoglobin concentration and red blood cell distribution width. The results were statistically analyzed with a non-parametric test to divide the sample into quartiles and obtain the lower and upper limits of our intervals. Moreover, the interval values obtained from this analysis were compared to intervals obtained by estimating ±2 standard deviations above and below our mean values. Our results showed significant differences compared to the normal interval values reported for the adult Mexican population in most of the parameters studied. The full blood count is an important laboratory test used routinely for the initial assessment of a patient. Values of full blood counts in healthy individuals vary according to gender, age and geographic location; therefore, each population should have its own reference values. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier España. All rights reserved.
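A sketch of the two interval constructions being compared, a distribution-free percentile interval versus mean ± 2 SD (the exact percentile cut-offs used in the study are an assumption here):

```python
import numpy as np

def reference_intervals(values):
    """Return (non-parametric 2.5th-97.5th percentile interval,
    parametric mean +/- 2 SD interval)."""
    v = np.asarray(values, dtype=float)
    nonparametric = (np.percentile(v, 2.5), np.percentile(v, 97.5))
    m, s = v.mean(), v.std(ddof=1)
    parametric = (m - 2.0 * s, m + 2.0 * s)
    return nonparametric, parametric
```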
Use of Landsat data to predict the trophic state of Minnesota lakes
NASA Technical Reports Server (NTRS)
Lillesand, T. M.; Johnson, W. L.; Deuell, R. L.; Lindstrom, O. M.; Meisner, D. E.
1983-01-01
Near-concurrent Landsat Multispectral Scanner (MSS) and ground data were obtained for 60 lakes distributed in two Landsat scene areas. The ground data included measurements of Secchi disk depth, chlorophyll-a, total phosphorus, turbidity, color, and total nitrogen, as well as Carlson Trophic State Index (TSI) values derived from the first three parameters. The Landsat data correlated best with the TSI values. Prediction models were developed to classify some 100 'test' lakes appearing in the two analysis scenes on the basis of TSI estimates. Clouds, wind, poor image data, small lake size, and shallow lake depth caused some problems in lake TSI prediction. Overall, however, the Landsat-predicted TSI estimates were judged to be very reliable for Secchi-derived TSI estimation, moderately reliable for prediction of the chlorophyll-a TSI, and unreliable for the phosphorus value. Numerous Landsat data extraction procedures were compared, and the success of the Landsat TSI prediction models was a strong function of the procedure employed.
Kharroubi, Samer A; O'Hagan, Anthony; Brazier, John E
2010-07-10
Cost-effectiveness analysis of alternative medical treatments relies on having a measure of effectiveness, and many regard the quality adjusted life year (QALY) to be the current 'gold standard.' In order to compute QALYs, we require a suitable system for describing a person's health state, and a utility measure to value the quality of life associated with each possible state. There are a number of different health state descriptive systems, and we focus here on one known as the EQ-5D. Data for estimating utilities for different health states have a number of features that mean care is necessary in statistical modelling. There is interest in the extent to which valuations of health may differ between countries and cultures, but few studies have compared preference values of health states obtained from different countries. This article applies a nonparametric model to estimate and compare EQ-5D health state valuation data obtained from two countries using Bayesian methods. The data set comprises the US and UK EQ-5D valuation studies, in which a sample of 42 states defined by the EQ-5D was valued by representative samples of the general population of each country using the time trade-off technique. We estimate a utility function across both countries which explicitly accounts for the differences between them, and is estimated using the data from both countries. The article discusses the implications of these results for future applications of the EQ-5D and for further work in this field. Copyright 2010 John Wiley & Sons, Ltd.
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unavailable even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are taken as the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automated algorithm reduces the bias introduced by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.
The first three rungs of the cosmological distance ladder
NASA Astrophysics Data System (ADS)
Krisciunas, Kevin; DeBenedictis, Erika; Steeger, Jeremy; Bischoff-Kim, Agnes; Tabak, Gil; Pasricha, Kanika
2012-05-01
It is straightforward to determine the size of the Earth and the distance to the Moon without using a telescope. The methods have been known since the third century BCE. However, few astronomers have done this measurement from data they have taken. We use a gnomon to determine the latitude and longitude of South Bend, Indiana, and College Station, Texas, and determine the value of the radius of the Earth to be Rearth = 6290 km, only 1.4% smaller than the known value. We use the method of Aristarchus and the size of the Earth's shadow during the lunar eclipse of June 15, 2011 to estimate the distance to the Moon to be 62.3 Rearth, 3.3% greater than the known mean value. We use measurements of the angular motion of the Moon against the background stars over the course of two nights, using a simple cross staff device, to estimate the Moon's distance at perigee and apogee. We use simultaneous observations of asteroid 1996 HW1 obtained with small telescopes in Socorro, New Mexico, and Ojai, California, to obtain a value of the Astronomical Unit of (1.59 ± 0.19) × 10^8 km, about 6% too large. The data and methods presented here can easily become part of an introductory astronomy laboratory class.
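The Earth-radius step is essentially Eratosthenes' calculation; a minimal sketch (the city latitudes and north-south baseline below are illustrative values, not the authors' data):

```python
import math

def earth_radius_km(lat1_deg, lat2_deg, north_south_distance_km):
    """R = (meridional distance between two sites) / (latitude difference
    in radians), assuming the sites lie near the same meridian."""
    dphi = math.radians(abs(lat1_deg - lat2_deg))
    return north_south_distance_km / dphi

# e.g. South Bend (~41.7 N) vs College Station (~30.6 N), ~1230 km apart
# in the north-south direction: earth_radius_km(41.7, 30.6, 1230.0) -> ~6350 km
```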
NASA Astrophysics Data System (ADS)
Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav
2004-08-01
Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships, in terms of equivalent uniform dose, between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms based on one CT scan only. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume gave the best estimates of the volume parameter, but there were distinct differences in the mean values obtained.
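The equivalent-uniform-dose comparison can be made concrete with the generalized EUD; a minimal sketch (the study's exact EUD definition may differ, and the volume-effect exponent a below is illustrative):

```python
import numpy as np

def geud(doses_gy, volumes, a):
    """Generalized EUD: (sum_i v_i * D_i**a)**(1/a), with v_i the
    fractional volumes of a dose histogram."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()  # normalize to fractional volumes
    d = np.asarray(doses_gy, dtype=float)
    return float(np.sum(v * d ** a)) ** (1.0 / a)

# e.g. geud([60.0, 50.0, 30.0], [0.2, 0.3, 0.5], a=8.0)
```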
On curve and surface stretching in turbulent flow
NASA Technical Reports Server (NTRS)
Etemadi, Nassrollah
1989-01-01
Cocke (1969) proved that in incompressible, isotropic turbulence the average material line (material surface) elements increase in comparison with their initial values. Rigorous estimates of how much they increase, in terms of the eigenvalues of the Green deformation tensor, were obtained.
A Study of Quenching Cooling in Gaseous Atmospheres
NASA Astrophysics Data System (ADS)
Shevchenko, S. Yu.; Smirnov, A. E.; Kirillov, I. V.; Kurpyakova, N. A.
2016-11-01
Prismatic sensors of two standard sizes are used to determine the heat-transfer coefficients of high-pressure nitrogen at different turbine rotor speeds of a SECO/WARWICK 10.0VPT-4020/24N vacuum furnace. The adequacy of the values obtained is assessed.
Supraorbital Keyhole Craniotomy for Basilar Artery Aneurysms: Accounting for the "Cliff" Effect.
Stamates, Melissa M; Wong, Andrew K; Bhansali, Anita; Wong, Ricky H
2017-04-01
Treatment of basilar artery aneurysms is challenging. While endovascular techniques have dominated, there still remain circumstances where open surgical clipping is required or preferred. Minimally invasive "keyhole" approaches are being used more frequently to provide the durability of surgical clipping with a lower morbidity profile; however, careful patient selection is required. The supraorbital "keyhole" approach has been described for the treatment of basilar artery aneurysms, but careful assessment of the basilar exposure is necessary to ensure proper visualization of the aneurysm and ability to obtain proximal vascular control. Various methods of estimating the basilar artery exposure in this approach have been described, including the anterior skull base line and the posterior clinoid line, but both are unreliable and inaccurate. To propose a new method, the orbital roof-dorsum line, to simply and accurately predict the basilar artery exposure. CT angiograms for 20 consecutive unique patients were analyzed to obtain the anterior skull base line, posterior clinoid line, and the orbital roof-dorsum line. CT angiograms were then loaded onto a Stealth neuronavigation system (Medtronic, Minneapolis, Minnesota) to obtain "true" visualization lengths. A case illustration is presented. Pairwise comparison tests demonstrated that both the anterior skull base and the posterior clinoid estimation lines differed significantly from the "true" value (P < .0001). Our orbital roof-dorsum estimation provided results that accurately predicted the "true" value (P = .71). The orbital roof-dorsum line provides a simple and reliable method of estimating basilar artery exposure and should be used whenever considering patients for surgical clipping by this approach. Copyright © 2017 by the Congress of Neurological Surgeons
Infant mortality in the Marshall Islands.
Levy, S J; Booth, H
1988-12-01
Levy and Booth present previously unpublished infant mortality rates for the Marshall Islands. They use an indirect method to estimate infant mortality from the 1973 and 1980 censuses, then apply indirect and direct methods of estimation to data from the Marshall Islands Women's Health Survey of 1985. Comparing the results with estimates of infant mortality obtained from vital registration data enables them to estimate the extent of underregistration of infant deaths. The authors conclude that the 1973 census appears to be the most valid information source. Direct estimates from the Women's Health Survey data suggest that infant mortality has increased since 1970-1974, whereas the indirect estimates indicate a decreasing trend in infant mortality rates, converging with the direct estimates in more recent years. In view of increased efforts to improve maternal and child health in the mid-1970s, the decreasing trend is plausible. It is impossible to estimate infant mortality in the Marshall Islands during 1980-1984 accurately from the available data. Estimates based on registration data for 1975-1979 are at least 40% too low. The authors speculate that the estimate of 33 deaths per 1000 live births obtained from registration data for 1984 is 40-50% too low. In round figures, a value of 60 deaths per 1000 may be taken as the final estimate for 1980-1984.
Atmospheric Turbulence Estimates from a Pulsed Lidar
NASA Technical Reports Server (NTRS)
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
Willingness to pay per quality-adjusted life year for life-saving treatments in Thailand
Nimdet, Khachapon; Ngorsuraches, Surachat
2015-01-01
Objective To estimate the willingness to pay (WTP) per quality-adjusted life year (QALY) value for life-saving treatments and to determine factors affecting the WTP per QALY value. Design A cross-sectional survey with multistage sampling and face-to-face interviews. Setting General population in the southern part of Thailand. Participants A total of 600 individuals were included in the study. Only 554 (92.3%) responses were usable for data analyses. Outcome measure Participants were asked for the maximum amount of WTP value for life-saving treatments by an open-ended question. EQ-5D-3L and visual analogue scale (VAS) were used to estimate additional QALY. Results The amount of WTP values varied from 0 to 720,000 Baht/year (approximately 32 Baht = US$1). The averages of additional QALY obtained from VAS and EQ-5D-3L were only slightly different (0.872 and 0.853, respectively). The averages of WTP per QALY obtained from VAS and EQ-5D-3L were 244,720 and 243,120 Baht/QALY, respectively. As compared to male participants, female participants were more likely to pay less for an additional QALY (p=0.007). In addition, participants with higher household incomes tended to have higher WTP per QALY values (p<0.001). Conclusions Our study added another WTP per QALY value specifically for life-saving treatments, which would complement the current cost-effectiveness threshold used in Thailand and optimise patient access to innovative treatments or technologies. PMID:26438135
Selective Laser Melting of Pure Copper
NASA Astrophysics Data System (ADS)
Ikeshoji, Toshi-Taka; Nakamura, Kazuya; Yonehara, Makiko; Imai, Ken; Kyogoku, Hideki
2017-12-01
Appropriate building parameters for selective laser melting of 99.9% pure copper powder were investigated at a relatively high laser power of 800 W for hatch pitches in the range from 0.025 mm to 0.12 mm. The highest relative density of the built material was 99.6%, obtained at a hatch pitch of 0.10 mm. Building conditions were also studied using transient heat analysis in finite element modeling of the melting and solidification of the powder layer. The estimated melt pool length and width were comparable to values obtained by observations using a thermoviewer. The trend for the melt pool width versus the hatch pitch agreed with experimental values.
Starling, M R; Gross, M D; Walsh, R A; Dell'Italia, L J; Montgomery, D G; Squicciarini, S A; Blumhardt, R
1988-08-01
This investigation was designed to determine whether left ventricular (LV) maximum time-varying elastance (Emax) calculations obtained using equilibrium radionuclide angiography (RNA) were comparable to those obtained using biplane contrast cineangiography (CINE), and whether simple, indirect P-V relations might provide reasonable, alternative estimates of Emax. Accordingly, we studied 19 patients with simultaneous high-fidelity micromanometer LV and fluid brachial artery (Ba) pressure recordings, CINE, and RNA under control conditions and during methoxamine and nitroprusside infusions. Emax was defined for CINE and RNA as the maximum slope of the linear relation of isochronal, instantaneous P-V data points obtained from each of the three loading conditions. The indirect P-V relations were similarly obtained from Ba peak (P) pressure versus minimum RNA LV volume (BaP/minV) and Ba dicrotic notch (di) pressure versus minimum RNA LV volume (Badi/minV) data points. The mean heart rates and LV (+)dP/dtmax values were minimally altered during the three loading conditions. The isochronal Emax values ranged from 1.40 to 6.73 mmHg/ml (mean 4.13 ± 1.99 SD mmHg/ml) for CINE and from 1.48 to 7.25 (mean 4.35 ± 1.81 mmHg/ml) for RNA (p = N.S.). Similarly, the unstressed volumes ranged from -10 to 80 ml (mean 30 ± 23 ml) for CINE and from -8 to 77 ml (29 ± 21 ml) for RNA (p = N.S.). The individual, isochronal Emax values by RNA correlated with those by CINE (r = 0.86). In 14 of the 19 patients, the BaP/minV and Badi/minV relations correlated with the isochronal Emax values calculated by RNA (r = 0.83 and 0.82, respectively), and these relations also correlated with the Emax values calculated by CINE (r = 0.82 and 0.78, respectively). The slope and V0 values for the BaP/minV and Badi/minV relations underestimated those for Emax by RNA and CINE (p < 0.01 and p < 0.05, respectively, for both). Thus, the isochronal Emax values calculated using RNA are comparable to those obtained using CINE in man. Moreover, indirect P-V relations underestimate these Emax values, but they are linearly related with the isochronal Emax values calculated by RNA and CINE. Consequently, these indirect P-V relations may provide a more simple, alternative estimate of LV contractile function in man.
Herrero, José Ignacio; Iñarrairaegui, Mercedes; D'Avola, Delia; Sangro, Bruno; Prieto, Jesús; Quiroga, Jorge
2014-04-01
The FibroScan® XL probe has been specifically designed for obese patients to measure liver stiffness by transient elastography, but it has not been well tested in non-obese patients. The aim of this study was to compare the M and XL FibroScan® probes in a series of unselected obese (body mass index above 30 kg/m²) and non-obese patients with chronic liver disease. Two hundred and fifty-four patients underwent a transient elastography examination with both the M and XL probes. The results obtained with the two probes were compared in the whole series and in obese (n=82) and non-obese (n=167) patients separately. The reliability of the examinations was assessed using the criteria defined by Castéra et al. The proportion of reliable exams was significantly higher when the XL probe was used (83% versus 73%; P=.001). This significance was maintained in the group of obese patients (82% versus 55%; P<.001), but not in the non-obese patients (84% versus 83%). Despite a high correlation between the stiffness values obtained with the two probes (R=.897; P<.001), and a high concordance in the estimation of fibrosis obtained with the two probes (Cronbach's alpha value: 0.932), the liver stiffness values obtained with the XL probe were significantly lower than those obtained with the M probe, both in the whole series (9.5 ± 9.1 kPa versus 11.3 ± 12.6 kPa; P<.001) and in the obese and non-obese groups. In conclusion, transient elastography with the XL probe allows a higher proportion of reliable examinations in obese patients but not in non-obese patients. Stiffness values were lower with the XL probe than with the M probe. Copyright © 2013 Elsevier España, S.L. and AEEH y AEG. All rights reserved.
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
Distortion product otoacoustic emission (DPOAE) signals can be used for the diagnosis of hearing loss and thus have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool to record DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which the DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function, with weighting matrices defined in the local sense, to obtain a smaller estimation variance. Firstly, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and different local weighting matrices were calculated in each group. Finally, the parameters of the DPOAE signal were estimated based on the least squares estimation principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of a clearer DPOAE fine structure.
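A minimal sketch of one locally weighted least-squares step of this kind (assuming the instantaneous DP phase phi(t) within a short analysis window is known from the sweep; not the authors' exact formulation):

```python
import numpy as np

def lwlse_window(y, phi, weights):
    """Estimate DPOAE amplitude/phase in one window by solving
    beta = (X'WX)^(-1) X'W y with a local weighting matrix W."""
    X = np.column_stack([np.cos(phi), np.sin(phi)])  # in-phase / quadrature
    W = np.diag(np.asarray(weights, dtype=float))
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.asarray(y, dtype=float))
    return np.hypot(beta[0], beta[1]), np.arctan2(-beta[1], beta[0])
```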
Aerodynamic parameters of High-Angle-of attack Research Vehicle (HARV) estimated from flight data
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Ratvasky, Thomas R.; Cobleigh, Brent R.
1990-01-01
Aerodynamic parameters of the High-Angle-of-Attack Research Aircraft (HARV) were estimated from flight data at different values of the angle of attack between 10 degrees and 50 degrees. The main part of the data was obtained from small amplitude longitudinal and lateral maneuvers. A small number of large amplitude maneuvers was also used in the estimation. The measured data were first checked for their compatibility. It was found that the accuracy of air data was degraded by unexplained bias errors. Then, the data were analyzed by a stepwise regression method for obtaining a structure of aerodynamic model equations and least squares parameter estimates. Because of high data collinearity in several maneuvers, some of the longitudinal and all lateral maneuvers were reanalyzed by using two biased estimation techniques, the principal components regression and mixed estimation. The estimated parameters in the form of stability and control derivatives, and aerodynamic coefficients were plotted against the angle of attack and compared with the wind tunnel measurements. The influential parameters are, in general, estimated with acceptable accuracy and most of them are in agreement with wind tunnel results. The simulated responses of the aircraft showed good prediction capabilities of the resulting model.
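Of the two biased estimators mentioned, principal components regression is straightforward to sketch (an illustrative implementation, not the authors' code):

```python
import numpy as np

def pcr_coefficients(X, y, k):
    """Principal components regression: regress y on the first k
    principal-component scores of X, then map back to the original
    (centered) regressor basis to damp the effect of collinearity."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    gamma = (U[:, :k].T @ yc) / s[:k]  # score-space coefficients
    return Vt[:k].T @ gamma            # coefficients in regressor space
```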
Elastic thickness estimates at northeast passive margin of North America and its implications
NASA Astrophysics Data System (ADS)
Kumar, R. T. Ratheesh; Maji, Tanmay K.; Kandpal, Suresh Ch; Sengupta, D.; Nair, Rajesh R.
2011-06-01
Global estimates of the elastic thickness (Te) of the structure of passive continental margins show wide and varying results owing to the use of different methodologies. Earlier estimates of the elastic thickness of the North Atlantic passive continental margins that used flexural modelling yielded Te values of ~20-100 km. Here, we compare these estimates with the Te values obtained using orthonormalized Hermite multitaper-recovered isostatic coherence functions. We discuss how Te is correlated with the heat flow distribution and the depth of necking. The E-W segment in the southern study region, comprising Nova Scotia and the Southern Grand Banks, shows low Te values, while the NE-SW segments, viz. Western Greenland, Labrador, the Orphan Basin and the Northern Grand Bank, show comparatively high Te values. As expected, Te broadly reflects the depth of the 200-400°C isotherm below the weak surface sediment layer at the time of loading, and at the margins most of the loading occurred during rifting. We infer that these low Te measurements indicate Te frozen into the lithosphere. This could be due to the passive nature of the margin when the loads were emplaced during the continental break-up process at high temperature gradients.
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Modeling and estimation of ground-level solar radiation involves the equation of time, the Earth-Sun distance equation, the solar declination and the calculation of surface irradiance. Many studies have reported that these theoretical equations alone cannot provide accurate radiation estimates, so many authors apply corrections through field calibrations with pyranometers (solarimeters) or through the use of satellites, the latter being a rather poor technique because it does not differentiate between radiation and radiant kinetic effects. Given that a properly calibrated ground weather station exists in the Susques Salar in the Jujuy Province, Republic of Argentina, we modeled the variable in question using the following process: 1. theoretical modeling; 2. graphical comparison of the theoretical and measured data; 3. adjustment of the primary calibration through segmentation of the data on an hourly basis, through horizontal shifts and the addition of an asymptotic constant; 4. analysis of scatter plots and comparison of the series. Following these steps, the modeled data were obtained as follows. Step one: theoretical data were generated. Step two: the theoretical data were shifted 5 hours. Step three: an asymptote was applied to all negative emissivity values; the Excel Solver algorithm was applied to the least-squares minimization of the difference between actual and modeled values, yielding new asymptote values and the corresponding reformulation of the theoretical data; and a constant value was added by month over a set time range (4:00 pm to 6:00 pm). Step four: the coefficients of the modeling equation gave monthly correlations between actual and theoretical data ranging from 0.7 to 0.9.
NASA Astrophysics Data System (ADS)
Piskorski, K.; Passi, V.; Ruhkopf, J.; Lemme, M. C.; Przewlocki, H. M.
2018-05-01
We report on the advantages of using Graphene-Insulator-Semiconductor (GIS) instead of Metal-Insulator-Semiconductor (MIS) structures for reliable and precise photoelectric determination of the band alignment at the semiconductor-insulator interface and of the insulator band gap. Due to the high transparency to light of the graphene gate in GIS structures, large photocurrents due to emission of both electrons and holes from the substrate, and negligible photocurrents due to emission of carriers from the gate, can be obtained, which allows reliable determination of the barrier heights for both electrons, Ee, and holes, Eh, from the semiconductor substrate. Knowing the values of both Ee and Eh allows direct determination of the insulator band gap EG(I). Photoelectric measurements were made on a series of Graphene-SiO2-Si structures, and an example is shown of the results obtained in sequential measurements of the same structure, giving the following barrier height values: Ee = 4.34 ± 0.01 eV and Eh = 4.70 ± 0.03 eV. Based on this result and results obtained for other structures in the series, we conservatively estimate the maximum uncertainty of both barrier height estimates at ±0.05 eV. This sets the SiO2 band gap estimate at EG(I) = 7.92 ± 0.1 eV. It is shown that widely different SiO2 band gap values have been found by research groups using various determination methods. We hypothesize that these differences are due to the different sensitivities of the measurement methods used to the existence of the SiO2 valence band tail.
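The band gap follows from the two barrier heights because, when both are measured photoelectrically from the silicon band edges, they overlap by the silicon gap; assuming EG(Si) = 1.12 eV (a value not stated in the text, but consistent with the quoted numbers):

```latex
E_G(I) = E_e + E_h - E_G(\mathrm{Si}) = 4.34 + 4.70 - 1.12 = 7.92\ \mathrm{eV}
```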
Energy Expenditure in Critically Ill Elderly Patients: Indirect Calorimetry vs Predictive Equations.
Segadilha, Nara L A L; Rocha, Eduardo E M; Tanaka, Lilian M S; Gomes, Karla L P; Espinoza, Rodolfo E A; Peres, Wilza A F
2017-07-01
Predictive equations (PEs) are used for estimating resting energy expenditure (REE) when measurements obtained from indirect calorimetry (IC) are not available. This study evaluated the degree of agreement and the accuracy between the REE measured by IC (REE-IC) and the REE estimated by PEs (REE-PE) in mechanically ventilated elderly patients admitted to the intensive care unit (ICU). REE-IC of 97 critically ill elderly patients was compared with REE-PE by 6 PEs: Harris and Benedict (HB) multiplied by a correction factor of 1.2; European Society for Clinical Nutrition and Metabolism (ESPEN) using the minimum (ESPENmi), average (ESPENme), and maximum (ESPENma) values; Mifflin-St Jeor; Ireton-Jones (IJ); Fredrix; and Lührmann. The degree of agreement between REE-PE and REE-IC was analyzed by the intraclass correlation coefficient and the Bland-Altman test. The accuracy was calculated as the percentage of male and/or female patients whose REE-PE values differed by up to ±10% relative to REE-IC. For both sexes, there was no difference in average REE-IC in kcal/kg compared with the values obtained from REE-PE by corrected HB and ESPENme. A high level of agreement was demonstrated by corrected HB for both sexes, with greater accuracy for women. The best accuracy in the male group was obtained with the IJ equation, but with a low level of agreement. The effectiveness of PEs is limited for estimating the REE of critically ill elderly patients. Nonetheless, HB multiplied by a correction factor of 1.2 can be used until a specific PE for this group of patients is developed.
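For reference, the corrected Harris-Benedict estimate works as sketched below (the classic HB coefficients are standard published values, but treat the rounding and the way the 1.2 factor is applied here as illustrative):

```python
def hb_corrected(sex, weight_kg, height_cm, age_yr, factor=1.2):
    """Harris-Benedict resting energy expenditure (kcal/day) multiplied
    by a 1.2 correction factor, as in the comparison above."""
    if sex == "male":
        ree = 66.47 + 13.75 * weight_kg + 5.00 * height_cm - 6.76 * age_yr
    else:
        ree = 655.1 + 9.56 * weight_kg + 1.85 * height_cm - 4.68 * age_yr
    return factor * ree

# e.g. hb_corrected("female", 60, 160, 75) -> ~1408 kcal/day
```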
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behnke, A.R.; Taylor, W.L.
Whole-body density determinations are reported for a small group of athletes (weight lifters), and an analysis is presented of data derived from several investigations in which similar techniques were employed to measure total body water and the total exchangeable sodium, chloride (bromine space), and potassium in the body. The mean value for body density (1.080) obtained on the athletes was similar to that obtained previously on professional football players and much higher than the mean value usually obtained on young men (~1.060). In addition to the low body fat content characteristic of the athletes, the ratio of exchangeable Ke to Cle was higher in these men than in men of average physique. In turn, the values for Ke/Cle were even lower in obese individuals and in patients. In healthy individuals, the sum (Ke + Cle) is highly correlated (r = 0.99) with total body water, and this finding provides an independent estimate of lean body weight. In patients afflicted with certain types of chronic diseases, particularly those associated with the edematous state, the exchangeable Nae to Ke ratio is strikingly higher than it is in healthy individuals. Estimates of the amount of transudate in edematous patients may be made from analyses of total body water and total exchangeable Nae and Ke. Additional determinations, such as whole-body density and red cell mass, are required to assess accurately the size of the lean body mass in these patients. Normal adult lean body size prior to illness may be estimated from skeletal measurements. (auth)
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
Santos, R M C; do Rêgo, E R; Borém, A; Nascimento, M F; Nascimento, N F F; Finger, F L; Rêgo, M M
2014-10-31
Two accessions of ornamental pepper Capsicum annuum L., differing in most of the characters studied, were crossed, resulting in the F1 generation, and the F2 generation was obtained through self-fertilization of the F1 generation. The backcross generations RC1 and RC2 were obtained through crosses between F1 and the parents P1 and P2, respectively. Morpho-agronomic characterization was performed based on the 19 quantitative descriptors of Capsicum. The data obtained were subjected to generation analysis, in which the means and additive variance (σa²), variance due to dominance deviation (σd²), phenotypic variance (σf²), genetic variance (σg²) and environmental variance (σm²) were calculated. For the full model, we estimated the mean effects of all possible homozygotes, the additive and dominant effects, and the epistatic effects: additive-additive, additive-dominant, and dominant-dominant. For the additive-dominant model, we estimated the additive effects, dominant effects and mean effects of possible homozygotes. The character fruit dry matter had the lowest value for broad-sense heritability (0.42), and the highest values were found for fresh matter and fruit weight, 0.91 and 0.92, respectively. The lowest value for narrow-sense heritability was for the minor fruit diameter character (0.33), and the highest values were found for seed yield per fruit and fresh matter, 0.87 and 0.84, respectively. The additive-dominant model explained only the variation found in plant height, canopy width, stem length, corolla diameter, leaf width, and pedicel length; for the other characters, the epistatic effects showed significant values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zainudin, Mohd Lutfi, E-mail: mdlutfi07@gmail.com; Institut Matematik Kejuruteraan; Saaban, Azizan, E-mail: azizan.s@uum.edu.my
Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records all of the dispersed radiation values, and these data are very useful for experimental work and solar device development. In addition, complete observational data are needed for modeling and designing solar radiation system applications. Unfortunately, gaps in the solar radiation record frequently occur due to several technical problems, mainly attributable to the monitoring device. To address this matter, missing values are estimated in an effort to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, such as linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data. It then proposes, as an extension of this work, investigating the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimation tools. As a result, the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
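A sketch of the gap-filling step with the piecewise schemes compared here, via SciPy (the Bezier and Said-Ball variants would substitute their own basis functions; this is illustrative, not the authors' code):

```python
import numpy as np
from scipy.interpolate import interp1d

def impute_hourly(hours, values, kind="cubic"):
    """Fill NaN gaps in an hourly series by piecewise interpolation;
    kind may be 'linear', 'nearest', or 'cubic'. Interior gaps only."""
    t = np.asarray(hours, dtype=float)
    v = np.asarray(values, dtype=float)
    ok = ~np.isnan(v)
    filled = v.copy()
    filled[~ok] = interp1d(t[ok], v[ok], kind=kind)(t[~ok])
    return filled
```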
NASA Astrophysics Data System (ADS)
Berlanga, Juan M.; Harbaugh, John W.
The Tabasco region contains a number of major oilfields, including some of the emerging "giant" oil fields which have received extensive publicity. Fields in the Tabasco region are associated with large geologic structures which are readily detected by seismic surveys. The structures seem to be associated with deep-seated movement of salt, and they are complexly faulted. Some structures have as much as 1000 milliseconds of relief on seismic lines. The part of the Tabasco region that has been studied was surveyed with a close-spaced rectilinear network of seismic lines. A study interpreting the structure of the area initially used only a fraction of the total seismic data available. The purpose was to compare "predictions" of reflection time based on widely spaced seismic lines with "results" obtained along more closely spaced lines. This process of comparison simulates the sequence of events in which a reconnaissance network of seismic lines is used to guide a succession of progressively more closely spaced lines. A square gridwork was established with lines spaced at 10 km intervals, and, using machine contour maps, the results were compared with those obtained with seismic grids employing spacings of 5 and 2.5 km, respectively. The comparisons of predictions based on widely spaced lines with observations along closely spaced lines provide information by which an error function can be established. The error at any point can be defined as the difference between the predicted value for that point and the subsequently observed value at that point. Residuals obtained by fitting third-degree polynomial trend surfaces were used for comparison. The root mean square of the error measurement (expressed in seconds or milliseconds of reflection time) was found to increase more or less linearly with distance from the nearest seismic point. Oil-occurrence probabilities were established on the basis of frequency distributions of trend-surface residuals obtained by fitting and subtracting polynomial trend surfaces from the machine-contoured reflection time maps. We found that there is a strong preferential relationship between the occurrence of petroleum (i.e. its presence versus absence) and particular ranges of trend-surface residual values. An estimate of the probability of oil occurring at any particular geographic point can be calculated on the basis of the estimated trend-surface residual value. This estimate, however, must be tempered by the probable error in the estimate of the residual value provided by the error function. The result, we believe, is a simple but effective procedure for estimating exploration outcome probabilities where seismic data provide the principal form of information in advance of drilling. Implicit in this approach is the comparison between a maturely explored area, for which both seismic and production data are available, and which serves as a statistical "training area", and the "target" area which is undergoing exploration and for which probability forecasts are to be calculated.
NASA Astrophysics Data System (ADS)
Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae
2017-06-01
The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on the on-line estimation and compensation of the sensor errors generated by variation of the environmental temperature. Taking into account that this error is a strongly nonlinear, complex function of the environmental temperature and of the acceleration exciting the sensor, its correction cannot be done off-line, and it requires the presence of an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different values of operating temperature. A final analysis of the error level after compensation highlights the best variant of the matrix in the error model. The sections of the paper show the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis by using the least squares method, and the validation of the obtained models with experimental values. For all three detection channels, a reduction by almost two orders of magnitude of the maximum absolute acceleration error due to environmental temperature variation was obtained.
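A minimal sketch of the off-line identification step (the polynomial basis in temperature and acceleration is an assumed form, chosen only to illustrate the least-squares fit, not the paper's exact error-model matrix):

```python
import numpy as np

def fit_error_model(temp_C, accel_g, error_g):
    """Per-axis least-squares fit of an error model e(T, a); the fitted
    coefficients are then used on-line to subtract the predicted error."""
    T = np.asarray(temp_C, dtype=float)
    a = np.asarray(accel_g, dtype=float)
    e = np.asarray(error_g, dtype=float)
    X = np.column_stack([np.ones_like(T), T, T**2, a, a*T, a*T**2])
    coeffs, *_ = np.linalg.lstsq(X, e, rcond=None)
    return coeffs  # on-line: a_corrected = a_measured - model(T, a_measured)
```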
Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T
2016-05-01
We propose in this work an efficient way to evaluate measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The uncertainty is estimated through a robustness test applying a Plackett-Burman design, investigating six parameters influencing the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into standard uncertainty values at each concentration level. The relative uncertainty values obtained do not exceed the acceptance limit of 5%, showing that the procedure development was well conducted. In addition, a statistical comparison of the standard uncertainties after the development stage with those of the validation step indicates that the estimated uncertainties are equivalent. The results obtained clearly show the performance and capacity of the chromatographic method to simultaneously assay the five vitamins and its suitability for routine application. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
Choice of Reference Serum Creatinine in Defining Acute Kidney Injury.
Siew, Edward D; Matheny, Michael E
2015-01-01
The study of acute kidney injury (AKI) has expanded with the increasing availability of electronic health records and the use of standardized definitions. Understanding the impact of AKI across settings is limited by heterogeneity in the selection of the reference creatinine used to anchor the definition of AKI. In this mini-review, we discuss different approaches used to select reference creatinine and their relative merits and limitations. We reviewed the literature to obtain representative examples of published baseline creatinine definitions when pre-hospital data were not available, as well as literature evaluating the estimation of baseline renal function, using PubMed and reference back-tracing within known works. (1) Pre-hospital creatinine values are useful in determining reference creatinine, and in high-risk populations, the mean outpatient serum creatinine value 7-365 days before hospitalization closely approximates nephrology adjudication; (2) in patients without pre-hospital data, the eGFR 75 approach does not reliably estimate true AKI incidence in most at-risk populations; (3) using the lowest inpatient serum creatinine may be reasonable, especially in those with preserved kidney function, but may generously estimate AKI incidence and severity and miss community-acquired AKI that does not fully resolve; (4) using more specific definitions of AKI (e.g., KDIGO stages 2 and 3) may help to reduce the effects of misclassification when using surrogate values; and (5) leveraging available clinical data may help refine the estimate of reference creatinine. Choosing reference creatinine for AKI calculation is important for AKI classification and study interpretation. We recommend obtaining data on pre-hospital kidney function wherever possible. In studies where surrogate estimates are used, transparency in how they are applied and discussion that informs the reader of potential biases should be provided. Further work to refine the estimation of reference creatinine is needed. © 2015 S. Karger AG, Basel.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
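The CTMI itself is compact; a sketch of the Rosso et al. (1993) form whose four parameters (Tmin, Topt, Tmax, mu_opt) are the targets of the OED/PE strategies described above:

```python
def ctmi(T, Tmin, Topt, Tmax, mu_opt):
    """Cardinal Temperature Model with Inflection: microbial growth rate
    as a function of temperature, zero outside (Tmin, Tmax)."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
    return mu_opt * num / den

# sanity check: ctmi(37.0, 5.0, 37.0, 45.0, mu_opt=2.0) == 2.0
```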
NASA Technical Reports Server (NTRS)
Starks, Patrick J.; Norman, John M.; Blad, Blaine L.; Walter-Shea, Elizabeth A.; Walthall, Charles L.
1991-01-01
An equation for estimating albedo from bidirectional reflectance data is proposed. The estimates of albedo are found to be greater than values obtained with simultaneous pyranometer measurements. Particular attention is given to potential sources of systematic errors including extrapolation of bidirectional reflectance data out to a view zenith angle of 90 deg, the use of inappropriate weighting coefficients in the numerator of the albedo equation, surface shadowing caused by the A-frame instrumentation used to measure the incoming and outgoing radiation fluxes, errors in estimates of the denominator of the proposed albedo equation, and a 'hot spot' contribution in bidirectional data measured by a modular multiband radiometer.
Ribeiro, T; Depres, S; Couteau, G; Pauss, A
2003-01-01
An alternative method for the estimation of nitrate and other nitrogen forms in vegetables is proposed. Nitrate can be estimated directly by UV spectrophotometry after an extraction step with water. The other nitrogen compounds are photo-oxidized into nitrate and then estimated by UV spectrophotometry. An oxidative solution of sodium persulfate and a Hg-UV lamp are used. Preliminary assays were performed with vegetables such as lettuce, spinach, artichokes, green peas, broccoli, carrots, and watercress; acceptable correlations between expected and experimental nitrate amounts were obtained, although the detection limit needs to be lowered. Optimization of the method is underway.
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena; Kukharsky, Alexander; Uspensky, Sergey
2015-04-01
To date, physical-mathematical modeling of land surface-atmosphere interaction processes is considered the most appropriate tool for obtaining reliable estimates of the water and heat balance components of large territories. The model of these processes (Land Surface Model, LSM), developed for the vegetation period, is designed to simulate soil water content W, evapotranspiration Ev, vertical latent (LE) and sensible heat fluxes from the land surface, vertically distributed soil temperature and moisture, soil surface Tg and foliage Tf temperatures, and land surface skin temperature (LST) Ts. The model is suitable for utilizing remote sensing data on the land surface and meteorological conditions. In this study, these data have been obtained from measurements by the scanning radiometers AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI on the geostationary satellites Meteosat-9 and -10 (MSG-2, -3). The heterogeneity of the land surface and meteorological conditions has been taken into account in the model by using soil and vegetation characteristics as parameters and meteorological characteristics as input variables. Values of these characteristics have been determined from ground observations and remote sensing information. AVHRR data have been used to build estimates of effective land surface temperature Ts.eff and emissivity E, vegetation-air temperature (temperature at the vegetation level) Ta, normalized difference vegetation index NDVI, vegetation cover fraction B, leaf area index LAI, and precipitation. From MODIS data, values of LST Tls, E, NDVI and LAI have been derived. From SEVIRI data, Tls, E, Ta, NDVI, LAI and precipitation have been retrieved. All named retrievals covered the vast territory of the agricultural Central Black Earth Region located in the steppe-forest zone of European Russia. This territory, with coordinates 49°30'-54°N, 31°-43°E and a total area of 227,300 km2, was chosen for investigation of the 2009-2013 vegetation seasons. To provide the retrieval of Ts.eff, E, Ta, NDVI, B, and LAI, previously developed technologies of AVHRR data processing have been refined and adapted to the region of interest. Updated linear regression estimators for Ts.eff and Ta have been built using representative training samples compiled for the above vegetation seasons. The updated software package has been applied to AVHRR data processing to generate estimates of the named values. To verify the accuracy of these estimates, the error statistics of the Ts.eff and Ta derivations have been investigated for various days of the named seasons by comparison with in-situ ground-based measurements. Using a special technology and Internet resources, the remote sensing products Tls, E, NDVI and LAI derived from MODIS data and covering the study area have been extracted from the LP DAAC web-site for the same vegetation seasons. The reliability of the MODIS-derived Tls estimates has been confirmed via comparison with analogous collocated ground-, AVHRR-, and SEVIRI-based ones. The prepared remote sensing dataset has also included SEVIRI-derived estimates of Tls, E, NDVI and Ta at daylight and night-time, and daily estimates of LAI. The Tls estimates have been built utilizing the method and technology developed for the retrieval of Tls and E from 15-minute interval SEVIRI data in the IR channels 10.8 and 12.0 µm (classified as 100% cloud-free and covering the area of interest) at three successive times without accurate a priori knowledge of E.
Comparison of the SEVIRI-based Tls retrievals with independent collocated Tls estimates generated at the Land Surface Analysis Satellite Applications Facility (LSA SAF, Lisbon, Portugal) has given daily- or monthly-averaged RMS deviations of about 2°C for various dates and months during the mentioned vegetation seasons, which is a quite acceptable result. The reliability of the SEVIRI-based Tls estimates for the study area has also been confirmed by comparison with AVHRR- and MODIS-derived LST estimates for the same seasons. The SEVIRI-derived values of Ta, considered as the temperature of the vegetation cover, have been obtained using the Tls estimates and a previously found multiple linear regression relationship between Tls and Ta that accounts for solar zenith angle and land elevation. A comparison with collocated ground-based Ta observations has given RMS errors of 2.5°C and lower, which can be treated as proof of the proposed technique's functionality. SEVIRI-derived LAI estimates have been retrieved at LSA SAF from measurements by this sensor in the channels 0.6, 0.8, and 1.6 μm under cloud-free conditions; using the 1.6 μm channel data increased the accuracy of these estimates. In the study, AVHRR- and SEVIRI-derived estimates of daily and monthly precipitation sums for the territory under investigation for the 2009-2013 vegetation seasons have also been used. These estimates have been obtained by the improved integrated Multi Threshold Method (MTM), which provides detection and identification of cloud types around the clock throughout the year, as well as identification of precipitation zones and determination of instantaneous maximum precipitation intensity within a pixel, using the measurement data in different channels of the named sensors as predictors. Validation of the MTM has been performed by comparing the daily and monthly precipitation sums with the corresponding values from ground-based observations at the meteorological stations of the region. The probability of detecting precipitation zones from satellite data that correspond to the actual ones amounted to 70-80%. AVHRR- and SEVIRI-derived daily and monthly precipitation sums have been in reasonable agreement with each other and with the results of ground-based observations, although they are smoother than the latter. Discrepancies have been noted only for local maxima, for which satellite-based precipitation estimates have been much lower than ground-based ones; this may be due to the different spatial scales of areal satellite-derived and point ground-based estimates. To utilize satellite-derived vegetation and meteorological characteristics in the model, special procedures have been developed, including: - replacement of the ground-based LAI and B estimates used as model parameters by their satellite-derived counterparts from AVHRR, MODIS and SEVIRI data.
The correctness of such replacement has been confirmed by comparing the time behavior of LAI over the vegetation period as well as modeled and measured values of evapotranspiration Ev and soil moisture content W; - entering AVHRR-, MODIS- and SEVIRI-derived estimates of Ts.eff, Tls, and Ta into the model as input variables instead of ground-measured values, with verification of the adequacy of model operation under such a change through comparison of calculated and measured values of W and Ev; - inputting satellite-derived estimates of precipitation during the vegetation period, retrieved from AVHRR and SEVIRI data using the MTM, into the model as input variables. In developing this procedure, algorithms and programs have been created to pass from estimates of rainfall intensity to its daily values. The implementation of such a transition requires controlling the correctness of the estimates built at each time step. This control includes comparison of the areal distributions of three-hour, daily and monthly precipitation amounts obtained from satellite data and calculated by interpolation of standard network observation data; - taking into account the spatial heterogeneity of the fields of satellite AVHRR-, MODIS- and SEVIRI-derived estimates of LAI, B, LST and precipitation. This has involved the development of algorithms and software for entering the values of all named characteristics into the model at each computational grid node. Area-distributed values of evapotranspiration Ev, soil water content W, vertical latent and sensible heat fluxes and other water and heat balance components, as well as land surface temperature and moisture, over the territory of interest have resulted from the model calculations for the 2009-2013 vegetation seasons. These calculations have been carried out utilizing satellite-derived estimates of the vegetation characteristics, LST and precipitation. Ev and W calculation errors have not exceeded the standard values.
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are validated and compared. The optimal weighted combination model reduces the prediction risk of a single model and improves prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was mapped precisely using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combination model, and the reference value anywhere in China can be obtained from the geographical distribution map.
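The abstract does not state how the optimal weights are chosen; a classical choice, shown here purely as a sketch, is the minimum-variance (Bates-Granger) combination based on historical forecast errors of the component models.

```python
import numpy as np

def optimal_combination_weights(errors):
    """Minimum-variance (Bates-Granger) weights for combining m forecasts.
    errors: (T, m) matrix of historical forecast errors. Weights sum to 1."""
    cov = np.cov(errors, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / (ones @ w)

# Usage sketch: combined = predictions @ optimal_combination_weights(past_errors)
```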
Manzanares-Laya, S; Burón, A; Murta-Nascimento, C; Servitja, S; Castells, X; Macià, F
2014-01-01
Hospital cancer registries and hospital databases are valuable and efficient sources of information for research into cancer recurrence. The aim of this study was to develop and validate algorithms for the detection of breast cancer recurrence. A retrospective observational study was conducted on breast cancer cases from the cancer registry of a tertiary university hospital diagnosed between 2003 and 2009. Different probable cancer recurrence algorithms were obtained by linking the hospital databases and constructing several operational definitions, with their corresponding sensitivity, specificity, positive predictive value and negative predictive value. A total of 1,523 patients were diagnosed with breast cancer between 2003 and 2009. A request for bone scintigraphy more than 6 months after the first oncological treatment showed the highest sensitivity (53.8%) and a negative predictive value of 93.8%, and a pathology test more than 6 months after diagnosis showed the highest specificity (93.8%) and a negative predictive value of 92.6%. Combining different definitions increased the specificity and the positive predictive value but decreased the sensitivity. Several diagnostic algorithms were obtained, and the different definitions could be useful depending on the interest and resources of the researcher. A higher positive predictive value could be of interest for a quick estimation of the number of cases, and a higher negative predictive value for a more exact estimation if more resources are available. It is a versatile tool that is adaptable to other types of tumors and to the needs of the researcher. Copyright © 2014 SECA. Published by Elsevier Espana. All rights reserved.
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
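As a concrete instance of one family of interval methods considered above, the sketch below runs a residual bootstrap around a least-squares Hill fit. The zero-baseline Hill form, the synthetic data, and all parameter values are our assumptions for illustration, not ToxCast specifics.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    # Zero-baseline Hill concentration-response curve
    return top * c**n / (ec50**n + c**n)

rng = np.random.default_rng(0)
conc = np.logspace(-2, 2, 8)
y = hill(conc, 100.0, 1.0, 1.5) + rng.normal(0, 5, conc.size)   # synthetic data

popt, _ = curve_fit(hill, conc, y, p0=(90.0, 2.0, 1.0), maxfev=10000)

# Residual bootstrap: resample residuals, refit, take percentile intervals
resid = y - hill(conc, *popt)
boot = []
for _ in range(500):
    y_b = hill(conc, *popt) + rng.choice(resid, size=resid.size)
    try:
        p_b, _ = curve_fit(hill, conc, y_b, p0=popt, maxfev=10000)
        boot.append(p_b)
    except RuntimeError:
        pass   # skip non-converged refits
lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)  # per-parameter 95% CI
```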
NASA Astrophysics Data System (ADS)
Ishida, Takayuki; Takahashi, Masaki
2014-12-01
In this study, we propose a new attitude determination system, which we call Irradiance-based Attitude Determination (IRAD). IRAD employs the characteristics and geometry of solar panels. First, the sun vector is estimated using data from the solar panels, including current, voltage, temperature, and the normal vector of each panel. Because these values are obtained from internal sensors, it is easy for rovers to provide redundancy for IRAD. The use of normal vectors allows the method to be applied to rovers of various shapes. Second, using the gravity vector obtained from an accelerometer, the attitude of the rover is estimated with a three-axis attitude determination method. The effectiveness of IRAD is verified through numerical simulations and experiments, which show that IRAD can estimate all attitude angles (roll, pitch, and yaw) within a few degrees of accuracy, adequate for planetary exploration.
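The abstract does not name the three-axis method; with two vector observations (sun and gravity), the textbook TRIAD construction is the natural candidate and is sketched here under that assumption, with our own variable names.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude solution: rotation matrix mapping reference-frame
    vectors (r1, r2) onto their body-frame observations (b1, b2).
    Here b1/r1 would be the sun vector and b2/r2 the gravity vector."""
    b1, b2, r1, r2 = (v / np.linalg.norm(v) for v in (b1, b2, r1, r2))
    tb1 = b1
    tb2 = np.cross(b1, b2); tb2 /= np.linalg.norm(tb2)
    tb3 = np.cross(tb1, tb2)
    tr1 = r1
    tr2 = np.cross(r1, r2); tr2 /= np.linalg.norm(tr2)
    tr3 = np.cross(tr1, tr2)
    return np.column_stack([tb1, tb2, tb3]) @ np.column_stack([tr1, tr2, tr3]).T

# Body-frame sun vector from panel currents: with I_i ~ I_max * max(0, n_i . s),
# stack the normals N of the sun-lit panels and solve N s ~ I / I_max:
#   s, *_ = np.linalg.lstsq(N, I / I_max, rcond=None); s /= np.linalg.norm(s)
```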
The variance of the locally measured Hubble parameter explained with different estimators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk
We study the expected variance of measurements of the Hubble constant, H_0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H_0 from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent-level determination of the Hubble constant in the local universe.
Nonstationary multivariate modeling of cerebral autoregulation during hypercapnia.
Kostoglou, Kyriaki; Debert, Chantel T; Poulin, Marc J; Mitsis, Georgios D
2014-05-01
We examined the time-varying characteristics of cerebral autoregulation and hemodynamics during a step hypercapnic stimulus by using recursively estimated multivariate (two-input) models that quantify the dynamic effects of mean arterial blood pressure (ABP) and end-tidal CO2 tension (PETCO2) on middle cerebral artery blood flow velocity (CBFV). Beat-to-beat values of ABP and CBFV, as well as breath-to-breath values of PETCO2, during baseline and sustained euoxic hypercapnia were obtained in 8 female subjects. The multiple-input, single-output models were based on the Laguerre expansion technique, and their parameters were updated using recursive least squares with multiple forgetting factors. The results reveal the presence of nonstationarities that confirm previously reported effects of hypercapnia on autoregulation, i.e. a decrease in the ABP phase lead, and suggest that incorporating PETCO2 as an additional model input yields less time-varying estimates of dynamic pressure autoregulation than single-input (ABP-CBFV) models. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
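The estimator update at the heart of this approach can be sketched with a single forgetting factor; the paper's multiple forgetting factors and Laguerre-expanded inputs are omitted here, and the regressor matrix is assumed already built.

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with one forgetting factor lam.
    phi: (T, p) regressors, y: (T,) output; returns (T, p) parameter tracks."""
    T, p = phi.shape
    theta, P = np.zeros(p), delta * np.eye(p)
    track = np.empty((T, p))
    for t in range(T):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)          # gain vector
        theta = theta + k * (y[t] - x @ theta)  # prediction-error update
        P = (P - np.outer(k, x @ P)) / lam      # discounted covariance update
        track[t] = theta
    return track
```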
Accuracy Assessment of the Precise Point Positioning for Different Troposphere Models
NASA Astrophysics Data System (ADS)
Oguz Selbesoglu, Mahmut; Gurturk, Mert; Soycan, Metin
2016-04-01
This study investigates the accuracy and repeatability of the PPP technique at different latitudes using different troposphere delay models. Nine IGS stations were selected between 0° and 80° latitude in the northern and southern hemispheres. Coordinates were obtained for 7 days at 1-hour intervals in summer and winter. First, the coordinates were estimated using the Niell troposphere delay model with and without north and east gradients, in order to investigate the contribution of troposphere delay gradients to the positioning. Second, the Saastamoinen model was used to eliminate troposphere path delays, with standard atmosphere parameters extrapolated to all station levels. Finally, coordinates were estimated using the RTCA-MOPS empirical troposphere delay model. The results demonstrate that the Niell troposphere delay model with horizontal gradients yields mean RMS errors 0.09% and 65% better than the Niell model without horizontal gradients and the RTCA-MOPS model, respectively. The mean RMS errors of the Saastamoinen model were approximately 4 times larger than those of the Niell troposphere delay model with horizontal gradients.
Fuster, Casilda Olveira; Fuster, Gabriel Olveira; Galindo, Antonio Dorado; Galo, Alicia Padilla; Verdugo, Julio Merino; Lozano, Francisco Miralles
2007-07-01
Undernutrition, which implies an imbalance between energy intake and energy requirements, is common in patients with cystic fibrosis. The aim of this study was to compare resting energy expenditure determined by indirect calorimetry with that obtained with commonly used predictive equations in adults with cystic fibrosis and to assess the influence of clinical variables on the values obtained. We studied 21 patients with clinically stable cystic fibrosis, obtaining data on anthropometric variables, hand grip dynamometry, electrical bioimpedance, and resting energy expenditure by indirect calorimetry. We used the intraclass correlation coefficient (ICC) and the Bland-Altman method to assess agreement between the values obtained for resting energy expenditure measured by indirect calorimetry and those obtained with the World Health Organization (WHO) and Harris-Benedict prediction equations. The prediction equations underestimated resting energy expenditure in more than 90% of cases. The agreement between the value obtained by indirect calorimetry and that calculated with the prediction equations was poor (ICC for comparisons with the WHO and Harris-Benedict equations, 0.47 and 0.41, respectively). Bland-Altman analysis revealed a variable bias between the results of indirect calorimetry and those obtained with prediction equations, irrespective of the resting energy expenditure. The difference between the values measured by indirect calorimetry and those obtained with the WHO equation was significantly larger in patients homozygous for the DeltaF508 mutation and in those with exocrine pancreatic insufficiency. The WHO and Harris-Benedict prediction equations underestimate resting energy expenditure in adults with cystic fibrosis. There is poor agreement between the values for resting energy expenditure determined by indirect calorimetry and those estimated with prediction equations. Underestimation was greater in patients with exocrine pancreatic insufficiency and patients who were homozygous for DeltaF508.
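Two of the computations used in this comparison are compact enough to sketch: a commonly cited form of the Harris-Benedict equation (coefficients are the textbook values, not taken from this paper) and the Bland-Altman bias and limits of agreement.

```python
import numpy as np

def harris_benedict(weight_kg, height_cm, age_yr, male=True):
    """Resting energy expenditure (kcal/day), commonly cited
    Harris-Benedict coefficients (assumed, not from the paper)."""
    if male:
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def bland_altman(m1, m2):
    """Bias (mean difference) and 95% limits of agreement between two
    methods, e.g. indirect calorimetry vs a prediction equation."""
    diff = np.asarray(m1, float) - np.asarray(m2, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```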
Torres-Ruiz, José M; Sperry, John S; Fernández, José E
2012-10-01
Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F(0)) of zero. While linearity is typically the case, there is often a non-zero F(0) that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F(0) for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference K(ref) as the slope of a four-point F vs P/L relationship. The K(ref) was compared with three more rapid ways of estimating K. When F(0) was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when K(ref) was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F(0) was taken into account, whether it was measured or estimated, more accurate K values (relative to K(ref)) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F(0) for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
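A minimal numeric sketch of the four-point regression estimate of K follows (the numbers are invented, not the paper's data); the single-point ratio at the end shows how an ignored F0 shifts the estimate by F0/(P/L).

```python
import numpy as np

# Four-point flow (F) vs pressure-gradient (P/L) data, arbitrary units
grad = np.array([0.00, 0.02, 0.04, 0.06])
flow = np.array([0.80, 2.85, 4.95, 7.00])

# K_ref as the regression slope accounts for a non-zero intercept F0
K_ref, F0 = np.polyfit(grad, flow, 1)

# A single-point ratio K = F/(P/L) silently folds F0 into the estimate:
K_single = flow[3] / grad[3]   # differs from K_ref by F0 / (P/L)
```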
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
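The authors' methods are analytic; as a rough brute-force analogue, one can plan sample size by Monte Carlo, simulating scales of known reliability, bootstrapping a confidence interval for coefficient alpha, and growing n until the expected width falls below the target W. Everything below (the one-factor model, loadings, and grid of candidate n) is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def cronbach_alpha(X):
    """Coefficient alpha for an n-by-k item-score matrix."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def simulate_scale(n, k=6, loading=0.7):
    """One-factor, parallel-items data with a known population reliability."""
    f = rng.normal(size=(n, 1))
    e = rng.normal(size=(n, k)) * np.sqrt(1 - loading**2)
    return loading * f + e

def expected_ci_width(n, reps=100, boots=200):
    """Mean width of a 95% percentile-bootstrap CI for alpha at size n."""
    widths = []
    for _ in range(reps):
        X = simulate_scale(n)
        stats = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(boots)]
        lo, hi = np.percentile(stats, [2.5, 97.5])
        widths.append(hi - lo)
    return float(np.mean(widths))

# Smallest n on a coarse grid whose expected CI width is below the target W
W = 0.10
for n in (50, 100, 200, 400, 800):
    if expected_ci_width(n) <= W:
        print("planned n:", n)
        break
```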
Houston, Natalie A.; Braun, Christopher L.
2004-01-01
This report describes the collection, analyses, and distribution of hydraulic-conductivity data obtained from slug tests completed in the alluvial aquifer underlying Air Force Plant 4 and Naval Air Station-Joint Reserve Base Carswell Field, Fort Worth, Texas, during October 2002 and August 2003 and summarizes previously available hydraulic-conductivity data. The U.S. Geological Survey, in cooperation with the U.S. Air Force, completed 30 slug tests in October 2002 and August 2003 to obtain estimates of horizontal hydraulic conductivity to use as initial values in a ground-water-flow model for the site. The tests were done by placing a polyvinyl-chloride slug of known volume beneath the water level in selected wells, removing the slug, and measuring the resulting water-level recovery over time. The water levels were measured with a pressure transducer and recorded with a data logger. Hydraulic-conductivity values were estimated from an analytical relation between the instantaneous displacement of water in a well bore and the resulting rate of head change. Although nearly two-thirds of the tested wells recovered 90 percent of their slug-induced head change in less than 2 minutes, 90-percent recovery times ranged from 3 seconds to 35 minutes. The estimates of hydraulic conductivity range from 0.2 to 200 feet per day. Eighty-three percent of the estimates are between 1 and 100 feet per day.
Hanigan, Ivan; Hall, Gillian; Dear, Keith B G
2006-09-13
To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations but the rationale for using one technique rather than another; the significance of the difference in the values obtained; and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Options based on values derived from sites internal to postal areas, or from nearest neighbour sites--that is, using proximity polygons around weather stations intersected with postal areas--tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within 50 kilometres radius of centroids and weighting of data by distance from centroids gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from the population weighted centroids and the population weighted average of sub-unit estimates. To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. The most appropriate method conceptually is the use of weather data from sites within 50 kilometres radius of the area weighted to population centres, but a simpler acceptable option is to weight to the geographic centroid.
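A generic sketch of the recommended 50 km distance-weighted option follows, assuming stations are given as coordinates in km and the centroid is the population-weighted one; the exact weighting used in the study may differ.

```python
import numpy as np

def idw_estimate(station_xy, station_vals, centroid_xy, radius_km=50.0):
    """Distance-weighted average of station observations within a fixed
    radius of a (population-weighted) centroid; coordinates in km."""
    d = np.hypot(*(station_xy - centroid_xy).T)
    ok = (d <= radius_km) & ~np.isnan(station_vals)
    if not ok.any():
        return np.nan                      # no usable station: leave missing
    w = 1.0 / np.maximum(d[ok], 1e-6)      # inverse-distance weights
    return float(np.average(station_vals[ok], weights=w))
```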
NASA Astrophysics Data System (ADS)
Pegram, Geoff; Bardossy, Andras; Sinclair, Scott
2017-04-01
The use of radar measurements for the space time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this presentation we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations which are important for design. The purpose of the presentation is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed gauged daily amounts are interpolated to un-sampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short duration extremes. In addition, a statistical procedure not based on a matching day by day correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected leaving 12 days of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these 12 day maxima is first interpolated using a Kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest neighbour procedure. The daily sums are then disaggregated by using the relative values of the biggest 12 radar based days in each year. Of course, the timings of radar and gauge maxima can be different, so the new method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense [10 km spacing] set of 45 pluviometers recording in the same 6-year period. This valuable set of data was obtained from each of 37 selected radar pixels [1 km square in plan] which contained a pluviometer, not masked out by the radar foot-print. The pluviometer data were also aggregated to daily totals, for the same purpose. The extremes obtained using disaggregation methods were compared to the observed extremes in a cross validation procedure. The unusual and novel goal was not to obtain the reproduction of the precipitation matching in space and time, but to obtain frequency distributions of the point extremes, which we found to be stable. Published as: Bárdossy, A., and G. G. S. Pegram (2017) Journal of Hydrology, Volume 544, pp 397-406
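The core disaggregation step, splitting a daily gauge total in proportion to the radar-observed sub-daily amounts, can be sketched as follows; the uniform fallback for radar-silent days is our assumption, not the paper's.

```python
import numpy as np

def disaggregate_daily(daily_gauge_mm, radar_subdaily_mm):
    """Split an interpolated daily gauge total across sub-daily intervals
    in proportion to the radar-observed amounts for the same day."""
    radar = np.asarray(radar_subdaily_mm, float)
    total = radar.sum()
    if total <= 0:
        # radar-silent day: spread uniformly (assumed fallback)
        return np.full(radar.size, daily_gauge_mm / radar.size)
    return daily_gauge_mm * radar / total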
Dynamical Analysis of an SEIT Epidemic Model with Application to Ebola Virus Transmission in Guinea.
Li, Zhiming; Teng, Zhidong; Feng, Xiaomei; Li, Yingke; Zhang, Huiguo
2015-01-01
In order to investigate the transmission mechanism of infection with Ebola virus, we establish an SEIT (susceptible, exposed in the latent period, infectious, and treated/recovered) epidemic model. The basic reproduction number is defined. A mathematical analysis of the existence and stability of the disease-free and endemic equilibria is given. As an application of the model, we use the reported infection and death cases in Guinea to estimate the model parameters by the least squares method. With suitable parameter values, we obtain the estimated value of the basic reproduction number and analyze the sensitivity and uncertainty properties using partial rank correlation coefficients.
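The abstract does not give the SEIT equations, so the sketch below fits a generic SEIR-type stand-in to noisy case counts by least squares and reads off R0 = beta/gamma for that simplified model; all data and parameter values are synthetic.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, beta, sigma, gamma):
    # Generic SEIR-type dynamics standing in for the paper's SEIT model
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 60.0, 5.0)                     # days, synthetic
y0 = [1e6 - 10, 0.0, 10.0, 0.0]
true = (0.35, 1 / 7, 1 / 10)
clean = solve_ivp(rhs, (0, 60), y0, t_eval=t_obs, args=true, rtol=1e-8).y[2]
i_obs = clean * (1 + 0.05 * rng.normal(size=t_obs.size))  # noisy "cases"

def residuals(p):
    sim = solve_ivp(rhs, (0, 60), y0, t_eval=t_obs, args=tuple(p), rtol=1e-8)
    return sim.y[2] - i_obs

fit = least_squares(residuals, x0=[0.5, 1 / 5, 1 / 8], bounds=(1e-6, 5.0))
beta, sigma, gamma = fit.x
R0 = beta / gamma   # basic reproduction number of this simplified model
```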
The effect of surface anisotropy and viewing geometry on the estimation of NDVI from AVHRR
Meyer, David; Verstraete, M.; Pinty, B.
1995-01-01
Since terrestrial surfaces are anisotropic, all spectral reflectance measurements obtained with a small instantaneous field of view instrument are specific to these angular conditions, and the value of the corresponding NDVI, computed from these bidirectional reflectances, is relative to the particular geometry of illumination and viewing at the time of the measurement. This paper documents the importance of these geometric effects through simulations of the AVHRR data acquisition process, and investigates the systematic biases that result from the combination of ecosystem-specific anisotropies with instrument-specific sampling capabilities. Typical errors in the value of NDVI are estimated, and strategies to reduce these effects are explored. -from Authors
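For reference, NDVI is computed directly from the red and near-infrared reflectances, so when those are bidirectional values it inherits their angular dependence; a minimal sketch (channel assignment follows AVHRR convention):

```python
import numpy as np

def ndvi(red, nir):
    """NDVI from red and near-infrared reflectances (AVHRR channels 1 and 2).
    Computed from bidirectional reflectances, it is specific to the
    illumination and viewing geometry of the measurement."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)
```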
Rye Canyon X-wing noise test: One-third octave-band data
NASA Technical Reports Server (NTRS)
Willshire, W. L., Jr.
1983-01-01
Acoustic data were obtained for the 25-ft-diameter X-wing rotor model during performance testing of the rotor system in hover. Data collected at the outdoor whirl tower test facility with a twelve-microphone array were taken for approximately 150 test conditions comprising various combinations of RPM, blade pressure ratio (BPR), and blade angle of attack (collective). The three test parameters had four values of RPM from 404 to 497, twelve values of BPR from 1.0 to 2.1, and six values of collective from 0.0 deg to 8.5 deg. Fifteen to twenty seconds of acoustic data were reduced to obtain an average 1/3-octave-band spectrum for each microphone at each test condition. The complete, as-measured, 1/3-octave-band results for all the acoustic data are listed. Another part of the X-wing noise test was the acoustic calibration of the Rye Canyon whirl tower bowl. Corrections were computed which, when applied to the as-measured data, yield estimates of the free-field X-wing noise. The free-field estimates provide a more realistic measure of the rotor system noise levels. Trend analyses of the effects of the three test parameters on noise level were performed.
NASA Astrophysics Data System (ADS)
Kaneko, Masashi; Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru
2017-11-01
The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the 99Ru and 189Os nuclides. Geometry optimizations at the BP86 level of theory locate the structures in local minima. We calculate the contact density from the wavefunction obtained by a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both 99Ru and 189Os; the B3LYP functional gives a stronger correlation than BP86 and B2PLYP. A comparison of contact densities between the SARC and well-tempered basis set (WTBS) indicated that numerical convergence of the contact density cannot be obtained, but the reproducibility is not sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the 99Ru and 189Os nuclides using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for 99Ru and 189Os. At the B3LYP level with the SARC basis set, we obtain ΔR/R values of 2.35×10⁻⁴ for 99Ru and -0.20×10⁻⁴ for 189Os (36.2 keV).
Coveney, V A; Gepi-Attee, S; Gröver, D; Painter, D
2001-01-01
Tests have been performed on animal models shortly post-mortem and on a healthy human subject in order to obtain estimates of the forces which act on suprapubic urinary catheters and similar devices and to develop an abdominal wall simulator. Such data and test methods are required for the systematic design of suprapubic devices because of the dual need to maintain the functionality of devices and to avoid excessive pressure on soft body tissue which could lead to ischaemia and in turn necrosis. In the post-mortem animal models, electrical excitation was applied to the abdominal wall in order to stimulate muscle activity. Two types of transducers were used: a soft membrane transducer (SMT) for pressure measurement and novel instrumented 'tongs' to determine indentation stiffness characteristics in the suprapubic tract or artificial pathway created for a device. The SMT has been extensively used in the urethras and bladders of human subjects while the tongs were built specifically for these tests. Only the well-established SMT was used with the human subject; a peak pressure of 22 kPa was obtained. In the animal models the pressure profile given by the SMT had a peak whose position corresponded well with the estimated location of the rectus muscle measured on the fixed tissue section. The peak value was 5.5 kPa, comparable with values likely to cause necrosis if maintained for more than 1 day. Remarkably consistent indentation stiffness values were obtained with the instrumented tongs; all values were close to 0.45 N/mm (33 kPa/mm).
Cost-effectiveness of screening for asymptomatic carotid atherosclerotic disease.
Derdeyn, C P; Powers, W J
1996-11-01
The value of screening for asymptomatic carotid stenosis has become an important issue with the recently reported beneficial effect of endarterectomy. The purpose of this study was to evaluate the cost-effectiveness of using Doppler ultrasound as a screening tool to select subjects for arteriography and subsequent surgery. A computer model was developed to simulate the cost-effectiveness of screening a cohort of 1000 men during a 20-year period. The primary outcome measure was incremental present-value dollar expenditures for screening and treatment per incremental present-value quality-adjusted life-year (QALY) saved. Estimates of disease prevalence and arteriographic and surgical complication rates were obtained from the literature. Probabilities of stroke and death with surgical and medical treatment were obtained from published clinical trials. Doppler ultrasound sensitivity and specificity were obtained through review of local experience. Estimates of costs were obtained from local Medicare reimbursement data. A one-time screening program of a population with a high prevalence (20%) of ≥60% stenosis cost $35,130 per incremental QALY gained. Decreased surgical benefit or an increased annual discount rate was detrimental, resulting in lost QALYs. Annual screening cost $457,773 per incremental QALY gained. In a low-prevalence (4%) population, one-time screening cost $52,588 per QALY gained, while annual screening was detrimental. A one-time screening program for an asymptomatic population with a high prevalence of carotid stenosis may be cost-effective; annual screening is detrimental. The most sensitive variables in this simulation model were long-term stroke risk reduction after surgery and the annual discount rate for accumulated costs and QALYs.
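The two quantities driving such a model, present-value discounting and the incremental cost-effectiveness ratio, can be sketched as follows; the 3% discount rate is illustrative, not taken from the paper.

```python
def present_value(amount, year, rate=0.03):
    """Discount a future cost or QALY back to present value."""
    return amount / (1 + rate) ** year

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)
```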
Quantification of idiopathic pulmonary fibrosis using computed tomography and histology.
Coxson, H O; Hogg, J C; Mayo, J R; Behzad, H; Whittall, K P; Schwartz, D A; Hartley, P G; Galvin, J R; Wilson, J S; Hunninghake, G W
1997-05-01
We used computed tomography (CT) and histologic analysis to quantify lung structure in idiopathic pulmonary fibrosis (IPF). CT scans were obtained from IPF and control patients, and lung volumes were estimated from measurements of voxel size and the X-ray attenuation value of each voxel. Quantitative estimates of lung structure were obtained by stereologic methods from biopsies taken from diseased and normal CT regions. CT density was used to calculate the proportions of tissue and air, and these values were used to correct the biopsy specimens to the level of inflation during the CT scan. The data show that IPF is associated with a reduction in airspace volume with no change in tissue volume or weight compared with control lungs. Lung surface area decreased by two-thirds (p < 0.001) and mean parenchymal thickness increased tenfold (p < 0.001). An exudate of fluid and cells was present in the airspace of the diseased lung regions, and the numbers of inflammatory cells, collagen, and proteoglycans per 100 g of tissue were increased in IPF. We conclude that IPF reorganizes lung tissue content, causing a loss of airspace and surface area without increasing total lung tissue.
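The CT density step has a standard form: each voxel's CT number is partitioned linearly between air (-1000 HU) and soft tissue (about 0 HU). A sketch of that partition, as our simplification of the paper's method:

```python
import numpy as np

def air_tissue_fractions(hu):
    """Partition each voxel between air and tissue from its CT number,
    assuming a linear scale with air = -1000 HU and soft tissue = 0 HU."""
    hu = np.clip(np.asarray(hu, float), -1000.0, 0.0)
    tissue = 1.0 + hu / 1000.0
    return 1.0 - tissue, tissue

# Example: a voxel of -800 HU is treated as ~80% air, 20% tissue by volume.
```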
An Evaluation of the Bouwer and Rice Method of Slug Test Analysis
NASA Astrophysics Data System (ADS)
Brown, David L.; Narasimhan, T. N.; Demir, Z.
1995-05-01
The method of Bouwer and Rice (1976) for analyzing slug test data is widely used to estimate hydraulic conductivity (K). Based on steady state flow assumptions, this method is specifically intended to be applicable to unconfined aquifers. Therefore it is of practical value to investigate the limits of accuracy of the K estimates obtained with this method. Accordingly, using a numerical model for transient flow, we evaluate the method from two perspectives. First, we apply the method to synthetic slug test data and study the error in estimated values of K. Second, we analyze the logical basis of the method. Parametric studies helped assess the role of the effective radius parameter, specific storage, screen length, and well radius on the estimated values of K. The difference between unconfined and confined systems was studied via conditions on the upper boundary of the flow domain. For the cases studied, the Bouwer and Rice analysis was found to give good estimates of K, with errors ranging from 10% to 100%. We found that the estimates of K were consistently superior to those obtained with Hvorslev's (1951) basic time lag method. In general, the Bouwer and Rice method tends to underestimate K, the greatest errors occurring in the presence of a damaged zone around the well or when the top of the screen is close to the water table. When the top of the screen is far removed from the upper boundary of the system, no difference is manifest between confined and unconfined conditions. It is reasonable to infer from the simulated results that when the screen is close to the upper boundary, the results of the Bouwer and Rice method agree more closely with a "confined" idealization than an "unconfined" idealization. In effect, this method treats the aquifer system as an equivalent radial flow permeameter with an effective radius, Re, which is a function of the flow geometry. Our transient simulations suggest that Re varies with time and specific storage. Thus the effective radius may be reasonably viewed as a time-averaged mean value. The fact that the method provides reasonable estimates of hydraulic conductivity suggests that the empirical, electric analog experiments of Bouwer and Rice have yielded shape factors that are better than the shape factors implicit in the Hvorslev method.
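For reference, the Bouwer and Rice (1976) estimate has the closed form below; ln(Re/rw) comes from their empirical shape-factor curves, and the variable names are ours.

```python
import numpy as np

def bouwer_rice_K(rc, Le, ln_Re_rw, t, y0, yt):
    """Bouwer and Rice (1976) slug-test conductivity:
        K = rc^2 * ln(Re/rw) / (2 * Le) * (1/t) * ln(y0 / yt)
    rc: casing radius, Le: screen length (consistent units),
    ln_Re_rw: ln(Re/rw) from the empirical shape-factor curves,
    y0, yt: head displacement at time zero and at elapsed time t."""
    return rc**2 * ln_Re_rw / (2.0 * Le) * np.log(y0 / yt) / t
```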
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.
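A minimal sketch of the data-processing step such a formulation rests on, the QR-based measurement update of the square root information array; the names and shapes here are ours, not Bierman's notation.

```python
import numpy as np

def srif_measurement_update(R, z, A, b):
    """One square-root information data update: stack the prior information
    array [R | z] with whitened measurements [A | b], re-triangularize by QR,
    and return the updated (R, z). The state estimate solves R x = z."""
    n = R.shape[0]
    stacked = np.vstack([np.hstack([R, z[:, None]]),
                         np.hstack([A, b[:, None]])])
    _, r = np.linalg.qr(stacked)
    return r[:n, :n], r[:n, n]

# x_hat = np.linalg.solve(R_new, z_new) recovers the least-squares estimate.
```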
Adaptive Noise Suppression Using Digital Signal Processing
NASA Technical Reports Server (NTRS)
Kozel, David; Nelson, Richard
1996-01-01
A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames, an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
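A minimal single-frame sketch of magnitude spectral subtraction with a squelching floor follows, assuming the average noise magnitude spectrum has already been estimated from unvoiced frames; the SNR-dependent adaptation of the subtraction proportion described above is omitted.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=2.0, floor=0.02):
    """Magnitude spectral subtraction for one speech frame.
    noise_mag: running-average noise magnitude spectrum (rfft length).
    alpha scales the subtraction; the floor squelches low-amplitude
    residuals. Phase is reused from the noisy input."""
    spec = np.fft.rfft(frame * np.hanning(frame.size))
    mag = np.abs(spec) - alpha * noise_mag
    mag = np.maximum(mag, floor * noise_mag)
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame.size)
```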
Auger electron and characteristic energy loss spectra for electro-deposited americium-241
NASA Astrophysics Data System (ADS)
Varma, Matesh N.; Baum, John W.
1983-07-01
Auger electron energy spectra for electro-deposited americium-241 on a platinum substrate were obtained using a cylindrical mirror analyzer. Characteristic energy loss spectra for this sample were also obtained at primary electron beam energies of 990 and 390 eV. From these measurements the PI, PII, and PIII energy levels of americium-241 are determined. The Auger electron energies are compared with theoretically calculated values. The minimum detectability under the present conditions of sample preparation and equipment was estimated at approximately 1.2×10⁻⁸ g/cm² or 3.9×10⁻⁸ Ci/cm². The minimum detectability for plutonium-239 under similar conditions was estimated at about 7.2×10⁻¹⁰ Ci/cm².
Social and economic value of Portuguese community pharmacies in health care.
Félix, Jorge; Ferreira, Diana; Afonso-Silva, Marta; Gomes, Marta Vargas; Ferreira, César; Vandewalle, Björn; Marques, Sara; Mota, Melina; Costa, Suzete; Cary, Maria; Teixeira, Inês; Paulino, Ema; Macedo, Bruno; Barbosa, Carlos Maurício
2017-08-29
Community pharmacies are major contributors to health care systems across the world. Several studies have evaluated community pharmacy services in health care. The purpose of this study was to estimate the social and economic benefits of current and potential future community pharmacy services provided by pharmacists in Portugal. The social and economic value of community pharmacy services was estimated through a decision model. Model inputs included effectiveness data, quality of life (QoL) and health resource consumption, obtained through literature review and adapted to the Portuguese reality by an expert panel. The estimated economic value was the sum of non-remunerated pharmaceutical services and health resource consumption potentially avoided. The social and economic value of community pharmacy services derives from the comparison of two scenarios: "with service" versus "without service". It is estimated that current community pharmacy services in Portugal provide a gain in QoL of 8.3% and an economic value of 879.6 million euros (M€), including 342.1 M€ in non-remunerated pharmaceutical services and 448.1 M€ in avoided expense from health resource consumption. Potential future community pharmacy services may provide an additional increase of 6.9% in QoL and an economic value of 144.8 M€: 120.3 M€ in non-remunerated services and 24.5 M€ in potential savings from health resource consumption. Community pharmacy services provide considerable benefit in QoL and economic value. An increased range of services, including greater integration in primary and secondary care among other transversal services, may add further social and economic value to society.
On piecewise interpolation techniques for estimating solar radiation missing values in Kedah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
2014-12-04
This paper discusses the use of a piecewise interpolation method based on cubic Ball and Bézier curve representations to estimate missing solar radiation values in Kedah. An hourly solar radiation dataset was collected at the Alor Setar Meteorology Station and obtained from the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the starts and ends of the intervals. We compare the performance of the proposed method with existing methods using the Root Mean Squared Error (RMSE) and the Coefficient of Determination (CoD), based on simulated missing-value datasets. The results show that our method outperforms the previous methods.
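The Bézier variant of the construction can be sketched compactly: prescribing end derivatives fixes the two inner control points of each cubic segment. A minimal version (variable names ours):

```python
import numpy as np

def hermite_bezier(p0, p3, d0, d3, t):
    """Cubic Bezier segment through endpoints p0, p3 with prescribed end
    derivatives d0, d3: inner control points p1 = p0 + d0/3, p2 = p3 - d3/3."""
    p1, p2 = p0 + d0 / 3.0, p3 - d3 / 3.0
    t = np.asarray(t, float)
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

# A missing value mid-interval: hermite_bezier(y[i], y[i+1], dy0, dy1, 0.5)
```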
Specializations for reward-guided decision-making in the primate ventral prefrontal cortex.
Murray, Elisabeth A; Rudebeck, Peter H
2018-05-23
The estimated values of choices, and therefore decision-making based on those values, are influenced by both the chance that the chosen items or goods can be obtained (availability) and their current worth (desirability) as well as by the ability to link the estimated values to choices (a process sometimes called credit assignment). In primates, the prefrontal cortex (PFC) has been thought to contribute to each of these processes; however, causal relationships between particular subdivisions of the PFC and specific functions have been difficult to establish. Recent lesion-based research studies have defined the roles of two different parts of the primate PFC - the orbitofrontal cortex (OFC) and the ventral lateral frontal cortex (VLFC) - and their subdivisions in evaluating each of these factors and in mediating credit assignment during reward-based decision-making.
Asymptotic solution of the problem for a thin axisymmetric cavern
NASA Technical Reports Server (NTRS)
Serebriakov, V. V.
1973-01-01
The boundary value problem which describes the axisymmetric separation of the flow around a body by a stationary infinite stream is considered. It is understood that the cavitation number varies over the length of the cavern. Using the asymptotic expansions for the potential of a thin body, the orders of magnitude of terms in the equations of the problem are estimated. Neglecting small quantities, a simplified boundary value problem is obtained.
NASA Astrophysics Data System (ADS)
Chen, Jiangang; Hou, Gary Y.; Marquet, Fabrice; Han, Yang; Camarena, Francisco; Konofagou, Elisa
2015-10-01
Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of harmonic motion imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n = 5) and in vitro canine livers (n = 3) were tested, as well as HIFU lesions in in vitro canine livers (n = 5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R² = 0.976) with the independently obtained values reported by the manufacturer, with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32 ± 0.03 dB cm⁻¹ MHz⁻¹, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58 ± 0.06 dB cm⁻¹ MHz⁻¹) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation.
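The regression step described above reduces to fitting a line to log displacement versus depth; a sketch with invented numbers follows (the radiation-force specifics and any one-way versus two-way propagation factors are glossed over).

```python
import numpy as np

# HMI displacement amplitude sampled along depth (invented numbers)
depth_cm = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
disp_um = np.array([10.0, 7.4, 5.5, 4.1, 3.0])

# Slope of log-displacement vs depth gives the decay rate in Np/cm;
# convert with 1 Np = 8.686 dB and normalize by frequency.
slope, _ = np.polyfit(depth_cm, np.log(disp_um), 1)
f_mhz = 4.5                              # assumed operating frequency
atten_db_cm_mhz = -slope * 8.686 / f_mhz
```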
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
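As a toy illustration of the calibration-based recharge estimation described above, the sketch below adjusts a recharge parameter so simulated heads best match observations; simulate_heads is a hypothetical stand-in for a real groundwater-flow model run, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

h_obs = np.array([12.3, 11.8, 11.1])     # observed heads (m), hypothetical

def simulate_heads(recharge):
    """Hypothetical stand-in for a groundwater-flow model run; in practice
    this would wrap a call to a real simulator."""
    return np.array([10.0, 9.6, 9.0]) + 25.0 * recharge   # toy response

def misfit(recharge):
    return np.sum((simulate_heads(recharge) - h_obs) ** 2)

best = minimize_scalar(misfit, bounds=(0.0, 0.5), method="bounded")
recharge_estimate = best.x               # the "best fit" recharge value
```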
NASA Technical Reports Server (NTRS)
Andrews, J.; Donziger, A.; Hazelrigg, G. A., Jr.; Heiss, K. P.; Sand, F.; Stevenson, P.
1974-01-01
The economic value of an ERS system with a technical capability similar to ERTS, allowing for increased coverage obtained through the use of multiple active satellites in orbit is presented. A detailed breakdown of the benefits achievable from an ERS system is given and a methodology for their estimation is established. The ECON case studies in agriculture, water use, and land cover are described along with the current ERTS system. The cost for a projected ERS system is given.
NASA Technical Reports Server (NTRS)
Wetzler, E.; Peterson, W.; Putnam, M.
1974-01-01
The economic value of an ERTS system in the area of inland water resources management is investigated. Benefits are attributed to new capabilities for managing inland water resources in the fields of power generation, agriculture, and urban water supply. These benefits fall into the categories of equal capability at lower cost (cost savings) and increased capability at equal budget, and are estimated by applying conservative assumptions to Federal budgeting information, Congressional appropriation hearings, and ERTS technical capabilities.
An assessment of the BEST procedure to estimate the soil water retention curve
NASA Astrophysics Data System (ADS)
Castellini, Mirko; Di Prima, Simone; Iovino, Massimo
2017-04-01
The Beerkan Estimation of Soil Transfer parameters (BEST) procedure represents a very attractive method to accurately and quickly obtain a complete hydraulic characterization of the soil (Lassabatère et al., 2006). However, further investigations are needed to check the prediction reliability of the soil water retention curve (Castellini et al., 2016). Four soils with different physical properties (texture, bulk density, porosity and stoniness) were considered in this investigation. Sites of measurement were located at Palermo University (PAL site) and Villabate (VIL site) in Sicily, Arborea (ARB site) in Sardinia, and Foggia (FOG site) in Apulia. For a given site, the BEST procedure was applied and the water retention curve was estimated using the available BEST algorithms (i.e., slope, intercept and steady), with the reference values of the infiltration constants (β = 0.6 and γ = 0.75). The water retention curves estimated by BEST were then compared with those obtained in the laboratory by the evaporation method (Wind, 1968). About ten experiments were carried out with both methods. A sensitivity analysis of the constants β and γ within their feasible ranges of variability (0.1 < β < 1.9 and 0.61 < γ < 0.79) was also carried out for each soil in order to establish: i) the impact of the infiltration constants in the three BEST algorithms on saturated hydraulic conductivity, Ks, soil sorptivity, S, and the retention curve scale parameter, hg; ii) the effectiveness of the three BEST algorithms in estimating the soil water retention curve. The main results of the sensitivity analysis showed that S tended to increase for increasing β values and decreasing γ values for all the BEST algorithms and soils. On the other hand, Ks tended to decrease for increasing β and γ values. Our results also reveal that: i) the BEST-intercept and BEST-steady algorithms yield lower S and higher Ks values than BEST-slope; ii) these algorithms also yield more variable values. The two alternative algorithms were more sensitive to β than to γ. The weaker sensitivity to γ may imply a lack of correction of the simplified theoretical description of the parabolic two-dimensional and one-dimensional wetting front along the soil profile (Smettem et al., 1994), which likely resulted in lower S and higher Ks values. Nevertheless, these differences are expected to be negligible for practical applications (Di Prima et al., 2016). On the other hand, the -intercept and -steady algorithms yielded hg values independent of γ; hence, determining water retention curves by these algorithms appears questionable. The linear regression between the soil water retention curves of BEST-slope and BEST-intercept (note that the same result is obtained with BEST-steady, for a purely analytical reason) vs. the
laboratory method showed the following main results: i) the BEST procedure generally tends to underestimate the soil water retention (the exception was the PAL site); depending on the soil and algorithm, the root mean square differences, RMSD, between BEST and the laboratory method ranged between 0.028 cm3/cm3 (VIL, BEST-slope) and 0.082 cm3/cm3 (FOG, BEST-intercept/steady); the highest RMSD values (0.124-0.140 cm3/cm3) were obtained at the PAL site; ii) depending on the soil, BEST-slope generally gave the lowest RMSD values (by a factor of 1.2-2.1); iii) when the whole variability range of β and γ was considered and a different pair of parameters was chosen (in general, extreme values of the parameters), lower RMSD values were detected in three out of four cases for BEST-slope; iv) however, the negligible observed differences in RMSD suggest that using the reference values of the infiltration constants does not significantly worsen the estimation of the soil water retention curve; v) for one of the four soils (PAL site), the BEST procedure was not able to reproduce the retention curve in a sufficiently accurate way. In conclusion, our results showed that the BEST-slope algorithm yielded more accurate estimates of water retention data for three of the four sampled soils. Conversely, determining water retention curves by the -intercept and -steady algorithms may be questionable, since these algorithms overestimated hg and yielded values of this parameter independent of the proportionality coefficient γ. (*) The work was supported by the project "STRATEGA, Sperimentazione e TRAsferimento di TEcniche innovative di aGricoltura conservativA", financed by Regione Puglia - Servizio Agricoltura. References: Castellini, M., Iovino, M., Pirastru, M., Niedda, M., Bagarello, V., 2016. Use of BEST Procedure to Assess Soil Physical Quality in the Baratz Lake Catchment (Sardinia, Italy). Soil Sci. Soc. Am. J. 80:742-755. doi:10.2136/sssaj2015.11.0389. Di Prima, S., Lassabatere, L., Bagarello, V., Iovino, M., Angulo-Jaramillo, R., 2016. Testing a new automated single ring infiltrometer for Beerkan infiltration experiments. Geoderma 262, 20-34. doi:10.1016/j.geoderma.2015.08.006. Lassabatère, L., Angulo-Jaramillo, R., Soria Ugalde, J.M., Cuenca, R., Braud, I., Haverkamp, R., 2006. Beerkan Estimation of Soil Transfer Parameters through Infiltration Experiments-BEST. Soil Sci. Soc. Am. J. 70:521-532. doi:10.2136/sssaj2005.0026. Smettem, K.R.J., Parlange, J.Y., Ross, P.J., Haverkamp, R., 1994. Three-dimensional analysis of infiltration from the disc infiltrometer: 1. A capillary-based theory. Water Resour. Res. 30, 2925-2929. doi:10.1029/94WR01787. Wind, G.P., 1968. Capillary conductivity data estimated by a simple method. In: Water in the Unsaturated Zone, Proceedings of Wageningen Symposium, June 1966, Vol. 1 (eds P.E. Rijtema & H. Wassink), pp. 181-191, IASAH, Gentbrugge, Belgium.
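The RMSD statistic used above is straightforward to reproduce; a minimal sketch with hypothetical water-content pairs at matching pressure heads:

```python
import numpy as np

# Hypothetical volumetric water contents (cm3/cm3) at matching pressure heads:
theta_lab  = np.array([0.42, 0.38, 0.33, 0.27, 0.21, 0.15])   # evaporation method
theta_best = np.array([0.40, 0.35, 0.30, 0.25, 0.20, 0.14])   # BEST-slope estimate

rmsd = np.sqrt(np.mean((theta_best - theta_lab) ** 2))
print(f"RMSD = {rmsd:.3f} cm3/cm3")   # comparable to the 0.028-0.082 range reported
```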
Mapping Bone Mineral Density Obtained by Quantitative Computed Tomography to Bone Volume Fraction
NASA Technical Reports Server (NTRS)
Pennline, James A.; Mulugeta, Lealem
2017-01-01
Methods for relating or mapping estimates of volumetric Bone Mineral Density (vBMD) obtained by Quantitative Computed Tomography to Bone Volume Fraction (BVF) are outlined mathematically. The methods are based on definitions of bone properties, cited experimental studies, and regression relations derived from them for trabecular bone in the proximal femur. Using an experimental range of values in the intertrochanteric region obtained from male and female human subjects aged 18 to 49, the BVF values calculated from four different methods were compared to the experimental average and numerical range. The BVF values computed from the conversion method used data from two sources. One source provided pre-bed-rest vBMD values in the intertrochanteric region from 24 bed rest subjects who participated in a 70-day study. Another source contained preflight vBMD values from 18 astronauts who spent 4 to 6 months on the ISS. To aid the use of a mapping from BMD to BVF, the discussion includes how to formulate the mappings for the purpose of computational modeling. One application of the conversions is to aid in modeling time-varying changes in vBMD as they relate to changes in BVF via bone remodeling and/or modeling.
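A minimal sketch of one possible vBMD-to-BVF mapping, assuming (hypothetically) a fixed mineral density for fully mineralized trabecular tissue; the constant below is a placeholder, not one of the paper's regression-derived relations:

```python
# Assumed mineral density of fully mineralized bone tissue (mg HA per cm^3);
# this value is a placeholder chosen for illustration only.
RHO_TISSUE_MINERAL = 1200.0

def bvf_from_vbmd(vbmd_mg_cm3: float) -> float:
    """Bone volume fraction from QCT volumetric BMD (mg HA/cm^3)."""
    return vbmd_mg_cm3 / RHO_TISSUE_MINERAL

# e.g., a hypothetical intertrochanteric vBMD of 180 mg HA/cm^3:
print(f"BVF = {bvf_from_vbmd(180.0):.3f}")
```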
Blind estimation of reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.
2003-11-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing aids and telephony, are expected to have the ability to characterize the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
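A minimal sketch of the maximum-likelihood decay-constant estimate on a single free-decay segment, assuming the damped-Gaussian model described above; the sampling rate and true RT are hypothetical, and the full method adds continuous estimation plus an order-statistics filter:

```python
import numpy as np
from scipy.optimize import minimize_scalar

fs = 8000
t = np.arange(fs) / fs                       # 1 s of free decay
tau_true = 0.058                             # decay time constant (s); RT60 = 6.91*tau
rng = np.random.default_rng(2)
y = np.exp(-t / tau_true) * rng.standard_normal(t.size)   # damped Gaussian noise

def neg_log_lik(tau):
    # Model y_i ~ N(0, sigma^2 * a_i^2) with a_i = exp(-t_i/tau); sigma^2 has
    # a closed-form ML estimate given tau, leaving a 1-D search over tau.
    a = np.exp(-t / tau)
    sigma2 = np.mean((y / a) ** 2)
    return np.sum(np.log(a)) + 0.5 * t.size * np.log(sigma2)

fit = minimize_scalar(neg_log_lik, bounds=(0.01, 0.5), method="bounded")
print(f"RT60 estimate: {6.91 * fit.x:.2f} s (true {6.91 * tau_true:.2f} s)")
```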
Online estimation of room reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.
2003-04-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad
2013-01-14
Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the early important months of life. The objectives of this study were to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. A total of 70737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004 for 20 days from those attending community clinics for routine health checks as part of a national survey. Anthropometric measurements were done by trained health staff using WHO methodology. The nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used for estimation of the curves and normal values. The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Among all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. The differences between growth patterns of children living in the northeast of Iran and international ones necessitate using local and regional growth charts. International normal values may not properly recognize the populations at risk for growth problems in Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values.
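A minimal sketch of local constant kernel estimation of a conditional quantile curve; the ages, weights, and bandwidth are synthetic stand-ins for the survey data:

```python
import numpy as np

def kernel_quantile(x0, x, y, q=0.5, h=2.0):
    """Local-constant conditional quantile at x0: the weighted q-quantile
    of y, with Gaussian kernel weights computed in the age coordinate."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / np.sum(w)
    return y[order][np.searchsorted(cw, q)]

rng = np.random.default_rng(3)
age = rng.uniform(0, 60, 5000)                      # age in months (synthetic)
weight = 3.3 + 0.35 * age - 0.0025 * age**2 + rng.normal(0, 1.0, age.size)

ages = np.arange(0, 61, 6)
p50 = [kernel_quantile(a, age, weight, q=0.5) for a in ages]
print(np.round(p50, 1))   # median weight-for-age curve at 6-month steps
```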
Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad
2013-01-01
Introduction: Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the early important months of life. The objectives of this study were to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. Methods: A total of 70737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004 for 20 days from those attending community clinics for routine health checks as part of a national survey. Anthropometric measurements were done by trained health staff using WHO methodology. The nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used for estimation of the curves and normal values. Results: The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Among all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. Conclusion: The differences between growth patterns of children living in the northeast of Iran and international ones necessitate using local and regional growth charts. International normal values may not properly recognize the populations at risk for growth problems in Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values. PMID:23618470
On the unification of geodetic leveling datums using satellite altimetry
NASA Technical Reports Server (NTRS)
Mather, R. S.; Rizos, C.; Morrison, T.
1978-01-01
Techniques are described for determining the height of Mean Sea Level (MSL) at coastal sites from satellite altimetry. Such information is of value in the adjustment of continental leveling networks. Numerical results are obtained from the 1977 GEOS-3 altimetry data bank at Goddard Space Flight Center using the Bermuda calibration of the altimeter. Estimates are made of the heights of MSL at the leveling datums for Australia and a hypothetical Galveston datum for central North America. The results obtained are in reasonable agreement with oceanographic estimates obtained by extrapolation. It is concluded that all gravity data in the Australian bank AUSGAD 76 and in the Rapp data file for central North America refer to the GEOS-3 altimeter geoid for 1976.0 with uncertainties that do not exceed ±0.1 mGal.
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and that were used in the regression analyses had contributing-drainage areas ranging from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction ranging from 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at gaging stations, where available, were used to interpolate the regression-estimated flows for the stream locations. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
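A minimal sketch of the log-space regression approach: fit a linear model to log-transformed flows and basin characteristics, then apply it at an ungaged site. All basin values below are hypothetical, and the real study also used Tobit analyses and more predictors:

```python
import numpy as np

# Hypothetical gaged basins: drainage area (mi^2), mean annual precip (in),
# and the observed 50-percent-duration flow (ft^3/s).
area   = np.array([25.0, 140.0, 600.0, 1200.0, 4800.0, 9000.0])
precip = np.array([18.0, 22.0, 26.0, 30.0, 33.0, 38.0])
q50    = np.array([0.4, 3.0, 20.0, 55.0, 300.0, 900.0])

# Least-squares fit in log10 space: log Q50 = b0 + b1*log A + b2*log P.
X = np.column_stack([np.ones(area.size), np.log10(area), np.log10(precip)])
b, *_ = np.linalg.lstsq(X, np.log10(q50), rcond=None)

# Estimate Q50 at an ungaged site with A = 800 mi^2, P = 28 in.
q_est = 10 ** (b @ [1.0, np.log10(800.0), np.log10(28.0)])
print(f"Q50 estimate: {q_est:.0f} ft^3/s")
```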
New Approaches to Robust Confidence Intervals for Location: A Simulation Study.
1984-06-01
obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined
NASA Technical Reports Server (NTRS)
Kurmanaliyev, T. I.; Breslavets, A. V.
1974-01-01
The difficulties in obtaining exact calculation data for the labor input and estimated cost are noted. A method is proposed for calculating the labor cost of design work using provisional normative indexes for individual types of operations. Values of certain coefficients recommended for use in practical calculations of the labor input for the development of new scientific equipment for space research are presented.
NASA Technical Reports Server (NTRS)
MacConochie, Ian O.; White, Nancy H.; Mills, Janelle C.
2004-01-01
A program entitled Weights, Areas, and Mass Properties (WAMI) is centered on an array of menus that contain constants for use in various mass estimating relationships, for the ultimate purpose of obtaining the mass properties of Earth-to-Orbit transports. Current Shuttle mass property data were relied upon heavily for the baseline equation constants from which other options were derived.
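For readers unfamiliar with mass estimating relationships (MERs): they are typically simple power laws whose constants are tuned to reference-vehicle data, as in this sketch. The constants below are placeholders, not WAMI or Shuttle values:

```python
# Illustrative MER of the power-law form commonly used in conceptual
# vehicle sizing: subsystem mass = a * (sizing driver)^b.
def mer_mass(driver: float, a: float, b: float) -> float:
    """Subsystem mass (kg) from a sizing driver via m = a * driver**b."""
    return a * driver ** b

# Hypothetical example: wing mass from exposed wing area.
wing_area_m2 = 250.0
print(f"wing mass ~ {mer_mass(wing_area_m2, a=12.0, b=1.1):,.0f} kg")
```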
Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; Essa, K. S.
2015-02-01
We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
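A minimal sketch of fitting a fault-type anomaly by least squares. The forward model below is only a stand-in with the characteristic arctan step shape of a fault gravity anomaly; the authors' exact formulation and their successive (depth, then dip, then amplitude) scheme on moving-average residuals are not reproduced here, and the parameters are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in forward model: an arctan-shaped step parameterized by depth z (m),
# dip angle theta (rad), and amplitude coefficient A.
def g(x, z, theta, A):
    return A * (np.pi / 2.0 + np.arctan((x + z / np.tan(theta)) / z))

x = np.linspace(-2000.0, 2000.0, 81)                  # profile coordinate (m)
z0, th0, A0 = 300.0, np.deg2rad(60.0), 0.8            # hypothetical "true" fault
rng = np.random.default_rng(4)
data = g(x, z0, th0, A0) + rng.normal(0.0, 0.01, x.size)   # noise-corrupted anomaly

# Joint nonlinear least squares over all three parameters.
(z_e, th_e, A_e), _ = curve_fit(g, x, data, p0=[200.0, np.deg2rad(45.0), 1.0])
print(f"depth {z_e:.0f} m, dip {np.degrees(th_e):.1f} deg, amplitude {A_e:.2f}")
```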
Bias in estimating accuracy of a binary screening test with differential disease verification
Brinton, John T.; Ringham, Brandy M.; Glueck, Deborah H.
2011-01-01
Sensitivity, specificity, and positive and negative predictive value are typically used to quantify the accuracy of a binary screening test. In some studies it may not be ethical or feasible to obtain definitive disease ascertainment for all subjects using a gold standard test. When a gold standard test cannot be used, an imperfect reference test that is less than 100% sensitive and specific may be used instead. In breast cancer screening, for example, follow-up for cancer diagnosis is used as an imperfect reference test for women for whom it is not possible to obtain gold standard results. This incomplete ascertainment of true disease, or differential disease verification, can result in biased estimates of accuracy. In this paper, we derive the apparent accuracy values for studies subject to differential verification. We determine how the bias is affected by the accuracy of the imperfect reference test, the percentage of subjects not receiving the gold standard who receive the imperfect reference test, the prevalence of the disease, and the correlation between the results of the screening test and the imperfect reference test. It is shown that designs with differential disease verification can yield biased estimates of accuracy. Estimates of sensitivity in cancer screening trials may be substantially biased. However, careful design decisions, including selection of the imperfect reference test, can help to minimize bias. A hypothetical breast cancer screening study is used to illustrate the problem. PMID:21495059
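A minimal simulation sketch of the bias mechanism, assuming conditional independence between the screening and reference tests given disease status (the paper also treats correlated tests); all accuracy values and the prevalence are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
N, prev = 200_000, 0.01
sens, spec = 0.85, 0.95            # true screening-test accuracy (assumed)
ref_sens, ref_spec = 0.90, 0.99    # imperfect reference (follow-up) accuracy

D = rng.random(N) < prev                                        # true disease status
T = np.where(D, rng.random(N) < sens, rng.random(N) >= spec)    # screening result

# Differential verification: screen-positives get the gold standard;
# screen-negatives are classified only by the imperfect reference test.
ref = np.where(D, rng.random(N) < ref_sens, rng.random(N) >= ref_spec)
apparent_disease = np.where(T, D, ref)

apparent_sens = np.mean(T & apparent_disease) / np.mean(apparent_disease)
print(f"apparent sensitivity: {apparent_sens:.3f} vs true {sens}")
```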
A new slit lamp-based technique for anterior chamber angle estimation.
Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A
2014-06-01
To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlation between ACA values as determined by gonioscopy and SLACE were 0.81 (p < 0.001) and 0.79 (p < 0.001) for each slit lamp. Sensitivity values of 100 and 87.5% and specificity values of 75 and 81.2%, depending on the slit lamp used, were obtained for the SLACE technique as compared with gonioscopy (Spaeth classification). The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.
Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization
Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan
2017-01-01
In this paper, we consider the direction of arrival (DOA) estimation issue of noncircular (NC) sources in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of signals are used to double the virtual array aperture, and real-valued data are obtained by utilizing a unitary transformation. Then a real-valued block sparse model is established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve enhanced sparsity of the solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because the noncircular properties of the signals are used to extend the virtual array aperture and an additional real structure is used to suppress the noise, the proposed method provides better performance than conventional sparse-recovery-based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770
Barzaghi, Riccardo; Carrion, Daniela; Pepe, Massimiliano; Prezioso, Giuseppina
2016-07-26
Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with the one based on local gravity data and collocation methods. In particular, denoted by ξ and η, the two mutually-perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), their values were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations.
Barzaghi, Riccardo; Carrion, Daniela; Pepe, Massimiliano; Prezioso, Giuseppina
2016-01-01
Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with the one based on local gravity data and collocation methods. In particular, denoted by ξ and η, the two mutually-perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), their values were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations. PMID:27472333
NASA Astrophysics Data System (ADS)
Samanta, Suman; Patra, Pulak Kumar; Banerjee, Saon; Narsimhaiah, Lakshmi; Sarath Chandran, M. A.; Vijaya Kumar, P.; Bandyopadhyay, Sanjib
2018-06-01
In developing countries like India, global solar radiation (GSR) is measured at very few locations due to the non-availability of radiation measuring instruments. To overcome the inadequacy of GSR measurements, scientists have developed many empirical models to estimate location-wise GSR. In the present study, three simple forms of the Angstrom equation [Angstrom-Prescott (A-P), Ogelman, and Bahel] were used to estimate GSR at six geographically and climatologically different locations across India, with the objective of finding a set of common constants usable for the whole country. Results showed that GSR values varied from 9.86 to 24.85 MJ m-2 day-1 across stations. It was also observed that the A-P model showed smaller errors than the Ogelman and Bahel models. All the models estimated GSR well, as the 1:1 line between measured and estimated values showed Nash-Sutcliffe efficiency (NSE) values ≥ 0.81 for all locations. Measured GSR data pooled over the six selected locations were analyzed to obtain a new set of constants for the A-P equation applicable throughout the country. The set of constants (a = 0.29 and b = 0.40) was named "One India One Constant (OIOC)," and the model was named "MOIOC." Furthermore, the developed constants were validated statistically for another six locations of India and produced close estimates. High R² values (≥ 76%) along with low mean bias error (MBE) ranging from -0.64 to 0.05 MJ m-2 day-1 revealed that the new constants are able to predict GSR with a small percentage of error.
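A minimal sketch of fitting the Angstrom-Prescott constants, H/H0 = a + b(n/N), by ordinary least squares; the sunshine-fraction and clearness-index samples below are hypothetical and were constructed to be consistent with the OIOC constants:

```python
import numpy as np

# Angstrom-Prescott: H/H0 = a + b*(n/N), with H the measured GSR, H0 the
# extraterrestrial radiation, and n/N the relative sunshine duration.
n_over_N  = np.array([0.35, 0.45, 0.55, 0.62, 0.70, 0.78])
H_over_H0 = np.array([0.43, 0.47, 0.51, 0.54, 0.57, 0.60])   # hypothetical

b, a = np.polyfit(n_over_N, H_over_H0, 1)       # slope b, intercept a
print(f"fitted a = {a:.2f}, b = {b:.2f}")       # cf. OIOC: a = 0.29, b = 0.40

# Estimate GSR for a day with H0 = 35 MJ m^-2 and n/N = 0.6:
print(f"GSR ~= {35 * (a + b * 0.6):.1f} MJ m^-2 day^-1")
```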
Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.
Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison
2016-11-01
MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.
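A minimal sketch of the iterative match between measured and simulated SAR patterns, with a toy axial beam model standing in for the Hybrid Angular Spectrum simulator; the focal geometry, property values, and noise-free "measurement" are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

z = np.linspace(0.0, 0.05, 50)                    # depth along beam axis (m)

def simulate_sar(alpha, c):
    """Toy SAR model: exponential absorption loss times a Gaussian focal
    lobe whose position shifts with the speed of sound (illustrative only)."""
    focus = 0.03 * (1540.0 / c)                   # assumed focal-shift rule
    return np.exp(-2.0 * alpha * z) * np.exp(-((z - focus) / 0.005) ** 2)

sar_meas = simulate_sar(8.0, 1520.0)              # "measured" pattern (alpha in Np/m)

def misfit(p):
    return np.sum((simulate_sar(*p) - sar_meas) ** 2)

# Iteratively adjust (absorption, speed of sound) to minimize the SAR misfit.
fit = minimize(misfit, x0=[5.0, 1540.0], method="Nelder-Mead")
print(f"alpha = {fit.x[0]:.2f} Np/m, c = {fit.x[1]:.0f} m/s")
```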
Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wildenschild, D; Berge, P A; Berryman, K G
1999-01-15
The authors obtained good estimates of measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), using sand and porous peat to represent the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Additional refining of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.
NASA Astrophysics Data System (ADS)
Yeghikyan, Ararat
2018-04-01
Based on the analogy between the interacting stellar winds of planetary nebulae and WR nebulae, on the one hand, and the heliosphere and the expanding envelopes of supernovae, on the other, an attempt is made to calculate the differential intensity of energetic protons accelerated to energies of 100 MeV by the shock wave. The proposed one-parameter formula for estimating the intensity at 1-100 MeV, when applied to the heliosphere, shows good agreement with the Voyager-1 data, to within a factor of less than 2. The same estimate for planetary (and WR) nebulae yields a value 7-8 (3-4) orders of magnitude higher than the mean galactic intensity value. The obtained estimate of the intensity of energetic protons in these kinds of nebulae was used to estimate the irradiation doses of certain substances, in order to show that such accelerated particles play an important role in radiation-chemical transformations in such nebulae.
The tangential velocity of M31: CLUES from constrained simulations
NASA Astrophysics Data System (ADS)
Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Courtois, Hélène; Tully, R. Brent
2016-07-01
Determining the precise value of the tangential component of the velocity of M31 is a non-trivial astrophysical issue that relies on complicated modelling. This has recently led to conflicting estimates, obtained by several groups that used different methodologies and assumptions. This Letter addresses the issue by computing a Bayesian posterior distribution function of this quantity, in order to measure the compatibility of those estimates with Λ cold dark matter (ΛCDM). This is achieved using an ensemble of Local Group (LG) look-alikes collected from a set of constrained simulations (CSs) of the local Universe, and a standard unconstrained ΛCDM simulation. The latter allows us to build a control sample of LG-like pairs and to single out the influence of the environment on our results. We find that neither estimate is at odds with ΛCDM; however, whereas CSs favour higher values of vtan, the reverse is true for estimates based on LG samples gathered from unconstrained simulations, overlooking the environmental element.
Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model
Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz
2014-01-01
Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels by the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from estimates of intermediate variables and do not yield an optimum estimate of oxygen tension values, due to their nonlinear dependence on the ratio of intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits available knowledge from studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach, where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance both quantitatively and qualitatively is demonstrated. PMID:23732915
Slope angle estimation method based on sparse subspace clustering for probe safe landing
NASA Astrophysics Data System (ADS)
Li, Haibo; Cao, Yunfeng; Ding, Meng; Zhuang, Likui
2018-06-01
To avoid planetary probes landing on steep slopes where they may slip or tip over, a new method of slope angle estimation based on sparse subspace clustering is proposed to improve accuracy. First, a coordinate system is defined and established to describe the measured data of light detection and ranging (LIDAR). Second, these data are processed and expressed with a sparse representation. Third, on this basis, the data are clustered to determine which subspace they belong to. Fourth, after eliminating outliers in each subspace, the remaining data points are used to fit planes. Finally, the vectors normal to the planes are obtained using the plane model, and the angle between the normal vectors is computed. Based on the geometric relationship, this angle is equal in value to the slope angle. The proposed method was tested in a series of experiments. The experimental results show that this method can effectively estimate the slope angle, overcome the influence of noise, and obtain an accurate slope angle. Compared with other methods, this method can minimize the measuring errors and further improve the estimation accuracy of the slope angle.
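A minimal sketch of the final two steps (plane fitting and the angle between normals), assuming the clustering stage has already separated the point sets; the synthetic 12-degree slope and the noise level are hypothetical:

```python
import numpy as np

def fit_plane_normal(points):
    """Unit normal of the best-fit plane through Nx3 points, via SVD:
    the right singular vector of the smallest singular value."""
    centered = points - points.mean(axis=0)
    return np.linalg.svd(centered)[2][-1]

rng = np.random.default_rng(7)
xy = rng.uniform(0.0, 1.0, (200, 2))
ground = np.column_stack([xy, 0.0 * xy[:, 0]])                       # level reference
slope  = np.column_stack([xy, np.tan(np.deg2rad(12.0)) * xy[:, 0]])  # 12 deg slope
slope += 0.002 * rng.standard_normal(slope.shape)                    # LIDAR noise

n1, n2 = fit_plane_normal(ground), fit_plane_normal(slope)
angle = np.degrees(np.arccos(np.clip(abs(n1 @ n2), -1.0, 1.0)))
print(f"estimated slope angle: {angle:.1f} deg")
```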
NASA Astrophysics Data System (ADS)
Soret, Marine; Alaoui, Jawad; Koulibaly, Pierre M.; Darcourt, Jacques; Buvat, Irène
2007-02-01
Objectives: Partial volume effect (PVE) is a major source of bias in brain SPECT imaging of the dopamine transporter. Various PVE corrections (PVC) making use of anatomical data have been developed and yield encouraging results. However, their accuracy in clinical data is difficult to demonstrate because the gold standard (GS) is usually unknown. The objective of this study was to assess the accuracy of PVC. Method: Twenty-three patients underwent MRI and 123I-FP-CIT SPECT. The binding potential (BP) values were measured in the striata segmented on the MR images after coregistration to the SPECT images. These values were calculated without and with an original PVC. In addition, for each patient, a Monte Carlo simulation of the SPECT scan was performed. For these simulations, where the true simulated BP values were known, percent biases in BP estimates were calculated. For the real data, an evaluation method that simultaneously estimates the GS and a quadratic relationship between the observed and GS values was used. It yields a surrogate mean square error (sMSE) between the estimated values and the estimated GS values. Results: The averaged percent difference between BP measured for real and for simulated patients was 0.7±9.7% without PVC and -8.5±14.5% with PVC, suggesting that the simulated data reproduced the real data well enough. For the simulated patients, BP was underestimated by 66.6±9.3% on average without PVC and overestimated by 11.3±9.5% with PVC, demonstrating the greater accuracy of BP estimates with PVC. For the simulated data, sMSE was 27.3 without PVC and 0.90 with PVC, confirming that our sMSE index properly captured the greater accuracy of BP estimates with PVC. For the real patient data, sMSE was 50.8 without PVC and 3.5 with PVC. These results were consistent with those obtained on the simulated data, suggesting that for clinical data, and despite probable segmentation and registration errors, BP was more accurately estimated with PVC than without. Conclusion: PVC was very efficient at reducing the error in BP estimates in clinical imaging of the dopamine transporter.
Werb, Dan; Nosyk, Bohdan; Kerr, Thomas; Fischer, Benedikt; Montaner, Julio; Wood, Evan
2012-11-01
British Columbia (BC), Canada, is home to a large illegal cannabis industry that is known to contribute to substantial organized crime concerns. Although debates have emerged regarding the potential benefits of a legally regulated market to address a range of drug policy-related social problems, the value of the local (i.e., domestically consumed) cannabis market has not been characterized. Monte Carlo simulation methods were used to generate a median value and 95% credibility interval for retail expenditure estimates of the domestic cannabis market in BC. Model parameter estimates were obtained for the number of cannabis users, the frequency of cannabis use, the quantity of cannabis used, and the price of cannabis from government surveillance data and studies of BC cannabis users. The median annual estimated retail expenditure on cannabis by British Columbians was $407 million (95% Credibility Interval [CI]: $169-948 million). Daily users accounted for the bulk of the cannabis revenue, with a median estimated expenditure of approximately $357 million (95% CI: $149-845 million), followed by weekly users ($44 million, 95% CI: $18-90 million), and monthly users ($6 million, 95% CI: $3-12 million). When under-reporting of cannabis use was adjusted for, the estimated retail expenditure ranged from $443 million (95% CI: $185-1 billion) to $564 million (95% CI: $236-1.3 billion). Based on local consumption patterns, conservative estimates suggest that BC's domestic illegal cannabis trade is worth hundreds of millions of dollars annually. Given the value of this market and the failure and harms of law enforcement efforts to control the cannabis market, policymakers should consider regulatory alternatives. Copyright © 2012 Elsevier B.V. All rights reserved.
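A minimal Monte Carlo sketch of the expenditure estimate, multiplying sampled user counts, use frequency, quantity, and price and reading off the median and 95% credibility interval; all parameter distributions below are hypothetical placeholders for the surveillance-derived inputs:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
# Hypothetical input distributions (the study derived these from BC data):
users = rng.normal(100_000, 20_000, n).clip(min=0)   # daily cannabis users
days  = rng.uniform(250, 365, n)                     # use-days per year
grams = rng.lognormal(0.0, 0.5, n)                   # grams per use-day
price = rng.normal(8.0, 1.5, n).clip(min=1)          # CAD per gram

expenditure = users * days * grams * price
lo, med, hi = np.percentile(expenditure, [2.5, 50, 97.5])
print(f"median ${med/1e6:.0f}M (95% CrI ${lo/1e6:.0f}M-${hi/1e6:.0f}M)")
```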
Abbott, Marvin M.; DeHay, Kelli
2008-01-01
The Ada-Vamoosa aquifer of northeastern Oklahoma is a sedimentary bedrock aquifer of Pennsylvanian age that crops out over 800 square miles of the Osage Reservation. The Osage Nation needed additional information regarding the production potential of the aquifer to aid them in future development planning. To address this need, the U.S. Geological Survey, in cooperation with the Osage Nation, conducted a study of aquifer properties in the Ada-Vamoosa aquifer. This report presents the results of the aquifer tests from 20 wells in the Ada-Vamoosa aquifer and one well in a minor aquifer east of the Ada-Vamoosa outcrop on the Osage Reservation. Well information for 17 of the 21 wells in this report was obtained from the Indian Health Service. Data collected by the U.S. Geological Survey during this investigation are pumping well data from four domestic wells collected during the summer of 2006. Transmissivity values were calculated from well pumping data or were estimated from specific capacity values depending on the reliability of the data. The estimated transmissivity values are 1.1 to 4.3 times greater than the calculated transmissivity values. The calculated and estimated transmissivity values range from 5 to 1,000 feet squared per day.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landefeld, T.D.; Byrne, M.D.; Campbell, K.L.
1981-12-01
The alpha- and beta-subunits of hCG were radioiodinated and recombined with unlabeled complementary subunits. The resultant recombined hormones, selectively labeled in either the alpha- or beta-subunit, were separated from unrecombined subunit by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, extracted with Triton X-100, and characterized by binding analysis. The estimates of maximum binding (active fraction) of the two resultant selectively labeled, recombined hCG preparations, determined with excess receptor, were 0.41 and 0.59. These values are similar to those obtained when hCG is labeled as an intact molecule. The specific activities of the recombined preparations were estimated by four different methods, and the resulting values were used in combination with the active fraction estimates to determine the concentrations of active free and bound hormone. Binding analyses were run using varying concentrations of both labeled and unlabeled hormone. Estimates of the equilibrium dissociation binding constant (Kd) and receptor capacity were calculated in three different ways. The mean estimates of capacity (52.6 and 52.7 fmol/mg tissue) and Kd (66.6 and 65.7 pM) for the two preparations were indistinguishable. Additionally, these values were similar to values reported previously for hCG radioiodinated as an intact molecule. The availability of well characterized, selectively labeled hCG preparations provides new tools for studying the mechanism of action and the target cell processing of the subunits of this hormone.
[Quantitative estimation of evapotranspiration from Tahe forest ecosystem, Northeast China].
Qu, Di; Fan, Wen-Yi; Yang, Jin-Ming; Wang, Xu-Peng
2014-06-01
Evapotranspiration (ET) is an important parameter in agriculture, meteorology and hydrology research, and also an important part of the global hydrological cycle. This paper applied the improved DHSVM distributed hydrological model to estimate the daily ET of the Tahe area in 2007, using leaf area index and other surface data extracted from TM remote sensing data, and slope, aspect and other topographic indices obtained from the digital elevation model. The relationship between daily ET and daily watershed outlet flow was built by a BP neural network, and a water balance equation was established for the studied watershed, together used to test the accuracy of the estimation. The results showed that the model could be applied in the study area. The annual total ET of the Tahe watershed was 234.01 mm. ET had a significant seasonal variation, with the highest value in summer, when the average daily ET was 1.56 mm. The average daily ET in autumn and spring was 0.30 and 0.29 mm, respectively, and winter had the lowest ET value. Land cover type had a great effect on ET, with broadleaf forest having a higher ET than mixed forest, followed by needleleaf forest.
Liu, Kailin; Cao, Zhengya; Pan, Xiong; Yu, Yunlong
2012-08-01
The phytotoxicity of an herbicide in soil is typically dependent on the soil characteristics. To obtain a comparable value of the concentration that inhibits growth by 50% (IC50), 0.01 M CaCl2, excess pore water (EPW) and in situ pore water (IPW) were used to extract the bioavailable fraction of nicosulfuron from five different soils to estimate the nicosulfuron phytotoxicity to corn (Zea mays L.). The results indicated that the phytotoxicity of nicosulfuron in soils to corn depended on the soil type, and the IC50 values calculated based on the amended concentration of nicosulfuron ranged from 0.77 to 9.77 mg/kg among the five tested soils. The range of variation in IC50 values for nicosulfuron was smaller when the concentrations of nicosulfuron extracted with 0.01 M CaCl2 and EPW were used instead of the amended concentration. No significant difference was observed among the IC50 values calculated from the IPW concentrations of nicosulfuron in the five tested soils, suggesting that the concentration of nicosulfuron in IPW could be used to estimate the phytotoxicity of residual nicosulfuron in soils. Copyright © 2012 SETAC.
Klement, Aleš; Kodešová, Radka; Bauerová, Martina; Golovko, Oksana; Kočárek, Martin; Fér, Miroslav; Koba, Olga; Nikodem, Antonín; Grabic, Roman
2018-03-01
The sorption of 3 pharmaceuticals, which may exist in 4 different forms depending on the solution pH (irbesartan in cationic, neutral and anionic forms, fexofenadine in cationic, zwitter-ionic and anionic forms, and citalopram in cationic and neutral forms), was studied in seven different soils. The measured sorption isotherms were described by Freundlich equations, and the sorption coefficients, KF (for a fixed n exponent for each compound), were related to the soil properties to derive relationships for estimating the sorption coefficients from the soil properties (i.e., pedotransfer rules). The largest sorption was obtained for citalopram (average KF value for n = 1 was 1838 cm3 g-1), followed by fexofenadine (KF = 35.1 cm3/n μg1-1/n g-1, n = 1.19) and irbesartan (KF = 3.96 cm3/n μg1-1/n g-1, n = 1.10). The behavior of citalopram (CIT) in soils was different from the behaviors of irbesartan (IRB) and fexofenadine (FEX). Different trends were documented by the correlation coefficients between the KF values for the different compounds (R(IRB,FEX) = 0.895, p-value < 0.01; R(IRB,CIT) = -0.835, p-value < 0.05; R(FEX,CIT) = -0.759, p-value < 0.05) and by the reverse relationships between the KF values and soil properties in the pedotransfer functions. While the KF value for citalopram was positively related to base cation saturation (BCS) or sorption complex saturation (SCS) and negatively correlated with the organic carbon content (Cox), the KF values of irbesartan and fexofenadine were negatively related to BCS, SCS or the clay content and positively related to Cox. The best estimates were obtained by combining BCS and Cox for citalopram (R2 = 93.4), SCS and Cox for irbesartan (R2 = 96.3), and clay content and Cox for fexofenadine (R2 = 82.9). Copyright © 2017 Elsevier Ltd. All rights reserved.
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of the daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature than for daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values. For daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, the station nearest to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available for the period 1984 to 2000 due to frequent disruptions caused by civil unrest and hostilities in the region.
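A minimal sketch of the standard-departure approach: average the neighbours' standardized anomalies and rescale by the target station's climatology. The station means and standard deviations below are hypothetical:

```python
import numpy as np

# One missing day's Tmax at the target, reconstructed from four neighbours
# (Mannar, Anuradhapura, Puttalam, Trincomalee) via standard departures.
neighbour_tmax = np.array([33.8, 34.6, 35.1, 33.2])   # observed Tmax (deg C)
neighbour_mean = np.array([33.0, 34.0, 34.5, 32.5])   # long-term daily means
neighbour_sd   = np.array([1.2, 1.4, 1.3, 1.1])       # long-term daily SDs

z = np.mean((neighbour_tmax - neighbour_mean) / neighbour_sd)
target_mean, target_sd = 32.8, 1.3                    # Jaffna climatology (assumed)
print(f"estimated Jaffna Tmax: {target_mean + z * target_sd:.1f} deg C")
```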
Measurement of lung expansion with computed tomography and comparison with quantitative histology.
Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C
1995-11-01
The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 ± 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 ± 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.
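A minimal sketch of the standard quantitative-CT partitioning of voxel attenuation into gas and tissue fractions, from which milliliters of gas per gram of tissue follow; the Hounsfield values are hypothetical:

```python
import numpy as np

# A lung voxel's attenuation (HU) is treated as a linear mix of air
# (-1000 HU) and water-equivalent tissue (~0 HU).
hu = np.array([-900.0, -850.0, -750.0])      # e.g., nondependent to dependent

air_frac    = -hu / 1000.0                   # volume fraction of gas
tissue_frac = 1.0 + hu / 1000.0              # volume fraction of tissue
# Gas per gram of tissue (mL/g), taking tissue density as ~1 g/mL:
gas_per_gram = air_frac / tissue_frac
print(np.round(gas_per_gram, 2))             # larger values in nondependent lung
```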
Height and Weight Estimation From Anthropometric Measurements Using Machine Learning Regressions
Fernandes, Bruno J. T.; Roque, Alexandre
2018-01-01
Height and weight are measurements used for tracking nutritional diseases, energy expenditure, clinical conditions, drug dosages, and infusion rates. Many patients are not ambulant or may be unable to communicate, and a combination of these factors may not allow accurate measurement; in those cases, height and weight can be estimated approximately by anthropometric means. Different groups have proposed different linear or non-linear equations whose coefficients are obtained by using single or multiple linear regressions. In this paper, we present a complete study of the application of different learning models to estimate height and weight from anthropometric measurements: support vector regression, Gaussian processes, and artificial neural networks. The predicted values are significantly more accurate than those obtained with conventional linear regressions. In all cases, the predictions are insensitive to ethnicity, and to gender if more than two anthropometric parameters are analyzed. The learning model analysis creates new opportunities for anthropometric applications in industry, textile technology, security, and health care. PMID:29651366
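A minimal sketch comparing the three regressor families on synthetic anthropometric data; the predictors (knee height, arm span) and the linear generating relation are hypothetical stand-ins for the study's measurements:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(9)
n = 300
# Hypothetical predictors: knee height and arm span (cm).
X = np.column_stack([rng.normal(50, 4, n), rng.normal(170, 10, n)])
height = 60 + 1.0 * X[:, 0] + 0.35 * X[:, 1] + rng.normal(0, 2, n)

for reg in (SVR(C=100.0), GaussianProcessRegressor(normalize_y=True),
            MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000)):
    model = make_pipeline(StandardScaler(), reg)   # scale features first
    model.fit(X[:200], height[:200])
    mae = np.abs(model.predict(X[200:]) - height[200:]).mean()
    print(f"{type(reg).__name__}: MAE = {mae:.2f} cm")
```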
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
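A minimal sketch of the link between the cross-spectral density matrix, multiple coherence, and the Cholesky factorization, evaluated at a single frequency; the matrix entries are hypothetical:

```python
import numpy as np

# Cross-spectral density matrix at one frequency for 2 inputs + 1 output,
# ordered [x1, x2, y]; Hermitian positive definite (hypothetical values).
G = np.array([[4.0,        1.0 + 0.5j, 2.0 - 0.3j],
              [1.0 - 0.5j, 3.0,        1.5 + 0.2j],
              [2.0 + 0.3j, 1.5 - 0.2j, 3.0       ]])

Gxx, Gxy, Gyy = G[:2, :2], G[:2, 2], G[2, 2].real
# Multiple coherence of y on (x1, x2): Gxy^H Gxx^{-1} Gxy / Gyy.
coh_multiple = (Gxy.conj() @ np.linalg.solve(Gxx, Gxy)).real / Gyy
print(f"multiple coherence: {coh_multiple:.3f}")

# Cholesky view: after factoring G = L L^H, the squared magnitude of the
# last diagonal element is the residual (unexplained) output power,
# i.e., Gyy * (1 - multiple coherence).
L = np.linalg.cholesky(G)
print(f"residual output power: {abs(L[2, 2])**2:.3f}")
```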
Immersion Refractometry of Isolated Bacterial Cell Walls
Marquis, Robert E.
1973-01-01
Immersion-refractometric and light-scattering measurements were adapted to determinations of average refractive indices and physical compactness of isolated bacterial cell walls. The structures were immersed in solutions containing various concentrations of polymer molecules that cannot penetrate into wall pores, and then an estimate was made of the polymer concentration or the refractive index of the polymer solution in which light scattering was reduced to zero. Because each wall preparation was heterogeneous, the refractive index of the medium for zero light scattering had to be estimated by extrapolation. Refractive indices for walls suspended in bovine serum albumin solutions ranged from 1.348 for walls of the rod form of Arthrobacter crystallopoietes to 1.382 for walls of the teichoic acid deficient, 52A5 strain of Staphylococcus aureus. These indices were used to calculate approximate values for solids content per milliliter, and the calculated values agreed closely with those estimated from a knowledge of dextran-impermeable volumes per gram, dry weight, of the walls. When large molecules such as dextrans or serum albumin were used for immersion refractometry, the refractive indices obtained were for entire walls, including both wall polymers and wall water. When smaller molecules that can penetrate wall pores to various extents were used with Micrococcus lysodeikticus walls, the average, apparent refractive index of the structures increased as the molecular size of probing molecules was decreased. It was possible to obtain an estimate of 1.45 to 1.46 for the refractive index of wall polymers, predominantly peptidoglycans in this case, by extrapolating the curve for refractive index versus molecular radius to a value of 0.2 nm, the approximate radius of a water molecule. This relatively low value for polymer refractive index was interpreted as evidence in favor of the amorphous, elastic model of peptidoglycan structure and against the crystalline, rigid model. PMID:4201772
Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P
2014-04-16
Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate the accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross-validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R² = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R² = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and candidates for selection results in lower accuracy of genomic evaluation.
Guillaume, François; Fritz, Sébastien; Boichard, Didier; Druet, Tom
2008-01-01
The efficiency of the French marker-assisted selection (MAS) was estimated by a simulation study. The data files of two different time periods were used: April 2004 and 2006. The simulation method used the structure of the existing French MAS: same pedigree, same marker genotypes and same animals with records. The program simulated breeding values and new records based on this existing structure and knowledge on the QTL used in MAS (variance and frequency). Reliabilities of genetic values of young animals (less than one year old) obtained with and without marker information were compared to assess the efficiency of MAS for evaluation of milk, fat and protein yields and fat and protein contents. Mean gains of reliability ranged from 0.015 to 0.094 and from 0.038 to 0.114 in 2004 and 2006, respectively. The larger number of animals genotyped and the use of a new set of genetic markers can explain the improvement of MAS reliability from 2004 to 2006. This improvement was also observed by analysis of information content for young candidates. The gain of MAS reliability with respect to classical selection was larger for sons of sires with genotyped progeny daughters with records. Finally, it was shown that when superiority of MAS over classical selection was estimated with daughter yield deviations obtained after progeny test instead of true breeding values, the gain was underestimated. PMID:18096117
Simulation of the airwave caused by the Chelyabinsk superbolide
NASA Astrophysics Data System (ADS)
Avramenko, Mikhail I.; Glazyrin, Igor V.; Ionov, Gennady V.; Karpeev, Artem V.
2014-06-01
Numerical simulations were carried out to model the propagation of the airwave from the fireball that passed over Chelyabinsk (Russia) on 15 February 2013. The airburst of the Chelyabinsk meteoroid occurred due to its catastrophic fragmentation in the atmosphere. Simulations of the space-time distribution of energy deposition during the airburst were done using a novel fragmentation model based on dimensionality considerations and an analogy to the fission chain reaction in fissile materials. To estimate the airburst energy, observed values of the airwave arrival times at different populated localities were retrieved from video records available on the Internet. The calculated arrival times agree well with the observed values for all the localities. The energy deposition in the atmosphere obtained from observations of the airwave arrival times was found to be 460 ± 60 kt in trinitrotoluene (TNT) equivalent. We also obtained an independent estimate for the deposited energy, 450 (+200/−160) kt TNT, from the air velocity increment due to the wave passage in Chelyabinsk. Assuming that an energy of about 90 kt TNT was radiated in the form of visible light and infrared radiation, as registered with optical sensors [Yeomans and Chodas, 2013], one can estimate the total energy release to be about 550 kt TNT, which is in agreement with previous estimates from infrasound registration and from optical sensor data. The overpressure amplitude and positive-phase duration of the airwave that reached the city of Chelyabinsk were calculated to be about 2 kPa and 10 s, respectively.
NASA Astrophysics Data System (ADS)
Tang, W.; Yang, K.; Sun, Z.; Qin, J.; Niu, X.
2016-12-01
A fast parameterization scheme named SUNFLUX is used in this study to estimate instantaneous surface solar radiation (SSR) based on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard both Terra and Aqua platforms. The scheme mainly takes into account the absorption and scattering processes due to clouds, aerosols and gas in the atmosphere. The estimated instantaneous SSR is evaluated against surface observations obtained from seven stations of the Surface Radiation Budget Network (SURFRAD), four stations in the North China Plain (NCP) and 40 stations of the Baseline Surface Radiation Network (BSRN). The statistical results for evaluation against these three datasets show that the relative root-mean-square error (RMSE) values of SUNFLUX are less than 15%, 16% and 17%, respectively. Daily SSR is derived through temporal upscaling from the MODIS-based instantaneous SSR estimates, and is validated against surface observations. The relative RMSE values for daily SSR estimates are about 16% at the seven SURFRAD stations, four NCP stations, 40 BSRN stations and 90 China Meteorological Administration (CMA) radiation stations.
Wilcox, Thomas P; Zwickl, Derrick J; Heath, Tracy A; Hillis, David M
2002-11-01
Four New World genera of dwarf boas (Exiliboa, Trachyboa, Tropidophis, and Ungaliophis) have been placed by many systematists in a single group (traditionally called Tropidophiidae). However, the monophyly of this group has been questioned in several studies. Moreover, the overall relationships among basal snake lineages, including the placement of the dwarf boas, are poorly understood. We obtained mtDNA sequence data for 12S, 16S, and intervening tRNA-val genes from 23 species of snakes representing most major snake lineages, including all four genera of New World dwarf boas. We then examined the phylogenetic position of these species by estimating the phylogeny of the basal snakes. Our phylogenetic analysis suggests that New World dwarf boas are not monophyletic. Instead, we find Exiliboa and Ungaliophis to be most closely related to sand boas (Erycinae), boas (Boinae), and advanced snakes (Caenophidia), whereas Tropidophis and Trachyboa form an independent clade that separated relatively early in the snake radiation. Our estimate of snake phylogeny differs significantly in other ways from some previous estimates. For instance, pythons do not cluster with boas and sand boas, but instead show a strong relationship with Loxocemus and Xenopeltis. Additionally, uropeltids cluster strongly with Cylindrophis, and together they are embedded in what has previously been considered the macrostomatan radiation. These relationships are supported by both bootstrapping (parametric and nonparametric approaches) and Bayesian analysis, although Bayesian support values are consistently higher than those obtained from nonparametric bootstrapping. Simulations show that Bayesian support values represent much better estimates of phylogenetic accuracy than do nonparametric bootstrap support values, at least under the conditions of our study. Copyright 2002 Elsevier Science (USA)
Estimates of electronic coupling for excess electron transfer in DNA
NASA Astrophysics Data System (ADS)
Voityuk, Alexander A.
2005-07-01
Electronic coupling Vda is one of the key parameters that determine the rate of charge transfer through DNA. While there have been several computational studies of Vda for hole transfer, estimates of electronic couplings for excess electron transfer (ET) in DNA remain unavailable. In this paper, an efficient strategy is established for calculating the ET matrix elements between base pairs in a π stack. Two approaches are considered. First, we employ the diabatic-state (DS) method in which donor and acceptor are represented with radical anions of the canonical base pairs adenine-thymine (AT) and guanine-cytosine (GC). In this approach, similar values of Vda are obtained with the standard 6-31G* and extended 6-31++G** basis sets. Second, the electronic couplings are derived from the lowest unoccupied molecular orbitals (LUMOs) of neutral systems by using the generalized Mulliken-Hush or fragment charge methods. Because the radical-anion states of AT and GC are well reproduced by LUMOs of the neutral base pairs calculated without diffuse functions, the estimated values of Vda are in good agreement with the couplings obtained for radical-anion states using the DS method. However, when the calculation of a neutral stack is carried out with diffuse functions, the LUMOs of the system exhibit dipole-bound character and cannot be used for estimating electronic couplings. Our calculations suggest that the ET matrix elements Vda for models containing intrastrand thymine and cytosine bases are substantially larger than the couplings in complexes with interstrand pyrimidine bases. The matrix elements for excess electron transfer are found to be considerably smaller than the corresponding values for hole transfer and to be very sensitive to structural changes in a DNA stack.
Mazonakis, Michalis; Sahin, Bunyamin; Pagonidis, Konstantin; Damilakis, John
2011-06-01
The aim of this study was to combine the stereological technique with magnetic resonance (MR) imaging data for the volumetric and functional analysis of the left ventricle (LV). Cardiac MR examinations were performed in 13 consecutive subjects with known or suspected coronary artery disease. The end-diastolic volume (EDV), end-systolic volume, ejection fraction (EF), and mass were estimated by stereology using the entire slice set depicting the LV and systematic sampling intensities of 1/2 and 1/3 that provided samples with every second and third slice, respectively. The repeatability of stereology was evaluated. Stereological assessments were compared with the reference values derived by manually tracing the endocardial and epicardial contours on MR images. Stereological EDV and EF estimations obtained by the 1/3 systematic sampling scheme were significantly different from those by manual delineation (P < .05). No difference was observed between the reference values and the LV parameters estimated by the entire slice set or a sampling intensity of 1/2 (P > .05). For these stereological approaches, a high correlation (r² = 0.80-0.93) and clinically acceptable limits of agreement were found with the reference method. Stereological estimations obtained by both sample sizes presented comparable coefficient of variation values of 2.9-5.8%. The mean time for stereological measurements on the entire slice set was 3.4 ± 0.6 minutes, and it was reduced to 2.5 ± 0.5 minutes with the 1/2 systematic sampling scheme. Stereological analysis on systematic samples of MR slices generated by the 1/2 sampling intensity provided efficient and quick assessment of LV volumes, function, and mass. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
Continuous non-contact vital sign monitoring in neonatal intensive care unit.
Villarroel, Mauricio; Guazzi, Alessandro; Jorge, João; Davis, Sara; Watkinson, Peter; Green, Gabrielle; Shenvi, Asha; McCormick, Kenny; Tarassenko, Lionel
2014-09-01
Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal. PMID:26609384
Electrochemical estimation of the polyphenol index in wines using a laccase biosensor.
Gamella, M; Campuzano, S; Reviejo, A J; Pingarrón, J M
2006-10-18
The use of a laccase biosensor, under both batch and flow injection (FI) conditions, for a rapid and reliable amperometric estimation of the total content of polyphenolic compounds in wines is reported. The enzyme was immobilized by cross-linking with glutaraldehyde onto a glassy carbon electrode. Caffeic acid and gallic acid were selected as standard compounds to carry out such estimation. Experimental variables such as the enzyme loading, the applied potential, and the pH value were optimized, and different aspects regarding the operational stability of the laccase biosensor were evaluated. Using batch amperometry at -200 mV, the detection limits obtained were 2.6 × 10⁻³ and 7.2 × 10⁻⁴ mg L⁻¹ for gallic acid and caffeic acid, respectively, which compares advantageously with previous biosensor designs. An extremely simple sample treatment, consisting only of an appropriate dilution of the wine sample with the supporting electrolyte solution (0.1 mol L⁻¹ citrate buffer of pH 5.0), was needed for the amperometric analysis of red, rosé, and white wines. Good correlations were found when the polyphenol indices obtained with the biosensor (in both the batch and FI modes) for different wine samples were plotted versus the results achieved with the classic Folin-Ciocalteu method. Application of a calibration-transfer chemometric model (multiplicative fitting) showed that the confidence intervals (at a significance level of 0.05) for the slope and intercept of the amperometric index versus Folin-Ciocalteu index plots (r = 0.997) included one and zero, respectively. This indicates that the laccase biosensor can be successfully used for the estimation of the polyphenol index in wines when compared with the Folin-Ciocalteu reference method.
NASA Astrophysics Data System (ADS)
Li, Xia; Welch, E. Brian; Arlinghaus, Lori R.; Bapsi Chakravarthy, A.; Xu, Lei; Farley, Jaime; Loveless, Mary E.; Mayer, Ingrid A.; Kelley, Mark C.; Meszoely, Ingrid M.; Means-Powell, Julie A.; Abramson, Vandana G.; Grau, Ana M.; Gore, John C.; Yankeelov, Thomas E.
2011-09-01
Quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data requires the accurate determination of the arterial input function (AIF). A novel method for obtaining the AIF is presented here, and pharmacokinetic parameters derived from individual and population-based AIFs are then compared. A Philips 3.0 T Achieva MR scanner was used to obtain 20 DCE-MRI data sets from ten breast cancer patients prior to and after one cycle of chemotherapy. Using a semi-automated method to estimate the AIF from the axillary artery, we obtain the AIF for each patient, AIFind, and compute a population-averaged AIF, AIFpop. The extended standard model is used to estimate the physiological parameters using the two types of AIFs. The mean concordance correlation coefficient (CCC) for the AIFs segmented manually and by the proposed AIF tracking approach is 0.96, indicating that accurate and automatic tracking of an AIF in DCE-MRI data of the breast is possible. Regarding the kinetic parameters, the CCC values for Ktrans, vp and ve as estimated by AIFind and AIFpop are 0.65, 0.74 and 0.31, respectively, based on the region-of-interest analysis. The average CCC values for the voxel-by-voxel analysis are 0.76, 0.84 and 0.68 for Ktrans, vp and ve, respectively. This work indicates that Ktrans and vp show good agreement between AIFpop and AIFind, while the agreement on ve is weak.
Dranitsaris, George; Truter, Ilse; Lubbe, Martie S; Sriramanakoppa, Nitin N; Mendonca, Vivian M; Mahagaonkar, Sangameshwar B
2011-10-01
Decision analysis (DA) is commonly used to perform economic evaluations of new pharmaceuticals. Using multiples of Malaysia's per capita 2010 gross domestic product (GDP) as the threshold for economic value, as suggested by the World Health Organization (WHO), DA was used to estimate a price per dose for bevacizumab, a drug that provides a 1.4-month survival benefit in patients with metastatic colorectal cancer (mCRC). A decision model was developed to simulate progression-free and overall survival in mCRC patients receiving chemotherapy with and without bevacizumab. Costs for chemotherapy and management of side effects were obtained from public and private hospitals in Malaysia. Utility estimates, measured as quality-adjusted life years (QALYs), were determined by interviewing 24 oncology nurses using the time trade-off technique. The price per dose was then estimated using a target threshold of US$44 400 per QALY gained, which is 3 times the Malaysian per capita GDP. A cost-effective price for bevacizumab could not be determined because the survival benefit provided was insufficient. According to the WHO criteria, if the drug were able to improve survival from 1.4 to 3 or 6 months, the price per dose would be $567 or $1258, respectively. The use of decision modelling for estimating drug pricing is a powerful technique to ensure value for money. Such information is of value to drug manufacturers and formulary committees because it facilitates negotiations for value-based pricing in a given jurisdiction.
Anaerobic Degradation of Phthalate Isomers by Methanogenic Consortia
Kleerebezem, Robbert; Pol, Look W. Hulshoff; Lettinga, Gatze
1999-01-01
Three methanogenic enrichment cultures, grown on ortho-phthalate, iso-phthalate, or terephthalate were obtained from digested sewage sludge or methanogenic granular sludge. Cultures grown on one of the phthalate isomers were not capable of degrading the other phthalate isomers. All three cultures had the ability to degrade benzoate. Maximum specific growth rates (μSmax) and biomass yields (YXtotS) of the mixed cultures were determined by using both the phthalate isomers and benzoate as substrates. Comparable values for these parameters were found for all three cultures. Values for μSmax and YXtotS were higher for growth on benzoate compared to the phthalate isomers. Based on measured and estimated values for the microbial yield of the methanogens in the mixed culture, specific yields for the phthalate and benzoate fermenting organisms were calculated. A kinetic model, involving three microbial species, was developed to predict intermediate acetate and hydrogen accumulation and the final production of methane. Values for the ratio of the concentrations of methanogenic organisms, versus the phthalate isomer and benzoate fermenting organisms, and apparent half-saturation constants (KS) for the methanogens were calculated. By using this combination of measured and estimated parameter values, a reasonable description of intermediate accumulation and methane formation was obtained, with the initial concentration of phthalate fermenting organisms being the only variable. The energetic efficiency for growth of the fermenting organisms on the phthalate isomers was calculated to be significantly smaller than for growth on benzoate. PMID:10049876
A Bayesian kriging approach for blending satellite and ground precipitation observations
Verdin, Andrew P.; Rajagopalan, Balaji; Kleiber, William; Funk, Christopher C.
2015-01-01
Drought and flood management practices require accurate estimates of precipitation. Gauge observations, however, are often sparse in regions with complicated terrain, clustered in valleys, and of poor quality. Consequently, the spatial extent of wet events is poorly represented. Satellite-derived precipitation data are an attractive alternative, though they tend to underestimate the magnitude of wet events due to their dependency on retrieval algorithms and the indirect relationship between satellite infrared observations and precipitation intensities. Here we offer a Bayesian kriging approach for blending precipitation gauge data and the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates for Central America, Colombia, and Venezuela. First, the gauge observations are modeled as a linear function of satellite-derived estimates and any number of other variables; for this research we include elevation. Prior distributions are defined for all model parameters, and the posterior distributions are obtained simultaneously via Markov chain Monte Carlo sampling before the spatial kriging model is implemented. The linear model is then evaluated at parameter values sampled from these posterior distributions, and the residuals of the linear model are subject to a spatial kriging model. Consequently, the posterior distributions and uncertainties of the blended precipitation estimates are obtained. We demonstrate this method by applying it to pentadal and monthly total precipitation fields during 2009. The model's performance and its inherent ability to capture wet events are investigated. We show that this blending method significantly improves upon the satellite-derived estimates and is also competitive in its ability to represent wet events. This procedure also provides a means to estimate a full conditional distribution of the "true" observed precipitation value at each grid cell.
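The two-stage pipeline can be caricatured in a few lines. The sketch below is not the authors' model: it assumes a known noise variance (so the regression posterior is conjugate), fixed exponential-covariance parameters, and entirely synthetic gauge data, but it shows the flow from posterior sampling of the linear blend to kriging of its residuals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: gauge precipitation (y), satellite estimate and elevation (X)
n = 50
coords = rng.uniform(0, 100, size=(n, 2))            # gauge locations (km)
X = np.column_stack([np.ones(n),
                     rng.gamma(2.0, 5.0, n),         # satellite estimate
                     rng.uniform(0, 2000, n)])       # elevation (m)
y = X @ np.array([1.0, 0.8, 0.001]) + rng.normal(0, 1.0, n)

# Conjugate Bayesian linear model with known noise variance sigma2
sigma2, tau2 = 1.0, 100.0                            # noise var, prior var
post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(3) / tau2)
post_mean = post_cov @ (X.T @ y / sigma2)
betas = rng.multivariate_normal(post_mean, post_cov, size=200)

# Simple kriging of residuals with a fixed exponential covariance
def exp_cov(d, sill=1.0, rng_km=30.0):
    return sill * np.exp(-d / rng_km)

d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
K = exp_cov(d) + 1e-8 * np.eye(n)
target = np.array([50.0, 50.0])                      # prediction location
k_vec = exp_cov(np.linalg.norm(coords - target, axis=1))
weights = np.linalg.solve(K, k_vec)

# Blend: posterior predictive draws at the target cell
x_target = np.array([1.0, 12.0, 800.0])              # covariates at target
resid = y[None, :] - betas @ X.T                     # residuals per draw
preds = betas @ x_target + resid @ weights
print(f"blended estimate: {preds.mean():.2f} +/- {preds.std():.2f}")
```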
Konovalov, S I; Kuz'menko, A G
2010-12-01
By means of a computational method, the possibility of radiating a short acoustic pulse by a transducer in the form of a piezoceramic sphere internally filled with liquid is investigated. An inductive-resistive electric circuit is connected to the electrical input of the transducer. The solution is obtained using the theory of equivalent-circuit analogs for piezoceramic transducers together with spectral Fourier transform theory. The values of the system parameters that provide minimal durations of the radiated signals are determined. Computations were carried out for different values of the relative wall thickness of the transducer. Estimates of the durations and amplitudes of the acoustic signals radiated into the external medium are obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geslot, Benoit; Pepino, Alexandra; Blaise, Patrick
A pile noise measurement campaign was conducted by the CEA in the VENUS-F reactor (SCK-CEN, Mol, Belgium) in April 2011 in the reference critical configuration of the GUINEVERE experimental program. The experimental setup made it possible to estimate the core kinetic parameters: the prompt neutron decay constant, the delayed neutron fraction and the generation time. A precise assessment of these constants is of prime importance. In particular, the effective delayed neutron fraction is used to normalize and compare calculated reactivities of different subcritical configurations, obtained by modifying either the core layout or the control rods position, with experimental ones deduced from the analysis of measurements. This paper presents results obtained with a CEA-developed time-stamping acquisition system. Data were analyzed using Rossi-α and Feynman-α methods. Results were normalized to reactor power using a calibrated fission chamber with a deposit of Np-237. Calculated factors were necessary for the analysis: the Diven factor was computed by the ENEA (Italy) and the power calibration factor by the CNRS/IN2P3/LPC Caen. Results deduced with both methods are consistent with respect to calculated quantities. Recommended values are given by the Rossi-α estimator, which was found to be the most robust. The neutron generation time was found equal to 0.438 ± 0.009 μs and the effective delayed neutron fraction is 765 ± 8 pcm. Discrepancies with the calculated value (722 pcm, calculation from ENEA) are satisfactory: -5.6% for the Rossi-α estimate and -2.7% for the Feynman-α estimate. (authors)
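For orientation, a Rossi-α histogram is conventionally fitted with a flat accidental level plus a decaying exponential whose exponent is the prompt decay constant. A minimal sketch with synthetic counts; the parameter values are illustrative, with alpha chosen near beta_eff/Lambda implied by the numbers quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = np.linspace(0, 400e-6, 200)                   # time lag (s)
alpha_true, A_true, B_true = 1.75e4, 50.0, 400.0  # illustrative values
counts = rng.poisson(A_true + B_true * np.exp(-alpha_true * t))

def rossi(t, A, B, alpha):
    # Rossi-alpha model: accidentals A plus correlated term B*exp(-alpha*t)
    return A + B * np.exp(-alpha * t)

popt, pcov = curve_fit(rossi, t, counts, p0=(40.0, 300.0, 1e4))
A, B, alpha = popt
print(f"prompt decay constant alpha = {alpha:.3e} 1/s "
      f"(+/- {np.sqrt(pcov[2, 2]):.1e})")
# At delayed critical, alpha = beta_eff / Lambda, so an independent
# power/Diven-factor normalization separates beta_eff and Lambda.
```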
Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen
2017-12-27
Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
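The SVD mechanics that make the non-iterative route cheap are easy to demonstrate with the SNP-BLUP (ridge) base model. The sketch below uses invented sizes and a fixed shrinkage λ, and its PEV expression is an approximation (row-space contribution only), not the paper's exact algebra:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 500, 2000                     # animals, markers (illustrative)
Z = rng.binomial(2, 0.3, size=(n, m)).astype(float)
Z -= Z.mean(axis=0)                  # centre genotypes

a_true = np.zeros(m)
qtl = rng.choice(m, 20, replace=False)
a_true[qtl] = rng.normal(0, 0.5, 20)
y = Z @ a_true + rng.normal(0, 1.0, n)

# SNP-BLUP / ridge via thin SVD: a_hat = V diag(s/(s^2 + lam)) U^T y.
# Once U, s, V are stored, re-solving for any lam (or for the
# marker-specific variances of a BayesC-style update) is cheap.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
lam = 100.0                          # residual-to-marker variance ratio
a_hat = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

# Approximate posterior variance of each marker effect, in units of the
# marker variance: sum_k V[j,k]^2 * lam / (s_k^2 + lam)
pev = (Vt.T**2) @ (lam / (s**2 + lam))

gebv = Z @ a_hat                     # genomic breeding values
print(f"accuracy: {np.corrcoef(gebv, Z @ a_true)[0, 1]:.2f}")
```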
NASA Astrophysics Data System (ADS)
Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.
2018-06-01
Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessing the seismic hazard potential of a region. In this study, the source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values have been estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation is obtained as Qβ(f) = (152.9 ± 7) f^(0.82±0.005) by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress and radiated energy) are obtained assuming an ω⁻² source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter "kappa". The finite frequency resolution was addressed by quantifying the bias in corner frequency, stress drop and radiated energy estimates due to the finite-bandwidth effect. The data from the region show shallow-focus earthquakes with low stress drops. Estimates of the Zúñiga parameter (ε) suggest a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by the partial stress drop and low effective stress models. The presence of subsurface fluids at seismogenic depths likely influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking every possible precaution regarding the effects of finite bandwidth, attenuation and site corrections. The scaling can be improved further by integrating a large dataset of microearthquakes and using a stable and robust approach.
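A power-law relation such as Qβ(f) = 152.9 f^0.82 is a straight line in log-log space, so the fit reduces to linear regression. A toy sketch with synthetic station values chosen to resemble the relation above:

```python
import numpy as np

# Illustrative station estimates of the S-wave quality factor at the
# centre frequencies typical of coda-normalization studies.
f = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 16.0])                # Hz
Qb = np.array([213.0, 378.0, 672.0, 934.0, 1180.0, 1495.0])   # toy values

# Power-law model Q(f) = Q0 * f^n is linear in log space:
# log Q = log Q0 + n log f
n_exp, logQ0 = np.polyfit(np.log(f), np.log(Qb), 1)
print(f"Q(f) = {np.exp(logQ0):.1f} f^{n_exp:.2f}")
```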
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, L.; Hill, W.J.
A method is proposed to estimate the effect of long-term variations in total ozone on the error incurred in determining a trend in total ozone due to man-made effects. When this method is applied to data from Arosa, Switzerland over the years 1932-1980, a component of the standard error of the trend estimate equal to 0.6 percent per decade is obtained. If this estimate of long-term trend variability at Arosa is not too different from global long-term trend variability, then the threshold (±2 standard errors) for detecting an ozone trend in the 1970's that is outside of what could be expected from natural variation alone, and hence man-made, would range from 1.35% (Reinsel et al., 1981) to 1.8%. The latter value is obtained by combining the Reinsel et al. result with the result here, assuming that the error variations that both studies measure are independent and additive. Estimates for long-term trend variation over other time periods are also derived. Simulations that measure the precision of the estimate of long-term variability are reported.
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
Estimation of Geodetic and Geodynamical Parameters with VieVS
NASA Technical Reports Server (NTRS)
Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald
2010-01-01
Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where individual sessions are connected by stacking at the normal-equation level. We can determine time-independent geodynamical parameters such as Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency-dependent values in the diurnal band together with the resonance frequency of the Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.
NASA Technical Reports Server (NTRS)
Loeppky, J. A.; Kobayashi, Y.; Venters, M. D.; Luft, U. C.
1979-01-01
Blood samples were obtained from a forearm vein or artery through an indwelling cannula (1) before, (2) during the last minute of, and (3) about 2 min after lower body negative pressure (LBNP) in 16 experiments to determine whether plasma volume (PV) estimates were affected by regional hemoconcentration in the lower body. Total hemoglobin (THb) was estimated with the CO method prior to LBNP. Hemoglobin (Hb) and hematocrit (Hct) values from (2) gave only a 3% (87 ml) loss in PV due to LBNP, assuming no change in THb. However, Hb and Hct values from (3) showed an 11% loss in PV (313 ml). This 72% underestimation of PV loss with (2) must have resulted from the sequestration of blood and subsequent hemoconcentration in the lower body during LBNP. The effects of LBNP on PV should therefore be estimated 1-3 min after exposure, after mixing but before extravascular fluid returns to the circulation.
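The abstract does not spell out the formula used; percentage PV change from paired Hb and Hct samples is conventionally computed with the Dill and Costill (1974) relation, shown here for reference:

```latex
\%\Delta PV \;=\; 100\left[\frac{\mathrm{Hb}_{\mathrm{pre}}}{\mathrm{Hb}_{\mathrm{post}}}
\cdot\frac{1-\mathrm{Hct}_{\mathrm{post}}}{1-\mathrm{Hct}_{\mathrm{pre}}}\;-\;1\right]
```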
Hidalgo, Marta R.; Cubuk, Cankut; Amadoz, Alicia; Salavert, Francisco; Carbonell-Caballero, José; Dopazo, Joaquin
2017-01-01
Understanding the aspects of the cell functionality that account for disease or drug action mechanisms is a main challenge for precision medicine. Here we propose a new method that models cell signaling using biological knowledge on signal transduction. The method recodes individual gene expression values (and/or gene mutations) into accurate measurements of changes in the activity of signaling circuits, which ultimately constitute high-throughput estimations of cell functionalities caused by gene activity within the pathway. Moreover, such estimations can be obtained either at cohort-level, in case/control comparisons, or personalized for individual patients. The accuracy of the method is demonstrated in an extensive analysis involving 5640 patients from 12 different cancer types. Circuit activity measurements not only have a high diagnostic value but also can be related to relevant disease outcomes such as survival, and can be used to assess therapeutic interventions. PMID:28042959
Komsta, Łukasz; Stępkowska, Barbara; Skibiński, Robert
2017-02-03
The eluotropic strength on thin-layer silica plates was investigated for 20 chromatographic-grade solvents available on the current market, using 35 model compounds as test solutes. The use of a modern mixture screening design allowed the eluotropic strength of each solvent to be estimated as a separate elution coefficient with an acceptable estimation error (0.0913 in RM units). An additional bootstrapping technique was used to check the distribution and uncertainty of the eluotropic estimates, yielding confidence intervals very similar to those from linear regression. Principal component analysis showed that a single parameter (mean eluotropic strength) is sufficient to describe the solvent property, as it explains almost 90% of the variance of retention. The obtained eluotropic data are a useful supplement to earlier published results, and their values can be interpreted in the context of RM differences. Copyright © 2017 Elsevier B.V. All rights reserved.
Regression dilution in the proportional hazards model.
Hughes, M D
1993-12-01
The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship increases, the adjustment required increases and when there is no censorship is markedly higher than 1 + lambda and depends also on the true risk relationship.
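In symbols, with normally distributed errors and heavy censoring, the relationship described above is:

```latex
\hat{\beta}_{\text{naive}} \;\approx\; \frac{\beta}{1+\lambda},
\qquad
\lambda \;=\; \frac{\sigma^{2}_{\text{within}}}{\sigma^{2}_{\text{between}}},
\qquad
\hat{\beta}_{\text{adj}} \;=\; (1+\lambda)\,\hat{\beta}_{\text{naive}}
```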
The Protolysis of Singlet Excited β-Naphthol.
ERIC Educational Resources Information Center
van Stam, Jan; Lofroth, Jan-Erik
1986-01-01
Presents a two-day experiment to estimate the pK for the protolysis of β-naphthol in its ground state and its first singlet excited state. Results are compared to those obtained from the integrated rate equations, in which values of the rate constants were taken from a time-resolved study. (JN)
Indirect NMR spin-spin coupling constants in diatomic alkali halides
NASA Astrophysics Data System (ADS)
Jaszuński, Michał; Antušek, Andrej; Demissie, Taye B.; Komorovsky, Stanislav; Repisky, Michal; Ruud, Kenneth
2016-12-01
We report the Nuclear Magnetic Resonance (NMR) spin-spin coupling constants for diatomic alkali halides MX, where M = Li, Na, K, Rb, or Cs and X = F, Cl, Br, or I. The coupling constants are determined by supplementing the non-relativistic coupled-cluster singles-and-doubles (CCSD) values with relativistic corrections evaluated at the four-component density-functional theory (DFT) level. These corrections are calculated as the differences between relativistic and non-relativistic values determined using the PBE0 functional with 50% exact-exchange admixture. The total coupling constants obtained in this approach are in much better agreement with experiment than the standard relativistic DFT values with 25% exact-exchange, and are also noticeably better than the relativistic PBE0 results obtained with 50% exact-exchange. Further improvement is achieved by adding rovibrational corrections, estimated using literature data.
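The composite scheme can be summarized compactly; the rovibrational term is the literature-based correction mentioned last, and the DFT terms are the PBE0 (50% exact-exchange) values described above:

```latex
J_{\mathrm{total}} \;\approx\; J^{\mathrm{CCSD}}_{\mathrm{nonrel}}
\;+\;\left(J^{\mathrm{PBE0}}_{\mathrm{rel}} - J^{\mathrm{PBE0}}_{\mathrm{nonrel}}\right)
\;+\;\Delta J_{\mathrm{rovib}}
```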
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard
1994-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
Alvarenga, André V; Teixeira, César A; Ruano, Maria Graça; Pereira, Wagner C A
2010-02-01
In this work, the feasibility of using texture parameters extracted from B-mode images to quantify medium temperature variation was explored. The goal is to understand how parameters obtained from the grey-level content can be used to improve the current state-of-the-art methods for non-invasive temperature estimation (NITE). B-mode images were collected from a tissue-mimicking phantom heated in a water bath. The phantom is a mixture of water, glycerin, agar-agar and graphite powder; this mixture aims to have acoustical properties similar to in vivo muscle. Images of the phantom were collected using an ultrasound system with a mechanical sector transducer working at 3.5 MHz. Three temperature curves were collected, allowing variations between 27 and 44 degrees C during 60 min. Two parameters (correlation and entropy) were determined from the grey-level co-occurrence matrix (GLCM) extracted from each image, and then assessed for non-invasive temperature estimation. Entropy values were capable of identifying variations of 2.0 degrees C. Moreover, it was possible to quantify variations from normal human body temperature (37 degrees C) up to critical values, such as 41 degrees C. In contrast, although the correlation parameter (obtained from the GLCM) presented a correlation coefficient of 0.84 with temperature variation, the high dispersion of its values limited the temperature assessment.
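A minimal sketch of how such GLCM features are extracted, using scikit-image (graycomatrix/graycoprops; spelled greycomatrix in older releases) with a random array standing in for a B-mode region of interest. Entropy is computed by hand since it is not one of graycoprops' built-in properties:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(4)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in ROI

# GLCM at one offset/angle, quantized to 32 grey levels for stability
glcm = graycomatrix(roi // 8, distances=[1], angles=[0],
                    levels=32, symmetric=True, normed=True)

p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))            # GLCM entropy
correlation = graycoprops(glcm, "correlation")[0, 0]
print(f"entropy = {entropy:.2f} bits, correlation = {correlation:.3f}")
```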
Menu driven heat treatment control of thin walled bodies
Kothmann, Richard E.; Booth, Jr., Russell R.; Grimm, Noel P.; Batenburg, Abram; Thomas, Vaughn M.
1992-01-01
A process for controlling the heating of a thin-walled body according to a predetermined temperature program by means of electrically controllable heaters, comprising: disposing the heaters adjacent one surface of the body such that each heater is in facing relation with a respective zone of the surface; supplying heat-generating power to each heater and monitoring the temperature at each surface zone; and for each zone: deriving (16,18,20), on the basis of the temperature values obtained in the monitoring step, estimated temperature values of the surface at successive time intervals each having a first selected duration; generating (28), on the basis of the estimated temperature values derived in each time interval, representations of the temperature, THSIFUT, which each surface zone will have, based on the level of power presently supplied to each heater, at a future time which is separated from the present time interval by a second selected duration; determining (30) the difference between THSIFUT and the desired temperature, FUTREFTVZL, at the future time which is separated from the present time interval by the second selected duration; providing (52) a representation indicating the power level which should be supplied to each heater in order to reduce the difference obtained in the determining step; and adjusting the power level supplied to each heater by the supplying step in response to the value of the representation provided in the providing step.
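The claimed loop (estimate, extrapolate over a look-ahead horizon, trim power toward the programmed profile) can be caricatured for a single zone. The plant model, gains, and the THSIFUT/FUTREFTVZL analogues below are invented for illustration only:

```python
import numpy as np

dt, horizon = 5.0, 60.0            # s between estimates, look-ahead (s)
gain = 0.02                        # power trim per degree of error

def setpoint(time_s):
    return 20.0 + 0.05 * time_s    # programmed ramp (deg C)

temp, power = 20.0, 0.5
for step in range(120):
    now = step * dt
    # crude plant: heating rate proportional to power, with losses
    temp += dt * (0.12 * power - 0.002 * (temp - 20.0))
    rate = 0.12 * power - 0.002 * (temp - 20.0)
    t_future = temp + horizon * rate             # THSIFUT analogue
    error = setpoint(now + horizon) - t_future   # FUTREFTVZL - THSIFUT
    power = float(np.clip(power + gain * error, 0.0, 1.0))
print(f"final temp {temp:.1f} C vs setpoint {setpoint(120 * dt):.1f} C")
```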
Density Measurements of Low Silica CaO-SiO2-Al2O3 Slags
NASA Astrophysics Data System (ADS)
Muhmood, Luckman; Seetharaman, Seshadri
2010-08-01
Density measurements of a low-silica CaO-SiO2-Al2O3 system were carried out using the Archimedes principle. A Pt-30 pct Rh bob and wire arrangement was used for this purpose. The results obtained were in good agreement with those obtained from the model developed in the current group as well as with other results reported earlier. The density for the CaO-SiO2 and the CaO-Al2O3 binary slag systems also was estimated from the ternary values. The extrapolation of density values for high-silica systems also showed good agreement with previous works. An estimation for the density value of CaO was made from the current experimental data. The density decrease at high temperatures was interpreted based on the silicate structure. As the mole percent of SiO2 was below the 33 pct required for the orthosilicate composition, discrete SiO₄⁴⁻ tetrahedral units in the silicate melt would exist along with O²⁻ ions. The change in melt expansivity may be attributed to ionic expansions in the order Al³⁺-O²⁻ < Ca²⁺-O²⁻ < Ca²⁺-O⁻. Structural changes in the ternary slag also could be correlated to a drastic change in the value of enthalpy of mixing.
Beaulieu, J; Doerksen, T; Clément, S; MacKay, J; Bousquet, J
2014-01-01
Genomic selection (GS) is of interest in breeding because of its potential for predicting the genetic value of individuals and increasing genetic gains per unit of time. To date, very few studies have reported empirical results of GS potential in the context of large population sizes and long breeding cycles such as for boreal trees. In this study, we assessed the effectiveness of marker-aided selection in an undomesticated white spruce (Picea glauca (Moench) Voss) population of large effective size using a GS approach. A discovery population of 1694 trees representative of 214 open-pollinated families from 43 natural populations was phenotyped for 12 wood and growth traits and genotyped for 6385 single-nucleotide polymorphisms (SNPs) mined in 2660 gene sequences. GS models were built to predict estimated breeding values using all the available SNPs or SNP subsets with the largest absolute effects, and they were validated using various cross-validation schemes. The accuracy of genomic estimated breeding values (GEBVs) varied from 0.327 to 0.435 when the training and validation data sets shared half-sibs, which is on average 90% of the accuracy achieved through traditionally estimated breeding values. The trend was also the same for validation across sites. As expected, the accuracy of GEBVs obtained after cross-validation with individuals of unknown relatedness was lower, about half of the accuracy achieved when half-sibs were present. We showed that with the marker densities used in the current study, predictions with low to moderate accuracy could be obtained within a large undomesticated population of related individuals, potentially resulting in larger gains per unit of time with GS than with the traditional approach. PMID:24781808
Q estimation of seismic data using the generalized S-transform
NASA Astrophysics Data System (ADS)
Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming
2016-12-01
Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. Reservoir pores are one of the main factors that affect the value of Q; in particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such low Q values have been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra, because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring a negligible second-order term. The gradient of this linear relationship is 1/Q; here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China show reasonable correspondence with oil-producing well locations.
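For orientation, the classical spectral-ratio estimate that the GST method refines fits a straight line of slope −πΔt/Q to the log spectral ratio over the usable bandwidth; the paper's method replaces frequency with the window-corrected variable γ. A synthetic sketch of the classical fit:

```python
import numpy as np

# ln[A2(f)/A1(f)] = c - (pi * f * dt / Q), so Q = -pi * dt / slope
f = np.linspace(10.0, 60.0, 26)          # usable bandwidth (Hz)
dt = 0.4                                  # travel-time difference (s)
Q_true = 80.0
rng = np.random.default_rng(5)
log_ratio = 0.3 - np.pi * f * dt / Q_true + rng.normal(0, 0.02, f.size)

slope, intercept = np.polyfit(f, log_ratio, 1)
print(f"estimated Q = {-np.pi * dt / slope:.1f}")
```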
Visual classification of medical data using MLP mapping.
Cağatay Güler, E; Sankur, B; Kahya, Y P; Raudys, S
1998-05-01
In this work we discuss the design of a novel non-linear mapping method for visual classification based on multilayer perceptrons (MLP) and assigned class target values. In training the perceptron, one or more target output values for each class in a 2-dimensional space are used. In other words, class membership information is interpreted visually as closeness to target values in a 2D feature space. This mapping is obtained by training the multilayer perceptron (MLP) using class membership information, input data and judiciously chosen target values. Weights are estimated in such a way that each training feature of the corresponding class is forced to be mapped onto the corresponding 2-dimensional target value.
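A minimal sketch of the idea using scikit-learn's MLPRegressor on the Iris data (an illustrative stand-in for the medical data in the paper): each class is assigned a 2-D target point, the network is trained to regress onto those points, and its 2-D outputs serve simultaneously as a visualization and, via the nearest target, a classifier.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

# One 2-D target point per class; class membership = closeness to target
targets = np.array([[-1.0, -1.0], [1.0, -1.0], [0.0, 1.0]])
T = targets[y]

# MLP trained to map each sample onto its class target point
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
mlp.fit(X, T)
Z = mlp.predict(X)            # 2-D coordinates, ready for a scatter plot

# Classify by nearest target point in the mapped plane
dist = np.linalg.norm(Z[:, None, :] - targets[None, :, :], axis=2)
labels = dist.argmin(axis=1)
print(f"training accuracy of the visual classifier: {(labels == y).mean():.2f}")
```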
On the estimation of sound speed in two-dimensional Yukawa fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, I. L., E-mail: Igor.Semenov@dlr.de; Thomas, H. M.; Khrapak, S. A.
2015-11-15
The longitudinal sound speed in two-dimensional Yukawa fluids is estimated using the conventional hydrodynamic expression supplemented by appropriate thermodynamic functions proposed recently by Khrapak et al. [Phys. Plasmas 22, 083706 (2015)]. In contrast to the existing approaches, such as the quasi-localized charge approximation (QLCA) and molecular dynamics simulations, our model provides a relatively simple estimate for the sound speed over a wide range of parameters of interest. At strong coupling, our results are shown to be in good agreement with the results obtained using the QLCA approach and those derived from the phonon spectrum for the triangular lattice. On the other hand, our model is also expected to remain accurate at moderate values of the coupling strength. In addition, the obtained results are used to discuss the influence of the strong coupling effects on the adiabatic index of two-dimensional Yukawa fluids.
Fine-granularity inference and estimations to network traffic for SDN.
Jiang, Dingde; Huo, Liuwei; Li, Ya
2018-01-01
An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem; moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain smooth reconstruction values. To attain accurate end-to-end network traffic estimates in fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective.
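The three-step recipe can be sketched compactly. The midpoint-displacement refinement below is a simple stand-in for the paper's fractal interpolation, and all weights and data are invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def midpoint_fractal(x, n_levels=3, h=0.6, seed=0):
    """Refine a coarse series by random midpoint displacement
    (a simple stand-in for fractal interpolation)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x, dtype=float)
    scale = 0.5 * y.std()
    for _ in range(n_levels):
        mids = 0.5 * (y[:-1] + y[1:]) + rng.normal(0, scale, y.size - 1)
        out = np.empty(2 * y.size - 1)
        out[0::2], out[1::2] = y, mids
        y, scale = out, scale * 2 ** (-h)   # Hurst-like roughness decay
    return y

coarse = np.array([10.0, 14.0, 9.0, 16.0, 12.0, 11.0])   # sampled traffic
frac = midpoint_fractal(coarse)                           # rough fine-grain

t_coarse = np.linspace(0, 1, coarse.size)
t_fine = np.linspace(0, 1, frac.size)
spline = CubicSpline(t_coarse, coarse)(t_fine)            # smooth fine-grain

# Weighted geometric average of the two nonnegative reconstructions
w = 0.5
blended = np.exp(w * np.log(np.maximum(frac, 1e-9))
                 + (1 - w) * np.log(np.maximum(spline, 1e-9)))
print(blended[:5])
```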
Satellites for the study of ocean primary productivity
NASA Technical Reports Server (NTRS)
Smith, R. C.; Baker, K. S.
1983-01-01
The use of remote sensing techniques for obtaining estimates of global marine primary productivity is examined. It is shown that remote sensing and multiplatform (ship, aircraft, and satellite) sampling strategies can be used to significantly lower the variance in estimates of phytoplankton abundance and of population growth rates from the values obtained using the C-14 method. It is noted that multiplatform sampling strategies are essential to assess the mean and variance of phytoplankton biomass on a regional or on a global basis. The relative errors associated with shipboard and satellite estimates of phytoplankton biomass and primary productivity, as well as the increased statistical accuracy possible from the utilization of contemporaneous data from both sampling platforms, are examined. It is shown to be possible to follow changes in biomass and the distribution patterns of biomass as a function of time with the use of satellite imagery.
Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán
2012-01-01
The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and is amenable to implementation in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences between the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of the 95% confidence interval. In this way, the newly proposed method proves to be as useful as the WinNonlin® software when the latter is applicable. Copyright © 2011 John Wiley & Sons, Ltd.
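The abstract does not spell out the worksheet formulas; a common construction of this kind, shown here only as a hedged sketch, is the linear trapezoidal AUC with a Bailer-type standard error built from per-time-point variances. All numbers are illustrative.

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # sampling times (h), assumed
c_mean = np.array([0.0, 4.1, 3.6, 2.5, 1.2, 0.4])   # mean concentrations, assumed
c_se = np.array([0.0, 0.5, 0.4, 0.3, 0.2, 0.1])     # per-time-point standard errors

# trapezoidal weights: w[i] collects the total contribution of C_i to the AUC
w = np.zeros_like(t)
w[0] = (t[1] - t[0]) / 2
w[-1] = (t[-1] - t[-2]) / 2
w[1:-1] = (t[2:] - t[:-2]) / 2

auc = np.sum(w * c_mean)
auc_se = np.sqrt(np.sum((w * c_se) ** 2))  # assumes independent time points
print(f"AUC = {auc:.2f} +/- {auc_se:.2f}")
```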
Fine-granularity inference and estimations to network traffic for SDN
Huo, Liuwei; Li, Ya
2018-01-01
An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, the end-to-end network traffic matrix's inferences and estimations are a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from the sampled traffic traces, which is a hard inverse problem. Different from previous methods, the fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain the smooth reconstruction values. To attain an accurate the end-to-end network traffic in fine time granularity, we perform a weighted-geometric-average process for two interpolation results that are obtained. The simulation results show that our approaches are feasible and effective. PMID:29718913
van de Kassteele, Jan; Zwakhals, Laurens; Breugelmans, Oscar; Ameling, Caroline; van den Brink, Carolien
2017-07-01
Local policy makers increasingly need information on health-related indicators at smaller geographic levels like districts or neighbourhoods. Although more large data sources have become available, direct estimates of the prevalence of a health-related indicator cannot be produced for neighbourhoods for which only small samples or no samples are available. Small area estimation provides a solution, but unit-level models for binary-valued outcomes that can handle both non-linear effects of the predictors and spatially correlated random effects in a unified framework are rarely encountered. We used data on 26 binary-valued health-related indicators collected on 387,195 persons in the Netherlands. We associated the health-related indicators at the individual level with a set of 12 predictors obtained from national registry data. We formulated a structured additive regression model for small area estimation. The model captured potential non-linear relations between the predictors and the outcome through additive terms in a functional form using penalized splines and included a term that accounted for spatially correlated heterogeneity between neighbourhoods. The registry data were used to predict individual outcomes, which in turn were aggregated into higher geographical levels, i.e., neighbourhoods. We validated our method by comparing the estimated prevalences with observed prevalences at the individual level and by comparing the estimated prevalences with direct estimates obtained by weighting methods at municipality level. We estimated the prevalence of the 26 health-related indicators for 415 municipalities, 2599 districts and 11,432 neighbourhoods in the Netherlands. We illustrate our method on overweight data and show that there are distinct geographic patterns in the overweight prevalence. Calibration plots show that the estimated prevalences agree very well with observed prevalences at the individual level. The estimated prevalences agree reasonably well with the direct estimates at the municipal level. Structured additive regression is a useful tool to provide small area estimates in a unified framework. We are able to produce valid nationwide small area estimates of 26 health-related indicators at neighbourhood level in the Netherlands. The results can be used by local policy makers to make appropriate health policy decisions.
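Schematically, and with notation assumed here rather than taken from the paper, the unit-level model described above can be written as:

```latex
\operatorname{logit} \Pr(y_{ij} = 1)
  \;=\; \beta_0 \;+\; \sum_{k=1}^{12} f_k\!\left(x_{ijk}\right) \;+\; u_{s(j)},
```

where y_{ij} is the binary indicator for person i in neighbourhood j, each f_k is a penalized-spline smooth of one registry predictor, and u_{s(j)} is a spatially correlated neighbourhood effect; fitted individual-level predictions are then aggregated to neighbourhoods, districts and municipalities.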
Optimization of Scan Parameters to Reduce Acquisition Time for Diffusion Kurtosis Imaging at 1.5T.
Yokosawa, Suguru; Sasaki, Makoto; Bito, Yoshitaka; Ito, Kenji; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Kudo, Kohsuke
2016-01-01
To shorten acquisition of diffusion kurtosis imaging (DKI) in 1.5-tesla magnetic resonance (MR) imaging, we investigated the effects of the number of b-values, diffusion directions, and number of signal averages (NSA) on the accuracy of DKI metrics. We obtained 2 image datasets with 30 gradient directions, 6 b-values up to 2500 s/mm(2), and 2 signal averages from 5 healthy volunteers and generated DKI metrics, i.e., mean, axial, and radial kurtosis (MK, K∥, and K⊥) maps, from various combinations of the datasets. Agreement of these maps with those from the full datasets was assessed using the intraclass correlation coefficient (ICC). The MK and K⊥ maps generated from the datasets including only the b-value of 2500 s/mm(2) showed excellent agreement (ICC, 0.96 to 0.99). Under the same acquisition time and diffusion directions, agreement was better for MK, K∥, and K⊥ maps obtained with 3 b-values (0, 1000, and 2500 s/mm(2)) and 4 signal averages than for maps obtained with any other combination of the number of b-values and NSA. Good agreement (ICC > 0.6) required at least 20 diffusion directions in all the metrics. MK and K⊥ maps with ICC greater than 0.95 can be obtained at 1.5T within 10 min (b-values = 0, 1000, and 2500 s/mm(2); 20 diffusion directions; 4 signal averages; slice thickness, 6 mm with no interslice gap; number of slices, 12).
Hawkins, P A; Butler, P J; Woakes, A J; Speakman, J R
2000-09-01
The relationship between heart rate (f(H)) and rate of oxygen consumption (V(O2)) was established for a marine diving bird, the common eider duck (Somateria mollissima), during steady-state swimming and running exercise. Both variables increased exponentially with speed during swimming and in a linear fashion during running. Eleven linear regressions of V(O2) (ml kg(-1) min(-1)) on f(H) (beats min(-1)) were obtained: five by swimming and six by running the birds. The common regression was described by V(O2)=10.1 + 0.15f(H) (r(2)=0.46, N=272, P<0.0001). The accuracy of this relationship for predicting mean V(O2) was determined for a group of six birds by recording f(H) continuously over a 2-day period and comparing estimated V(O2) obtained using the common regression with (i) V(O2) estimated using the doubly labelled water technique (DLW) and (ii) V(O2) measured using respirometry. A two-pool model produced the most accurate estimated V(O2) using DLW. Because of individual variability within mean values of V(O2) estimated using both techniques, there was no significant difference between mean V(O2) estimated using f(H) or DLW and measured V(O2) values (P>0.2), although individual errors were substantially less when f(H) was used rather than DLW to estimate V(O2). Both techniques are, however, only suitable for estimating mean V(O2) for a group of animals, not for individuals. Heart rate and behaviour were monitored during a bout of 63 voluntary dives by one female bird in an indoor tank 1.7 m deep. Tachycardia occurred both in anticipation of and following each dive. Heart rate decreased before submersion but was above resting values for the whole of the dive cycle. Mean f(H) at mean dive duration was significantly greater than f(H) while swimming at maximum sustainable surface speeds. Heart rate was used to estimate mean V(O2) during the dive cycle and to predict aerobic dive limit (ADL) for shallow dives.
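The reported common regression is directly usable; a minimal sketch applying it to hypothetical heart rates (units follow the abstract):

```python
def vo2_from_heart_rate(f_h_beats_per_min: float) -> float:
    """Oxygen consumption (ml kg^-1 min^-1) from heart rate via the common regression."""
    return 10.1 + 0.15 * f_h_beats_per_min

for f_h in (100.0, 200.0, 300.0):  # hypothetical eider heart rates (beats min^-1)
    print(f"f_H = {f_h:5.0f} -> VO2 ~ {vo2_from_heart_rate(f_h):5.1f} ml kg^-1 min^-1")
```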
CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan
Minimum-variance estimators for the parameter f_NL that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_NL from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound--which gives a lower limit to the variance of an estimator--by the bispectrum estimator implies that no further information on f_NL can be obtained from the trispectrum. Here, we elaborate the nature of the correlation between the bispectrum and trispectrum estimators for f_NL. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_NL beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.
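The bound at issue is the scalar Cramer-Rao inequality, shown here schematically (notation assumed):

```latex
\operatorname{Var}\!\left(\hat{f}_{NL}\right) \;\ge\; F^{-1},
\qquad
F \;=\; -\left\langle \frac{\partial^{2} \ln \mathcal{L}}{\partial f_{NL}^{2}} \right\rangle,
```

where L is the likelihood. The abstract's point is that the trispectrum estimator, being asymptotically independent of the bispectrum one, still adds information in a way consistent with this bound.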
Boucher, S E; Calsamiglia, S; Parsons, C M; Stern, M D; Moreno, M Ruiz; Vázquez-Añón, M; Schwab, C G
2009-08-01
Three soybean meal, 3 SoyPlus (West Central Cooperative, Ralston, IA), 5 distillers dried grains with solubles, and 5 fish meal samples were used to evaluate the modified 3-step in vitro procedure (TSP) and the in vitro immobilized digestive enzyme assay (IDEA; Novus International Inc., St. Louis, MO) for estimating digestibility of AA in rumen-undegraded protein (RUP-AA). In a previous experiment, each sample was ruminally incubated in situ for 16 h, and in vivo digestibility of AA in the intact samples and in the rumen-undegraded residues (RUR) was obtained for all samples using the precision-fed cecectomized rooster assay. For the modified TSP, 5 g of RUR was weighed into polyester bags, which were then heat-sealed and placed into Daisy(II) incubator bottles. Samples were incubated in a pepsin/HCl solution followed by incubation in a pancreatin solution. After this incubation, residues remaining in the bags were analyzed for AA, and digestibility of RUP-AA was calculated based on disappearance from the bags. In vitro RUP-AA digestibility estimates obtained with this procedure were highly correlated to in vivo estimates. Corresponding intact feeds were also analyzed via the pepsin/pancreatin steps of the modified TSP. In vitro estimates of AA digestibility of the feeds were highly correlated to in vivo RUP-AA digestibility, which suggests that the feeds may not need to be ruminally incubated before determining RUP-AA digestibility in vitro. The RUR were also analyzed via the IDEA kits. The IDEA values of the RUR were good predictors of RUP-AA digestibility in soybean meal, SoyPlus, and distillers dried grains with solubles, but the IDEA values were not as good predictors of RUP-AA digestibility in fish meal. However, the IDEA values of intact feed samples were also determined and were highly correlated to in vivo RUP-AA digestibility for all feed types, suggesting that the IDEA value of intact feeds may be a better predictor of RUP-AA digestibility than the IDEA value of the RUR. In conclusion, the modified TSP and IDEA kits are good approaches for estimating RUP-AA digestibility in soybean meal products, distillers dried grains with solubles, and fish meal samples.
A weak lensing analysis of the PLCK G100.2-30.4 cluster
NASA Astrophysics Data System (ADS)
Radovich, M.; Formicola, I.; Meneghetti, M.; Bartalucci, I.; Bourdin, H.; Mazzotta, P.; Moscardini, L.; Ettori, S.; Arnaud, M.; Pratt, G. W.; Aghanim, N.; Dahle, H.; Douspis, M.; Pointecouteau, E.; Grado, A.
2015-07-01
We present a mass estimate of the Planck-discovered cluster PLCK G100.2-30.4, derived from a weak lensing analysis of deep Subaru griz images. We perform a careful selection of the background galaxies using the multi-band imaging data, and undertake the weak lensing analysis on the deep (1 h) r-band image. The shape measurement is based on the Kaiser-Squires-Broadhurst algorithm; we adopt the PSFex software to model the point spread function (PSF) across the field and correct for this in the shape measurement. The weak lensing analysis is validated through extensive image simulations. We compare the resulting weak lensing mass profile and total mass estimate to those obtained from our re-analysis of XMM-Newton observations, derived under the hypothesis of hydrostatic equilibrium. The total integrated mass profiles agree remarkably well, within 1σ across their common radial range. A mass M500 ~ 7 × 10^14 M⊙ is derived for the cluster from our weak lensing analysis. Comparing this value to that obtained from our reanalysis of XMM-Newton data, we obtain a bias factor of (1-b) = 0.8 ± 0.1. This is compatible within 1σ with the value of (1-b) obtained in Planck 2015 from the calibration of the bias factor using newly available weak lensing reconstructed masses. Based on data collected at Subaru Telescope (University of Tokyo).
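Illustrative arithmetic for the quoted bias factor, taking (1 - b) as the ratio of the hydrostatic (X-ray) mass to the weak-lensing mass; the input masses and uncertainties below are assumptions chosen to reproduce the reported 0.8 ± 0.1, not the paper's data.

```python
m_wl, m_wl_err = 7.0e14, 0.7e14   # weak-lensing M500 (Msun), assumed error
m_x, m_x_err = 5.6e14, 0.5e14     # hydrostatic M500 (Msun), assumed values

one_minus_b = m_x / m_wl
# standard error propagation for a ratio of independent quantities
err = one_minus_b * ((m_x_err / m_x) ** 2 + (m_wl_err / m_wl) ** 2) ** 0.5
print(f"(1 - b) = {one_minus_b:.2f} +/- {err:.2f}")
```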
Walusimbi, Simon; Kwesiga, Brendan; Rodrigues, Rashmi; Haile, Melles; de Costa, Ayesha; Bogg, Lennart; Katamba, Achilles
2016-10-10
Microscopic Observation Drug Susceptibility (MODS) and Xpert MTB/Rif (Xpert) are highly sensitive tests for diagnosis of pulmonary tuberculosis (PTB). This study evaluated the cost effectiveness of utilizing MODS versus Xpert for diagnosis of active pulmonary TB in HIV infected patients in Uganda. A decision analysis model comparing MODS versus Xpert for TB diagnosis was used. Costs were estimated by measuring and valuing relevant resources required to perform the MODS and Xpert tests. Diagnostic accuracy data of the tests were obtained from systematic reviews involving HIV infected patients. We calculated base values for unit costs and varied several assumptions to obtain the range estimates. Cost effectiveness was expressed as costs per TB patient diagnosed for each of the two diagnostic strategies. Base case analysis was performed using the base estimates for unit cost and diagnostic accuracy of the tests. Sensitivity analysis was performed using a range of value estimates for resources, prevalence, number of tests and diagnostic accuracy. The unit cost of MODS was US$ 6.53 versus US$ 12.41 for Xpert. Consumables accounted for 59 % (US$ 3.84 of 6.53) of the unit cost for MODS and 84 % (US$ 10.37 of 12.41) of the unit cost for Xpert. The cost effectiveness ratio of the algorithm using MODS was US$ 34 per TB patient diagnosed compared to US$ 71 for the algorithm using Xpert. The algorithm using MODS was more cost-effective than the algorithm using Xpert for a wide range of different values of accuracy, cost and TB prevalence. The threshold cost at which the algorithm using Xpert would become optimal over the algorithm using MODS was US$ 5.92. MODS was more cost-effective than Xpert for the diagnosis of PTB among HIV patients in our setting. Efforts to scale-up MODS therefore need to be explored. However, since other non-economic factors may still favour the use of Xpert, the current cost of the Xpert cartridge still needs to be reduced further by more than half, in order to make it economically competitive with MODS.
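A hedged sketch of the comparison: with a simple decision model, the cost per TB patient diagnosed is the total testing cost divided by the number of true positives. The unit costs are those reported above; the sensitivities and prevalence are illustrative assumptions, chosen only so the outputs land near the reported US$ 34 and US$ 71.

```python
def cost_per_case_diagnosed(unit_cost, sensitivity, prevalence, n_tested):
    """Cost-effectiveness ratio: total testing cost / TB cases correctly diagnosed."""
    true_positives = n_tested * prevalence * sensitivity
    return unit_cost * n_tested / true_positives

n, prev = 1000, 0.20                      # assumed cohort size and TB prevalence
for name, cost, sens in [("MODS", 6.53, 0.92), ("Xpert", 12.41, 0.88)]:
    cer = cost_per_case_diagnosed(cost, sens, prev, n)
    print(f"{name}: US$ {cer:.0f} per TB patient diagnosed")
```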
NASA Astrophysics Data System (ADS)
Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid
2018-06-01
This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for solving nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
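A heavily hedged sketch of the shooting idea on a model problem, not the paper's reproducing kernel construction: a beam-on-elastic-foundation equation u'''' + k*u = q on [0, 1], clamped at x = 0 (u = u' = 0), with hypothetical nonlinear end conditions u''(1) = -u(1)**3 and u'''(1) = 0. The two unknown initial curvatures are found by root-finding on the boundary residuals.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

k, q = 50.0, 1.0  # assumed foundation stiffness and distributed load

def rhs(x, y):  # state y = [u, u', u'', u''']
    return [y[1], y[2], y[3], q - k * y[0]]

def residual(s):
    # integrate from the clamped end with trial curvatures s = (u''(0), u'''(0))
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, s[0], s[1]], rtol=1e-9)
    u, du, d2u, d3u = sol.y[:, -1]
    return [d2u + u**3, d3u]  # hypothetical nonlinear conditions at x = 1

s0 = fsolve(residual, x0=[0.0, 0.0])
print("initial curvatures (u''(0), u'''(0)):", s0)
```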
Calculations of Hubbard U from first-principles
NASA Astrophysics Data System (ADS)
Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.
2006-09-01
The Hubbard U of the 3d transition metal series as well as SrVO3, YTiO3, Ce, and Gd has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but for some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made a comparison with the constrained local density approximation (LDA) method and found discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results, and the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.
NASA Technical Reports Server (NTRS)
Emmons, T. E.
1976-01-01
The results are presented of an investigation of the factors which affect the determination of Spacelab (S/L) minimum interface main dc voltage and available power from the orbiter. The dedicated fuel cell mode of powering the S/L is examined along with the minimum S/L interface voltage and available power using the predicted fuel cell power plant performance curves. The values obtained are slightly lower than current estimates and represent a more marginal operating condition than previously estimated.
Estimation of eye lens doses received by pediatric interventional cardiologists.
Alejo, L; Koren, C; Ferrer, C; Corredoira, E; Serrada, A
2015-09-01
The maximum Hp(0.07) dose to the eye lens received in a year by pediatric interventional cardiologists has been estimated. Optically stimulated luminescence dosimeters were placed on the eyes of an anthropomorphic phantom, whose position in the room simulates the most common irradiation conditions. Maximum workload was considered, with data collected from procedures performed in the hospital. None of the maximum values obtained exceed the dose limit of 20 mSv recommended by the ICRP. Copyright © 2015 Elsevier Ltd. All rights reserved.
Momentum transfer in relativistic heavy ion charge-exchange reactions
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Khan, F.; Khandelwal, G. S.
1991-01-01
Relativistic heavy ion charge-exchange reactions yield fragments (Delta-Z = +1) whose longitudinal momentum distributions are downshifted by larger values than those associated with the remaining fragments (Delta-Z = -1, -2, ...). Kinematics alone cannot account for the observed downshifts; therefore, an additional contribution from collision dynamics must be included. In this work, an optical model description of collision momentum transfer is used to estimate the additional dynamical momentum downshift. Good agreement between theoretical estimates and experimental data is obtained.
Advance Technology Satellites in the Commercial Environment. Volume 2: Final Report
NASA Technical Reports Server (NTRS)
1984-01-01
A forecast of transponder requirements was obtained. Certain assumptions about system configurations are implicit in this process. The factors included are interpolation of baseline year values to produce yearly figures, estimation of satellite capture, effects of peak hours and the time-zone staggering of peak hours, circuit requirements for an acceptable grade of service, capacity of satellite transponders (including various compression methods where applicable), and requirements for spare transponders in orbit. The geographical distribution of traffic requirements was estimated.
Determination of HART I Blade Structural Properties by Laboratory Testing
NASA Technical Reports Server (NTRS)
Jung, Sung N.; Lau, Benton H.
2012-01-01
The structural properties of the Higher harmonic Aeroacoustic Rotor Test (HART I) blades were measured using the original set of blades tested in the German-Dutch Wind Tunnel (DNW) in 1994. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties were compared to the estimated values obtained initially from the blade manufacturer. The previously estimated blade properties showed consistently higher stiffness, up to 30 percent for flap bending in the blade inboard root section.
NASA Astrophysics Data System (ADS)
Kang, Kwang-Song; Hu, Nai-Lian; Sin, Chung-Sik; Rim, Song-Ho; Han, Eun-Cheol; Kim, Chol-Nam
2017-08-01
It is very important to obtain the mechanical parameters of rock mass for excavation design, support design, slope design and stability analysis of underground structures. In order to estimate the mechanical parameters of rock mass exactly, a new method combining the geological strength index (GSI) system with intelligent displacement back analysis is proposed in this paper. Firstly, the average spacing of joints (d), rock mass block rating (RBR, a new quantitative factor), surface condition rating (SCR) and joint condition factor (J c) are obtained on in situ rock masses using the scanline method, and the GSI values of rock masses are obtained from a new quantitative GSI chart. A correction method for the GSI value is newly introduced by considering the influence of joint orientation and groundwater on rock mass mechanical properties, and value ranges of the rock mass mechanical parameters are then chosen using the Hoek-Brown failure criterion. Secondly, on the basis of the measured vault settlements and horizontal convergence displacements of an in situ tunnel, optimal parameters are estimated by a combination of a genetic algorithm (GA) and numerical simulation analysis using FLAC3D. This method has been applied in a lead-zinc mine. By utilizing the improved GSI quantization, the correction method and displacement back analysis, the mechanical parameters of the ore body, hanging wall and footwall rock mass were determined, so that reliable foundations were provided for mining design and stability analysis.
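For context, a sketch of the standard Hoek-Brown (2002 edition) relations that turn a GSI value into rock-mass constants; the paper's quantified-GSI corrections and displacement back analysis are not reproduced here, and the inputs below are illustrative.

```python
import math

def hoek_brown_params(gsi: float, mi: float, d: float = 0.0):
    """Rock-mass constants m_b, s, a from GSI (Hoek-Brown, 2002 edition).

    gsi: geological strength index; mi: intact-rock constant; d: disturbance factor.
    """
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

mb, s, a = hoek_brown_params(gsi=55.0, mi=10.0)  # assumed GSI and intact m_i
print(f"m_b = {mb:.3f}, s = {s:.5f}, a = {a:.3f}")
```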
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but in these solutions the fill factor must be known. However, it is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images of two datasets. The results show that our method estimates the fill factor correctly with significant stability and accuracy from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each of the images and for each camera. PMID:28335459
Acer, Niyazi; Sahin, Bunyamin; Ucar, Tolga; Usanmaz, Mustafa
2009-01-01
The size of the eyeball has been the subject of a few studies, but none of them used stereological methods to estimate its volume. In the current study, we estimated the volume of the eyeball in normal men and women using stereological methods. Eyeball volume (EV) was estimated using the Cavalieri principle as a combination of point-counting and planimetry techniques. We used computed tomography scans taken from 36 participants (15 men and 21 women) to estimate the EV. The mean (SD) EV values obtained by the planimetry method were 7.49 (0.79) and 7.06 (0.85) cm3 in men and women, respectively. By using the point-counting method, the mean (SD) values were 7.48 (0.85) and 7.21 (0.84) cm3 in men and women, respectively. There was no statistically significant difference between the findings from the 2 methods (P > 0.05). A weak correlation was found between the axial length of the eyeball and the EV estimated by point counting and planimetry (P < 0.05, r = 0.494 and r = 0.523, respectively). The findings of the current study using stereological methods could provide data for the evaluation of normal and pathologic volumes of the eyeball.
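A minimal sketch of the Cavalieri point-counting estimator named above, V = t * a_p * sum(P_i), where t is the slice spacing, a_p the area associated with one grid point, and P_i the points counted on slice i. The numbers below are illustrative, not the study's data.

```python
t_mm = 2.5                       # CT slice spacing (mm), assumed
a_p_mm2 = 9.0                    # area per grid point (mm^2), assumed 3 mm grid
points_per_slice = [18, 42, 55, 61, 58, 44, 20]  # hypothetical counts per slice

v_mm3 = t_mm * a_p_mm2 * sum(points_per_slice)   # Cavalieri volume estimate
print(f"estimated eyeball volume: {v_mm3 / 1000.0:.2f} cm^3")
```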
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
NASA Astrophysics Data System (ADS)
Libonati, R.; Dacamara, C. C.; Setzer, A. W.; Morelli, F.
2014-12-01
A procedure is presented that allows using information from the MODerate resolution Imaging Spectroradiometer (MODIS) sensor to improve the quality of monthly burned area estimates over Brazil. The method integrates MODIS-derived information from two sources: the NASA MCD64A1 Direct Broadcast Monthly Burned Area Product and INPE's Monthly Burned Area MODIS product (AQM-MODIS). The latter product relies on an algorithm that was specifically designed for ecosystems in Brazil, taking advantage of the ability of MIR reflectances to discriminate burned areas. Information from both MODIS products is incorporated by means of a linear regression model where an optimal estimate of the burned area is obtained as a linear combination of burned area estimates from MCD64A1 and AQM-MODIS. The linear regression model is calibrated using as optimal estimates values of burned area derived from Landsat TM during 2005 and 2006 over Jalapão, a region of Cerrado covering an area of 187 x 187 km2. Obtained values of coefficients for MCD64A1 and AQM-MODIS were 0.51 and 0.35, respectively, and the root mean square error was 7.6 km2. Robustness of the model was checked by calibrating the model separately for 2005 and 2006 and cross-validating with 2006 and 2005; coefficients for 2005 (2006) were 0.46 (0.54) for MCD64A1 and 0.35 (0.35) for AQM-MODIS, and the corresponding root mean square errors for 2006 (2005) were 7.8 (7.4) km2. The linear model was then applied to Brazil as well as to the six main Brazilian biomes, namely Cerrado, Amazônia, Caatinga, Pantanal, Mata Atlântica and Pampa. As is to be expected, the interannual variability based on the proposed synergistic use of MCD64A1, AQM-MODIS and Landsat TM data for the period 2005-2010 presents marked differences with the corresponding amounts derived from MCD64A1 alone. For instance, during the considered period, values (in 10^3 km2) from the proposed approach (from MCD64A1) are 399 (142), 232 (62), 559 (259), 274 (73), 219 (31) and 415 (251). Values obtained with the proposed approach may be viewed as an improved alternative to the currently available products over Brazil.
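A sketch of the calibration step as described: an ordinary least-squares fit of the reference burned area on the two products. The data here are synthetic, generated to mimic the reported coefficients of roughly 0.51 (MCD64A1) and 0.35 (AQM-MODIS).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48                                   # hypothetical number of calibration cells
mcd64a1 = rng.uniform(0, 60, n)          # burned area from MCD64A1, km^2
aqm = rng.uniform(0, 60, n)              # burned area from AQM-MODIS, km^2
landsat = 0.51 * mcd64a1 + 0.35 * aqm + rng.normal(0, 5, n)  # synthetic reference

A = np.column_stack([mcd64a1, aqm])      # no intercept, as in the stated model
coef, *_ = np.linalg.lstsq(A, landsat, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - landsat) ** 2))
print("coefficients:", coef.round(3), "RMSE (km^2):", round(rmse, 1))
```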
A comparative review of estimates of the proportion unchanged genes and the false discovery rate
Broberg, Per
2005-01-01
Background In the analysis of microarray data one generally produces a vector of p-values that for each gene give the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR) and how to make inferences based on these concepts. Six published methods for estimating the proportion unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value is illustrated with examples. Five published estimates of the FDR and one new are presented and tested. Implementations in R code are available. Results A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed a varying performance, and were sometimes misleading. The new method had a very low error. Conclusion The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information regarding false positive and negative rates as well as the proportion unchanged when identifying changed genes. PMID:16086831
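As one concrete instance of the estimators being reviewed, the sketch below implements Storey's lambda method for the proportion unchanged (pi0) together with the usual step-up q-value construction; this is one published method, not the article's new estimators, and the p-value mixture is simulated.

```python
import numpy as np

def storey_pi0(p, lam=0.5):
    """Estimate the proportion of unchanged genes from a p-value vector."""
    return min(1.0, np.mean(p > lam) / (1.0 - lam))

def q_values(p):
    """q-values via the standard step-up construction, scaled by pi0."""
    p = np.asarray(p, dtype=float)
    m = p.size
    pi0 = storey_pi0(p)
    order = np.argsort(p)
    q = np.empty(m)
    prev = 1.0
    for rank, idx in zip(range(m, 0, -1), order[::-1]):  # step up from the largest p
        prev = min(prev, pi0 * m * p[idx] / rank)
        q[idx] = prev
    return q

p = np.concatenate([np.random.uniform(size=900),         # unchanged genes
                    np.random.beta(0.2, 5.0, size=100)])  # changed genes
print("pi0 estimate:", round(storey_pi0(p), 2))
print("discoveries at q < 0.05:", int((q_values(p) < 0.05).sum()))
```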
Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS
NASA Astrophysics Data System (ADS)
Tobin, Kenneth J.; Torres, Roberto; Crow, Wade T.; Bennett, Marvin E.
2017-09-01
This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based, surface satellite soil moisture were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally, three other products were obtained from the European Space Agency Climate Change Initiative (CCI). These datasets were blended based on all available satellite observations (CCI-active, CCI-passive, and CCI-combined). All of these products were 0.25° and taken daily. We applied the filter to produce a soil moisture index (SWI) that others have successfully used to estimate RZSM. The only unknown in this approach was the characteristic time of soil moisture variation (T). We examined five different eras (1997-2002; 2002-2005; 2005-2008; 2008-2011; 2011-2014) that represented periods with different satellite data sensors. SWI values were compared with in situ soil moisture data from the International Soil Moisture Network at a depth ranging from 20 to 25 cm. Selected networks included the US Department of Energy Atmospheric Radiation Measurement (ARM) program (25 cm), Soil Climate Analysis Network (SCAN; 20.32 cm), SNOwpack TELemetry (SNOTEL; 20.32 cm), and the US Climate Reference Network (USCRN; 20 cm). We selected in situ stations that had reasonable completeness. These datasets were used to filter out periods with freezing temperatures and rainfall using data from the Parameter elevation Regression on Independent Slopes Model (PRISM). Additionally, we only examined sites where surface and root-zone soil moisture had a reasonably high lagged r value (r > 0.5). The unknown T value was constrained based on two approaches: optimization of root mean square error (RMSE) and calculation based on the normalized difference vegetation index (NDVI) value. Both approaches yielded comparable results, although, as is to be expected, the optimization approach generally outperformed NDVI-based estimates. The best results were noted at stations that had an absolute bias within 10 %. SWI estimates were more impacted by the in situ network than the surface satellite product used to drive the exponential filter. The average Nash-Sutcliffe coefficients (NSs) for ARM ranged from -0.1 to 0.3 and were similar to the results obtained from the USCRN network (0.2-0.3). NS values from the SCAN and SNOTEL networks were slightly higher (0.1-0.5). These results indicated that this approach had some skill in providing an estimate of RZSM. In terms of RMSE (in volumetric soil moisture), ARM values actually outperformed those from other networks (0.02-0.04). SCAN and USCRN RMSE average values ranged from 0.04 to 0.06 and SNOTEL average RMSE values were higher (0.05-0.07). These values were close to 0.04, which is the baseline value for accuracy designated for many satellite soil moisture missions.
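The exponential filter itself is compact; below is a sketch of the standard recursive formulation (the Wagner/Albergel form), with T the characteristic time that the study calibrates. The input series is simulated.

```python
import numpy as np

def swi_exponential_filter(t_days, ssm, T=20.0):
    """Soil Water Index from surface soil moisture via the recursive filter.

    SWI_n = SWI_{n-1} + K_n (SSM_n - SWI_{n-1}),
    K_n   = K_{n-1} / (K_{n-1} + exp(-(t_n - t_{n-1}) / T)),  K_1 = 1.
    """
    swi = np.empty_like(ssm)
    swi[0], gain = ssm[0], 1.0
    for n in range(1, len(ssm)):
        gain = gain / (gain + np.exp(-(t_days[n] - t_days[n - 1]) / T))
        swi[n] = swi[n - 1] + gain * (ssm[n] - swi[n - 1])
    return swi

t = np.cumsum(np.random.randint(1, 4, size=200)).astype(float)  # irregular sampling days
ssm = 0.25 + 0.05 * np.random.randn(200)                        # simulated surface SM
print(swi_exponential_filter(t, ssm)[:5])
```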
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the components of the biodiesel blend at any two different temperatures. We find that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
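A sketch of the described construction under stated assumptions: each component density is taken to be linear in temperature from two measured points, and the blend density follows Kay's (volume-fraction-weighted) mixing rule. The anchor densities below are illustrative values, not the paper's data.

```python
def linear_density(T, T1, rho1, T2, rho2):
    """Density at temperature T interpolated from two measured (T, rho) points."""
    return rho1 + (rho2 - rho1) * (T - T1) / (T2 - T1)

def blend_density(T, vol_frac_bio, bio_pts, diesel_pts):
    """Kay's rule: volume-fraction-weighted average of the component densities."""
    rho_bio = linear_density(T, *bio_pts)
    rho_die = linear_density(T, *diesel_pts)
    return vol_frac_bio * rho_bio + (1.0 - vol_frac_bio) * rho_die

bio = (288.15, 0.885, 333.15, 0.852)     # (T1, rho1, T2, rho2) in K, g/cm^3, assumed
diesel = (288.15, 0.835, 333.15, 0.803)  # assumed diesel anchor points
print(f"B20 at 313 K: {blend_density(313.15, 0.20, bio, diesel):.4f} g/cm^3")
```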
Electron Affinity of Phenyl-C61-Butyric Acid Methyl Ester (PCBM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Bryon W.; Whitaker, James B.; Wang, Xue B.
2013-07-25
The gas-phase electron affinity (EA) of phenyl-C61-butyric acid methyl ester (PCBM), one of the best-performing electron acceptors in organic photovoltaic devices, is measured by low-temperature photoelectron spectroscopy for the first time. The obtained value of 2.63(1) eV is only ca. 0.05 eV lower than that of C60 (2.68(1) eV), compared to a 0.09 V difference in their E1/2 values measured in this work by cyclic voltammetry. Literature E(LUMO) values for PCBM that are typically estimated from cyclic voltammetry, and commonly used as a quantitative measure of acceptor properties, are dispersed over a wide range between -4.3 and -3.62 eV; the reasons for such a huge discrepancy are analyzed here, and a protocol for reliable and consistent estimations of relative fullerene-based acceptor strength in solution is proposed.
Tomiyama, Yuuki; Manabe, Osamu; Oyama-Manabe, Noriko; Naya, Masanao; Sugimori, Hiroyuki; Hirata, Kenji; Mori, Yuki; Tsutsui, Hiroyuki; Kudo, Kohsuke; Tamaki, Nagara; Katoh, Chietsugu
2015-09-01
To develop and validate a method for quantifying myocardial blood flow (MBF) using dynamic perfusion magnetic resonance imaging (MBFMRI) at 3.0 Tesla (T) and compare the findings with those of 15O-water positron emission tomography (MBFPET). Twenty healthy male volunteers underwent magnetic resonance imaging (MRI) and 15O-water positron emission tomography (PET) at rest and during adenosine triphosphate infusion. The single-tissue compartment model was used to estimate the inflow rate constant (K1). We estimated the extraction fraction of Gd-DTPA using K1 and MBF values obtained from 15O-water PET for the first 10 subjects. For validation, we calculated MBFMRI values for the remaining 10 subjects and compared them with the MBFPET values. In addition, we compared MBFMRI values of 10 patients with coronary artery disease with those of healthy subjects. The mean resting and stress MBFMRI values were 0.76 ± 0.10 and 3.04 ± 0.82 mL/min/g, respectively, and showed excellent correlation with the mean MBFPET values (r = 0.96, P < 0.01). The mean stress MBFMRI value was significantly lower for the patients (1.92 ± 0.37) than for the healthy subjects (P < 0.001). The use of dynamic perfusion MRI at 3T is useful for estimating MBF and can be applied for patients with coronary artery disease. © 2014 Wiley Periodicals, Inc.
A complex valued radial basis function network for equalization of fast time varying channels.
Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R
1999-01-01
This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method allows the number of RBF centers to be kept fixed even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time varying channels using a Rayleigh fading channel model to compare the performance of our RBF equalizer with that of an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
NASA Astrophysics Data System (ADS)
Preziosi, E.; Sánchez, S.; González, A. J.; Pani, R.; Borrazzo, C.; Bettiol, M.; Rodriguez-Alvarez, M. J.; González-Montoro, A.; Moliner, L.; Benlloch, J. M.
2016-12-01
One of the technical objectives of the MindView project is to develop a brain-dedicated PET insert based on monolithic scintillation crystals. It will be inserted in MRI systems with the purpose of obtaining simultaneous PET and MRI brain images. High sensitivity, high image quality, and accurate detection of the Depth-of-Interaction (DoI) of the 511 keV photons are required. We have developed a DoI estimation method dedicated to monolithic scintillators, allowing continuous DoI estimation, and a DoI-dependent algorithm for the estimation of the photon planar impact position, able to improve the single-module imaging capabilities. In this work, through experimental measurements, the proposed methods have been used for the estimation of the impact positions within the monolithic crystal block. We have evaluated the PET system performance following the NEMA NU 4-2008 protocol by reconstructing the images using the STIR 3D platform. The results obtained with two different methods, providing discrete and continuous DoI information, are compared with those obtained from an algorithm without DoI capabilities and with the ideal response of the detector. The proposed DoI-dependent imaging methods show clear improvements in the spatial resolution (FWHM) of reconstructed images, yielding values from 2 mm (at the center of the FoV) to 3 mm (at the FoV edges).
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimation of the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of the nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence making a false estimation of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, one kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic search algorithm based on the population, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimation presented by PSO-CNOP is closer to the true value than the one by ADJ-CNOP with the forecast time increasing.
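A minimal PSO sketch of the kind described, maximizing an objective over perturbations constrained to a norm ball (handled here by projection). The objective below is a stand-in, not the Ikeda-model cost function, and all hyperparameters are illustrative.

```python
import numpy as np

def pso_maximize(f, dim, radius, n_particles=30, iters=200, seed=0):
    """Particle swarm search for the constrained maximizer of f (CNOP-style)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-radius, radius, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmax(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        norms = np.linalg.norm(x, axis=1, keepdims=True)
        x = np.where(norms > radius, x * radius / norms, x)  # project onto the ball
        val = np.array([f(p) for p in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[np.argmax(pval)]
    return gbest, pval.max()

best, value = pso_maximize(lambda p: np.sin(5 * p[0]) + p[1] ** 2, dim=2, radius=1.0)
print("constrained maximizer:", best.round(3), "objective:", round(value, 3))
```

Because the swarm explores the whole constraint domain rather than following a single gradient path, it is not trapped by the local extrema that defeat the adjoint-based search, which is the abstract's central point.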
Idris, Ali Mohamed; Vani, Nandimandalam Venkata; Almutari, Dhafi A; Jafar, Mohammed A; Boreak, Nezar
2016-12-01
To determine the amount of sugar and pH in commercially available soft drinks in Jazan, Saudi Arabia. This was further compared with their labeled values in order to inform the regulations. The effects of these drinks on teeth are reviewed. Ten brands of popular soft drinks, including 6 regular carbonated drinks and 4 energy drinks, were obtained from the local markets. Their pH was determined using a pH meter. The amount of total sugar, glucose, fructose, and sucrose was estimated using high performance liquid chromatography (using Dionex ICS 5000 ion chromatography) at the Saudi Food and Drug Authority. Descriptive statistics were used to obtain the mean and standard deviation. Intergroup comparison was performed using the independent t-test, and the labeled and estimated values within the group were compared with the paired t-test. The labeled and estimated sugar in energy drinks (14.3 ± 0.48 and 15.6 ± 2.3, respectively) were higher than in the carbonated drinks (11.2 ± 0.46 and 12.8 ± 0.99), which was statistically significant. In addition, there was a significant difference in the concentration of glucose in energy drinks (5.7 ± 1.7) compared to carbonated drinks (4.1 ± 1.4). The pH of these drinks ranged from 2.4 to 3.2. The differences between the estimated and labeled sugar in carbonated drinks showed statistical significance. Mild variation was observed in total sugar, glucose, fructose, and sucrose levels among different bottles of the same brand of these drinks. The low pH and high sugar content in these drinks are detrimental to dental health. Comparison of the estimated sugar with their labeled values showed variation in most of the brands. Preventive strategies should be implemented to reduce the health risks posed by these soft drinks. PMID:28217536
Feasibility test of a solid state spin-scan photo-imaging system
NASA Technical Reports Server (NTRS)
Laverty, N. P.
1973-01-01
The feasibility of using a solid-state photo-imaging system to obtain high-resolution imagery from a Pioneer-type spinning spacecraft in future exploratory missions to the outer planets is discussed. Evaluation of the photo-imaging system performance, based on analysis of the electrical video signal recorded on magnetic tape, shows that the signal-to-noise (S/N) ratios obtained at low spatial frequencies exceed the anticipated performance and that the measured modulation transfer functions exhibited some degradation in comparison with the estimated values, primarily owing to the difficulty of obtaining a precise focus of the optical system in the laboratory with the test patterns in close proximity to the objective lens. A preliminary flight-model design of the photo-imaging system is developed based on the use of currently available phototransistor arrays. Estimates of the image quality that will be obtained are presented in terms of S/N ratios and spatial resolution for the various planets and satellites. Parametric design tradeoffs are also defined.
Richard, Vincent; Lamberto, Giuliano; Lu, Tung-Wu; Cappozzo, Aurelio; Dumas, Raphaël
2016-01-01
The use of multi-body optimisation (MBO) to estimate joint kinematics from stereophotogrammetric data while compensating for soft tissue artefact is still open to debate. Presently used joint models embedded in MBO, such as mechanical linkages, constitute a considerable simplification of joint function, preventing a detailed understanding of it. The present study proposes a knee joint model where femur and tibia are represented as rigid bodies connected through an elastic element the behaviour of which is described by a single stiffness matrix. The deformation energy, computed from the stiffness matrix and joint angles and displacements, is minimised within the MBO. Implemented as a "soft" constraint using a penalty-based method, this elastic joint description challenges the strictness of "hard" constraints. In this study, estimates of knee kinematics obtained using MBO embedding four different knee joint models (i.e., no constraints, spherical joint, parallel mechanism, and elastic joint) were compared against reference kinematics measured using bi-planar fluoroscopy on two healthy subjects ascending stairs. Bland-Altman analysis and sensitivity analysis investigating the influence of variations in the stiffness matrix terms on the estimated kinematics substantiate the conclusions. The difference between the reference knee joint angles and displacements and the corresponding estimates obtained using MBO embedding the stiffness matrix showed an average bias and standard deviation for kinematics of 0.9±3.2° and 1.6±2.3 mm. These values were lower than when no joint constraints (1.1±3.8°, 2.4±4.1 mm) or a parallel mechanism (7.7±3.6°, 1.6±1.7 mm) were used and were comparable to the values obtained with a spherical joint (1.0±3.2°, 1.3±1.9 mm). The study demonstrated the feasibility of substituting an elastic joint for more classic joint constraints in MBO.