Sample records for standard estimation methods

  1. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

    Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
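
    A common closed-form version of this idea assumes the untransformed variable is approximately lognormal, in which case the standard deviation on the log scale follows from the coefficient of variation. The sketch below is only an illustration of that method-of-moments calculation, not the authors' exact estimator, and the numbers are made up.

    ```python
    # Sketch, assuming X is approximately lognormal: estimate SD(log X) from the
    # arithmetic mean m and standard deviation s of the untransformed variable.
    import math

    def sd_of_log(m: float, s: float) -> float:
        cv2 = (s / m) ** 2                      # squared coefficient of variation
        return math.sqrt(math.log(1.0 + cv2))   # method-of-moments SD on the log scale

    print(sd_of_log(m=10.0, s=4.0))             # about 0.385
    ```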

  2. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
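
    As a rough illustration of the ABC idea (not the authors' implementation), the sketch below recovers a plausible mean and standard deviation from a reported median, minimum, maximum, and sample size by simulating candidate normal datasets and keeping the parameter draws whose summary statistics land closest to the reported values; the priors, distance, and number of draws are arbitrary choices.

    ```python
    # Minimal ABC-rejection sketch: infer (mean, SD) from (median, min, max, n),
    # assuming roughly normal data. Tolerances and priors are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def abc_mean_sd(median, minimum, maximum, n, draws=20000, keep=200):
        target = np.array([median, minimum, maximum])
        scale = maximum - minimum
        mu_prior = rng.uniform(minimum, maximum, draws)   # broad, crude priors
        sd_prior = rng.uniform(1e-6, scale, draws)
        dist = np.empty(draws)
        for i in range(draws):
            x = rng.normal(mu_prior[i], sd_prior[i], n)
            summ = np.array([np.median(x), x.min(), x.max()])
            dist[i] = np.linalg.norm((summ - target) / scale)
        idx = np.argsort(dist)[:keep]                     # keep the closest simulations
        return mu_prior[idx].mean(), sd_prior[idx].mean()

    print(abc_mean_sd(median=50, minimum=20, maximum=85, n=40))
    ```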

  3. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    PubMed

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
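
    For reference, the range- and quartile-based estimators commonly cited from this line of work (under an approximate-normality assumption) can be coded directly. The version below is a hedged sketch and should be checked against the paper's summary table before use in an actual meta-analysis.

    ```python
    # Sketch of range/IQR-based estimators of the sample mean and SD, assuming
    # approximately normal data (verify the constants against the original paper).
    from scipy.stats import norm

    def mean_sd_from_median_range(a, m, b, n):
        """Estimate (mean, SD) from minimum a, median m, maximum b, sample size n."""
        mean = (a + 2 * m + b) / 4.0
        sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
        return mean, sd

    def mean_sd_from_quartiles(q1, m, q3, n):
        """Estimate (mean, SD) from the first quartile, median, and third quartile."""
        mean = (q1 + m + q3) / 3.0
        sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
        return mean, sd

    print(mean_sd_from_median_range(a=12, m=30, b=55, n=50))
    print(mean_sd_from_quartiles(q1=22, m=30, q3=40, n=50))
    ```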

  4. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
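
    The infinitesimal jackknife builds a standard error from empirical influence values. The toy sketch below shows the idea for a simple statistic (the sample mean), where the influence values are known in closed form; it only illustrates the principle and is not the covariance-structure machinery discussed in this record.

    ```python
    # Illustrative sketch: infinitesimal-jackknife standard error of the sample mean,
    # computed from empirical influence values (x_i - xbar).
    import numpy as np

    def ij_se_of_mean(x):
        x = np.asarray(x, dtype=float)
        n = x.size
        influence = x - x.mean()                 # empirical influence values for the mean
        return np.sqrt(np.sum(influence ** 2)) / n

    rng = np.random.default_rng(1)
    data = rng.normal(10, 3, size=200)
    # compare with the usual s / sqrt(n); the two should be very close
    print(ij_se_of_mean(data), data.std(ddof=1) / np.sqrt(len(data)))
    ```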

  5. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe statistical methods that are applied for the management, control, and improvement of processes and for the analysis of technical measurement results. This paper analyzes those international standards and guides on statistical methods for the estimation of measurement results and gives recommendations for their application in laboratories. To support the analysis of the standards and guides, cause-and-effect (Ishikawa) diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  6. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
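
    A minimal sketch of the two ingredients being compared here: a ratio-of-means point estimate and a plot-level nonparametric bootstrap of its standard error. The variable names and simulated "plot" data are illustrative and are not the inventory data used in the report.

    ```python
    # Sketch: ratio-of-means (ROM) estimator with a simple bootstrap standard error.
    import numpy as np

    def rom(y, x):
        return np.sum(y) / np.sum(x)

    def bootstrap_se_rom(y, x, reps=2000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)
        stats = np.empty(reps)
        for b in range(reps):
            idx = rng.integers(0, n, n)        # resample plots with replacement
            stats[b] = rom(y[idx], x[idx])
        return stats.std(ddof=1)

    rng = np.random.default_rng(2)
    x = rng.uniform(0.5, 2.0, 80)              # e.g. mapped plot areas (illustrative)
    y = 5.0 * x + rng.normal(0, 1.0, 80)       # e.g. plot-level attribute of interest
    print(rom(y, x), bootstrap_se_rom(y, x))
    ```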

  7. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…

  8. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics when estimating from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined, and example data sets are analyzed.

  9. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  11. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    PubMed

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
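
    To make the comparison concrete, the sketch below computes a power spectral density for a noisy test signal with Welch's periodogram and with a basic Thomson multitaper estimate assembled from DPSS tapers. The sampling rate, taper parameters, and test signal are illustrative assumptions, unrelated to the study's ultrasound data.

    ```python
    # Sketch: Welch vs. a basic multitaper PSD estimate for a noisy sinusoid.
    import numpy as np
    from scipy.signal import welch
    from scipy.signal.windows import dpss

    fs = 40e6                                      # assumed sampling rate (illustrative)
    t = np.arange(4096) / fs
    rng = np.random.default_rng(3)
    x = np.sin(2 * np.pi * 5e6 * t) + 0.5 * rng.standard_normal(t.size)

    # Welch periodogram
    f_w, p_w = welch(x, fs=fs, nperseg=512)

    # Multitaper: average periodograms over K orthogonal DPSS tapers (rough scaling)
    NW, K = 4, 7
    tapers = dpss(x.size, NW, K)
    p_mt = np.mean([np.abs(np.fft.rfft(tap * x)) ** 2 for tap in tapers], axis=0) / fs
    f_mt = np.fft.rfftfreq(x.size, d=1 / fs)
    print(p_w.shape, p_mt.shape)
    ```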

  12. Comparison of estimators of standard deviation for hydrologic time series

    USGS Publications Warehouse

    Tasker, Gary D.; Gilroy, Edward J.

    1982-01-01

    Unbiasing factors as a function of serial correlation, ρ, and sample size, n, for the sample standard deviation of a lag one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ: s = [∑(xi − x̄)²/(n − 1)]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
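
    A quick Monte Carlo sketch of the point made here: for a lag-one autoregressive series, the usual sample standard deviation s tends to underestimate σ when the serial correlation ρ is positive. The sample size, ρ, and number of replicates below are illustrative, not those of the study.

    ```python
    # Simulate AR(1) series and show that the mean of s falls below the true sigma.
    import numpy as np

    def simulate_ar1(n, rho, sigma, rng):
        e = rng.normal(0, sigma * np.sqrt(1 - rho ** 2), n)   # stationary innovations
        x = np.empty(n)
        x[0] = rng.normal(0, sigma)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + e[t]
        return x

    rng = np.random.default_rng(4)
    n, rho, sigma = 20, 0.6, 1.0
    s_vals = [simulate_ar1(n, rho, sigma, rng).std(ddof=1) for _ in range(5000)]
    print(np.mean(s_vals))   # noticeably below sigma = 1.0
    ```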

  13. Apparent annual survival estimates of tropical songbirds better reflect life history variation when based on intensive field methods

    USGS Publications Warehouse

    Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.

    2017-01-01

    Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species. Main conclusions: Inclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.

  14. Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures

    DTIC Science & Technology

    2016-08-10

    USARIEM Technical Report T16-14: Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures. Adam W. Potter, Biophysics and Biomedical Modeling Division, U.S. Army Research Institute of Environmental Medicine.

  15. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
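
    The cluster-bootstrap scheme itself is simple to state: resample whole clusters with replacement, refit the Cox model on each resample, and take the standard deviation of the coefficients across resamples. The sketch below shows only the resampling step; fit_cox is a placeholder for whatever Cox fitting routine is in use (for example, a lifelines model wrapped in a function) and is not the authors' code.

    ```python
    # Sketch of a cluster bootstrap for standard errors of regression coefficients.
    import numpy as np
    import pandas as pd

    def cluster_bootstrap_se(df, cluster_col, fit_cox, reps=500, seed=0):
        rng = np.random.default_rng(seed)
        clusters = df[cluster_col].unique()
        estimates = []
        for _ in range(reps):
            chosen = rng.choice(clusters, size=len(clusters), replace=True)
            # concatenate the chosen clusters, duplicating a cluster if drawn twice
            boot = pd.concat([df[df[cluster_col] == c] for c in chosen],
                             ignore_index=True)
            estimates.append(fit_cox(boot))       # fit_cox returns a coefficient vector
        return np.std(np.vstack(estimates), axis=0, ddof=1)
    ```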

  16. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  17. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
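
    The weight construction described here is easy to express in code: fit a propensity model, then weight each subject by the inverse probability of the treatment actually received. The sketch below uses a logistic propensity model from scikit-learn as an illustration; it is not the simulation code from the paper, and the weighted Cox fit and the competing variance estimators are not shown.

    ```python
    # Sketch of IPTW (ATE) weight construction; stabilization, trimming, and the
    # weighted outcome model are omitted.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def iptw_weights(X, treated):
        """X: covariate matrix; treated: 0/1 treatment indicator."""
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        return np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))

    # tiny demonstration with simulated data
    rng = np.random.default_rng(7)
    X = rng.normal(size=(500, 2))
    treated = (rng.uniform(size=500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
    print(iptw_weights(X, treated).mean())
    ```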

  18. Estimates of monthly streamflow characteristics at selected sites in the upper Missouri River basin, Montana, base period water years 1937-86

    USGS Publications Warehouse

    Parrett, Charles; Johnson, D.R.; Hull, J.A.

    1989-01-01

    Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined, the standard errors ranged from 24% to 63%. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimates were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow record from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
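
    Weighting estimates "in accordance with the variance of individual methods" is, in its simplest form, inverse-variance weighting. The short sketch below shows that combination rule with made-up numbers; the report's actual weights also account for correlation between methods, which is not modeled here.

    ```python
    # Sketch: combine independent estimates of the same quantity by inverse-variance
    # weighting (illustrative values; assumes the method errors are independent).
    import numpy as np

    estimates = np.array([120.0, 150.0, 135.0])   # e.g. flow estimates from 3 methods
    se = np.array([40.0, 55.0, 45.0])             # their standard errors

    w = 1.0 / se ** 2
    combined = np.sum(w * estimates) / np.sum(w)
    combined_se = np.sqrt(1.0 / np.sum(w))
    print(combined, combined_se)
    ```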

  19. Toward unbiased estimations of the statefinder parameters

    NASA Astrophysics Data System (ADS)

    Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando

    2017-09-01

    With the use of simulated supernova catalogs, we show that the statefinder parameters turn out to be estimated poorly, and with substantial bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large, making standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists of introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard cosmographic approach in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.

  20. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  1. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.

  2. Assessing network scale-up estimates for groups most at risk of HIV/AIDS: evidence from a multiple-method study of heavy drug users in Curitiba, Brazil.

    PubMed

    Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I

    2011-11-15

    One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
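
    For context, the basic network scale-up estimator is a simple ratio; the generalized version and the adjustments actually compared in the study add corrections not shown here. A hedged sketch with made-up numbers:

    ```python
    # Basic network scale-up estimator: N_hidden ≈ N_total * (sum of contacts reported
    # in the hidden group) / (sum of respondents' personal network sizes).
    import numpy as np

    def nsum_estimate(known_in_group, network_size, total_population):
        return total_population * np.sum(known_in_group) / np.sum(network_size)

    print(nsum_estimate(known_in_group=[0, 1, 2, 0, 3],
                        network_size=[250, 300, 400, 150, 500],
                        total_population=1_800_000))
    ```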

  3. An accurate estimation method of kinematic viscosity for standard viscosity liquids

    NASA Astrophysics Data System (ADS)

    Kurano, Y.; Kobayashi, H.; Yoshida, K.; Imai, H.

    1992-07-01

    Deming's method of least squares is introduced to make an accurate kinematic viscosity estimation for a series of 13 standard-viscosity liquids at any desired temperature. The empirical ASTM kinematic viscosity-temperature equation is represented in the form log log(v + c) = a − b log T, where v (in mm²·s⁻¹) is the kinematic viscosity at temperature T (in K), a and b are the constants for a given liquid, and c has a variable value. In the present application, however, c is assumed to have a constant value for each standard-viscosity liquid, as do a and b in the ASTM equation. This assumption has since been verified experimentally for all standard-viscosity liquids. The kinematic viscosities for the 13 standard-viscosity liquids have been measured with a high accuracy in the temperature range of 20–40°C using a series of the NRLM capillary master viscometers with an automatic flow time detection system. The deviations between measured and estimated kinematic viscosities were less than ±0.04% for the 10 standard-viscosity liquids JS2.5 to JS2000 and ±0.11% for the 3 standard-viscosity liquids JS15H to JS200H, respectively. From the above investigation, it was revealed that the uncertainty in the present estimation method is less than one-third that in the usual ASTM method.
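
    With c held fixed for a given liquid, as the record assumes, the ASTM-style relation becomes linear in log T after the double-log transformation and can be fitted by ordinary least squares. The sketch below uses base-10 logarithms and made-up data; it is not the Deming least-squares procedure applied by the authors, which also accounts for errors in both variables.

    ```python
    # Sketch: fit log10(log10(v + c)) = a - b*log10(T) with c fixed, then interpolate.
    import numpy as np

    T = np.array([293.15, 298.15, 303.15, 308.15, 313.15])   # temperature, K
    v = np.array([9.8, 8.7, 7.8, 7.0, 6.3])                   # kinematic viscosity, mm^2/s
    c = 0.7                                                    # assumed constant for this liquid

    y = np.log10(np.log10(v + c))
    slope, a = np.polyfit(np.log10(T), y, 1)                   # least-squares line
    b = -slope                                                 # match the a - b*log10(T) convention
    v_est = 10 ** (10 ** (a - b * np.log10(305.0))) - c        # interpolate at 305 K
    print(a, b, v_est)
    ```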

  4. An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.

    PubMed

    Obuchowski, Nancy A

    2006-02-15

    ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.

  5. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives: We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design: We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application: Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions: In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
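
    As a concrete illustration of two of the approaches named here, the sketch below computes the standard error of a function of estimated parameters by the delta method and by Krinsky–Robb simulation. The coefficient vector and covariance matrix are made-up stand-ins for output from a fitted model, and the bootstrap variant is omitted.

    ```python
    # Sketch: SE of g(beta) = exp(beta0 + beta1) via the delta method and Krinsky-Robb.
    import numpy as np

    beta_hat = np.array([0.8, -1.2])                   # pretend fitted coefficients
    V = np.array([[0.04, 0.01], [0.01, 0.09]])         # pretend covariance matrix

    def g(beta):
        return np.exp(beta[0] + beta[1])

    # Delta method: sqrt(grad' V grad), with grad of exp(b0 + b1) = g * [1, 1]
    grad = g(beta_hat) * np.array([1.0, 1.0])
    se_delta = np.sqrt(grad @ V @ grad)

    # Krinsky-Robb: simulate parameters from N(beta_hat, V) and take the SD of g
    rng = np.random.default_rng(5)
    draws = rng.multivariate_normal(beta_hat, V, size=20000)
    se_kr = np.std(np.exp(draws.sum(axis=1)), ddof=1)
    print(se_delta, se_kr)
    ```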

  6. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

    An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful to estimate metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by valid LC-MS/MS assays. This approach does not require synthesis of authentic metabolites or knowledge of their exact structures, and therefore should provide a useful method to obtain early estimates of circulating metabolites in early clinical or toxicological studies.

  7. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  8. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.
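
    The sketch below illustrates the general Bayesian-inversion idea with a toy Metropolis sampler for a single parameter and an invented forward model; it is not the trans-dimensional Markov chain Monte Carlo algorithm used in the study, and every quantity in it is a made-up assumption. The point is only that sampling the posterior yields a distribution of acceptable models, and hence an uncertainty estimate, rather than a single best fit.

    ```python
    # Toy 1-parameter Metropolis sampler: posterior of m given noisy data d = g(m) + noise.
    import numpy as np

    rng = np.random.default_rng(6)
    m_true = 2.0
    def g(m):                       # toy forward model (illustrative)
        return np.array([m, m ** 2, np.exp(0.3 * m)])
    d = g(m_true) + rng.normal(0, 0.1, 3)

    def log_post(m):
        if not (0.0 < m < 10.0):    # uniform prior bounds
            return -np.inf
        r = (d - g(m)) / 0.1        # assumed noise level
        return -0.5 * np.dot(r, r)

    samples, m = [], 5.0
    for _ in range(20000):
        prop = m + rng.normal(0, 0.2)
        if np.log(rng.uniform()) < log_post(prop) - log_post(m):
            m = prop
        samples.append(m)
    print(np.mean(samples[5000:]), np.std(samples[5000:]))   # posterior mean and spread
    ```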

  9. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    PubMed

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Parameter estimation in Cox models with missing failure indicators and the OPPERA study.

    PubMed

    Brownstein, Naomi C; Cai, Jianwen; Slade, Gary D; Bair, Eric

    2015-12-30

    In a prospective cohort study, examining all participants for incidence of the condition of interest may be prohibitively expensive. For example, the "gold standard" for diagnosing temporomandibular disorder (TMD) is a physical examination by a trained clinician. In large studies, examining all participants in this manner is infeasible. Instead, it is common to use questionnaires to screen for incidence of TMD and perform the "gold standard" examination only on participants who screen positively. Unfortunately, some participants may leave the study before receiving the "gold standard" examination. Within the framework of survival analysis, this results in missing failure indicators. Motivated by the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, a large cohort study of TMD, we propose a method for parameter estimation in survival models with missing failure indicators. We estimate the probability of being an incident case for those lacking a "gold standard" examination using logistic regression. These estimated probabilities are used to generate multiple imputations of case status for each missing examination that are combined with observed data in appropriate regression models. The variance introduced by the procedure is estimated using multiple imputation. The method can be used to estimate both regression coefficients in Cox proportional hazard models as well as incidence rates using Poisson regression. We simulate data with missing failure indicators and show that our method performs as well as or better than competing methods. Finally, we apply the proposed method to data from the OPPERA study. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Product line cost estimation: a standard cost approach.

    PubMed

    Cooper, J C; Suver, J D

    1988-04-01

    Product line managers often must make decisions based on inaccurate cost information. A method is needed to determine costs more accurately. By using a standard costing model, product line managers can better estimate the cost of intermediate and end products, and hence better estimate the costs of the product line.

  12. Estimating alcohol content of traditional brew in Western Kenya using culturally relevant methods: the case for cost over volume.

    PubMed

    Papas, Rebecca K; Sidle, John E; Wamalwa, Emmanuel S; Okumu, Thomas O; Bryant, Kendall L; Goulet, Joseph L; Maisto, Stephen A; Braithwaite, R Scott; Justice, Amy C

    2010-08-01

    Traditional homemade brew is believed to represent the highest proportion of alcohol use in sub-Saharan Africa. In Eldoret, Kenya, two types of brew are common: chang'aa, a distilled spirit, and busaa, a maize beer. Local residents refer to the amount of brew consumed by the amount of money spent, suggesting a culturally relevant estimation method. The purposes of this study were to analyze the ethanol content of chang'aa and busaa and to compare two methods of alcohol estimation: use by cost and use by volume, the latter being the current international standard. Laboratory results showed mean ethanol content was 34% (SD = 14%) for chang'aa and 4% (SD = 1%) for busaa. Standard drink unit equivalents for chang'aa and busaa, respectively, were 2 and 1.3 (US) and 3.5 and 2.3 (Great Britain). In a computational comparison, both methods produced comparable results. We conclude that cost estimation of alcohol content is more culturally relevant and does not differ in accuracy from the international standard.
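
    The conversion underlying both estimation methods is simple arithmetic: grams of ethanol from volume and percent alcohol by volume, then division by the grams of ethanol per standard drink in the relevant country. The sketch below assumes the widely quoted figures of roughly 14 g (US) and 8 g (UK) of ethanol per standard drink and an ethanol density of about 0.789 g/mL; the serving sizes are illustrative, not values from the study.

    ```python
    # Sketch: convert a served volume of brew to standard drink equivalents.
    ETHANOL_DENSITY = 0.789   # g/mL (approximate)

    def standard_drinks(volume_ml, percent_abv, grams_per_drink):
        grams_ethanol = volume_ml * (percent_abv / 100.0) * ETHANOL_DENSITY
        return grams_ethanol / grams_per_drink

    # e.g. a ~100 mL serving of chang'aa at 34% ABV, and ~500 mL of busaa at 4% ABV
    print(standard_drinks(100, 34, 14), standard_drinks(100, 34, 8))   # US, UK
    print(standard_drinks(500, 4, 14), standard_drinks(500, 4, 8))     # US, UK
    ```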

  13. Oxygen transfer rate estimation in oxidation ditches from clean water measurements.

    PubMed

    Abusam, A; Keesman, K J; Meinema, K; Van Straten, G

    2001-06-01

    Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).

  14. Simultaneous Estimation of Ofloxacin, Clotrimazole, and Lignocaine Hydrochloride in Their Combined Ear-Drop Formulation by Two Spectrophotometric Methods.

    PubMed

    Bodiwala, Kunjan; Shah, Shailesh; Patel, Yogini; Prajapati, Pintu; Marolia, Bhavin; Kalyankar, Gajanan

    2017-01-01

    Two sensitive, accurate, and precise spectrophotometric methods have been developed and validated for the simultaneous estimation of ofloxacin (OFX), clotrimazole (CLZ), and lignocaine hydrochloride (LGN) in their combined dosage form (ear drops) without prior separation. The derivative ratio spectra method (method 1) includes the measurement of OFX and CLZ at zero-crossing points (ZCPs) of each other obtained from the ratio derivative spectra using standard LGN as a divisor, whereas the measurement of LGN at the ZCP of CLZ is obtained from the ratio derivative spectra using standard OFX as a divisor. The double divisor-ratio derivative method (method 2) includes the measurement of each drug at its amplitude in the double divisor-ratio spectra obtained using a standard mixture of the other two drugs as the divisor. Both methods were found to be linear (correlation coefficients of >0.996) over the ranges of 3-15, 10-50, and 20-100 μg/mL for OFX, CLZ, and LGN, respectively; precise (RSD of <2%); and accurate (recovery of >98%) for the estimation of each drug. The developed methods were successfully applied for the estimation of these drugs in a marketed ear-drop formulation. Excipients and other ingredients did not interfere with the estimation of these drugs. Both methods were statistically compared using the t-test.

  15. Validation of five minimally obstructive methods to estimate physical activity energy expenditure in young adults in semi-standardized settings.

    PubMed

    Schneller, Mikkel B; Pedersen, Mogens T; Gupta, Nidhi; Aadahl, Mette; Holtermann, Andreas

    2015-03-13

    We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry, to estimate total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen participants performed a standardized and semi-standardized protocol including seven daily life activity types, while having their EE measured by indirect calorimetry. Simultaneously, physical activity was quantified by an ActivPAL3, two ActiGraph GT3X+ devices and an Actiheart. EE was estimated by the standard ActivPAL3 software (ActivPAL), ActiGraph GT3X+ (ActiGraph) and Actiheart (Actiheart), and by a combination of activity type recognition via Acti4 software and activity counts per minute (CPM) of either a hip- or thigh-worn ActiGraph GT3X+ (AGhip + Acti4 and AGthigh + Acti4). At group level, estimated physical activity EE by Actiheart (MSE = 2.05) and AGthigh + Acti4 (MSE = 0.25) were not significantly different from measured EE by indirect calorimetry, whereas EE was significantly underestimated by ActiGraph, ActivPAL and AGhip + Acti4. AGthigh + Acti4 and Actiheart explained 77% and 45% of the individual variation in measured physical activity EE by indirect calorimetry, respectively. This study concludes that combining accelerometer data from a thigh-worn ActiGraph GT3X+ with activity type recognition improved the accuracy of activity-specific EE estimation against indirect calorimetry in semi-standardized settings compared to previously validated methods using CPM only.

  16. Consistent Estimation of Gibbs Energy Using Component Contributions

    PubMed Central

    Milo, Ron; Fleming, Ronan M. T.

    2013-01-01

    Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism. PMID:23874165

  17. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    PubMed Central

    Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231

  18. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  19. A Tool for Estimating Variability in Wood Preservative Treatment Retention

    Treesearch

    Patricia K. Lebow; Adam M. Taylor; Timothy M. Young

    2015-01-01

    Composite sampling is standard practice for evaluation of preservative retention levels in preservative-treated wood. Current protocols provide an average retention value but no estimate of uncertainty. Here we describe a statistical method for calculating uncertainty estimates using the standard sampling regime with minimal additional chemical analysis. This tool can...

  20. Methods for determining time of death.

    PubMed

    Madea, Burkhard

    2016-12-01

    Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.

  1. Robust power spectral estimation for EEG data

    PubMed Central

    Melman, Tamar; Victor, Jonathan D.

    2016-01-01

    Background: Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. New method: Using the multitaper method [1] as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Results: Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. Comparison to existing method: The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. Conclusion: In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. PMID:27102041
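
    As a rough illustration of the quantile-based idea (not the published Chronux extension; the function and parameter names below are chosen purely for illustration), a minimal multitaper sketch might replace the usual mean across tapers with a median:

      import numpy as np
      from scipy.signal.windows import dpss

      def robust_multitaper_psd(x, fs=1.0, nw=4.0, n_tapers=7, q=0.5):
          # Eigenspectra from DPSS (Slepian) tapers, as in the standard multitaper method.
          x = np.asarray(x, dtype=float) - np.mean(x)
          n = x.size
          tapers = dpss(n, nw, n_tapers)                      # shape (n_tapers, n)
          eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / fs
          # Replace the usual mean across tapers with a quantile (median by default),
          # which is far less sensitive to large intermittent outliers.
          psd = np.quantile(eigenspectra, q, axis=0)
          if q == 0.5:
              # The median of an (approximately) exponential eigenspectrum underestimates
              # its mean by a factor of ln 2; rescale so clean data match the mean-based PSD.
              psd = psd / np.log(2.0)
          freqs = np.fft.rfftfreq(n, d=1.0 / fs)
          return freqs, psd

      # Example: 2 s of a 10 Hz sinusoid sampled at 250 Hz with one artifact burst.
      fs = 250.0
      t = np.arange(0, 2, 1 / fs)
      x = np.sin(2 * np.pi * 10 * t)
      x[100:110] += 50.0                                      # large intermittent outlier
      f, p = robust_multitaper_psd(x, fs=fs)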

  2. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.

  3. Using flow cytometry to estimate pollen DNA content: improved methodology and applications

    PubMed Central

    Kron, Paul; Husband, Brian C.

    2012-01-01

    Background and Aims: Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods: The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results: Data quality met generally applied standards for estimating genome size in 81 % of species and the higher best practice standards for cell cycle analysis in 51 %. In 41 % of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1·5 % or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2·5 %. Conclusions: The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. PMID:22875815

  4. Daily sodium and potassium excretion can be estimated by scheduled spot urine collections.

    PubMed

    Doenyas-Barak, Keren; Beberashvili, Ilia; Bar-Chaim, Adina; Averbukh, Zhan; Vogel, Ofir; Efrati, Shai

    2015-01-01

    The evaluation of sodium and potassium intake is part of the optimal management of hypertension, metabolic syndrome, renal stones, and other conditions. To date, no convenient method for its evaluation exists, as the gold standard method of 24-hour urine collection is cumbersome and often incorrectly performed, and methods that use spot or shorter collections are not accurate enough to replace the gold standard. The aim of this study was to evaluate the correlation and agreement between a new method that uses multiple-scheduled spot urine collection and the gold standard method of 24-hour urine collection. The urine sodium or potassium to creatinine ratios were determined for four scheduled spot urine samples. The mean ratios of the four spot samples and the ratios of each of the single spot samples were corrected for estimated creatinine excretion and compared to the gold standard. A significant linear correlation was demonstrated between the 24-hour urinary solute excretions and estimated excretion evaluated by any of the scheduled spot urine samples. The correlation of the mean of the four spots was better than for any of the single spots. Bland-Altman plots showed that the differences between these measurements were within the limits of agreement. Four scheduled spot urine samples can be used as a convenient method for estimation of 24-hour sodium or potassium excretion. © 2015 S. Karger AG, Basel.
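
    A minimal arithmetic sketch of the ratio-based scaling described above is given below; the weight-based creatinine coefficient and all numbers are illustrative assumptions, not values taken from the study:

      def estimate_24h_excretion(spot_solute_mmol_per_l, spot_creatinine_mmol_per_l,
                                 est_24h_creatinine_mmol):
          """Scale a spot solute/creatinine ratio by the estimated daily creatinine output."""
          ratio = spot_solute_mmol_per_l / spot_creatinine_mmol_per_l
          return ratio * est_24h_creatinine_mmol

      # Example (all numbers illustrative): average the ratios of four scheduled spot samples
      # before scaling, since the study found the four-sample mean tracked 24-hour
      # collections more closely than any single spot sample.
      spot_na = [110.0, 95.0, 120.0, 100.0]          # mmol/L sodium in four spot samples
      spot_cr = [9.0, 7.5, 10.0, 8.5]                # mmol/L creatinine in the same samples
      mean_ratio = sum(na / cr for na, cr in zip(spot_na, spot_cr)) / len(spot_na)
      est_24h_cr = 0.2 * 70                          # assumed ~0.2 mmol/kg/day for a 70 kg adult
      print(mean_ratio * est_24h_cr)                 # estimated 24-hour sodium excretion, mmol/day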

  5. Variability of pesticide detections and concentrations in field replicate water samples collected for the National Water-Quality Assessment Program, 1992-97

    USGS Publications Warehouse

    Martin, Jeffrey D.

    2002-01-01

    Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
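
    The pooling itself is straightforward. A minimal sketch (variable names are illustrative, and the weighting convention is one common choice rather than necessarily the report's) weights each replicate set's squared relative standard deviation by its degrees of freedom:

      import numpy as np

      def pooled_rsd(replicate_sets):
          """Pool relative standard deviations across sets of field replicates.

          Each set holds the measured concentrations for one environmental sample.
          Pooling is done on the squared relative deviations, weighted by degrees
          of freedom, which summarizes variability across all replicate sets.
          """
          num = 0.0   # running sum of (n_i - 1) * rsd_i**2
          den = 0     # running sum of (n_i - 1)
          for conc in replicate_sets:
              conc = np.asarray(conc, dtype=float)
              n, mean = conc.size, conc.mean()
              if n < 2 or mean <= 0:
                  continue              # a single replicate carries no variability information
              rsd = conc.std(ddof=1) / mean
              num += (n - 1) * rsd ** 2
              den += n - 1
          return np.sqrt(num / den)

      # Example with three replicate pairs (concentrations in micrograms per liter):
      print(pooled_rsd([[0.011, 0.013], [0.095, 0.102], [1.10, 1.02]]))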

  6. COMPARISON OF TAXONOMIC, COLONY MORPHOTYPE AND PCR-RFLP METHODS TO CHARACTERIZE MICROFUNGAL DIVERSITY

    EPA Science Inventory

    We compared three methods for estimating fungal species diversity in soil samples. A rapid screening method based on gross colony morphological features and color reference standards was compared with traditional fungal taxonomic methods and PCR-RFLP for estimation of ecological ...

  7. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.

  8. Estimating Alcohol Content of Traditional Brew in Western Kenya Using Culturally Relevant Methods: The Case for Cost Over Volume

    PubMed Central

    Sidle, John E.; Wamalwa, Emmanuel S.; Okumu, Thomas O.; Bryant, Kendall L.; Goulet, Joseph L.; Maisto, Stephen A.; Braithwaite, R. Scott; Justice, Amy C.

    2010-01-01

    Traditional homemade brew is believed to represent the highest proportion of alcohol use in sub-Saharan Africa. In Eldoret, Kenya, two types of brew are common: chang’aa, a distilled spirit, and busaa, a maize beer. Local residents describe the amount of brew consumed by the amount of money spent, suggesting a culturally relevant estimation method. The purposes of this study were to analyze the ethanol content of chang’aa and busaa, and to compare two methods of alcohol estimation: use by cost and use by volume, the latter being the current international standard. Laboratory results showed mean ethanol content was 34% (SD = 14%) for chang’aa and 4% (SD = 1%) for busaa. Standard drink unit equivalents for chang’aa and busaa, respectively, were 2 and 1.3 (US) and 3.5 and 2.3 (Great Britain). Using a computational approach, both methods demonstrated comparable results. We conclude that cost estimation of alcohol content is more culturally relevant and does not differ in accuracy from the international standard. PMID:19015972
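
    The volume-to-standard-drink conversion rests on simple arithmetic. A minimal sketch is shown below; it uses the commonly cited 14 g (US) and 8 g (Great Britain) of ethanol per standard drink/unit and an ethanol density of roughly 0.789 g/mL, and the serving volumes are illustrative assumptions rather than values from the study:

      ETHANOL_DENSITY_G_PER_ML = 0.789   # approximate density of ethanol
      GRAMS_PER_STANDARD_DRINK = {"US": 14.0, "UK": 8.0}

      def standard_drinks(volume_ml, ethanol_fraction, country="US"):
          """Convert a serving volume and ethanol content (v/v) into standard drink units."""
          grams_ethanol = volume_ml * ethanol_fraction * ETHANOL_DENSITY_G_PER_ML
          return grams_ethanol / GRAMS_PER_STANDARD_DRINK[country]

      # Illustrative servings (volumes are assumptions, not taken from the study):
      print(standard_drinks(100.0, 0.34, "US"))   # chang'aa at the reported mean of 34% ethanol
      print(standard_drinks(500.0, 0.04, "UK"))   # busaa at the reported mean of 4% ethanol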

  9. Estimating airline operating costs

    NASA Technical Reports Server (NTRS)

    Maddalon, D. V.

    1978-01-01

    A review was made of the factors affecting commercial aircraft operating and delay costs. From this work, an airline operating cost model was developed which includes a method for estimating the labor and material costs of individual airframe maintenance systems. The model, similar in some respects to the standard Air Transport Association of America (ATA) Direct Operating Cost Model, permits estimates of aircraft-related costs not now included in the standard ATA model (e.g., aircraft service, landing fees, flight attendants, and control fees). A study of the cost of aircraft delay was also made and a method for estimating the cost of certain types of airline delay is described.

  10. Robust power spectral estimation for EEG data.

    PubMed

    Melman, Tamar; Victor, Jonathan D

    2016-08-01

    Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Accuracy of Blood Loss Measurement during Cesarean Delivery.

    PubMed

    Doctorvaladan, Sahar V; Jelks, Andrea T; Hsieh, Eric W; Thurer, Robert L; Zakowski, Mark I; Lagrew, David C

    2017-04-01

    Objective: This study aims to compare the accuracy of visual, quantitative gravimetric, and colorimetric methods used to determine blood loss during cesarean delivery procedures employing a hemoglobin extraction assay as the reference standard. Study Design: In 50 patients having cesarean deliveries blood loss determined by assays of hemoglobin content on surgical sponges and in suction canisters was compared with obstetricians' visual estimates, a quantitative gravimetric method, and the blood loss determined by a novel colorimetric system. Agreement between the reference assay and other measures was evaluated by the Bland-Altman method. Results: Compared with the blood loss measured by the reference assay (470 ± 296 mL), the colorimetric system (572 ± 334 mL) was more accurate than either visual estimation (928 ± 261 mL) or gravimetric measurement (822 ± 489 mL). The correlation between the assay method and the colorimetric system was more predictive (standardized coefficient = 0.951, adjusted R² = 0.902) than either visual estimation (standardized coefficient = 0.700, adjusted R² = 0.479) or the gravimetric determination (standardized coefficient = 0.564, adjusted R² = 0.304). Conclusion: During cesarean delivery, measuring blood loss using colorimetric image analysis is superior to visual estimation and a gravimetric method. Implementation of colorimetric analysis may enhance the ability of management protocols to improve clinical outcomes.
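
    A minimal sketch of the Bland-Altman calculation used here to assess agreement is given below; it is not the study's code, and the blood-loss values are illustrative:

      import numpy as np

      def bland_altman(reference_ml, test_ml):
          """Return the mean difference (bias) and 95% limits of agreement between two methods."""
          ref = np.asarray(reference_ml, dtype=float)
          test = np.asarray(test_ml, dtype=float)
          diff = test - ref
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      # Illustrative blood-loss values in mL (not the study data):
      assay = [350, 420, 510, 600, 470]
      colorimetric = [380, 450, 560, 640, 520]
      bias, (low, high) = bland_altman(assay, colorimetric)
      print(f"bias = {bias:.0f} mL, limits of agreement = ({low:.0f}, {high:.0f}) mL")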

  12. Accuracy of Blood Loss Measurement during Cesarean Delivery

    PubMed Central

    Doctorvaladan, Sahar V.; Jelks, Andrea T.; Hsieh, Eric W.; Thurer, Robert L.; Zakowski, Mark I.; Lagrew, David C.

    2017-01-01

    Objective: This study aims to compare the accuracy of visual, quantitative gravimetric, and colorimetric methods used to determine blood loss during cesarean delivery procedures employing a hemoglobin extraction assay as the reference standard. Study Design: In 50 patients having cesarean deliveries blood loss determined by assays of hemoglobin content on surgical sponges and in suction canisters was compared with obstetricians' visual estimates, a quantitative gravimetric method, and the blood loss determined by a novel colorimetric system. Agreement between the reference assay and other measures was evaluated by the Bland–Altman method. Results: Compared with the blood loss measured by the reference assay (470 ± 296 mL), the colorimetric system (572 ± 334 mL) was more accurate than either visual estimation (928 ± 261 mL) or gravimetric measurement (822 ± 489 mL). The correlation between the assay method and the colorimetric system was more predictive (standardized coefficient = 0.951, adjusted R² = 0.902) than either visual estimation (standardized coefficient = 0.700, adjusted R² = 0.479) or the gravimetric determination (standardized coefficient = 0.564, adjusted R² = 0.304). Conclusion: During cesarean delivery, measuring blood loss using colorimetric image analysis is superior to visual estimation and a gravimetric method. Implementation of colorimetric analysis may enhance the ability of management protocols to improve clinical outcomes. PMID:28497007

  13. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    PubMed

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.

  14. Comparison of five methods for the estimation of methane production from vented in vitro systems.

    PubMed

    Alvarez Hess, P S; Eckard, R J; Jacobs, J L; Hannah, M C; Moate, P J

    2018-05-23

    There are several methods for estimating methane production (MP) from feedstuffs in vented in vitro systems. One method (A; "gold standard") measures methane proportions in the incubation bottle's head space (HS) and in the vented gas collected in gas bags. Four other methods (B, C, D and E) measure methane proportion in a single gas sample from HS. Method B assumes the same methane proportion in the vented gas as in HS, method C assumes constant methane to carbon dioxide ratio, method D has been developed based on empirical data and method E assumes constant individual venting volumes. This study aimed to compare the MP predictions from these methods to that of the gold standard method under different incubation scenarios, to validate these methods based on their concordance with a gold standard method. Methods C, D and E had greater concordance (0.85, 0.88 and 0.81), lower root mean square error (RMSE) (0.80, 0.72 and 0.85) and lower mean bias (0.20, 0.35, -0.35) with the gold standard than did method B (concordance 0.67, RMSE 1.49 and mean bias 1.26). Methods D and E were simpler to perform than method C and method D was slightly more accurate than method E. Based on precision, accuracy and simplicity of implementation, it is recommended that, when method A cannot be used, methods D and E are preferred to estimate MP from vented in vitro systems. This article is protected by copyright. All rights reserved.
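
    A minimal sketch of the three agreement statistics used here (Lin's concordance correlation coefficient, RMSE, and mean bias) is shown below; the data values are illustrative, not taken from the study:

      import numpy as np

      def agreement_stats(gold, estimate):
          """Concordance correlation coefficient, RMSE and mean bias versus a gold standard."""
          g = np.asarray(gold, dtype=float)
          e = np.asarray(estimate, dtype=float)
          # Lin's concordance correlation coefficient (population-form moments).
          ccc = 2 * np.cov(g, e, bias=True)[0, 1] / (g.var() + e.var() + (g.mean() - e.mean()) ** 2)
          rmse = np.sqrt(np.mean((e - g) ** 2))
          mean_bias = np.mean(e - g)
          return ccc, rmse, mean_bias

      # Illustrative methane-production values (mL) for one incubation run:
      gold = [12.0, 15.5, 9.8, 20.1, 17.4]
      method_d = [12.6, 15.0, 10.5, 19.2, 18.0]
      print(agreement_stats(gold, method_d))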

  15. Kappa statistic for the clustered dichotomous responses from physicians and patients

    PubMed Central

    Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L.; Cai, Jianwen

    2013-01-01

    The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation result demonstrates that the proposed bootstrap method produces a better estimate of the standard error and better coverage performance compared to the asymptotic standard error estimate that ignores dependence among patients within physicians with at least a moderately large number of clusters. An example of an application to a coronary heart disease prevention study is presented. PMID:23533082
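
    A minimal sketch of a physician-level (cluster) bootstrap for the standard error of kappa with dichotomous ratings is shown below; the data structure and all values are illustrative assumptions, not the authors' procedure:

      import numpy as np

      def cohen_kappa(x, y):
          """Cohen's kappa for two dichotomous (0/1) rating vectors."""
          x, y = np.asarray(x), np.asarray(y)
          po = np.mean(x == y)                                               # observed agreement
          pe = np.mean(x) * np.mean(y) + (1 - np.mean(x)) * (1 - np.mean(y)) # chance agreement
          return (po - pe) / (1 - pe)

      def cluster_bootstrap_kappa_se(clusters, n_boot=2000, seed=0):
          """Resample whole clusters (e.g. physicians) with replacement and recompute kappa.

          `clusters` is a list with one entry per physician; each entry is a pair of arrays:
          the physician's ratings and the matched patient ratings.
          """
          rng = np.random.default_rng(seed)
          kappas = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(clusters), size=len(clusters))
              phys = np.concatenate([clusters[i][0] for i in idx])
              pats = np.concatenate([clusters[i][1] for i in idx])
              kappas.append(cohen_kappa(phys, pats))
          return np.std(kappas, ddof=1)

      # Illustrative data: three physicians, each with a few patients.
      clusters = [(np.array([1, 1, 0]), np.array([1, 0, 0])),
                  (np.array([0, 1, 1, 1]), np.array([0, 1, 1, 0])),
                  (np.array([1, 0]), np.array([1, 0]))]
      print(cluster_bootstrap_kappa_se(clusters))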

  16. SAS and SPSS macros to calculate standardized Cronbach's alpha using the upper bound of the phi coefficient for dichotomous items.

    PubMed

    Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy

    2007-02-01

    Cronbach's alpha is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's alpha coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's alpha for a scale with dichotomous items can be improved by using the upper bound of coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's alpha via this method. The simulation analysis showed that Cronbach's alpha from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's alpha is problematic.
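
    The article provides SAS and SPSS macros; the following is only a minimal Python sketch of the same idea, assuming dichotomous (0/1) items: each inter-item phi is divided by its maximum attainable value given the item marginals, and standardized alpha is computed from the mean corrected correlation.

      import numpy as np

      def phi_max(p_i, p_j):
          """Upper bound of the phi coefficient for two dichotomous items with given marginals."""
          p_lo, p_hi = min(p_i, p_j), max(p_i, p_j)
          return np.sqrt(p_lo * (1 - p_hi) / ((1 - p_lo) * p_hi))

      def standardized_alpha_upper_phi(items):
          """Standardized Cronbach's alpha using phi / phi_max as the inter-item correlation.

          `items` is an (n_subjects, k_items) array of 0/1 responses.
          """
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          p = items.mean(axis=0)
          corrected = []
          for i in range(k):
              for j in range(i + 1, k):
                  phi = np.corrcoef(items[:, i], items[:, j])[0, 1]  # Pearson r on 0/1 items = phi
                  corrected.append(phi / phi_max(p[i], p[j]))
          r_bar = np.mean(corrected)
          return k * r_bar / (1 + (k - 1) * r_bar)                   # standardized alpha

      # Illustrative 0/1 responses for 6 subjects on 3 items:
      data = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0]])
      print(standardized_alpha_upper_phi(data))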

  17. Standard free energy of formation of iron iodide

    NASA Technical Reports Server (NTRS)

    Khandkar, A.; Tare, V. B.; Wagner, J. B., Jr.

    1983-01-01

    An experiment is reported where silver iodide is used to determine the standard free energy of formation of iron iodide. By using silver iodide as a solid electrolyte, a galvanic cell, Ag/AgI/Fe-FeI2, is formulated. The standard free energy of formation of AgI is known, and hence it is possible to estimate the standard free energy of formation of FeI2 by measuring the open-circuit emf of the above cell as a function of temperature. The standard free energy of formation of FeI2 determined by this method is -38784 + 24.165T cal/mol. It is estimated that the maximum error associated with this method is plus or minus 2500 cal/mol.
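
    A worked sketch of the thermodynamic relation implied by the abstract is given below; the sign convention depends on how the cell polarity is written, so treat it as illustrative rather than a restatement of the authors' derivation.

      % Cell Ag | AgI | Fe, FeI2; assumed discharge reaction (n = 2 electrons):
      %   Fe + 2 AgI -> FeI2 + 2 Ag
      \Delta G^{\circ}_{\mathrm{rxn}} = -nFE(T)
        = \Delta G^{\circ}_{f}(\mathrm{FeI_2}) - 2\,\Delta G^{\circ}_{f}(\mathrm{AgI}),
      \qquad \text{so} \qquad
      \Delta G^{\circ}_{f}(\mathrm{FeI_2}) = 2\,\Delta G^{\circ}_{f}(\mathrm{AgI}) - 2FE(T).

    Measuring the open-circuit emf E(T) over a range of temperatures then yields a linear-in-T expression of the kind quoted above.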

  18. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
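
    A minimal sketch of the log-probability regression (regression-on-order-statistics) idea for a single censoring level is shown below; the Hazen plotting positions and all numbers are illustrative choices, not necessarily those used in the paper:

      import numpy as np
      from scipy.stats import norm

      def ros_log_probability(detected, n_censored):
          """Estimate mean and SD of a censored data set by log-probability regression.

          `detected` holds the uncensored concentrations; `n_censored` observations are
          assumed to lie below a single detection limit, so they occupy the lowest ranks.
          """
          detected = np.sort(np.asarray(detected, dtype=float))
          n = detected.size + n_censored
          ranks = np.arange(1, n + 1)
          z = norm.ppf((ranks - 0.5) / n)            # Hazen plotting positions -> normal scores
          # Fit log10(concentration) ~ z using the uncensored observations only.
          slope, intercept = np.polyfit(z[n_censored:], np.log10(detected), 1)
          # Impute the censored observations from the fitted line, then summarize.
          imputed = 10 ** (intercept + slope * z[:n_censored])
          full = np.concatenate([imputed, detected])
          return full.mean(), full.std(ddof=1)

      # Example: 7 detected concentrations (ug/L) plus 3 values censored below 0.01 ug/L.
      print(ros_log_probability([0.012, 0.015, 0.02, 0.03, 0.05, 0.08, 0.2], n_censored=3))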

  19. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  20. A new method to address verification bias in studies of clinical screening tests: cervical cancer screening assays as an example.

    PubMed

    Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D

    2014-03-01

    Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand a greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
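
    A generic, minimal particle swarm sketch for minimizing a chi-squared (or negative log-likelihood) over bounded parameters is shown below; it is not the authors' implementation, and the toy objective and all settings are illustrative:

      import numpy as np

      def pso_minimize(objective, bounds, n_particles=30, n_iters=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          """Basic particle swarm optimizer: returns the best parameter vector found."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          dim = lo.size
          x = rng.uniform(lo, hi, size=(n_particles, dim))       # particle positions
          v = np.zeros_like(x)                                   # particle velocities
          pbest = x.copy()
          pbest_val = np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_val)].copy()
          for _ in range(n_iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([objective(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest

      # Toy example: recover two parameters of a quadratic model from noisy "data".
      t = np.linspace(0, 1, 50)
      data = 2.0 * t ** 2 + 0.5 + 0.01 * np.random.default_rng(1).standard_normal(t.size)
      chi2 = lambda p: np.sum((data - (p[0] * t ** 2 + p[1])) ** 2)
      print(pso_minimize(chi2, bounds=[(0, 5), (0, 2)]))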

  2. Detrended fluctuation analysis as a regression framework: Estimating dependence at different scales

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2015-02-01

    We propose a framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and it is designed to estimate regression parameters at different scales and under potential nonstationarity and power-law correlations. The former feature allows for distinguishing between effects for a pair of variables from different temporal perspectives. The latter features make the method a significant improvement over standard least squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied to selected examples from physics, finance, environmental science, and epidemiology. For most of the studied cases, the relationship between variables of interest varies strongly across scales.
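
    A minimal sketch of a scale-specific regression coefficient built from detrended variances and covariances, in the spirit of this framework, is shown below; the linear detrending, non-overlapping windows, and all numbers are illustrative choices rather than the paper's exact conventions:

      import numpy as np

      def detrended_residuals(profile, s):
          """Residuals of a linear fit within each non-overlapping window of length s."""
          n_win = profile.size // s
          res = np.empty((n_win, s))
          t = np.arange(s)
          for k in range(n_win):
              seg = profile[k * s:(k + 1) * s]
              coef = np.polyfit(t, seg, 1)
              res[k] = seg - np.polyval(coef, t)
          return res

      def dfa_regression_beta(x, y, s):
          """Scale-specific regression coefficient beta(s) = F2_xy(s) / F2_xx(s)."""
          xp = np.cumsum(x - np.mean(x))          # integrated (profile) series
          yp = np.cumsum(y - np.mean(y))
          rx = detrended_residuals(xp, s)
          ry = detrended_residuals(yp, s)
          f2_xy = np.mean(rx * ry)                # detrended covariance at scale s
          f2_xx = np.mean(rx * rx)                # detrended variance at scale s
          return f2_xy / f2_xx

      # Illustrative use: y depends on x with slope ~0.8 plus noise; beta(s) should be near 0.8.
      rng = np.random.default_rng(2)
      x = rng.standard_normal(2000)
      y = 0.8 * x + 0.5 * rng.standard_normal(2000)
      print(dfa_regression_beta(x, y, s=50))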

  3. A Fresh Start for Flood Estimation in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Woods, R. A.

    2017-12-01

    The two standard methods for flood estimation in ungauged basins, regression-based statistical models and rainfall-runoff models using a design rainfall event, have survived relatively unchanged as the methods of choice for more than 40 years. Their technical implementation has developed greatly, but the models' representation of hydrological processes has not, despite a large volume of hydrological research. I suggest it is time to introduce more hydrology into flood estimation. The reliability of the current methods can be unsatisfactory. For example, despite the UK's relatively straightforward hydrology, regression estimates of the index flood are uncertain by +/- a factor of two (for a 95% confidence interval), an impractically large uncertainty for design. The standard error of rainfall-runoff model estimates is not usually known, but available assessments indicate poorer reliability than statistical methods. There is a practical need for improved reliability in flood estimation. Two promising candidates to supersede the existing methods are (i) continuous simulation by rainfall-runoff modelling and (ii) event-based derived distribution methods. The main challenge with continuous simulation methods in ungauged basins is to specify the model structure and parameter values, when calibration data are not available. This has been an active area of research for more than a decade, and this activity is likely to continue. The major challenges for the derived distribution method in ungauged catchments include not only the correct specification of model structure and parameter values, but also antecedent conditions (e.g. seasonal soil water balance). However, a much smaller community of researchers is active in developing or applying the derived distribution approach, and as a result slower progress is being made. A change is needed: surely we have learned enough about hydrology in the last 40 years that we can make a practical hydrological advance on our methods for flood estimation! A shift to new methods for flood estimation will not be taken lightly by practitioners. However, the standard for change is clear - can we develop new methods which give significant improvements in reliability over those existing methods which are demonstrably unsatisfactory?

  4. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    PubMed

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  5. Optimizing fish sampling for fish - mercury bioaccumulation factors

    USGS Publications Warehouse

    Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.

    2015-01-01

    Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.

  6. Estimating the Cost of Standardized Student Testing in the United States.

    ERIC Educational Resources Information Center

    Phelps, Richard P.

    2000-01-01

    Describes and contrasts different methods of estimating costs of standardized testing. Using a cost-accounting approach, compares gross and marginal costs and considers testing objects (test materials and services, personnel and student time, and administrative/building overhead). Social marginal costs of replacing existing tests with a national…

  7. Comparison of Optimal Design Methods in Inverse Problems

    DTIC Science & Technology

    2011-05-11

    The corresponding FIM can be estimated by $\hat{F}(\tau) = \hat{F}(\tau, \hat{\theta}_{\mathrm{OLS}}) = (\hat{\Sigma}^{N}(\hat{\theta}_{\mathrm{OLS}}))^{-1}$. The asymptotic standard errors are given by $\mathrm{SE}_k(\theta_0) = \sqrt{(\Sigma^{N}_{0})_{kk}}$, $k = 1, \ldots, p$. These standard errors are estimated in practice (when $\theta_0$ and $\sigma_0$ are not known) by $\mathrm{SE}_k(\hat{\theta}_{\mathrm{OLS}}) = \sqrt{(\hat{\Sigma}^{N}(\hat{\theta}_{\mathrm{OLS}}))_{kk}}$, $k = 1, \ldots, p$, and $\mathrm{SE}_k(\hat{\theta}_{\mathrm{boot}}) = \sqrt{\mathrm{Cov}(\hat{\theta}_{\mathrm{boot}})_{kk}}$. We will compare the optimal design methods using the standard errors resulting from the optimal time points each…

  8. Walking Distance Estimation Using Walking Canes with Inertial Sensors

    PubMed Central

    Suh, Young Soo

    2018-01-01

    A walking distance estimation algorithm for cane users is proposed using an inertial sensor unit attached to various positions on the cane. A standard inertial navigation algorithm using an indirect Kalman filter was applied to update the velocity and position of the cane during movement. For quadripod canes, a standard zero-velocity measurement-updating method is proposed. For standard canes, a velocity-updating method based on an inverted pendulum model is proposed. The proposed algorithms were verified by three walking experiments with two different types of canes and different positions of the sensor module. PMID:29342971

  9. High-resolution global grids of revised Priestley-Taylor and Hargreaves-Samani coefficients for assessing ASCE-standardized reference crop evapotranspiration and solar radiation

    NASA Astrophysics Data System (ADS)

    Aschonitis, Vassilis G.; Papamichail, Dimitris; Demertzi, Kleoniki; Colombani, Nicolo; Mastrocicco, Micol; Ghirardini, Andrea; Castaldelli, Giuseppe; Fano, Elisa-Anna

    2017-08-01

    The objective of the study is to provide global grids (0.5°) of revised annual coefficients for the Priestley-Taylor (P-T) and Hargreaves-Samani (H-S) evapotranspiration methods after calibration based on the ASCE (American Society of Civil Engineers)-standardized Penman-Monteith method (the ASCE method includes two reference crops: short-clipped grass and tall alfalfa). The analysis also includes the development of a global grid of revised annual coefficients for solar radiation (Rs) estimations using the respective Rs formula of H-S. The analysis was based on global gridded climatic data of the period 1950-2000. The method for deriving annual coefficients of the P-T and H-S methods was based on partial weighted averages (PWAs) of their mean monthly values. This method estimates the annual values considering the amplitude of the parameter under investigation (ETo and Rs) giving more weight to the monthly coefficients of the months with higher ETo values (or Rs values for the case of the H-S radiation formula). The method also eliminates the effect of unreasonably high or low monthly coefficients that may occur during periods where ETo and Rs fall below a specific threshold. The new coefficients were validated based on data from 140 stations located in various climatic zones of the USA and Australia with expanded observations up to 2016. The validation procedure for ETo estimations of the short reference crop showed that the P-T and H-S methods with the new revised coefficients outperformed the standard methods reducing the estimated root mean square error (RMSE) in ETo values by 40 and 25 %, respectively. The estimations of Rs using the H-S formula with revised coefficients reduced the RMSE by 28 % in comparison to the standard H-S formula. Finally, a raster database was built consisting of (a) global maps for the mean monthly ETo values estimated by ASCE-standardized method for both reference crops, (b) global maps for the revised annual coefficients of the P-T and H-S evapotranspiration methods for both reference crops and a global map for the revised annual coefficient of the H-S radiation formula and (c) global maps that indicate the optimum locations for using the standard P-T and H-S methods and their possible annual errors based on reference values. The database can support estimations of ETo and solar radiation for locations where climatic data are limited and it can support studies which require such estimations on larger scales (e.g. country, continent, world). The datasets produced in this study are archived in the PANGAEA database (https://doi.org/10.1594/PANGAEA.868808) and in the ESRN database (http://www.esrn-database.org or http://esrn-database.weebly.com).

  10. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
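
    A minimal sketch of the blinded re-estimation step is shown below: the pooled (one-sample, blinded) interim variance replaces the planning variance in the usual normal-approximation sample size formula. The function and the numbers are illustrative, not the paper's algorithm:

      import math
      from scipy.stats import norm

      def reestimated_n_per_arm(blinded_values, delta, alpha=0.05, power=0.9):
          """Blinded sample size re-estimation using the simple one-sample variance estimator.

          `blinded_values` are the pooled interim observations (treatment labels unknown);
          `delta` is the treatment difference the trial is powered to detect. Note that the
          blinded (lumped) variance absorbs any true treatment effect, which is one reason
          adjusted significance levels are studied for this approach.
          """
          m = len(blinded_values)
          mean = sum(blinded_values) / m
          s2 = sum((v - mean) ** 2 for v in blinded_values) / (m - 1)   # one-sample variance
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * z ** 2 * s2 / delta ** 2)

      # Illustrative interim data from an internal pilot study:
      interim = [5.1, 6.3, 4.8, 5.9, 6.7, 5.2, 4.9, 6.1, 5.5, 6.0]
      print(reestimated_n_per_arm(interim, delta=0.8))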

  11. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

  12. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  13. The Remote Food Photography Method Accurately Estimates Dry Powdered Foods-The Source of Calories for Many Infants.

    PubMed

    Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M

    2016-07-01

    Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs were prepared as follows: the gram weight of powdered formula recommended by the manufacturer for the respective serving size; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate food and nutrient information to provide the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within 5% equivalence bounds with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote Food Photography Method under controlled laboratory conditions, indicating that the Remote Food Photography Method accurately estimated infant powdered formula. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  14. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.

  15. Smartphone Assessment of Knee Flexion Compared to Radiographic Standards

    PubMed Central

    Dietz, Matthew J.; Sprando, Daniel; Hanselman, Andrew E.; Regier, Michael D.; Frye, Benjamin M.

    2017-01-01

    Purpose: Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Methods: Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. Results: The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC = 0.94; 95% CI: 0.91–0.96). Visual estimation was found to be the least reliable method of measurement. Conclusions: The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity. PMID:28179062

  16. Sample sizes needed for specified margins of relative error in the estimates of the repeatability and reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2005-01-01

    Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (Sr and SR) such that the actual errors in Sr and SR relative to their respective true values, σr and σR, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of Sr and SR were derived and are provided as supporting documentation, including a formula for the number of replicates required for a specified margin of relative error in the estimate of the repeatability standard deviation.

  17. Kappa statistic for clustered dichotomous responses from physicians and patients.

    PubMed

    Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L; Cai, Jianwen

    2013-09-20

    The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation result demonstrates that the proposed bootstrap method produces a better estimate of the standard error and better coverage performance compared with the asymptotic standard error estimate that ignores dependence among patients within physicians with at least a moderately large number of clusters. We present an example of an application to a coronary heart disease prevention study. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…

  19. Technical Note: Comparison of accelerated methods for evaluating leaching from preservative-treated wood

    Treesearch

    Stan T. Lebow; Patricia K. Lebow; Kolby C. Hirth

    2017-01-01

    Current standardized methods are not well-suited for estimating in-service preservative leaching from treated wood products. This study compared several alternative leaching methods to a commonly used standard method, and to leaching under natural exposure conditions. Small blocks or lumber specimens were pressure treated with a wood preservative containing borax and...

  20. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  1. An examination of effect estimation in factorial and standardly-tailored designs

    PubMed Central

    Allore, Heather G; Murphy, Terrence E

    2012-01-01

    Background: Many clinical trials are designed to test an intervention arm against a control arm wherein all subjects are equally eligible for all interventional components. Factorial designs have extended this to test multiple intervention components and their interactions. A newer design, referred to as a ‘standardly-tailored’ design, is a multicomponent interventional trial that applies individual interventional components to modify risk factors identified a priori and tests whether health outcomes differ between treatment arms. Standardly-tailored designs do not require that all subjects be eligible for every interventional component. Although standardly-tailored designs yield an estimate for the net effect of the multicomponent intervention, it has not yet been shown if they permit separate, unbiased estimation of individual component effects. The ability to estimate the most potent interventional components has direct bearing on conducting second stage translational research. Purpose: We present statistical issues related to the estimation of individual component effects in trials of geriatric conditions using factorial and standardly-tailored designs. The medical community is interested in second stage translational research involving the transfer of results from a randomized clinical trial to a community setting. Before such research is undertaken, main effects and synergistic and/or antagonistic interactions between them should be identified. Knowledge of the relative strength and direction of the effects of the individual components and their interactions facilitates the successful transfer of clinically significant findings and may potentially reduce the number of interventional components needed. Therefore the current inability of the standardly-tailored design to provide unbiased estimates of individual interventional components is a serious limitation in their applicability to second stage translational research. Methods: We discuss estimation of individual component effects from the family of factorial designs and this limitation for standardly-tailored designs. We use the phrase ‘factorial designs’ to describe full-factorial designs and their derivatives including the fractional factorial, partial factorial, incomplete factorial and modified reciprocal designs. We suggest two potential directions for designing multicomponent interventions to facilitate unbiased estimates of individual interventional components. Results: Full factorial designs and their variants are the most common multicomponent trial design described in the literature and differ meaningfully from standardly-tailored designs. Factorial and standardly-tailored designs result in similar estimates of net effect with different levels of precision. Unbiased estimation of individual component effects from a standardly-tailored design will require new methodology. Limitations: Although clinically relevant in geriatrics, previous applications of standardly-tailored designs have not provided unbiased estimates of the effects of individual interventional components. Discussion: Future directions to estimate individual component effects from standardly-tailored designs include applying D-optimal designs and creating independent linear combinations of risk factors analogous to factor analysis. Conclusion: Methods are needed to extract unbiased estimates of the effects of individual interventional components from standardly-tailored designs. PMID:18375650

  2. Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan T.

    2012-01-01

    Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…

  3. Cost-effectiveness of a motivational intervention for alcohol-involved youth in a hospital emergency department.

    PubMed

    Neighbors, Charles J; Barnett, Nancy P; Rohsenow, Damaris J; Colby, Suzanne M; Monti, Peter M

    2010-05-01

    Brief interventions in the emergency department targeting risk-taking youth show promise to reduce alcohol-related injury. This study models the cost-effectiveness of a motivational interviewing-based intervention relative to brief advice to stop alcohol-related risk behaviors (standard care). Average cost-effectiveness ratios were compared between conditions. In addition, a cost-utility analysis examined the incremental cost of motivational interviewing per quality-adjusted life year gained. Microcosting methods were used to estimate marginal costs of motivational interviewing and standard care as well as two methods of patient screening: standard emergency-department staff questioning and proactive outreach by counseling staff. Average cost-effectiveness ratios were computed for drinking and driving, injuries, vehicular citations, and negative social consequences. Using estimates of the marginal effect of motivational interviewing in reducing drinking and driving, estimates of traffic fatality risk from drinking-and-driving youth, and national life tables, the societal costs per quality-adjusted life year saved by motivational interviewing relative to standard care were also estimated. Alcohol-attributable traffic fatality risks were estimated using national databases. Intervention costs per participant were $81 for standard care, $170 for motivational interviewing with standard screening, and $173 for motivational interviewing with proactive screening. The cost-effectiveness ratios for motivational interviewing were more favorable than standard care across all study outcomes and better for men than women. The societal cost per quality-adjusted life year of motivational interviewing was $8,795. Sensitivity analyses indicated that results were robust in terms of variability in parameter estimates. This brief intervention represents a good societal investment compared with other commonly adopted medical interventions.

  4. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    NASA Astrophysics Data System (ADS)

    Morimoto, Emi; Namerikawa, Susumu

    The most notable recent trend in bidding and pricing behavior is the increasing number of bids placed just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bidding it is therefore, in effect, the difference between the low-price investigation threshold and the execution price. In practice, bidders' strategies and behavior have been constrained by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the general methods for public works, so two types of standard estimation method now coexist in Japan. In this study, we statistically analyzed the bid information on civil engineering works procured by the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis indicates that bidding and pricing behavior is related to the estimation method used in Japanese public works bidding: the two standard estimation methods produce different numbers of bidders (the bid/no-bid decision) and different distributions of bid prices (the markup decision). The comparison of bid price distributions showed that, for large-sized public works, the percentage of bids concentrated at the criteria for low-price bidding investigations tended to be higher under the unit-price-type estimation method than under the accumulated estimation method. In addition, the number of bidders for works estimated by the unit-price method tends to increase significantly, suggesting that unit-price estimation is one of the factors construction companies consider when deciding whether to participate in a bidding.

  5. A proposed method to estimate premorbid full scale intelligence quotient (FSIQ) for the Canadian Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) using demographic and combined estimation procedures.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H

    2007-11-01

    Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provided normative data for both American and Canadian children aged 6 to 16 years old. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation groups). The development group was used to generate regression algorithms; 1 algorithm only included demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ significantly correlated with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within +/-10 points of actual FSIQ. The demographic-only algorithm was less accurate than algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6-16 years old. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish these algorithms as a premorbid estimation procedure.
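
    The kind of algorithm described above can be illustrated with a small ordinary least-squares sketch: regress FSIQ on demographic variables plus subtest raw scores and report the standard error of estimate (SEE). The data, predictors, and coefficients below are simulated placeholders, not the Canadian WISC-IV standardization sample or the published algorithms.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2024)
    n = 550                                  # hypothetical development-group size

    # Simulated predictors: parental education (ordinal), region code, and two subtest raw scores.
    parent_educ = rng.integers(1, 6, n)
    region = rng.integers(0, 4, n)
    vocab_raw = rng.normal(35, 8, n)
    matrix_raw = rng.normal(22, 6, n)
    fsiq = 60 + 2.5 * parent_educ + 0.6 * vocab_raw + 0.8 * matrix_raw + rng.normal(0, 9, n)

    # Combined demographic + performance regression, analogous in form to the algorithms above.
    X = sm.add_constant(np.column_stack([parent_educ, region, vocab_raw, matrix_raw]))
    model = sm.OLS(fsiq, X).fit()

    see = np.sqrt(model.scale)               # standard error of estimate (residual SD)
    print(f"R^2 = {model.rsquared:.2f}, SEE = {see:.1f}")
    ```

    Applying the fitted weights (model.params) to a new child's demographics and subtest raw scores would then yield an estimated FSIQ, which is how such algorithms are used as a comparison standard.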

  6. Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T

    2016-12-20

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
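
    As a point of reference for the frequentist comparator mentioned above, the DerSimonian and Laird procedure can be sketched in a few lines; the studies and numbers below are illustrative only, and the Bayesian data-augmentation method itself is not reproduced here.

    ```python
    import numpy as np

    def dersimonian_laird(effects, variances):
        """DerSimonian-Laird random-effects meta-analysis.

        effects   : per-study effect estimates (e.g., log odds ratios)
        variances : per-study within-study variances
        Returns the moment estimate of the between-study variance tau^2,
        the pooled effect, and its standard error.
        """
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances                      # fixed-effect weights
        mu_fe = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate
        q = np.sum(w * (effects - mu_fe) ** 2)   # Cochran's Q statistic
        k = len(effects)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)       # truncated moment estimate of tau^2
        w_re = 1.0 / (variances + tau2)          # random-effects weights
        mu_re = np.sum(w_re * effects) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        return tau2, mu_re, se_re

    # Toy data: five studies, the small-k situation in which tau^2 is imprecisely estimated.
    effects = [0.10, 0.35, -0.05, 0.42, 0.20]
    variances = [0.04, 0.09, 0.05, 0.12, 0.06]
    print(dersimonian_laird(effects, variances))
    ```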

  7. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, lead to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, Q_T, are chosen in the range from 0.2 to 0.4.
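
    The core fitting step, estimating lognormal parameters from a doubly truncated rain-rate sample by nonlinear least squares, can be sketched as follows. The truncation thresholds, sample sizes, and parameter values are illustrative assumptions, not the paper's radar processing chain.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)

    # Hypothetical "true" area-wide rain rates (mm/h): lognormal with mu, sigma on the log scale.
    mu_true, sigma_true = 0.5, 1.0
    rain = rng.lognormal(mu_true, sigma_true, size=2000)

    # Observations below a noise floor or above an attenuation limit are discarded,
    # truncating the sample at both ends (thresholds here are illustrative only).
    r_lo, r_hi = 0.5, 15.0
    kept = np.sort(rain[(rain >= r_lo) & (rain <= r_hi)])
    ecdf = (np.arange(1, kept.size + 1) - 0.5) / kept.size

    def residuals(params):
        mu, sigma = params
        cdf = stats.lognorm.cdf(kept, s=sigma, scale=np.exp(mu))
        lo = stats.lognorm.cdf(r_lo, s=sigma, scale=np.exp(mu))
        hi = stats.lognorm.cdf(r_hi, s=sigma, scale=np.exp(mu))
        return (cdf - lo) / (hi - lo) - ecdf   # truncated model CDF vs. empirical CDF

    fit = optimize.least_squares(residuals, x0=[0.0, 0.5], bounds=([-5, 0.01], [5, 5]))
    mu_hat, sigma_hat = fit.x
    print("fitted mu, sigma:", mu_hat, sigma_hat)
    print("implied area-average rain rate:", np.exp(mu_hat + sigma_hat**2 / 2))
    ```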

  8. Several key issues on using 137Cs method for soil erosion estimation

    USDA-ARS?s Scientific Manuscript database

    This work was to examine several key issues of using the cesium-137 method to estimate soil erosion rates in order to improve and standardize the method. Based on the comprehensive review and synthesis of a large body of published literature and the author’s extensive research experience, several k...

  9. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  10. A proposed standard methodology for estimating the wounding capacity of small calibre projectiles or other missiles.

    PubMed

    Berlin, R H; Janzon, B; Rybeck, B; Schantz, B; Seeman, T

    1982-01-01

    A standard methodology for estimating the energy transfer characteristics of small calibre bullets and other fast missiles is proposed, consisting of firings against targets made of soft soap. The target is evaluated by measuring the size of the permanent cavity remaining in it after the shot. The method is very simple to use and does not require access to any sophisticated measuring equipment. It can be applied under all circumstances, even under field conditions. Adequate methods of calibration to ensure good accuracy are suggested. The precision and limitations of the method are discussed.

  11. Techniques and methods for estimating abundance of larval and metamorphosed sea lampreys in Great Lakes tributaries, 1995 to 2001

    USGS Publications Warehouse

    Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.

    2003-01-01

    Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.

  12. Method for estimating effects of unknown correlations in spectral irradiance data on uncertainties of spectrally integrated colorimetric quantities

    NASA Astrophysics Data System (ADS)

    Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki

    2017-08-01

    Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.

  13. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
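
    A minimal sketch of the plug-in estimator discussed above: count overlapping words of a fixed length in a binary series, form the empirical distribution, and divide the block entropy by the word length. The word lengths and data are illustrative; the LZ- and CTW-based estimators are not shown.

    ```python
    import numpy as np
    from collections import Counter

    def plug_in_entropy_rate(x, word_len):
        """Plug-in estimate of the entropy rate (bits/symbol) of a binary series.

        Counts all overlapping words of length `word_len`, forms the empirical
        word distribution, and divides the block entropy by the word length.
        """
        words = [tuple(x[i:i + word_len]) for i in range(len(x) - word_len + 1)]
        counts = np.array(list(Counter(words).values()), dtype=float)
        p = counts / counts.sum()
        block_entropy = -np.sum(p * np.log2(p))
        return block_entropy / word_len

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=10_000)          # i.i.d. fair bits: true rate = 1 bit/symbol
    print(plug_in_entropy_rate(x, word_len=5))

    # With long words the empirical distribution is undersampled and the estimate
    # is biased downward, as the abstract notes.
    print(plug_in_entropy_rate(x, word_len=14))
    ```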

  14. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    ERIC Educational Resources Information Center

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  15. Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
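
    The spatial-averaging step that produces the structured estimate can be sketched as follows: for an equispaced linear array the exact ACM is Toeplitz, so each diagonal of the noisy sample correlation matrix is replaced by its average. This is only the structuring step, not the full constrained LMS beamformer, and the array size and snapshot count are arbitrary.

    ```python
    import numpy as np

    def toeplitz_structured_estimate(R):
        """Impose Toeplitz (and Hermitian) structure on a noisy correlation matrix
        by averaging its entries along each diagonal, as the geometry of an
        equispaced linear array implies."""
        n = R.shape[0]
        R_hat = np.zeros_like(R, dtype=complex)
        for k in range(n):                        # k-th superdiagonal
            avg = np.mean(np.diagonal(R, offset=k))
            for i in range(n - k):
                R_hat[i, i + k] = avg
                R_hat[i + k, i] = np.conj(avg)    # Hermitian counterpart
        return R_hat

    # Toy example: sample correlation matrix from a few snapshots of a 6-element array.
    rng = np.random.default_rng(3)
    snapshots = (rng.standard_normal((6, 50)) + 1j * rng.standard_normal((6, 50))) / np.sqrt(2)
    R_noisy = snapshots @ snapshots.conj().T / 50
    R_struct = toeplitz_structured_estimate(R_noisy)
    print(np.round(R_struct, 2))
    ```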

  16. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. This technique is based on standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
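
    For orientation, the classical standard-additions extrapolation on which the approach is based can be sketched as below: spike aliquots with known analyte amounts, fit the signal as a linear function of added concentration, and read the endogenous concentration from the x-intercept. The concentrations, proportionality constant, and noise level are hypothetical, and the paper's sigmoid log-log numerical search is not reproduced.

    ```python
    import numpy as np

    # Hypothetical spiked-sample measurements: added concentration (ng/mL) and signal.
    added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    # Signal assumed proportional to (endogenous + added); endogenous ~ 12 ng/mL here.
    rng = np.random.default_rng(7)
    signal = 0.8 * (12.0 + added) * (1 + 0.02 * rng.standard_normal(added.size))

    # Fit signal = a + b * added; the x-intercept is at -a/b, so the endogenous
    # concentration is estimated as a/b.
    b, a = np.polyfit(added, signal, 1)
    endogenous_est = a / b
    print(f"estimated endogenous concentration: {endogenous_est:.1f} ng/mL")
    ```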

  17. Ocular and Densimeter Estimates of Understory Foliar Cover in Forests of Alabama

    Treesearch

    Thomas W. Popham; Roger L. Baker

    1987-01-01

    Foliar cover estimates of woody and herbaceous understory vegetation were done on twenty l-m2 plots for a variety of forest types in Alabama. The methods of estimation were ocular, loop-densimeter assisted ocular, and point frame. The point frame was used as the standard and the other two methods were compared using chi-square. Some ocular...

  18. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CALCULATING INGESTION EXPOSURE ESTIMATING INGESTION EXPOSURE, THE INDIRECT METHOD OF EXPOSURE ESTIMATION (IIT-A-7.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures undertaken for calculating ingestion exposure using the indirect method of exposure estimation. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University ...

  19. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  20. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    ERIC Educational Resources Information Center

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2004-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…

  1. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    PubMed

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
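
    A minimal, hypothetical sketch of fitting such a sequential conditional mean model with generalized estimating equations in Python: the outcome at each visit is regressed on current exposure while adjusting for prior exposure, the prior outcome, and a time-varying covariate. The simulated data, variable names, and working correlation choice are illustrative, and the propensity-score adjustment the authors recommend is omitted.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n, t_max = 500, 4
    rows = []
    for i in range(n):
        y_prev, x_prev = 0.0, 0.0
        for t in range(1, t_max + 1):
            l = 0.5 * y_prev + rng.standard_normal()            # time-varying confounder
            x = float(0.4 * l + 0.3 * x_prev + rng.standard_normal() > 0)
            y = 0.5 * x + 0.4 * l + 0.3 * y_prev + rng.standard_normal()
            rows.append(dict(id=i, t=t, y=y, x=x, x_prev=x_prev, y_prev=y_prev, l=l))
            y_prev, x_prev = y, x
    data = pd.DataFrame(rows)

    # Sequential conditional mean model: adjust for prior exposure, the prior outcome,
    # and the current covariate; GEE accounts for within-subject correlation.
    model = smf.gee("y ~ x + x_prev + y_prev + l", groups="id", data=data,
                    family=sm.families.Gaussian(),
                    cov_struct=sm.cov_struct.Independence())
    result = model.fit()
    print(result.summary())
    ```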

  2. Estimating premorbid general cognitive functioning for children and adolescents using the American Wechsler Intelligence Scale for Children-Fourth Edition: demographic and current performance approaches.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Brickell, Tracey A; Saklofske, Donald H

    2007-04-01

    Neuropsychologic evaluation requires current test performance be contrasted against a comparison standard to determine if change has occurred. An estimate of premorbid intelligence quotient (IQ) is often used as a comparison standard. The Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is a commonly used intelligence test. However, there is no method to estimate premorbid IQ for the WISC-IV, limiting the test's utility for neuropsychologic assessment. This study develops algorithms to estimate premorbid Full Scale IQ scores. Participants were the American WISC-IV standardization sample (N = 2172). The sample was randomly divided into 2 groups (development and validation). The development group was used to generate 12 algorithms. These algorithms were accurate predictors of WISC-IV Full Scale IQ scores in healthy children and adolescents. These algorithms hold promise as a method to predict premorbid IQ for patients with known or suspected neurologic dysfunction; however, clinical validation is required.

  3. Estimating the Pollution Risk of Cadmium in Soil Using a Composite Soil Environmental Quality Standard

    PubMed Central

    Huang, Biao; Zhao, Yongcun

    2014-01-01

    Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km2 area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364

  4. Developing of method for primary frequency control droop and deadband actual values estimation

    NASA Astrophysics Data System (ADS)

    Nikiforov, A. A.; Chaplin, A. G.

    2017-11-01

    Operation of thermal power plant generation equipment that participates in standardized primary frequency control (SPFC) must meet specific requirements. These requirements are formalized as nine algorithmic criteria used for automatic monitoring of power plant participation in SPFC. One of these criteria, estimation of the actual values of the primary frequency control droop and deadband, is considered in detail in this report. Experience shows that the existing estimation method sometimes does not work properly. The author proposes an alternative method that estimates the actual droop and deadband values more accurately. The method has been implemented as a software application.

  5. A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.

    PubMed

    Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R

    2017-07-01

    The conventional radio-analytical technique used for estimation of Pu-isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis consumes nearly 3-4 days for completion. Excreta analysis results are often required urgently, particularly under repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu-isotopes in bioassay samples. This paper gives the details of standardization of a rapid method for estimation of Pu-isotopes in urine samples using a multi-purpose centrifuge and TEVA resin, followed by alpha spectrometry. The rapid method involves oxidation of urine samples and co-precipitation of plutonium along with calcium phosphate, followed by sample preparation using a high-volume centrifuge and separation of Pu using TEVA resin. The Pu fraction was electrodeposited and the activity estimated by alpha spectrometry using 236Pu tracer recovery. Ten routine urine samples of radiation workers were analyzed and consistent radiochemical tracer recovery was obtained in the range 47-88%, with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. On the Methods for Estimating the Corneoscleral Limbus.

    PubMed

    Jesus, Danilo A; Iskander, D Robert

    2017-08-01

    The aim of this study was to develop computational methods for estimating limbus position based on the measurements of three-dimensional (3-D) corneoscleral topography and ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by series of Zernike polynomials and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera in-built in the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and against a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized with a high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods lead to statistically significant differences (nonparametric ANOVA (the Analysis of Variance) test, p < 0.05). Precise topographical limbus demarcation is possible either from the frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.

  7. Comparison of Field Methods and Models to Estimate Mean Crown Diameter

    Treesearch

    William A. Bechtold; Manfred E. Mielke; Stanley J. Zarnoch

    2002-01-01

    The direct measurement of crown diameters with logger's tapes adds significantly to the cost of extensive forest inventories. We undertook a study of 100 trees to compare this measurement method to four alternatives-two field instruments, ocular estimates, and regression models. Using the taping method as the standard of comparison, accuracy of the tested...

  8. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
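
    The contrast described above can be illustrated with a simplified goodness-of-fit estimator: choose the threshold (location) parameter that maximizes the Shapiro-Wilk W statistic of the shifted, log-transformed data, and compare it with scipy's maximum likelihood fit. This is a sketch of the general idea under assumed simulated data, not the weighted-order-statistic estimators studied in the report.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)

    # Simulated three-parameter lognormal data: threshold gamma, log-mean mu, log-sd sigma.
    gamma_true, mu_true, sigma_true = 2.0, 1.0, 0.6
    x = gamma_true + rng.lognormal(mu_true, sigma_true, size=300)

    # Goodness-of-fit estimation of the threshold: pick the shift gamma that maximizes
    # the Shapiro-Wilk W statistic of log(x - gamma), then estimate mu and sigma from
    # the shifted, log-transformed data.
    candidates = np.linspace(0.0, x.min() - 1e-3, 400)
    w_stats = [stats.shapiro(np.log(x - g))[0] for g in candidates]
    gamma_hat = candidates[int(np.argmax(w_stats))]
    logs = np.log(x - gamma_hat)
    mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
    print(gamma_hat, mu_hat, sigma_hat)

    # Standard maximum likelihood fit for comparison (scipy's lognorm uses
    # s = sigma, loc = gamma, scale = exp(mu)).
    s_mle, loc_mle, scale_mle = stats.lognorm.fit(x)
    print(loc_mle, np.log(scale_mle), s_mle)
    ```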

  9. Estimating missing daily temperature extremes in Jaffna, Sri Lanka

    NASA Astrophysics Data System (ADS)

    Thevakaran, A.; Sonnadara, D. U. J.

    2018-04-01

    The accuracy of reconstructing missing daily temperature extremes in the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature compared to daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values. For daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference in estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
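
    The standardized-departure approach described above reduces to a short calculation: convert each neighbouring station's observation to a standard departure using that station's own climatological mean and standard deviation, average the departures, and rescale with the target station's climatology. The station means, standard deviations, and daily values below are illustrative, not the Sri Lankan records.

    ```python
    import numpy as np

    def estimate_missing(target_mean, target_sd, neighbor_values, neighbor_means, neighbor_sds):
        """Estimate a missing daily value at the target station from neighbour stations.

        Each neighbour observation is converted to a standard departure (z-score
        relative to that station's own climatology); the departures are averaged
        and rescaled with the target station's mean and standard deviation.
        """
        z = (np.asarray(neighbor_values) - np.asarray(neighbor_means)) / np.asarray(neighbor_sds)
        return target_mean + target_sd * z.mean()

    # Hypothetical climatologies (deg C) for four neighbour stations and the target.
    neighbor_means = [33.1, 34.0, 33.5, 32.8]
    neighbor_sds = [1.4, 1.6, 1.5, 1.3]
    neighbor_today = [35.0, 36.2, 35.4, 34.3]      # today's observed maxima
    target_mean, target_sd = 32.5, 1.5

    print(estimate_missing(target_mean, target_sd, neighbor_today, neighbor_means, neighbor_sds))
    ```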

  10. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    PubMed

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
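
    The within-study quantities that such a meta-analysis pools, the bias and the 95% limits of agreement from one method-comparison sample, can be computed as below; the paired measurements are simulated, and the random-effects pooling and robust variance estimation steps are not shown.

    ```python
    import numpy as np

    def limits_of_agreement(method_a, method_b):
        """Bland-Altman bias and 95% limits of agreement for paired measurements."""
        d = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
        bias = d.mean()
        sd = d.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    rng = np.random.default_rng(5)
    gold = rng.normal(100, 10, size=60)                 # gold-standard measurements
    new = gold + rng.normal(1.5, 4.0, size=60)          # new method: bias ~1.5, sd ~4
    bias, lo, hi = limits_of_agreement(new, gold)
    print(f"bias = {bias:.2f}, 95% LoA = ({lo:.2f}, {hi:.2f})")
    ```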

  11. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  12. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.

  13. Use of Bayesian Inference in Crystallographic Structure Refinement via Full Diffraction Profile Analysis

    PubMed Central

    Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.

    2016-01-01

    A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221

  14. Smartphone assessment of knee flexion compared to radiographic standards.

    PubMed

    Dietz, Matthew J; Sprando, Daniel; Hanselman, Andrew E; Regier, Michael D; Frye, Benjamin M

    2017-03-01

    Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC=0.94; 95% CI: 0.91-0.96). Visual estimation was found to be the least reliable method of measurement. The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Simultaneous Validation of Seven Physical Activity Questionnaires Used in Japanese Cohorts for Estimating Energy Expenditure: A Doubly Labeled Water Study.

    PubMed

    Sasai, Hiroyuki; Nakata, Yoshio; Murakami, Haruka; Kawakami, Ryoko; Nakae, Satoshi; Tanaka, Shigeho; Ishikawa-Takata, Kazuko; Yamada, Yosuke; Miyachi, Motohiko

    2018-04-28

    Physical activity questionnaires (PAQs) used in large-scale Japanese cohorts have rarely been simultaneously validated against the gold standard doubly labeled water (DLW) method. This study examined the validity of seven PAQs used in Japan for estimating energy expenditure against the DLW method. Twenty healthy Japanese adults (9 men; mean age, 32.4 [standard deviation {SD}, 9.4] years, mainly researchers and students) participated in this study. Fifteen-day daily total energy expenditure (TEE) and basal metabolic rate (BMR) were measured using the DLW method and a metabolic chamber, respectively. Activity energy expenditure (AEE) was calculated as TEE - BMR - 0.1 × TEE. Seven PAQs were self-administered to estimate TEE and AEE. The mean measured values of TEE and AEE were 2,294 (SD, 318) kcal/day and 721 (SD, 161) kcal/day, respectively. All of the PAQs indicated moderate-to-strong correlations with the DLW method in TEE (rho = 0.57-0.84). Two PAQs (Japan Public Health Center Study [JPHC]-PAQ Short and JPHC-PAQ Long) showed significant equivalence in TEE and moderate intra-class correlation coefficients (ICC). None of the PAQs showed significantly equivalent AEE estimates, with differences ranging from -547 to 77 kcal/day. Correlations and ICCs in AEE were mostly weak or fair (rho = 0.02-0.54, and ICC = 0.00-0.44). Only JPHC-PAQ Short provided significant and fair agreement with the DLW method. TEE estimated by the PAQs showed moderate or strong correlations with the results of DLW. Two PAQs showed equivalent TEE and moderate agreement. None of the PAQs showed equivalent AEE estimation to the gold standard, with weak-to-fair correlations and agreements. Further studies with larger sample sizes are needed to confirm these findings.
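
    The activity energy expenditure formula quoted above (AEE = TEE - BMR - 0.1 × TEE, with 0.1 × TEE approximating the thermic effect of food) can be applied directly; the BMR value below is a hypothetical illustration rather than a study measurement.

    ```python
    def activity_energy_expenditure(tee_kcal, bmr_kcal):
        """AEE = TEE - BMR - 0.1 * TEE, where 0.1 * TEE approximates the thermic effect of food."""
        return tee_kcal - bmr_kcal - 0.1 * tee_kcal

    # Illustrative values: TEE near the study mean (2,294 kcal/day), BMR assumed.
    tee = 2294.0
    bmr = 1350.0          # hypothetical basal metabolic rate from a metabolic chamber
    print(activity_energy_expenditure(tee, bmr))   # -> 714.6 kcal/day
    ```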

  16. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    PubMed

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  17. Relationship and Variation of qPCR and Culturable Enterococci Estimates in Ambient Surface Waters Are Predictable

    EPA Science Inventory

    The quantitative polymerase chain reaction (qPCR) method provides rapid estimates of fecal indicator bacteria densities that have been indicated to be useful in the assessment of water quality. Primarily because this method provides faster results than standard culture-based meth...

  18. A Comparison of Approaches for Setting Proficiency Standards.

    ERIC Educational Resources Information Center

    Koffler, Stephen L.

    This research compared the cut-off scores estimated from an empirical procedure (Contrasting group method) to those determined from a more theoretical process (Nedelsky method). A methodological and statistical framework was also provided for analysis of the data to obtain the most appropriate standard using the empirical procedure. Data were…

  19. Parameter estimation method and updating of regional prediction equations for ungaged sites in the desert region of California

    USGS Publications Warehouse

    Barth, Nancy A.; Veilleux, Andrea G.

    2012-01-01

    The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R^2 of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
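
    A simplified sketch of a weighted least squares regional mean model of this kind: regress hypothetical at-site log-space means on log drainage area, weighting sites by record length as a stand-in for the reliability of the at-site estimate. The weighting scheme and all numbers are assumptions for illustration, not the published California regression.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(17)
    n_sites = 33
    drainage_area = 10 ** rng.uniform(0, 3, n_sites)          # mi^2, hypothetical
    record_len = rng.integers(10, 40, n_sites)                # years of record

    # Hypothetical at-site log-space means of the LP3 distribution, noisier at short records.
    true_mean = 0.8 + 0.5 * np.log10(drainage_area)
    at_site_mean = true_mean + rng.normal(0, 0.9 / np.sqrt(record_len))

    # Weighted least squares: weight each site by record length so that
    # better-sampled at-site estimates get more influence.
    X = sm.add_constant(np.log10(drainage_area))
    wls = sm.WLS(at_site_mean, X, weights=record_len).fit()
    print(wls.params)        # intercept and slope of the regional mean model
    print(wls.rsquared)      # analogous in spirit to the pseudo-R^2 reported above
    ```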

  20. Location Modification Factors for Potential Dose Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Sandra F.; Barnett, J. Matthew

    2017-01-01

    A Department of Energy facility must comply with the National Emission Standard for Hazardous Air Pollutants for radioactive air emissions. The standard is an effective dose of less than 0.1 mSv yr^-1 to the maximum public receptor. Additionally, a lower dose level may be assigned to a specific emission point in a State-issued permit. A method to efficiently estimate the expected dose for future emissions is described. This method is most appropriately applied to a research facility with several emission points with generally low emission levels of numerous isotopes.

  1. Estimation of selected streamflow statistics for a network of low-flow partial-record stations in areas affected by Base Realignment and Closure (BRAC) in Maryland

    USGS Publications Warehouse

    Ries, Kernell G.; Eng, Ken

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 90.0 to 7.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estima

  2. Multilevel Modeling with Correlated Effects

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Frees, Edward W.

    2007-01-01

    When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…

  3. ROOT BIOMASS ALLOCATION IN THE WORLD'S UPLAND FORESTS

    EPA Science Inventory

    Because the world's forests play a major role in regulating nutrient and carbon cycles, there is much interest in estimating their biomass. Estimates of aboveground biomass based on well-established methods are relatively abundant; estimates of root biomass based on standard meth...

  4. Modification of the Microtiter Reading Mirror for Use in the Standardized Micro Complement Fixation Test

    PubMed Central

    Hui, Gabriel W. K.

    1971-01-01

    Modification of the Microtiter reading mirror used in the standardized diagnostic complement fixation method permits convenient estimation of the results in per cent hemolysis by direct visual comparison with the hemolytic standards. Images PMID:5564678

  5. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  6. Phased-array vector velocity estimation using transverse oscillations.

    PubMed

    Pihl, Michael J; Marcher, Jonne; Jensen, Jorgen A

    2012-12-01

    A method for estimating the 2-D vector velocity of blood using a phased-array transducer is presented. The approach is based on the transverse oscillation (TO) method. The purposes of this work are to expand the TO method to a phased-array geometry and to broaden the potential clinical applicability of the method. A phased-array transducer has a smaller footprint and a larger field of view than a linear array, and is therefore more suited for, e.g., cardiac imaging. The method relies on suitable TO fields, and a beamforming strategy employing diverging TO beams is proposed. The implementation of the TO method using a phased-array transducer for vector velocity estimation is evaluated through simulation and flow-rig measurements are acquired using an experimental scanner. The vast number of calculations needed to perform flow simulations makes the optimization of the TO fields a cumbersome process. Therefore, three performance metrics are proposed. They are calculated based on the complex TO spectrum of the combined TO fields. It is hypothesized that the performance metrics are related to the performance of the velocity estimates. The simulations show that the squared correlation values range from 0.79 to 0.92, indicating a correlation between the performance metrics of the TO spectrum and the velocity estimates. Because these performance metrics are much more readily computed, the TO fields can be optimized faster for improved velocity estimation of both simulations and measurements. For simulations of a parabolic flow at a depth of 10 cm, a relative (to the peak velocity) bias and standard deviation of 4% and 8%, respectively, are obtained. Overall, the simulations show that the TO method implemented on a phased-array transducer is robust with relative standard deviations around 10% in most cases. The flow-rig measurements show similar results. At a depth of 9.5 cm using 32 emissions per estimate, the relative standard deviation is 9% and the relative bias is -9%. At the center of the vessel, the velocity magnitude is estimated to be 0.25 ± 0.023 m/s, compared with an expected peak velocity magnitude of 0.25 m/s, and the beam-to-flow angle is calculated to be 89.3° ± 0.77°, compared with an expected angle value between 89° and 90°. For steering angles up to ±20° degrees, the relative standard deviation is less than 20%. The results also show that a 64-element transducer implementation is feasible, but with a poorer performance compared with a 128-element transducer. The simulation and experimental results demonstrate that the TO method is suitable for use in conjunction with a phased-array transducer, and that 2-D vector velocity estimation is possible down to a depth of 15 cm.

  7. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  8. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up for a sample size of 1,000 especially. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests to use the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
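
    One common way to combine a baseline exposure prevalence with a relative-effect estimate into an attributable risk is Levin's formula; whether this matches the paper's "simpler method" exactly is an assumption, and the numbers below are purely illustrative.

        # Levin-type attributable risk from exposure prevalence and a Cox hazard ratio.
        def attributable_risk_levin(prevalence, hazard_ratio):
            excess = prevalence * (hazard_ratio - 1.0)
            return excess / (1.0 + excess)

        p_exposed = 0.30   # hypothetical baseline prevalence of exposure
        hr = 1.5           # hypothetical Cox model hazard ratio
        print(round(attributable_risk_levin(p_exposed, hr), 3))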

  9. A novel method for blood volume estimation using trivalent chromium in rabbit models.

    PubMed

    Baby, Prathap Moothamadathil; Kumar, Pramod; Kumar, Rajesh; Jacob, Sanu S; Rawat, Dinesh; Binu, V S; Karun, Kalesh M

    2014-05-01

    Blood volume measurement, though important in the management of critically ill patients, is not routinely estimated in clinical practice owing to the labour-intensive, intricate and time-consuming nature of existing methods. The aim was to compare blood volume estimations using trivalent chromium [(51)Cr(III)] and the standard Evans blue dye (EBD) method in New Zealand white rabbit models and to establish a correction factor (CF). Blood volume estimation in 33 rabbits was carried out using the EBD method, with concentration determined using a spectrophotometric assay, followed by blood volume estimation using direct injection of (51)Cr(III). Twenty of the 33 rabbits were used to find the CF by dividing the blood volume estimate obtained using EBD by the blood volume estimate obtained using (51)Cr(III). The CF was validated in 13 rabbits by multiplying it with the blood volume estimates obtained using (51)Cr(III). The mean circulating blood volume of the 33 rabbits using EBD was 142.02 ± 22.77 ml or 65.76 ± 9.31 ml/kg, and using (51)Cr(III) it was estimated to be 195.66 ± 47.30 ml or 89.81 ± 17.88 ml/kg. The CF was found to be 0.77. The mean blood volume of the 13 rabbits measured using EBD was 139.54 ± 27.19 ml or 66.33 ± 8.26 ml/kg, and using (51)Cr(III) with the CF it was 152.73 ± 46.25 ml or 71.87 ± 13.81 ml/kg (P = 0.11). The estimation of blood volume using (51)Cr(III) with the CF was comparable to the standard EBD method. With further research in this direction, we envisage that human blood volume estimation using (51)Cr(III) will find application in acute clinical settings.
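
    The correction-factor logic described above reduces to a mean ratio applied to new estimates; a minimal sketch follows, with illustrative volumes rather than the study data.

        # Correction factor (CF) from a calibration group and its application to
        # 51Cr(III)-based estimates in a validation group (illustrative values only).
        import numpy as np

        bv_ebd_calib = np.array([140.0, 150.0, 135.0, 160.0])    # ml, Evans blue dye
        bv_cr_calib = np.array([185.0, 200.0, 170.0, 210.0])     # ml, 51Cr(III)
        cf = np.mean(bv_ebd_calib / bv_cr_calib)                  # per-animal ratios, averaged

        bv_cr_validation = np.array([190.0, 175.0, 205.0])        # ml, 51Cr(III)
        print(round(cf, 2), np.round(cf * bv_cr_validation, 1))   # CF and adjusted volumes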

  10. An Experimental Study of the Effect of Judges' Knowledge of Item Data on Two Forms of the Angoff Standard Setting Method.

    ERIC Educational Resources Information Center

    Garrido, Mariquita; Payne, David A.

    Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data (n=20), and without data available (n=19); and the Modified Angoff method with (n=19), and without (n=19) item data available to judges. The Angoff method required free response percentage estimates (0-100) percent,…

  11. The Impact of Using Different Methods to Assess Completeness of 24-Hour Urine Collection on Estimating Dietary Sodium.

    PubMed

    Wielgosz, Andreas; Robinson, Christopher; Mao, Yang; Jiang, Ying; Campbell, Norm R C; Muthuri, Stella; Morrison, Howard

    2016-06-01

    The standard for population-based surveillance of dietary sodium intake is 24-hour urine testing; however, this may be affected by incomplete urine collection. The impact of different indirect methods of assessing completeness of collection on estimated sodium ingestion has not been established. The authors enlisted 507 participants from an existing community study in 2009 to collect 24-hour urine samples. Several methods of assessing completeness of urine collection were tested. Mean sodium intake varied between 3648 mg/24 h and 7210 mg/24 h depending on the method used. Excluding urine samples collected for longer or shorter than 24 hours increased the estimated urine sodium excretion, even when corrections for the variation in timed collections were applied. Until an accurate method of indirectly assessing completeness of urine collection is identified, the gold standard of administering para-aminobenzoic acid is recommended. Efforts to ensure participants collect complete urine samples are also warranted. ©2015 Wiley Periodicals, Inc.

  12. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  13. Ar+ and CuBr laser-assisted chemical bleaching of teeth: estimation of whiteness degree

    NASA Astrophysics Data System (ADS)

    Dimitrov, S.; Todorovska, Roumyana; Gizbrecht, Alexander I.; Raychev, L.; Petrov, Lyubomir P.

    2003-11-01

    In this work we present the results of adapting objective methods of color determination with the aim of developing techniques for estimating the degree of human teeth whiteness that are sufficiently handy for common use in clinical practice. To test and illustrate the techniques, standards of teeth colors were used, as well as model and naturally discolored human teeth treated with two bleaching chemical compositions, each activated by three light sources: Ar+ and CuBr lasers and a standard halogen photopolymerization lamp. Typical reflection and fluorescence spectra of some samples are presented; the sample colors were estimated by standard computer processing in RGB and B coordinates. The results of the applied spectral and colorimetric techniques are in good agreement with those of the standard computer processing of the corresponding digital photographs and comply with the visually estimated degree of teeth whiteness judged according to the standard reference scale commonly used in aesthetic dentistry.

  14. HPLC analysis and standardization of Brahmi vati - An Ayurvedic poly-herbal formulation.

    PubMed

    Mishra, Amrita; Mishra, Arun K; Tiwari, Om Prakash; Jha, Shivesh

    2013-09-01

    The aim of the present study was to standardize Brahmi vati (BV) by simultaneous quantitative estimation of Bacoside A3 and Piperine using an HPLC-UV method. BV is an important Ayurvedic poly-herbal formulation used to treat epilepsy and mental disorders, containing thirty-eight ingredients including Bacopa monnieri L. and Piper longum L. An HPLC-UV method was developed for the standardization of BV through simultaneous quantitative estimation of Bacoside A3 and Piperine, the major constituents of B. monnieri L. and P. longum L., respectively. The developed method was validated on parameters including linearity, precision, accuracy and robustness. The HPLC analysis showed a significant increase in the amounts of Bacoside A3 and Piperine in the in-house sample of BV when compared with three different marketed samples. The results showed variations in the amounts of Bacoside A3 and Piperine among samples, indicating non-uniformity in their quality that will lead to differences in their therapeutic effects. The outcome of the present investigation underlines the importance of standardization of Ayurvedic formulations. The developed method may be further used to standardize other samples of BV or other formulations containing Bacoside A3 and Piperine.

  15. Development of modern human subadult age and sex estimation standards using multi-slice computed tomography images from medical examiner's offices

    NASA Astrophysics Data System (ADS)

    Stock, Michala K.; Stull, Kyra E.; Garvin, Heather M.; Klales, Alexandra R.

    2016-10-01

    Forensic anthropologists are routinely asked to estimate a biological profile (i.e., age, sex, ancestry and stature) from a set of unidentified remains. In contrast to the abundance of collections and techniques associated with adult skeletons, there is a paucity of modern, documented subadult skeletal material, which limits the creation and validation of appropriate forensic standards. Many are forced to use antiquated methods derived from small sample sizes, which given documented secular changes in the growth and development of children, are not appropriate for application in the medico-legal setting. Therefore, the aim of this project is to use multi-slice computed tomography (MSCT) data from a large, diverse sample of modern subadults to develop new methods to estimate subadult age and sex for practical forensic applications. The research sample will consist of over 1,500 full-body MSCT scans of modern subadult individuals (aged birth to 20 years) obtained from two U.S. medical examiner's offices. Statistical analysis of epiphyseal union scores, long bone osteometrics, and os coxae landmark data will be used to develop modern subadult age and sex estimation standards. This project will result in a database of information gathered from the MSCT scans, as well as the creation of modern, statistically rigorous standards for skeletal age and sex estimation in subadults. Furthermore, the research and methods developed in this project will be applicable to dry bone specimens, MSCT scans, and radiographic images, thus providing both tools and continued access to data for forensic practitioners in a variety of settings.

  16. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  17. A method for estimating mean and low flows of streams in national forests of Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1985-01-01

    Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area, and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error of estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)
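
    Regional regression equations of this kind are typically fit in log space; a minimal sketch under that assumption is shown below, with made-up calibration data and no relation to the published coefficients.

        # Fit log10(Q) = b0 + b1*log10(width) + b2*log10(area) + b3*log10(precip)
        # by least squares and predict mean annual discharge (illustrative only).
        import numpy as np

        width = np.array([10.0, 25.0, 40.0, 60.0, 85.0])     # active-channel width, ft
        area = np.array([12.0, 55.0, 120.0, 260.0, 540.0])   # drainage area, mi^2
        precip = np.array([18.0, 22.0, 30.0, 35.0, 42.0])    # mean annual precipitation, in
        q_mean = np.array([4.0, 20.0, 60.0, 150.0, 400.0])   # mean annual discharge, ft^3/s

        X = np.column_stack([np.ones_like(width), np.log10(width),
                             np.log10(area), np.log10(precip)])
        coef, *_ = np.linalg.lstsq(X, np.log10(q_mean), rcond=None)

        def predict_q(w, a, p):
            return 10.0 ** (coef @ np.array([1.0, np.log10(w), np.log10(a), np.log10(p)]))

        print(round(predict_q(30.0, 80.0, 28.0), 1))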

  18. 7 CFR 58.135 - Bacterial estimate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., GENERAL SPECIFICATIONS FOR APPROVED PLANTS AND STANDARDS FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Quality Specifications for Raw Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate...

  19. Comparison and validation of methods to quantify Cry1Ab toxin from Bacillus thuringiensis for standardization of insect bioassays.

    PubMed

    Crespo, André L B; Spencer, Terence A; Nekl, Emily; Pusztai-Carey, Marianne; Moar, William J; Siegfried, Blair D

    2008-01-01

    Standardization of toxin preparations derived from Bacillus thuringiensis (Berliner) used in laboratory bioassays is critical for accurately assessing possible changes in the susceptibility of field populations of target pests. Different methods were evaluated to quantify Cry1Ab, the toxin expressed by 80% of the commercially available transgenic maize that targets the European corn borer, Ostrinia nubilalis (Hübner). We compared three methods of quantification on three different toxin preparations from independent sources: enzyme-linked immunosorbent assay (ELISA), sodium dodecyl sulfate-polyacrylamide gel electrophoresis and densitometry (SDS-PAGE/densitometry), and the Bradford assay for total protein. The results were compared to those obtained by immunoblot analysis and with the results of toxin bioassays against susceptible laboratory colonies of O. nubilalis. The Bradford method resulted in statistically higher estimates than either ELISA or SDS-PAGE/densitometry but also provided the lowest coefficients of variation (CVs) for estimates of the Cry1Ab concentration (from 2.4 to 5.4%). The CV of estimates obtained by ELISA ranged from 12.8 to 26.5%, whereas the CV of estimates obtained by SDS-PAGE/densitometry ranged from 0.2 to 15.4%. We standardized toxin concentration by using SDS-PAGE/densitometry, which is the only method specific for the 65-kDa Cry1Ab protein and is not confounded by impurities detected by ELISA and Bradford assay for total protein. Bioassays with standardized Cry1Ab preparations based on SDS-PAGE/densitometry showed no significant differences in LC(50) values, although there were significant differences in growth inhibition for two of the three Cry1Ab preparations. However, the variation in larval weight caused by toxin source was only 4% of the total variation, and we conclude that standardization of Cry1Ab production and quantification by SDS-PAGE/densitometry may improve data consistency in monitoring efforts to identify changes in insect susceptibility to Cry1Ab.

  20. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode conditioned smoother uses standard Kalman smoothing recursion. Resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is a key in improving the performance.
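
    The backward pass that each mode-conditioned smoother would run is the standard fixed-interval (Rauch-Tung-Striebel) recursion; the sketch below shows only that recursion on a single linear-Gaussian model, omitting the IMM mixing and mode-probability steps, with illustrative matrices and noise levels.

        # Forward Kalman filter plus backward RTS smoothing for one linear-Gaussian mode.
        import numpy as np

        def kalman_filter(zs, F, H, Q, R, x0, P0):
            """Return filtered moments and one-step predictions for each time step."""
            x, P = x0, P0
            xf, Pf, xp, Pp = [], [], [], []
            for z in zs:
                x_pr, P_pr = F @ x, F @ P @ F.T + Q               # predict
                K = P_pr @ H.T @ np.linalg.inv(H @ P_pr @ H.T + R)
                x = x_pr + K @ (z - H @ x_pr)                     # update
                P = (np.eye(len(x0)) - K @ H) @ P_pr
                xf.append(x); Pf.append(P)
                xp.append(F @ x); Pp.append(F @ P @ F.T + Q)      # prediction for k+1
            return map(np.array, (xf, Pf, xp, Pp))

        def rts_smoother(xf, Pf, xp, Pp, F):
            """Backward Rauch-Tung-Striebel pass over the filtered results."""
            xs, Ps = xf.copy(), Pf.copy()
            for k in range(len(xf) - 2, -1, -1):
                C = Pf[k] @ F.T @ np.linalg.inv(Pp[k])            # smoother gain
                xs[k] = xf[k] + C @ (xs[k + 1] - xp[k])
                Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k]) @ C.T
            return xs, Ps

        # Tiny constant-velocity demo with illustrative noise levels.
        rng = np.random.default_rng(0)
        F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
        Q = 0.01 * np.eye(2); R = np.array([[0.5]])
        zs = [np.array([k + 0.7 * rng.normal()]) for k in range(20)]
        xf, Pf, xp, Pp = kalman_filter(zs, F, H, Q, R, np.zeros(2), np.eye(2))
        xs, _ = rts_smoother(xf, Pf, xp, Pp, F)
        print(xs[0], xf[0])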

  1. Feasibility of Measuring Mean Vertical Motion for Estimating Advection. Chapter 6

    NASA Technical Reports Server (NTRS)

    Vickers, Dean; Mahrt, L.

    2005-01-01

    Numerous recent studies calculate horizontal and vertical advection terms for budget studies of net ecosystem exchange of carbon. One potential uncertainty in such studies is the estimate of mean vertical motion. This work addresses the reliability of vertical advection estimates by contrasting the vertical motion obtained from the standard practise of measuring the vertical velocity and applying a tilt correction, to the vertical motion calculated from measurements of the horizontal divergence of the flow using a network of towers. Results are compared for three different tilt correction methods. Estimates of mean vertical motion are sensitive to the choice of tilt correction method. The short-term mean (10 to 60 minutes) vertical motion based on the horizontal divergence is more realistic compared to the estimates derived from the standard practise. The divergence shows long-term mean (days to months) sinking motion at the site, apparently due to the surface roughness change. Because all the tilt correction methods rely on the assumption that the long-term mean vertical motion is zero for a given wind direction, they fail to reproduce the vertical motion based on the divergence.

  2. SIMPLE METHOD FOR ESTIMATING POLYCHLORINATED BIPHENYL CONCENTRATIONS ON SOILS AND SEDIMENTS USING SUBCRITICAL WATER EXTRACTION COUPLED WITH SOLID-PHASE MICROEXTRACTION. (R825368)

    EPA Science Inventory

    A rapid method for estimating polychlorinated biphenyl (PCB) concentrations in contaminated soils and sediments has been developed by coupling static subcritical water extraction with solid-phase microextraction (SPME). Soil, water, and internal standards are placed in a seale...

  3. Quality assurance, training, and certification in ozone air pollution studies

    Treesearch

    Susan Schilling; Paul Miller; Brent Takemoto

    1996-01-01

    Uniform, or standard, measurement methods of data are critical to projects monitoring change to forest systems. Standardized methods, with known or estimable errors, contribute greatly to the confidence associated with decisions on the basis of field data collections (Zedaker and Nicholas 1990). Quality assurance (QA) for the measurement process includes operations and...

  4. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
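
    The analytic relationship being exploited is that the standard error of an ability estimate at a given theta is one over the square root of the test information at that theta. A minimal sketch assuming a 2PL item parameterization follows; the paper's MST routing structure is not reproduced and the item parameters are made up.

        # Test information and SE(theta) under a 2PL model (illustrative parameters).
        import numpy as np

        def item_info_2pl(theta, a, b):
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            return a ** 2 * p * (1.0 - p)

        def se_theta(theta, a_params, b_params):
            info = sum(item_info_2pl(theta, a, b) for a, b in zip(a_params, b_params))
            return 1.0 / np.sqrt(info)

        a_params = [1.2, 0.8, 1.5, 1.0, 0.9]     # discriminations (hypothetical)
        b_params = [-1.0, -0.3, 0.0, 0.6, 1.2]   # difficulties (hypothetical)
        for theta in (-2.0, 0.0, 2.0):
            print(theta, round(se_theta(theta, a_params, b_params), 3))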

  5. Counting glomeruli and podocytes: rationale and methodologies

    PubMed Central

    Puelles, Victor G.; Bertram, John F.

    2015-01-01

    Purpose of review There is currently much interest in the numbers of both glomeruli and podocytes. This interest stems from greater understanding of the effects of suboptimal fetal events on nephron endowment, the associations between low nephron number and chronic cardiovascular and kidney disease in adults, and the emergence of the podocyte depletion hypothesis. Recent findings Obtaining accurate and precise estimates of glomerular and podocyte number has proven surprisingly difficult. When whole kidneys or large tissue samples are available, design-based stereological methods are considered gold-standard because they are based on principles that negate systematic bias. However, these methods are often tedious and time-consuming, and oftentimes inapplicable when dealing with small samples such as biopsies. Therefore, novel methods suitable for small tissue samples, and innovative approaches to facilitate high through put measurements, such as magnetic resonance imaging (MRI) to estimate glomerular number and flow cytometry to estimate podocyte number, have recently been described. Summary This review describes current gold-standard methods for estimating glomerular and podocyte number, as well as methods developed in the past 3 years. We are now better placed than ever before to accurately and precisely estimate glomerular and podocyte number, and to examine relationships between these measurements and kidney health and disease. PMID:25887899

  6. HPLC analysis and standardization of Brahmi vati – An Ayurvedic poly-herbal formulation

    PubMed Central

    Mishra, Amrita; Mishra, Arun K.; Tiwari, Om Prakash; Jha, Shivesh

    2013-01-01

    Objectives The aim of the present study was to standardize Brahmi vati (BV) by simultaneous quantitative estimation of Bacoside A3 and Piperine adopting HPLC–UV method. BV very important Ayurvedic polyherbo formulation used to treat epilepsy and mental disorders containing thirty eight ingredients including Bacopa monnieri L. and Piper longum L. Materials and methods An HPLC–UV method was developed for the standardization of BV in light of simultaneous quantitative estimation of Bacoside A3 and Piperine, the major constituents of B. monnieri L. and P. longum L. respectively. The developed method was validated on parameters including linearity, precision, accuracy and robustness. Results The HPLC analysis showed significant increase in amount of Bacoside A3 and Piperine in the in-house sample of BV when compared with all three different marketed samples of the same. Results showed variations in the amount of Bacoside A3 and Piperine in different samples which indicate non-uniformity in their quality which will lead to difference in their therapeutic effects. Conclusion The outcome of the present investigation underlines the importance of standardization of Ayurvedic formulations. The developed method may be further used to standardize other samples of BV or other formulations containing Bacoside A3 and Piperine. PMID:24396246

  7. Thermoelectric properties of bismuth telluride nanoplate thin films determined using combined infrared spectroscopy and first-principles calculation

    NASA Astrophysics Data System (ADS)

    Wada, Kodai; Tomita, Koji; Takashiri, Masayuki

    2018-06-01

    The thermoelectric properties of bismuth telluride (Bi2Te3) nanoplate thin films were estimated using combined infrared spectroscopy and first-principles calculation, followed by comparing the estimated properties with those obtained using the standard electrical probing method. Hexagonal single-crystalline Bi2Te3 nanoplates were first prepared using solvothermal synthesis, followed by preparing Bi2Te3 nanoplate thin films using the drop-casting technique. The nanoplates were joined by thermally annealing them at 250 °C in Ar (95%)–H2 (5%) gas (atmospheric pressure). The electronic transport properties were estimated by infrared spectroscopy using the Drude model, with the effective mass being determined from the band structure using first-principles calculations based on the density functional theory. The electrical conductivity and Seebeck coefficient obtained using the combined analysis were higher than those obtained using the standard electrical probing method, probably because the contact resistance between the nanoplates was excluded from the estimation procedure of the combined analysis method.

  8. Estimating the mass variance in neutron multiplicity counting-A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.

    2017-12-01

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
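
    Of the three approaches, the bootstrap is the most mechanical to sketch: resample the measurement cycles with replacement, recompute the mass from the resampled factorial moments, and take the spread of the replicates. In the sketch below, mass_from_moments is a stand-in for the actual point-model inversion, which is not reproduced, and the cycle data are simulated.

        # Bootstrap uncertainty of a mass estimate derived from factorial moments.
        import numpy as np

        rng = np.random.default_rng(0)

        def factorial_moments(counts):
            c = np.asarray(counts, float)
            return c.mean(), (c * (c - 1)).mean(), (c * (c - 1) * (c - 2)).mean()

        def mass_from_moments(m1, m2, m3):
            # Placeholder for the multiplicity point-model equations (illustrative only).
            return 10.0 * m2 / max(m1, 1e-9)

        cycles = rng.poisson(lam=3.0, size=500)         # simulated triggered counts per cycle

        reps = []
        for _ in range(1000):
            sample = rng.choice(cycles, size=cycles.size, replace=True)
            reps.append(mass_from_moments(*factorial_moments(sample)))

        print(np.mean(reps), np.std(reps, ddof=1))       # bootstrap mean and standard deviation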

  9. Estimating the mass variance in neutron multiplicity counting $-$ A comparison of approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubi, C.; Croft, S.; Favalli, A.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  10. Estimating the mass variance in neutron multiplicity counting $-$ A comparison of approaches

    DOE PAGES

    Dubi, C.; Croft, S.; Favalli, A.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  11. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information. PMID:22778608
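
    The working idea can be sketched as a calibrated empirical model that maps the ROI grey-level standard deviation to distance and is then inverted for new images. The power-law form and every number below are assumptions for illustration; the authors' calibrated expression also involves exposure time and radiant intensity.

        # Calibrate std = a * d**b in log space and invert it to estimate distance.
        import numpy as np

        d_cal = np.array([1.0, 1.5, 2.0, 2.5, 3.0])          # distance, m (hypothetical)
        std_cal = np.array([40.0, 26.0, 19.0, 15.0, 12.5])   # ROI grey-level std (hypothetical)

        b, log_a = np.polyfit(np.log(d_cal), np.log(std_cal), 1)
        a = np.exp(log_a)

        def distance_from_std(roi_std):
            return (roi_std / a) ** (1.0 / b)

        roi = np.random.default_rng(1).normal(120.0, 17.0, size=(32, 32))  # fake ROI pixels
        print(round(distance_from_std(roi.std()), 2), "m (illustrative)")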

  12. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    PubMed

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  13. A standardized mean difference effect size for multiple baseline designs across individuals.

    PubMed

    Hedges, Larry V; Pustejovsky, James E; Shadish, William R

    2013-12-01

    Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different condition (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single-case designs have focused attention on methods for summarizing and meta-analyzing findings and on the need for effect sizes indices that are comparable to those used in between-subjects designs. In the previous work, we discussed how to define and estimate an effect size that is directly comparable to the standardized mean difference often used in between-subjects research based on the data from a particular type of single-case design, the treatment reversal or (AB)(k) design. This paper extends the effect size measure to another type of single-case study, the multiple baseline design. We propose estimation methods for the effect size and its variance, study the estimators using simulation, and demonstrate the approach in two applications. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Regional Epidemiologic Assessment of Prevalent Periodontitis Using an Electronic Health Record System

    PubMed Central

    Acharya, Amit; VanWormer, Jeffrey J.; Waring, Stephen C.; Miller, Aaron W.; Fuehrer, Jay T.; Nycz, Gregory R.

    2013-01-01

    An oral health surveillance platform that queries a clinical/administrative data warehouse was applied to estimate regional prevalence of periodontitis. Cross-sectional analysis of electronic health record data collected between January 1, 2006, and December 31, 2010, was undertaken in a population sample residing in Ladysmith, Wisconsin. Eligibility criteria included: 1) residence in defined zip codes, 2) age 25–64 years, and 3) ≥1 Marshfield dental clinic comprehensive examination. Prevalence was established using 2 independent methods: 1) via an algorithm that considered clinical attachment loss and probe depth and 2) via standardized Current Dental Terminology (CDT) codes related to periodontal treatment. Prevalence estimates were age-standardized to 2000 US Census estimates. Inclusion criteria were met by 2,056 persons. On the basis of the American Academy of Periodontology/Centers for Disease Control and Prevention method, the age-standardized prevalence of moderate or severe periodontitis (combined) was 407 per 1,000 males and 308 per 1,000 females (348/1,000 males and 269/1,000 females using the CDT code method). Increased prevalence and severity of periodontitis was noted with increasing age. Local prevalence of periodontitis was consistent with national estimates. The need to address potential sample selection bias in future electronic health record–based periodontitis research was identified by this approach. Methods outlined herein may be applied to refine oral health surveillance systems, inform dental epidemiologic methods, and evaluate interventional outcomes. PMID:23462966
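
    Direct age-standardization, as used above, is a weighted average of age-specific rates with weights taken from a standard population; a minimal sketch with illustrative numbers (not the study's rates or the 2000 Census weights) follows.

        # Direct age-standardization of prevalence rates (per 1,000).
        def age_standardized_rate(age_specific_rates, standard_weights):
            return sum(r * w for r, w in zip(age_specific_rates, standard_weights))

        rates_per_1000 = [150.0, 280.0, 420.0, 560.0]   # e.g., ages 25-34, 35-44, 45-54, 55-64
        census_weights = [0.28, 0.27, 0.25, 0.20]       # hypothetical standard-population shares
        print(age_standardized_rate(rates_per_1000, census_weights))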

  15. Weighing in on international growth standards: testing the case in Australian preschool children.

    PubMed

    Pattinson, C L; Staton, S L; Smith, S S; Trost, S G; Sawyer, E F; Thorpe, K J

    2017-10-01

    Overweight and obesity in preschool-aged children are major health concerns. Accurate and reliable estimates of prevalence are necessary to direct public health and clinical interventions. There are currently three international growth standards used to determine prevalence of overweight and obesity, each using different methodologies: Center for Disease Control (CDC), World Health Organization (WHO) and International Obesity Task Force (IOTF). Adoption and use of each method were examined through a systematic review of Australian population studies (2006-2017). For this period, systematically identified population studies (N = 20) reported prevalence of overweight and obesity ranging between 15 and 38% with most (n = 16) applying the IOTF standards. To demonstrate the differences in prevalence estimates yielded by the IOTF in comparison to the WHO and CDC standards, methods were applied to a sample of N = 1,926 Australian children, aged 3-5 years. As expected, the three standards yielded significantly different estimates when applied to this single population. Prevalence of overweight/obesity was WHO - 9.3%, IOTF - 21.7% and CDC - 33.1%. Judicious selection of growth standards, taking account of their underpinning methodologies and provisions of access to study data sets to allow prevalence comparisons, is recommended. © 2017 World Obesity Federation.

  16. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    PubMed Central

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism. PMID:25387603

  17. Data Combination and Instrumental Variables in Linear Models

    ERIC Educational Resources Information Center

    Khawand, Christopher

    2012-01-01

    Instrumental variables (IV) methods allow for consistent estimation of causal effects, but suffer from poor finite-sample properties and data availability constraints. IV estimates also tend to have relatively large standard errors, often inhibiting the interpretability of differences between IV and non-IV point estimates. Lastly, instrumental…

  18. 78 FR 25627 - Energy Conservation Program for Certain Industrial Equipment: Energy Conservation Standards for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-02

    ...-fired furnaces, Underwriters Laboratories (UL) Standard 727-1994, ``Standard for Safety for Oil-Fired... supplementary method called a catalog teardown (or ``virtual teardown'') uses published manufacturer catalogs... similar products and in manufacturer literature and information, to estimate the costs using virtual...

  19. Validity and reliability of dental age estimation of teeth root translucency based on digital luminance determination.

    PubMed

    Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A

    2014-01-01

    In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance for the regression formula. For the use of this primer multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to linear regression analysis employing the best contributor "arithmetic mean" of luminance yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitate measurement and age classification due to its low dependence on observer experience.

  20. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    PubMed

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
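
    For the one-compartment intravenous-bolus case, the conversion from non-compartmental variables to model parameters is closed-form; a minimal sketch of that case is below. The paper's spreadsheet/Solver workflow and the two-compartment case are not reproduced, and the numbers are illustrative.

        # One-compartment parameters from non-compartmental AUC and terminal half-life.
        import math

        def one_compartment_from_nca(dose, auc, t_half):
            k = math.log(2.0) / t_half    # elimination rate constant, 1/h
            cl = dose / auc               # clearance, L/h
            v = cl / k                    # volume of distribution, L
            return {"k": k, "CL": cl, "V": v}

        print(one_compartment_from_nca(dose=100.0, auc=50.0, t_half=4.0))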

  1. Quantifying relative importance: Computing standardized effects in models with binary outcomes

    USGS Publications Warehouse

    Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.

    2018-01-01

    Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
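
    A common textbook form of latent-linear standardization for a logistic model scales each coefficient by the predictor's standard deviation and by the standard deviation of the latent response, taken as sqrt(var(linear predictor) + pi^2/3); whether this matches the LT method evaluated above in every detail is an assumption, and the data below are simulated.

        # Latent-linear (LT-style) standardized coefficients for a logistic model.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        x1 = rng.normal(size=n)
        x2 = rng.exponential(size=n)               # deliberately non-Gaussian predictor
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * x1 - 0.5 * x2))))
        X = np.column_stack([x1, x2])

        def fit_logistic(X, y, iters=25):
            """Plain Newton-Raphson for logistic regression (no intercept, for brevity)."""
            b = np.zeros(X.shape[1])
            for _ in range(iters):
                p = 1.0 / (1.0 + np.exp(-X @ b))
                w = p * (1.0 - p)
                b += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - p))
            return b

        b = fit_logistic(X, y)
        latent_sd = np.sqrt(np.var(X @ b) + np.pi ** 2 / 3.0)
        print(np.round(b * X.std(axis=0) / latent_sd, 3))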

  2. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time.

    PubMed

    Martin, Corby K; Correa, John B; Han, Hongmei; Allen, H Raymond; Rood, Jennifer C; Champagne, Catherine M; Gunturk, Bahadir K; Bray, George A

    2012-04-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1's objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake (EI) with the Remote Food Photography Method (RFPM) over 6 days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, EI estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n = 24) or Customized Prompts (n = 16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating EI when Standard (mean ± s.d. = -895 ± 770 kcal/day, P < 0.0001), but not Customized Prompts (-270 ± 748 kcal/day, P = 0.22) were used. Error (EI from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM's ability to accurately estimate EI in free-living adults (N = 50) over 6 days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living EI (-152 ± 694 kcal/day, P = 0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake.

  3. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    PubMed Central

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2014-01-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, energy intake estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n=24) or Customized Prompts (n=16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating energy intake when Standard (mean±SD = −895±770 kcal/day, p<.0001), but not Customized Prompts (−270±748 kcal/day, p=.22) were used. Error (energy intake from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM’s ability to accurately estimate energy intake in free-living adults (N=50) over six days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living energy intake (−152±694 kcal/day, p=0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake. PMID:22134199

  4. Population Size Estimation of Men Who Have Sex with Men in Ho Chi Minh City and Nghe An Using Social App Multiplier Method.

    PubMed

    Safarnejad, Ali; Nga, Nguyen Thien; Son, Vo Hai

    2017-06-01

    This study aims to estimate the number of men who have sex with men (MSM) in Ho Chi Minh City (HCMC) and Nghe An province, Viet Nam, using a novel method of population size estimation, and to assess the feasibility of the method in implementation. An innovative approach to population size estimation grounded on the principles of the multiplier method, and using social app technology and internet-based surveys was undertaken among MSM in two regions of Viet Nam in 2015. Enumeration of active users of popular social apps for MSM in Viet Nam was conducted over 4 weeks. Subsequently, an independent online survey was done using respondent driven sampling. We also conducted interviews with key informants in Nghe An and HCMC on their experience and perceptions of this method and other methods of size estimation. The population of MSM in Nghe An province was estimated to be 1765 [90% CI 1251-3150]. The population of MSM in HCMC was estimated to be 37,238 [90% CI 24,146-81,422]. These estimates correspond to 0.17% of the adult male population in Nghe An province [90% CI 0.12-0.30], and 1.35% of the adult male population in HCMC [90% CI 0.87-2.95]. Our size estimates of the MSM population (1.35% [90% CI 0.87%-2.95%] of the adult male population in HCMC) fall within current standard practice of estimating 1-3% of adult male population in big cities. Our size estimates of the MSM population (0.17% [90% CI 0.12-0.30] of the adult male population in Nghe An province) are lower than the current standard practice of estimating 0.5-1.5% of adult male population in rural provinces. These estimates can provide valuable information for sub-national level HIV prevention program planning and evaluation. Furthermore, we believe that our results help to improve application of this population size estimation method in other regions of Viet Nam.
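
    The multiplier arithmetic itself is a single division: an independently enumerated benchmark count divided by the proportion of surveyed MSM who belong to that benchmark. The sketch below uses illustrative numbers, not the study's counts, and omits the RDS weighting applied to the survey.

        # Multiplier-method population size estimate.
        def multiplier_estimate(benchmark_count, proportion_in_benchmark):
            return benchmark_count / proportion_in_benchmark

        app_users = 5000    # hypothetical enumerated active MSM app users
        p_use_app = 0.14    # hypothetical share of surveyed MSM who use the app
        print(round(multiplier_estimate(app_users, p_use_app)))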

  5. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump.

    PubMed

    Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K

    2017-03-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found; even a systematic difference in jump height was consistently observed between FT and double integration of force methods (-31% to -27%; p<0.001) and a large effect size (Cohen's d >1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height.
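
    The flight-time method referred to above computes jump height from projectile motion as h = g * t^2 / 8, assuming equal take-off and landing heights; a minimal sketch with an illustrative flight time:

        # Vertical jump height from flight time.
        G = 9.81  # m/s^2

        def jump_height_from_flight_time(t_flight):
            return G * t_flight ** 2 / 8.0

        t_f = 0.55                                            # s, illustrative flight time
        print(round(jump_height_from_flight_time(t_f) * 100, 1), "cm")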

  6. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump

    PubMed Central

    Attia, A; Chaouachi, A; Padulo, J; Wong, DP; Chamari, K

    2016-01-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found; even a systematic difference in jump height was consistently observed between FT and double integration of force methods (-31% to -27%; p<0.001) and a large effect size (Cohen’s d>1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height. PMID:28416900

  7. Quantifying stream nutrient uptake from ambient to saturation with instantaneous tracer additions

    NASA Astrophysics Data System (ADS)

    Covino, T. P.; McGlynn, B. L.; McNamara, R.

    2009-12-01

    Stream nutrient tracer additions and spiraling metrics are frequently used to quantify stream ecosystem behavior. However, standard approaches limit our understanding of aquatic biogeochemistry. Specifically, the relationship between in-stream nutrient concentration and stream nutrient spiraling has not been characterized. The standard constant rate (steady-state) approach to stream spiraling parameter estimation, either through elevating nutrient concentration or adding isotopically labeled tracers (e.g. 15N), provides little information regarding the stream kinetic curve that represents the uptake-concentration relationship analogous to the Michaelis-Menten curve. These standard approaches provide single or a few data points and often focus on estimating ambient uptake under the conditions at the time of the experiment. Here we outline and demonstrate a new method using instantaneous nutrient additions and dynamic analyses of breakthrough curve (BTC) data to characterize the full relationship between spiraling metrics and nutrient concentration. We compare the results from these dynamic analyses to BTC-integrated, and standard steady-state approaches. Our results indicate good agreement between these three approaches but we highlight the advantages of our dynamic method. Specifically, our new dynamic method provides a cost-effective and efficient approach to: 1) characterize full concentration-spiraling metric curves; 2) estimate ambient spiraling metrics; 3) estimate Michaelis-Menten parameters maximum uptake (Umax) and the half-saturation constant (Km) from developed uptake-concentration kinetic curves, and; 4) measure dynamic nutrient spiraling in larger rivers where steady-state approaches are impractical.
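
    The kinetic curve referred to above is the Michaelis-Menten relationship U = Umax * C / (Km + C); a minimal sketch fits it to hypothetical concentration-uptake pairs (standing in for values derived from a breakthrough curve) and evaluates uptake at an ambient concentration.

        # Fit a Michaelis-Menten uptake curve and read off ambient uptake (illustrative data).
        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(c, umax, km):
            return umax * c / (km + c)

        conc = np.array([5.0, 15.0, 40.0, 90.0, 180.0, 350.0])   # ug/L
        uptake = np.array([2.1, 5.4, 10.8, 16.0, 19.5, 21.3])    # ug m^-2 min^-1

        (umax, km), _ = curve_fit(michaelis_menten, conc, uptake, p0=[20.0, 50.0])
        ambient_conc = 8.0                                        # ug/L
        print(round(umax, 1), round(km, 1), round(michaelis_menten(ambient_conc, umax, km), 2))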

  8. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity, manipulated at the within and between levels of a two-level confirmatory factor analysis, by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher than in the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results than medium ICC conditions.

  9. Improved and standardized method for assessing years lived with disability after injury

    PubMed Central

    Polinder, S; Lyons, RA; Lund, J; Ditsuwan, V; Prinsloo, M; Veerman, JL; van Beeck, EF

    2012-01-01

    Abstract Objective To develop a standardized method for calculating years lived with disability (YLD) after injury. Methods The method developed consists of obtaining data on injury cases seen in emergency departments as well as injury-related hospital admissions, using the EUROCOST system to link the injury cases to disability information and employing empirical data to describe functional outcomes in injured patients. Findings Overall, 87 weights and proportions for 27 injury diagnoses involving lifelong consequences were included in the method. Almost all of the injuries investigated (96–100%) could be assigned to EUROCOST categories. The mean number of YLD per case of injury varied with the country studied. Use of the novel method resulted in estimated burdens of injury that were 3 to 8 times higher, in terms of YLD, than the corresponding estimates produced using the conventional methods employed in global burden of disease studies, which employ disability-adjusted life years. Conclusion The novel method for calculating YLD after injury can be applied in different settings, overcomes some limitations of the method used to calculate the global burden of disease, and allows more accurate estimates of the population burden of injury. PMID:22807597

  10. Comparison of two methods of standard setting: the performance of the three-level Angoff method.

    PubMed

    Jalili, Mohammad; Hejri, Sara M; Norcini, John J

    2011-12-01

    Cut-scores, reliability and validity vary among standard-setting methods. The modified Angoff method (MA) is a well-known standard-setting procedure, but the three-level Angoff approach (TLA), a recent modification, has not been extensively evaluated. This study aimed to compare standards and pass rates in an objective structured clinical examination (OSCE) obtained using two methods of standard setting with discussion and reality checking, and to assess the reliability and validity of each method. A sample of 105 medical students participated in a 14-station OSCE. Fourteen and 10 faculty members took part in the MA and TLA procedures, respectively. In the MA, judges estimated the probability that a borderline student would pass each station. In the TLA, judges estimated whether a borderline examinee would perform the task correctly or not. Having given individual ratings, judges discussed their decisions. One week after the examination, the procedure was repeated using normative data. The mean score for the total test was 54.11% (standard deviation: 8.80%). The MA cut-scores for the total test were 49.66% and 51.52% after discussion and reality checking, respectively (the consequent percentages of passing students were 65.7% and 58.1%, respectively). The TLA yielded mean pass scores of 53.92% and 63.09% after discussion and reality checking, respectively (rates of passing candidates were 44.8% and 12.4%, respectively). Compared with the TLA, the MA showed higher agreement between judges (0.94 versus 0.81) and a narrower 95% confidence interval in standards (3.22 versus 11.29). The MA seems a more credible and reliable procedure with which to set standards for an OSCE than does the TLA, especially when a reality check is applied. © Blackwell Publishing Ltd 2011.

  11. Gestational age estimation on United States livebirth certificates: a historical overview.

    PubMed

    Wier, Megan L; Pearl, Michelle; Kharrazi, Martin

    2007-09-01

    Gestational age on the birth certificate is the most common source of population-based gestational age data that informs public health policy and practice in the US. Last menstrual period is one of the oldest methods of gestational age estimation and has been on the US Standard Certificate of Live Birth since 1968. The 'clinical estimate of gestation', added to the standard certificate in 1989 to address missing or erroneous last menstrual period data, was replaced by the 'obstetric estimate of gestation' on the 2003 revision, which specifically precludes neonatal assessments. We discuss the strengths and weaknesses of these measures, potential research implications and challenges accompanying the transition to the obstetric estimate.

  12. A new mathematical approach for the estimation of the AUC and its variability under different experimental designs in preclinical studies.

    PubMed

    Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán

    2012-01-01

    The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and is amenable to implementation in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences between the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of the 95% confidence interval. In this way, the newly proposed method proves to be as useful as the WinNonlin® software where the latter is applicable. Copyright © 2011 John Wiley & Sons, Ltd.
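
    For orientation, a hedged sketch of a basic AUC calculation by the linear trapezoidal rule with a simple across-animal standard error; the concentration-time data are invented, and the paper's design-specific variance formulas are not reproduced here.

    ```python
    # Hedged illustration (not the authors' formulas): trapezoidal AUC per animal
    # in a serial-sampling design, then the mean AUC and its standard error.
    import numpy as np

    times = np.array([0, 0.5, 1, 2, 4, 8, 24.0])            # h
    conc = np.array([                                        # ng/mL, one row per animal (invented)
        [0, 12, 20, 18, 10, 4, 0.5],
        [0, 10, 17, 15,  9, 3, 0.4],
        [0, 14, 22, 19, 11, 5, 0.6],
    ])

    # Trapezoidal rule applied along the time axis for every animal at once.
    auc = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(times), axis=1)
    mean_auc = auc.mean()
    se_auc = auc.std(ddof=1) / np.sqrt(len(auc))
    print(f"AUC = {mean_auc:.1f} +/- {se_auc:.1f} ng*h/mL")
    ```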

  13. Bayesian Estimation Supersedes the "t" Test

    ERIC Educational Resources Information Center

    Kruschke, John K.

    2013-01-01

    Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional "t" tests) when certainty in the estimate is…

  14. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  15. The Effect of Various Factors on Standard Setting.

    ERIC Educational Resources Information Center

    Norcini, John J.; And Others

    1988-01-01

    Two studies of medical certification examinations were undertaken to assess standard setting using Angoff's Method. Results indicate that (1) specialization within broad content areas does not affect an expert's estimates of the performance of the borderline group; and (2) performance data should be provided during the standard-setting process.…

  16. Normalization of metabolomics data with applications to correlation maps.

    PubMed

    Jauhiainen, Alexandra; Madhu, Basetti; Narita, Masako; Narita, Masashi; Griffiths, John; Tavaré, Simon

    2014-08-01

    In metabolomics, the goal is to identify and measure the concentrations of different metabolites (small molecules) in a cell or a biological system. The metabolites form an important layer in the complex metabolic network, and the interactions between different metabolites are often of interest. It is crucial to perform proper normalization of metabolomics data, but current methods may not be applicable when estimating interactions in the form of correlations between metabolites. We propose a normalization approach based on a mixed model, with simultaneous estimation of a correlation matrix. We also investigate how the common use of a calibration standard in nuclear magnetic resonance (NMR) experiments affects the estimation of correlations. We show with both real and simulated data that our proposed normalization method is robust and has good performance when discovering true correlations between metabolites. The standardization of NMR data is shown in simulation studies to affect our ability to discover true correlations to a small extent. However, comparing standardized and non-standardized real data does not result in any large differences in correlation estimates. Source code is freely available at https://sourceforge.net/projects/metabnorm/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
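
    As background, a hedged sketch of the delete-one jackknife variance estimate for a simple ratio estimator R = Σy/Σx; it ignores the stratified, clustered structure of the complex samples that the report addresses, and all data are simulated.

    ```python
    # Hedged sketch: delete-one jackknife variance of a ratio estimator.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(5, 15, size=40)
    y = 2.0 * x + rng.normal(0, 2, size=40)

    def ratio(y, x):
        return y.sum() / x.sum()

    n = len(x)
    r_full = ratio(y, x)
    r_jack = np.array([ratio(np.delete(y, i), np.delete(x, i)) for i in range(n)])
    var_jack = (n - 1) / n * np.sum((r_jack - r_jack.mean()) ** 2)
    print(f"R = {r_full:.3f}, jackknife SE = {np.sqrt(var_jack):.4f}")
    ```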

  18. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches adequately treats the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
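
    For readers unfamiliar with the sampling machinery mentioned above, a toy random-walk Metropolis-Hastings sketch for a single parameter of a Gaussian likelihood; it illustrates the algorithm only and has nothing to do with WASMOD or the AR(1) error models used in the study.

    ```python
    # Toy random-walk Metropolis-Hastings sampler (illustrative only).
    import numpy as np

    rng = np.random.default_rng(42)
    obs = rng.normal(3.0, 1.0, size=50)          # synthetic "observations"

    def log_post(theta):
        # Flat prior; Gaussian likelihood with known unit variance.
        return -0.5 * np.sum((obs - theta) ** 2)

    samples, theta = [], 0.0
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.3)        # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                          # accept
        samples.append(theta)

    burned = np.array(samples[1000:])
    print(f"posterior mean = {burned.mean():.2f}, sd = {burned.std():.2f}")
    ```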

  19. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  20. A Note on the Specification of Error Structures in Latent Interaction Models

    ERIC Educational Resources Information Center

    Mao, Xiulin; Harring, Jeffrey R.; Hancock, Gregory R.

    2015-01-01

    Latent interaction models have motivated a great deal of methodological research, mainly in the area of estimating such models. Product-indicator methods have been shown to be competitive with other methods of estimation in terms of parameter bias and standard error accuracy, and their continued popularity in empirical studies is due, in part, to…

  1. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and the image quality score. Images from the Laboratory for Image and Video Engineering (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
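
    A hedged sketch of the familiar wavelet-domain initial estimate referred to above, sigma ≈ median(|HH|)/0.6745 computed from the diagonal detail coefficients; the curve-fitting refinement of the proposed method is not reproduced, and the wavelet choice (db8) is an assumption.

    ```python
    # Hedged sketch: wavelet/MAD initial estimate of Gaussian noise sigma.
    import numpy as np
    import pywt

    def estimate_noise_sigma(image: np.ndarray) -> float:
        # Single-level 2-D DWT; the diagonal detail band is dominated by noise.
        _, (_, _, cD) = pywt.dwt2(image.astype(float), "db8")
        return float(np.median(np.abs(cD)) / 0.6745)

    rng = np.random.default_rng(1)
    noisy = np.zeros((256, 256)) + rng.normal(0, 10, (256, 256))   # true sigma = 10
    print(f"estimated sigma = {estimate_noise_sigma(noisy):.2f}")
    ```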

  2. Updated techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada

    USGS Publications Warehouse

    Hess, Glen W.

    2002-01-01

    Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.

  3. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.

  4. Evaluation of a Wipe Surface Sample Method for Collection of Bacillus Spores from Nonporous Surfaces▿

    PubMed Central

    Brown, Gary S.; Betty, Rita G.; Brockmann, John E.; Lucero, Daniel A.; Souza, Caroline A.; Walsh, Kathryn S.; Boucher, Raymond M.; Tezak, Mathew; Wilson, Mollye C.; Rudolph, Todd

    2007-01-01

    Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of ±0.12, and for painted wallboard it was 0.29 with a standard deviation of ±0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of ±0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis. PMID:17122390

  5. Evaluation of a wipe surface sample method for collection of Bacillus spores from nonporous surfaces.

    PubMed

    Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Mathew; Wilson, Mollye C; Rudolph, Todd

    2007-02-01

    Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of +/-0.12, and for painted wallboard it was 0.29 with a standard deviation of +/-0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of +/-0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis.

  6. Estimation of two-dimensional motion velocity using ultrasonic signals beamformed in Cartesian coordinate for measurement of cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kaburaki, Kaori; Mozumi, Michiya; Hasegawa, Hideyuki

    2018-07-01

    Methods for the estimation of two-dimensional (2D) velocity and displacement of physiological tissues are necessary for quantitative diagnosis. In echocardiography with a phased array probe, the accuracy in the estimation of the lateral motion is lower than that of the axial motion. To improve the accuracy in the estimation of the lateral motion, in the present study, the coordinate system for ultrasonic beamforming was changed from the conventional polar coordinate system to a Cartesian coordinate system. In a basic experiment, the motion velocity of a phantom, which was moved at a constant speed, was estimated by the conventional and proposed methods. The proposed method reduced the bias error and standard deviation in the estimated motion velocities. In an in vivo measurement, intracardiac blood flow was analyzed by the proposed method.

  7. Interlaboratory evaluation of a standardized inductively coupled plasma mass spectrometry method for the determination of trace beryllium in air filter samples.

    PubMed

    Ashley, Kevin; Brisson, Michael J; Howe, Alan M; Bartley, David L

    2009-12-01

    A collaborative interlaboratory evaluation of a newly standardized inductively coupled plasma mass spectrometry (ICP-MS) method for determining trace beryllium in workplace air samples was carried out toward fulfillment of method validation requirements for ASTM International voluntary consensus standard test methods. The interlaboratory study (ILS) was performed in accordance with an applicable ASTM International standard practice, ASTM E691, which describes statistical procedures for investigating interlaboratory precision. Uncertainty was also estimated in accordance with ASTM D7440, which applies the International Organization for Standardization Guide to the Expression of Uncertainty in Measurement to air quality measurements. Performance evaluation materials (PEMs) used consisted of 37 mm diameter mixed cellulose ester filters that were spiked with beryllium at levels of 0.025 (low loading), 0.5 (medium loading), and 10 (high loading) microg Be/filter; these spiked filters were prepared by a contract laboratory. Participating laboratories were recruited from a pool of over 50 invitees; ultimately, 20 laboratories from Europe, North America, and Asia submitted ILS results. Triplicates of each PEM (blanks plus the three different loading levels) were conveyed to each volunteer laboratory, along with a copy of the draft standard test method that each participant was asked to follow; spiking levels were unknown to the participants. The laboratories were requested to prepare the PEMs by one of three sample preparation procedures (hotplate or microwave digestion or hotblock extraction) that were described in the draft standard. Participants were then asked to analyze aliquots of the prepared samples by ICP-MS and to report their data in units of microg Be/filter sample. Interlaboratory precision estimates from participating laboratories, computed in accordance with ASTM E691, were 0.165, 0.108, and 0.151 (relative standard deviation) for the PEMs spiked at 0.025, 0.5, and 10 microg Be/filter, respectively. Overall recoveries were 93.2%, 102%, and 80.6% for the low, medium, and high beryllium loadings, respectively. Expanded uncertainty estimates for interlaboratory analysis of low, medium, and high beryllium loadings, calculated in accordance with ASTM D7440, were 18.8%, 19.8%, and 24.4%, respectively. These figures of merit support promulgation of the analytical procedure as an ASTM International standard test method, ASTM D7439.

  8. Estimating age at a specified length from the von Bertalanffy growth function

    USGS Publications Warehouse

    Ogle, Derek H.; Isermann, Daniel A.

    2017-01-01

    Estimating the time required (i.e., age) for fish in a population to reach a specific length (e.g., legal harvest length) is useful for understanding population dynamics and simulating the potential effects of length-based harvest regulations. The age at which a population reaches a specific mean length is typically estimated by fitting a von Bertalanffy growth function to length-at-age data and then rearranging the best-fit equation to solve for age at the specified length. This process precludes the use of standard frequentist methods to compute confidence intervals and compare estimates of age at the specified length among populations. We provide a parameterization of the von Bertalanffy growth function that has age at a specified length as a parameter. With this parameterization, age at a specified length is directly estimated, and standard methods can be used to construct confidence intervals and make among-group comparisons for this parameter. We demonstrate use of the new parameterization with two data sets.
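
    A hedged sketch of one way to put age at a specified length directly into the von Bertalanffy growth function so that a standard nonlinear fit returns the estimate and its standard error; the specified length (250 mm), the length-at-age data, and this particular algebraic form are illustrative assumptions rather than the authors' exact parameterization.

    ```python
    # Hedged sketch: VBGF written so that t_lstar (age at length L_STAR) is a parameter.
    import numpy as np
    from scipy.optimize import curve_fit

    L_STAR = 250.0  # specified length (mm), e.g., a hypothetical harvest length

    def vbgf_age_at_length(age, linf, k, t_lstar):
        # Standard VBGF with t0 eliminated in favour of t_lstar, the age at which
        # mean length equals L_STAR: L(t) = Linf - (Linf - L_STAR) * exp(-k*(t - t_lstar)).
        return linf - (linf - L_STAR) * np.exp(-k * (age - t_lstar))

    # Invented length-at-age data
    age = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10.0])
    length = np.array([90, 160, 215, 255, 285, 310, 328, 342, 352, 360.0])

    (linf, k, t_lstar), cov = curve_fit(vbgf_age_at_length, age, length, p0=[400, 0.3, 4])
    se = np.sqrt(np.diag(cov))
    print(f"age at {L_STAR:.0f} mm = {t_lstar:.2f} yr (SE {se[2]:.2f})")
    ```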

  9. The Remote Food Photography Method accurately estimates dry powdered foods—the source of calories for many infants

    PubMed Central

    Duhé, Abby F.; Gilmore, L. Anne; Burton, Jeffrey H.; Martin, Corby K.; Redman, Leanne M.

    2016-01-01

    Background: Infant formula is a major source of nutrition for infants, with over half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used prior to reconstitution. Objective: To assess the use of the Remote Food Photography Method (RFPM) to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. Methods: For each serving size (1-scoop, 2-scoop, 3-scoop, and 4-scoop), a set of seven test bottles and photographs were prepared including the recommended gram weight of powdered formula of the respective serving size by the manufacturer, three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended, and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard RFPM analysis procedures. The ratio estimates and the United States Department of Agriculture (USDA) data tables were used to generate food and nutrient information to provide the RFPM estimates. Statistical Analyses Performed: Equivalence testing using the two one-sided t-tests (TOST) approach was used to determine equivalence between the actual gram weights and the RFPM estimated weights for all samples, within each serving size, and within under-prepared and over-prepared bottles. Results: For all bottles, the gram weights estimated by the RFPM were within 5% equivalence bounds with a slight under-estimation of 0.05 g (90% CI [−0.49, 0.40]; p<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. Conclusion: The maximum observed mean error was an overestimation of 1.58% of powdered formula by the RFPM under controlled laboratory conditions, indicating that the RFPM accurately estimated infant powdered formula. PMID:26947889
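
    A hedged illustration of the TOST (two one-sided tests) equivalence logic used in the study, written out by hand for a paired comparison with invented weights and a ±5% equivalence bound; it is not the authors' analysis code.

    ```python
    # Hedged sketch: paired TOST equivalence test with invented data.
    import numpy as np
    from scipy import stats

    actual = np.array([8.8, 17.6, 26.4, 35.2, 8.4, 18.1, 26.9])      # g of powder (invented)
    estimated = np.array([8.7, 17.8, 26.1, 35.0, 8.5, 17.9, 27.1])   # photograph-based estimates

    diff = estimated - actual
    low, upp = -0.05 * actual.mean(), 0.05 * actual.mean()           # +/-5% equivalence bounds
    n = len(diff)
    se = diff.std(ddof=1) / np.sqrt(n)

    t_low = (diff.mean() - low) / se          # H0: mean diff <= low
    t_upp = (diff.mean() - upp) / se          # H0: mean diff >= upp
    p_low = 1 - stats.t.cdf(t_low, df=n - 1)
    p_upp = stats.t.cdf(t_upp, df=n - 1)
    p_tost = max(p_low, p_upp)                # equivalence declared if p_tost < alpha
    print(f"mean diff = {diff.mean():.3f} g, TOST p = {p_tost:.4f}")
    ```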

  10. Signal location using generalized linear constraints

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.; Feldman, D. D.

    1992-01-01

    This report presents a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbations, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.

  11. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  12. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  13. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence along the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of the observations are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% within a 5 km radius of vicinity. We propose a method using standardized CVEs for classifying the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
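
    For intuition, an illustrative sketch of the analogous shortcut in ordinary least squares, where the full vector of leave-one-out cross-validation errors is obtained directly as e_i/(1 − h_ii) from the leverages; the paper's formula applies to least squares collocation, which is not reproduced here, and the data are simulated.

    ```python
    # Illustrative sketch (OLS analogue, not the paper's LSC formula): all
    # leave-one-out cross-validation errors at once, without n refits.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    A = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])   # design matrix
    y = A @ np.array([2.0, 0.5]) + rng.normal(0, 1, n)

    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    H = A @ np.linalg.inv(A.T @ A) @ A.T                        # hat matrix
    cve = resid / (1 - np.diag(H))                              # LOO errors from leverages
    print(f"RMS of cross-validation errors = {np.sqrt(np.mean(cve**2)):.3f}")
    ```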

  14. Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.

    PubMed

    Arora, Naveen Kumar; Verma, Maya

    2017-12-01

    In this study, siderophore production by various bacteria amongst the plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were taken to estimate their siderophore-producing ability by the standard method (chrome azurol sulphonate assay) as well as a 96-well microplate method. Production of siderophore was estimated in percent siderophore units by both methods. The data obtained by the two methods correlated positively with each other, confirming the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one go, saving time and chemicals and making the procedure far less tedious and cheaper than the method currently in use. The modified microtiter plate method as proposed here makes it far easier to screen plant-associated bacteria for this plant-growth-promoting character.

  15. Covariate selection with group lasso and doubly robust estimation of causal effects

    PubMed Central

    Koch, Brandon; Vock, David M.; Wolfson, Julian

    2017-01-01

    Summary The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
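
    A hedged sketch of a standard augmented-IPW (doubly robust) ACE estimator with plain logistic and linear working models on simulated data; the group-lasso selection step that defines GLiDeR is omitted, and all variable names and coefficients are illustrative.

    ```python
    # Hedged sketch: augmented IPW (doubly robust) estimate of the average causal effect.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(7)
    n, p = 2000, 5
    X = rng.normal(size=(n, p))
    propensity = 1 / (1 + np.exp(-(0.4 * X[:, 0] - 0.3 * X[:, 1])))
    T = rng.binomial(1, propensity)
    Y = 1.0 * T + X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)   # true ACE = 1

    e_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]  # propensity model
    m1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)   # outcome model, treated
    m0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)   # outcome model, control

    aipw = (m1 - m0
            + T * (Y - m1) / e_hat
            - (1 - T) * (Y - m0) / (1 - e_hat))
    print(f"doubly robust ACE estimate = {aipw.mean():.3f}")
    ```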

  16. Covariate selection with group lasso and doubly robust estimation of causal effects.

    PubMed

    Koch, Brandon; Vock, David M; Wolfson, Julian

    2018-03-01

    The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.

  17. Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada

    USGS Publications Warehouse

    Hess, G.W.; Bohman, L.R.

    1996-01-01

    Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.

  18. How to deal with the high condition number of the noise covariance matrix of gravity field functionals synthesised from a satellite-only global gravity field model?

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-03-01

    The posed question arises for instance in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formulation of the weighted least-squares estimator that does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method are chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within the noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
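
    A toy, hedged illustration of the first option discussed above: weighted least squares with a Tikhonov-regularised noise covariance matrix. The matrices, the decaying spectrum, and the regularisation parameter are all invented and far smaller than the GGM case; the parameter choice rule proposed in the paper is not reproduced.

    ```python
    # Hedged toy example: WLS with a Tikhonov-regularised noise covariance,
    # solving (A^T (C + a*I)^-1 A) x = A^T (C + a*I)^-1 y.
    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 50, 3
    A = rng.normal(size=(n, m))                      # design matrix
    x_true = np.array([1.0, -2.0, 0.5])

    # Build a poorly conditioned noise covariance from a rapidly decaying spectrum.
    U, _ = np.linalg.qr(rng.normal(size=(n, n)))
    eigs = np.logspace(0, -12, n)
    C = U @ np.diag(eigs) @ U.T
    y = A @ x_true + U @ (np.sqrt(eigs) * rng.normal(size=n))

    alpha = 1e-8                                      # regularisation parameter (to be tuned)
    W = np.linalg.inv(C + alpha * np.eye(n))          # regularised inverse covariance
    x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    print("estimate:", np.round(x_hat, 3))
    ```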

  19. In situ method for estimating cell survival in a solid tumor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfieri, A.A.; Hahn, E.W.

    1978-09-01

    The response of the murine Meth-A fibrosarcoma to single and fractionated doses of x-irradiation, actinomycin D chemotherapy, and/or concomitant local tumor hyperthermia was assayed with the use of an in situ method for estimating cell kill within a solid tumor. The cell survival assay was based on a standard curve plotting number of inoculated viable cells with and without radiation-inactivated homologous tumor cells versus the time required for i.m. tumors to grow to 1.0 cu cm. The time for post-treatment tumors to grow to 1.0 cu cm was cross-referenced to the standard curve, and the number of surviving cells contributing to tumor regrowth was estimated. The resulting surviving fraction curves closely resemble those obtained with in vitro systems.

  20. [Noninvasive total hemoglobin monitoring based on multiwave spectrophotometry in obstetrics and gynecology].

    PubMed

    Pyregov, A V; Ovechkin, A Iu; Petrov, S V

    2012-01-01

    The results of a prospective randomized comparative study of two total hemoglobin estimation methods are presented: laboratory testing and a continuous noninvasive technique based on multiwave spectrophotometry with the Masimo Rainbow SET. The study was carried out in two stages: the first stage (gynecology) included 67 patients, and the second stage (obstetrics) included 44 patients during and after Cesarean section. The deviation of the noninvasive total hemoglobin estimates from the absolute (invasive) values was 7.2% and 4.1%, and the standard deviation within the sample was 5.2% and 2.7%, for gynecologic operations and surgical delivery respectively, confirming the absence of reliable differences between the methods. Continuous noninvasive total hemoglobin estimation by multiwave spectrophotometry with Masimo Rainbow SET technology can be recommended for use in obstetrics and gynecology.

  1. a Comparison of Evaluations and Assessments Obtained Using Alternative Standards for Predicting the Hazards of Whole-Body Vibration and Repeated Shocks

    NASA Astrophysics Data System (ADS)

    Lewis, C. H.; Griffin, M. J.

    1998-08-01

    There are three current standards that might be used to assess the vibration and shock transmitted by a vehicle seat with respect to possible effects on human health: ISO 2631/1 (1985), BS 6841 (1987) and ISO 2631-1 (1997). Evaluations have been performed on the seat accelerations measured in nine different transport environments (bus, car, mobile crane, fork-lift truck, tank, ambulance, power boat, inflatable boat, mountain bike) in conditions that might be considered severe. For each environment, limiting daily exposure durations were estimated by comparing the frequency-weighted root mean square (i.e., r.m.s.) accelerations and the vibration dose values (i.e., VDV), calculated according to each standard, with the relevant exposure limits, action level and health guidance caution zones. Very different estimates of the limiting daily exposure duration can be obtained using the methods described in the three standards. Differences were observed due to variations in the shapes of the frequency weightings, the phase responses of the frequency weighting filters, the method of combining multi-axis vibration, the averaging method, and the assessment method. With the evaluated motions, differences in the shapes of the weighting filters resulted in up to about 31% difference in r.m.s. acceleration between the “old” and the “new” ISO standard and up to about 14% difference between BS 6841 and the “new” ISO 2631. There were correspondingly greater differences in the estimates of safe daily exposure durations. With three of the more severe motions there was a difference of more than 250% between estimated safe daily exposure durations based on r.m.s. acceleration and those based on fourth-power vibration dose values. The vibration dose values provided the more cautious assessments of the limiting daily exposure duration.
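
    For reference, a hedged sketch of the two basic dose quantities being compared, the r.m.s. value and the fourth-power vibration dose value VDV = (∫a⁴ dt)^(1/4), computed from a synthetic stand-in for a frequency-weighted seat acceleration record; the frequency-weighting filters defined by the standards are not implemented here.

    ```python
    # Hedged sketch: r.m.s. and VDV from a (synthetic) weighted acceleration record.
    import numpy as np

    fs = 500.0                                    # sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)                  # 60 s record
    accel = 0.8 * np.sin(2 * np.pi * 4 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

    rms = np.sqrt(np.mean(accel ** 2))            # root-mean-square acceleration, m/s^2
    vdv = (np.sum(accel ** 4) / fs) ** 0.25       # fourth-power vibration dose value, m/s^1.75
    print(f"r.m.s. = {rms:.3f} m/s^2, VDV = {vdv:.3f} m/s^1.75")
    ```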

  2. Assessing operating characteristics of CAD algorithms in the absence of a gold standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.

    2010-04-15

    Purpose: The authors examine potential bias when using a reference reader panel as "gold standard" for estimating operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): Free search markings of four radiologists were compared to markings from four different CAD assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. Results: In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among 1145 lesion candidates LIDC considered, LCA estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than CAD assisted readers' (68%). Average false positives per patient for reference readers (0.95) was not significantly lower (p-value 0.28) than CAD assisted readers' (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.

  3. The Demirjian versus the Willems method for dental age estimation in different populations: A meta-analysis of published studies

    PubMed Central

    2017-01-01

    Background The accuracy of radiographic methods for dental age estimation is important for biological growth research and forensic applications. Accuracy of the two most commonly used systems (Demirjian and Willems) has been evaluated with conflicting results. This study investigates the accuracies of these methods for dental age estimation in different populations. Methods A search of PubMed, Scopus, Ovid, Database of Open Access Journals and Google Scholar was undertaken. Eligible studies published before December 28, 2016 were reviewed and analyzed. Meta-analysis was performed on 28 published articles using the Demirjian and/or Willems methods to estimate chronological age in 14,109 children (6,581 males, 7,528 females) age 3–18 years in studies using Demirjian’s method and 10,832 children (5,176 males, 5,656 females) age 4–18 years in studies using Willems’ method. The weighted mean difference at 95% confidence interval was used to assess accuracies of the two methods in predicting the chronological age. Results The Demirjian method significantly overestimated chronological age (p<0.05) in males age 3–15 and females age 4–16 when studies were pooled by age cohorts and sex. The majority of studies using Willems’ method did not report significant overestimation of ages in either sex. Overall, Demirjian’s method significantly overestimated chronological age compared to the Willems method (p<0.05). The weighted mean difference for the Demirjian method was 0.62 for males and 0.72 for females, while that of the Willems method was 0.26 for males and 0.29 for females. Conclusion The Willems method provides more accurate estimation of chronological age in different populations, while Demirjian’s method has a broad application in terms of determining maturity scores. However, accuracy of Demirjian age estimations is confounded by population variation when converting maturity scores to dental ages. For highest accuracy of age estimation, population-specific standards, rather than a universal standard or methods developed on other populations, need to be employed. PMID:29117240

  4. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
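
    A hedged illustration of how the bandwidth rule changes a Gaussian KDE, using the Scott and Silverman rules available in scipy on a bimodal sample; the plug-in and cross-validation selectors examined in the article are not implemented in scipy and are omitted here.

    ```python
    # Hedged sketch: compare two rule-of-thumb bandwidths for a Gaussian KDE.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(11)
    sample = np.concatenate([rng.normal(-2, 0.6, 150), rng.normal(2, 1.0, 150)])
    grid = np.linspace(-5, 6, 400)

    for rule in ("scott", "silverman"):
        kde = gaussian_kde(sample, bw_method=rule)
        peak = grid[np.argmax(kde(grid))]
        print(f"{rule:9s}: bandwidth factor = {kde.factor:.3f}, mode near {peak:.2f}")
    ```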

  5. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
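
    A minimal sketch of the standard deviate calculation described above, T_extreme = mean + KE·sd of the partial maximum series, with an invented temperature series and KE taken in the suggested 7-8 range.

    ```python
    # Hedged sketch: extreme stream temperature by the standard deviate method.
    import numpy as np

    partial_maxima = np.array([27.1, 28.4, 26.9, 29.0, 27.8, 28.7, 27.5, 28.1])  # °C (invented)
    KE = 7.5  # enveloping standard deviate, taken in the suggested range 7-8

    t_extreme = partial_maxima.mean() + KE * partial_maxima.std(ddof=1)
    print(f"estimated extreme stream temperature = {t_extreme:.1f} °C")
    ```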

  6. A Mixed QM/MM Scoring Function to Predict Protein-Ligand Binding Affinity

    PubMed Central

    Hayik, Seth A.; Dunbrack, Roland; Merz, Kenneth M.

    2010-01-01

    Computational methods for predicting protein-ligand binding free energy continue to be popular as a potential cost-cutting method in the drug discovery process. However, accurate predictions are often difficult to make as estimates must be made for certain electronic and entropic terms in conventional force field based scoring functions. Mixed quantum mechanics/molecular mechanics (QM/MM) methods allow electronic effects for a small region of the protein to be calculated, treating the remaining atoms as a fixed charge background for the active site. Such a semi-empirical QM/MM scoring function has been implemented in AMBER using DivCon and tested on a set of 23 metalloprotein-ligand complexes, where QM/MM methods provide a particular advantage in the modeling of the metal ion. The binding affinity of this set of proteins can be calculated with an R2 of 0.64 and a standard deviation of 1.88 kcal/mol without fitting and 0.71 and a standard deviation of 1.69 kcal/mol with fitted weighting of the individual scoring terms. In this study we explore using various methods to calculate terms in the binding free energy equation, including entropy estimates and minimization standards. From these studies we found that using the rotational bond estimate to ligand entropy results in a reasonable R2 of 0.63 without fitting. We also found that using the ESCF energy of the proteins without minimization resulted in an R2 of 0.57, when using the rotatable bond entropy estimate. PMID:21221417

  7. Estimation of feline renal volume using computed tomography and ultrasound.

    PubMed

    Tyson, Reid; Logsdon, Stacy A; Werre, Stephen R; Daniel, Gregory B

    2013-01-01

    Renal volume estimation is an important parameter for clinical evaluation of kidneys and research applications. A time efficient, repeatable, and accurate method for volume estimation is required. The purpose of this study was to describe the accuracy of ultrasound and computed tomography (CT) for estimating feline renal volume. Standardized ultrasound and CT scans were acquired for kidneys of 12 cadaver cats, in situ. Ultrasound and CT multiplanar reconstructions were used to record renal length measurements that were then used to calculate volume using the prolate ellipsoid formula for volume estimation. In addition, CT studies were reconstructed at 1 mm, 5 mm, and 1 cm, and transferred to a workstation where the renal volume was calculated using the voxel count method (hand drawn regions of interest). The reference standard kidney volume was then determined ex vivo using water displacement with the Archimedes' principle. Ultrasound measurement of renal length accounted for approximately 87% of the variability in renal volume for the study population. The prolate ellipsoid formula exhibited proportional bias and underestimated renal volume by a median of 18.9%. Computed tomography volume estimates using the voxel count method with hand-traced regions of interest provided the most accurate results, with increasing accuracy for smaller voxel sizes in grossly normal kidneys (-10.1 to 0.6%). Findings from this study supported the use of CT and the voxel count method for estimating feline renal volume in future clinical and research studies. © 2012 Veterinary Radiology & Ultrasound.
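
    For readers unfamiliar with the two volume estimates being compared, the sketch below evaluates the standard prolate ellipsoid formula, V = π/6 × length × width × height, and a voxel-count volume (number of segmented voxels times the volume of one voxel). All dimensions, the mask, and the voxel size are invented and unrelated to the study data.

        import math
        import numpy as np

        # Hypothetical renal dimensions in cm.
        length, width, height = 4.2, 2.6, 2.4
        v_ellipsoid = math.pi / 6 * length * width * height
        print(f"prolate ellipsoid volume: {v_ellipsoid:.1f} cm^3")

        # Voxel-count method: sum the voxels inside a hand-drawn region of interest
        # and multiply by the volume of a single voxel.
        roi_mask = np.zeros((40, 40, 60), dtype=bool)   # stand-in segmentation mask
        roi_mask[10:30, 10:30, 15:45] = True            # pretend these voxels are kidney
        voxel_volume_cm3 = 0.05 * 0.05 * 0.1            # 0.5 mm x 0.5 mm x 1 mm voxels
        v_voxel = roi_mask.sum() * voxel_volume_cm3
        print(f"voxel-count volume: {v_voxel:.1f} cm^3")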

  8. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…

  9. AN EMPIRICAL BAYES APPROACH TO COMBINING ESTIMATES OF THE VALUE OF A STATISTICAL LIFE FOR ENVIRONMENTAL POLICY ANALYSIS

    EPA Science Inventory

    This analysis updates EPA's standard VSL estimate by using a more comprehensive collection of VSL studies that include studies published between 1992 and 2000, as well as applying a more appropriate statistical method. We provide a pooled effect VSL estimate by applying the empi...

  10. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between level collinearity and the estimation method was maximum likelihood. In the within level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  11. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.

  12. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
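
    The record is truncated, but the idea it describes (resampling the observed data with replacement to obtain standard errors for statistics whose sampling distributions are awkward analytically) is easy to sketch. The original illustration used SAS; below is a minimal Python version for the standard error of a regression slope, with invented data.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 60
        x = rng.normal(size=n)
        y = 1.5 * x + rng.normal(scale=2.0, size=n)     # true slope 1.5, noisy observations

        def slope(x, y):
            # Ordinary least squares slope of y on x.
            return np.polyfit(x, y, 1)[0]

        boot = []
        for _ in range(2000):
            idx = rng.integers(0, n, size=n)            # resample rows with replacement
            boot.append(slope(x[idx], y[idx]))

        print(f"slope estimate: {slope(x, y):.3f}")
        print(f"bootstrap standard error: {np.std(boot, ddof=1):.3f}")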

  13. Method comparison for forest soil carbon and nitrogen estimates in the Delaware River basin

    Treesearch

    B. Xu; Yude Pan; A.H. Johnson; A.F. Plante

    2016-01-01

    The accuracy of forest soil C and N estimates is hampered by forest soils that are rocky, inaccessible, and spatially heterogeneous. A composite coring technique is the standard method used in Forest Inventory and Analysis, but its accuracy has been questioned. Quantitative soil pits provide direct measurement of rock content and soil mass from a larger, more...

  14. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CALCULATING INGESTION EXPOSURE FROM DAY 4 COMPOSITE MEASUREMENTS, THE DIRECT METHOD OF EXPOSURE ESTIMATION (IIT-A-6.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures undertaken for calculating ingestion exposure from Day 4 composite measurements from duplicate diet using the direct method of exposure estimation. This SOP uses data that have been properly coded and certified with appropria...

  15. Field methods and data processing techniques associated with mapped inventory plots

    Treesearch

    William A. Bechtold; Stanley J. Zarnoch

    1999-01-01

The U.S. Forest Inventory and Analysis (FIA) and Forest Health Monitoring (FHM) programs utilize a fixed-area mapped-plot design as the national standard for extensive forest inventories. The mapped-plot design is explained, as well as the rationale for its selection as the national standard. Ratio-of-means estimators are presented as a method to process data from...

  16. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference which adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  17. Age adjustment in ecological studies: using a study on arsenic ingestion and bladder cancer as an example.

    PubMed

    Guo, How-Ran

    2011-10-20

    Despite its limitations, ecological study design is widely applied in epidemiology. In most cases, adjustment for age is necessary, but different methods may lead to different conclusions. To compare three methods of age adjustment, a study on the associations between arsenic in drinking water and incidence of bladder cancer in 243 townships in Taiwan was used as an example. A total of 3068 cases of bladder cancer, including 2276 men and 792 women, were identified during a ten-year study period in the study townships. Three methods were applied to analyze the same data set on the ten-year study period. The first (Direct Method) applied direct standardization to obtain standardized incidence rate and then used it as the dependent variable in the regression analysis. The second (Indirect Method) applied indirect standardization to obtain standardized incidence ratio and then used it as the dependent variable in the regression analysis instead. The third (Variable Method) used proportions of residents in different age groups as a part of the independent variables in the multiple regression models. All three methods showed a statistically significant positive association between arsenic exposure above 0.64 mg/L and incidence of bladder cancer in men and women, but different results were observed for the other exposure categories. In addition, the risk estimates obtained by different methods for the same exposure category were all different. Using an empirical example, the current study confirmed the argument made by other researchers previously that whereas the three different methods of age adjustment may lead to different conclusions, only the third approach can obtain unbiased estimates of the risks. The third method can also generate estimates of the risk associated with each age group, but the other two are unable to evaluate the effects of age directly.
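
    To make the first two adjustment methods concrete, here is a small sketch of direct standardization (weighting a township's age-specific rates by a standard population) and indirect standardization (observed cases divided by the cases expected under standard age-specific rates). The age groups, populations, rates, and case counts are invented, and the regression step of the study is not shown.

        import numpy as np

        # Three age groups; all numbers are hypothetical.
        std_pop        = np.array([60_000, 30_000, 10_000])   # standard population by age group
        township_pop   = np.array([8_000, 5_000, 2_000])      # township person-years by age group
        township_cases = np.array([2, 6, 10])                 # observed bladder cancer cases
        std_rates      = np.array([0.3e-3, 1.2e-3, 4.0e-3])   # standard age-specific rates

        # Direct standardization: apply the township's age-specific rates to the standard population.
        township_rates = township_cases / township_pop
        direct_rate = (township_rates * std_pop).sum() / std_pop.sum()
        print(f"directly standardized rate: {direct_rate * 1e5:.0f} per 100,000")

        # Indirect standardization: observed cases over expected cases (standardized incidence ratio).
        expected = (std_rates * township_pop).sum()
        sir = township_cases.sum() / expected
        print(f"standardized incidence ratio: {sir:.2f}")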

  18. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.

  19. An ion-selective electrode method for determination of chlorine in geological materials

    USGS Publications Warehouse

    Aruscavage, P. J.; Campbell, E.Y.

    1983-01-01

A method is presented for the determination of chlorine in geological materials, in which a chloride-selective ion electrode is used after decomposition of the sample with hydrofluoric acid and separation of chlorine in a gas-diffusion cell. Data are presented for 30 geological standard materials. The relative standard deviation of the method is estimated to be better than 8% for amounts of chloride of 10 μg and greater. © 1983.

  20. Estimating seasonal evapotranspiration from temporal satellite images

    USGS Publications Warehouse

    Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.

    2012-01-01

Estimating seasonal evapotranspiration (ET) has many applications in water resources planning and management, including hydrological and ecological modeling. Availability of satellite remote sensing images is limited by the satellite repeat cycle and by cloud cover. This study was conducted to determine the suitability of different methods, namely cubic spline, fixed, and linear, for estimating seasonal ET from temporal remotely sensed images. The Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, in conjunction with the wet METRIC (wMETRIC), a modified version of the METRIC model, was used to estimate ET on the days of satellite overpass using eight Landsat images during the 2001 crop growing season in Midwest USA. The model-estimated daily ET was in good agreement (R2 = 0.91) with the eddy covariance tower-measured daily ET. The standard error of daily ET was 0.6 mm (20%) at three validation sites in Nebraska, USA. There was no statistically significant difference (P > 0.05) among the cubic spline, fixed, and linear methods for computing seasonal (July–December) ET from temporal ET estimates. Overall, the cubic spline resulted in the lowest standard error of 6 mm (1.67%) for seasonal ET. However, further testing of this method for multiple years is necessary to determine its suitability.
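
    A rough sketch of the temporal interpolation step only (not of the METRIC/wMETRIC surface energy balance itself): given daily ET estimated on a handful of overpass dates, the cubic spline method interpolates a value for every day of the season and sums them, while the fixed method holds each overpass-day value constant until the next overpass. Dates and ET values below are invented.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Day of year of hypothetical satellite overpasses and the daily ET (mm) estimated on those days.
        overpass_doy = np.array([182, 198, 214, 230, 246, 262, 278, 294])
        daily_et     = np.array([6.1, 6.8, 6.5, 5.9, 4.7, 3.2, 2.1, 1.4])

        season = np.arange(182, 335)   # roughly July through November, for illustration

        # Cubic spline interpolation between overpass days; clip to avoid negative extrapolated ET.
        et_spline = np.clip(CubicSpline(overpass_doy, daily_et)(season), 0.0, None)

        # Fixed method: hold each overpass-day ET constant until the next overpass.
        idx = np.clip(np.searchsorted(overpass_doy, season, side="right") - 1, 0, daily_et.size - 1)
        et_fixed = daily_et[idx]

        print(f"seasonal ET, cubic spline: {et_spline.sum():.0f} mm")
        print(f"seasonal ET, fixed:        {et_fixed.sum():.0f} mm")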

  1. Estimating disease prevalence in two-phase studies.

    PubMed

    Alonzo, Todd A; Pepe, Margaret Sullivan; Lumley, Thomas

    2003-04-01

    Disease prevalence is ideally estimated using a 'gold standard' to ascertain true disease status on all subjects in a population of interest. In practice, however, the gold standard may be too costly or invasive to be applied to all subjects, in which case a two-phase design is often employed. Phase 1 data consisting of inexpensive and non-invasive screening tests on all study subjects are used to determine the subjects that receive the gold standard in the second phase. Naive estimates of prevalence in two-phase studies can be biased (verification bias). Imputation and re-weighting estimators are often used to avoid this bias. We contrast the forms and attributes of the various prevalence estimators. Distribution theory and simulation studies are used to investigate their bias and efficiency. We conclude that the semiparametric efficient approach is the preferred method for prevalence estimation in two-phase studies. It is more robust and comparable in its efficiency to imputation and other re-weighting estimators. It is also easy to implement. We use this approach to examine the prevalence of depression in adolescents with data from the Great Smoky Mountain Study.

  2. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
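
    For a normal population with known standard deviation, the quantity described here can be written in closed form: P(|sample mean - μ| ≤ k·σ) = 2Φ(k·√n) - 1, where Φ is the standard normal CDF and k is the desired fraction of σ. The snippet below evaluates this expression for a few small sample sizes; it is our own illustration of the idea, and the paper's procedure may differ in detail (for example, in how it handles an estimated standard deviation).

        from scipy.stats import norm

        def prob_within(k, n):
            """P(|sample mean - true mean| <= k * sigma) for a normal population with known sigma."""
            return 2 * norm.cdf(k * n ** 0.5) - 1

        for n in (3, 5, 10):
            for k in (0.25, 0.5, 1.0):
                print(f"n = {n:2d}, k = {k:.2f}: probability = {prob_within(k, n):.3f}")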

  3. Expanding the geography of evapotranspiration: An improved method to quantify land-to-air water fluxes in tropical and subtropical regions

    PubMed Central

    Jerszurki, Daniela; Souza, Jorge L. M.; Silva, Lucas C. R.

    2017-01-01

The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil–as a model system–which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zone. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement “d”, mean absolute error (MAE) and mean reason (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent. PMID:28658324

  4. Expanding the geography of evapotranspiration: An improved method to quantify land-to-air water fluxes in tropical and subtropical regions.

    PubMed

    Jerszurki, Daniela; Souza, Jorge L M; Silva, Lucas C R

    2017-01-01

The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil-as a model system-which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zone. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement "d", mean absolute error (MAE) and mean reason (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent.

  5. Comparison of HPLC & spectrophotometric methods for estimation of antiretroviral drug content in pharmaceutical products.

    PubMed

    Hemanth Kumar, A K; Sudha, V; Swaminathan, Soumya; Ramachandran, Geetha

    2010-10-01

    Simple and reliable methods to estimate drugs in pharmaceutical products are needed. In most cases, antiretroviral drug estimations are performed using a HPLC method, requiring expensive equipment and trained technicians. A relatively simple and accurate method to estimate antiretroviral drugs in pharmaceutical preparations is by spectrophotometric method, which is cheap and simple to use as compared to HPLC. We undertook this study to standardise methods for estimation of nevirapine (NVP), lamivudine (3TC) and stavudine (d4T) in single tablets/capsules by HPLC and spectrophotometry and to compare the content of these drugs determined by both these methods. Twenty tablets/capsules of NVP, 3TC and d4T each were analysed for their drug content by HPLC and spectrophotometric methods. Suitably diluted drug solutions were run on HPLC fitted with a C18 column using UV detection at ambient temperature. The absorbance of the diluted drug solutions were read in a spectrophotometer at 300, 285 and 270 nm for NVP, 3TC and d4T respectively. Pure powders of the drugs were used to prepare calibration standards of known drug concentrations, which was set up with each assay. The inter-day variation (%) of standards for NVP, 3TC and d4T ranged from 2.5 to 6.7, 2.1 to 7.7 and 6.2 to 7.7, respectively by HPLC. The corresponding values by spectrophotometric method were 2.7 to 4.7, 4.2 to 7.2 and 3.8 to 6.0. The per cent variation between the HPLC and spectrophotometric methods ranged from 0.45 to 4.49 per cent, 0 to 4.98 per cent and 0.35 to 8.73 per cent for NVP, 3TC and d4T,respectively. The contents of NVP, 3TC and d4T in the tablets estimated by HPLC and spectrophotometric methods were similar, and the variation in the amount of these drugs estimated by HPLC and spectrophotometric methods was below 10 per cent. This suggests that the spectrophotometric method is as accurate as the HPLC method for estimation of NVP, 3TC and d4T in tablet/capsule. Hence laboratories that do not have HPLC equipment can also undertake these drug estimations using spectrophotometer.

  6. Determination of longitudinal aerodynamic derivatives using flight data from an icing research aircraft

    NASA Technical Reports Server (NTRS)

    Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.

    1989-01-01

A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method, to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration, to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers, to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error, and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, Cmδe. This analysis identified the speed range where changes in Cmδe could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.

  7. SOME PROBLEMS OF "SAFE DOSE" ESTIMATION

    EPA Science Inventory

    In environmental carcinogenic risk assessment, the usually defined "safe doses" appear subjective in some sense. n this paper a method of standardizing "safe doses" based on some objective parameters is introduced and a procedure of estimating safe doses under the competing risks...

  8. Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties

    PubMed Central

    Chi, Eric C.; Lange, Kenneth

    2014-01-01

    Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662
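
    The paper's MAP estimator has its own prior and shrinkage target, but the underlying idea (pull an ill-conditioned sample covariance toward a well-conditioned target so the estimate stays invertible) can be sketched with simple linear shrinkage toward a scaled identity, in the spirit of Ledoit-Wolf. This is an illustration of the principle with an arbitrary fixed weight, not the authors' estimator.

        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 30, 20                    # few samples relative to the number of covariates
        X = rng.standard_normal((n, p))

        S = np.cov(X, rowvar=False)                  # sample covariance (badly conditioned here)
        target = np.trace(S) / p * np.eye(p)         # scaled identity target
        alpha = 0.3                                  # shrinkage weight (would normally be data-driven)
        S_shrunk = (1 - alpha) * S + alpha * target

        for name, M in [("sample", S), ("shrunk", S_shrunk)]:
            eig = np.linalg.eigvalsh(M)
            print(f"{name}: condition number = {eig.max() / max(eig.min(), 1e-12):.1e}")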

  9. [Automated procedure for volumetric measurement of metastases: estimation of tumor burden].

    PubMed

    Fabel, M; Bolte, H

    2008-09-01

    Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluation of the tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation with inherent disadvantages due to considerable interobserver variation of the manual diameter estimations. Volumetric analysis of e.g. lung, liver and lymph node metastases, promises to be a more accurate, precise and objective method for tumor burden estimation.

  10. Algorithms for spacecraft formation flying navigation based on wireless positioning system measurements

    NASA Astrophysics Data System (ADS)

    Goh, Shu Ting

    Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. The implementation of the extended Kalman filter to the spacecraft formation navigation problem results in high estimation errors and instabilities in state estimation at times. This is due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aiming at increasing the estimation stability and improving the estimation accuracy. A differential geometric filter is implemented for spacecraft positions estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear system. A linear estimator is designed in the linear domain, and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation positions estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for spacecraft formation flying absolute positions estimation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee). At the extremum, the rate of change of a spacecraft's range vanishes. This motion constraint can be used to improve the position estimation accuracy. The application of the constrained Kalman filter at only two points in the orbit causes filter instability. Two variables are introduced into the constrained Kalman filter to maintain the stability and improve the estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter. Simulation results show that the constrained Kalman filter provides better estimation accuracy as compared with the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is proposed in this dissertation. In wireless localizing sensors, a measurement error is proportional to the distance of the signal travels and sensor noise. In this proposed Weighted Measurement Fusion Kalman Filter, the signal traveling time delay is not modeled; however, each measurement is weighted based on the measured signal travel distance. The obtained estimation performance is compared to the standard Kalman filter in two scenarios. The first scenario assumes using a wireless local positioning system in a GPS denied environment. The second scenario assumes the availability of both the wireless local positioning system and GPS measurements. The simulation results show that the WMFKF has similar accuracy performance as the standard Kalman Filter (KF) in the GPS denied environment. However, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30km. In addition, the WMFKF has a better accuracy and stability performance when GPS is available. Also, the computational cost analysis shows that the WMFKF has less computational cost than the standard KF, and the WMFKF has higher ellipsoid error probable percentage than the standard Measurement Fusion method. A method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft. 
The simulation results and covariance analysis show that the method's error falls within a three sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.

  11. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods.

    PubMed

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke; Palsbøll, Per J

    2012-09-06

    Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent critique for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and choice of using singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method, however this variation only affected telomere length estimates in the worst performing assays. Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments.

  12. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods

    PubMed Central

    2012-01-01

    Background Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent critique for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Results Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and choice of using singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method, however this variation only affected telomere length estimates in the worst performing assays. Conclusion Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments. PMID:22954451

  13. Effective classification of the prevalence of Schistosoma mansoni.

    PubMed

    Mitchell, Shira A; Pagano, Marcello

    2012-12-01

    To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for number of positive slides), which account for imperfect sensitivity, both with a simple adjustment of fixed sensitivity and with a more complex adjustment of changing sensitivity with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by number of slides read, and accuracy, measured by probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more significantly than the upper cut-off, correctly classifying regions as moderate rather than lower, thus receiving life-saving treatment. The classification method goes directly to classification on the basis of positive pools, avoiding having to know sensitivity to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.

  14. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  15. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    PubMed

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

    In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed in a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and develop method of moments estimators based on expectations of the proportions of people with positive and negative results on both tests that are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
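
    The stratified two-phase design in the paper leads to more involved estimating equations, but the core moment argument is easy to see in a simplified setting where both tests are applied to the same subjects and both have 100% specificity: under conditional independence, P(test 1 positive) = π·s1, P(test 2 positive) = π·s2, and P(both positive) = π·s1·s2, which can be solved for the prevalence π and the two sensitivities. The simulation below checks this on invented parameter values; it is a sketch of the idea, not the paper's estimator.

        import numpy as np

        rng = np.random.default_rng(7)
        n, prev, se1, se2 = 5000, 0.12, 0.80, 0.65   # true values used to generate the data

        infected = rng.random(n) < prev
        t1 = infected & (rng.random(n) < se1)        # 100% specificity: no false positives
        t2 = infected & (rng.random(n) < se2)

        p1, p2, p12 = t1.mean(), t2.mean(), (t1 & t2).mean()

        prev_hat = p1 * p2 / p12                     # method-of-moments solutions
        se1_hat = p12 / p2
        se2_hat = p12 / p1
        print(f"prevalence: {prev_hat:.3f}  sensitivity 1: {se1_hat:.3f}  sensitivity 2: {se2_hat:.3f}")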

  16. Immunochemical Investigations of Cell Surface Antigens of Anaerobic Bacteria

    DTIC Science & Technology

    1977-01-15

sterile cecal contents were included in all inocula, our data indicate that cecal contents from germ free rats can be used in place of sterile cecal...The void volume of the column was estimated with blue dextran. Molecular size of the undigested Pool 1 material was estimated using a PM-30 membrane...51) using bovine serum albumin as a standard. Total sugars were measured by the phenol-sulfuric acid method (52) using glucose as a standard

  17. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
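
    A toy version of the idea, not the authors' formulation (which is built on the test-bed dynamics and explicit LMI bounds): treat the inertia matrix as a symmetric decision variable, fit torque and angular-acceleration data by least squares, and impose positive definiteness as an LMI constraint. The sketch below uses the third-party cvxpy package, generates synthetic data, and neglects gyroscopic terms for simplicity.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        J_true = np.array([[10.0, 1.0, 0.5],
                           [1.0,  8.0, 0.3],
                           [0.5,  0.3, 6.0]])       # kg m^2, synthetic ground truth

        # Synthetic measurements: torque = J * angular acceleration + noise (gyroscopic terms neglected).
        wdot = rng.standard_normal((50, 3))
        tau = wdot @ J_true + 0.1 * rng.standard_normal((50, 3))

        J = cp.Variable((3, 3), symmetric=True)
        residual = wdot @ J - tau                   # valid row-wise because J is symmetric
        problem = cp.Problem(cp.Minimize(cp.sum_squares(residual)),
                             [J >> 0.1 * np.eye(3)])  # LMI: inertia matrix positive definite
        problem.solve()
        print(np.round(J.value, 2))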

  18. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on such estimates are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (Mean Square Error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.

  19. Comparison of anatomical, functional and regression methods for estimating the rotation axes of the forearm.

    PubMed

    Fraysse, François; Thewlis, Dominic

    2014-11-07

    Numerous methods exist to estimate the pose of the axes of rotation of the forearm. These include anatomical definitions, such as the conventions proposed by the ISB, and functional methods based on instantaneous helical axes, which are commonly accepted as the modelling gold standard for non-invasive, in-vivo studies. We investigated the validity of a third method, based on regression equations, to estimate the rotation axes of the forearm. We also assessed the accuracy of both ISB methods. Axes obtained from a functional method were considered as the reference. Results indicate a large inter-subject variability in the axes positions, in accordance with previous studies. Both ISB methods gave the same level of accuracy in axes position estimations. Regression equations seem to improve estimation of the flexion-extension axis but not the pronation-supination axis. Overall, given the large inter-subject variability, the use of regression equations cannot be recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. 15 CFR 90.8 - Evidence required.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DEPARTMENT OF COMMERCE PROCEDURE FOR CHALLENGING POPULATION ESTIMATES § 90.8 Evidence required. (a) The... the criteria, standards, and regular processes the Census Bureau employs to generate the population... uses a cohort-component of change method to produce population estimates. Each year, the components of...

  1. Conditional Standard Errors of Measurement for Scale Scores.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  2. Method for estimating low-flow characteristics of ungaged streams in Indiana

    USGS Publications Warehouse

    Arihood, Leslie D.; Glatfelter, Dale R.

    1991-01-01

Equations for estimating the 7-day, 2-year and 7-day, 10-year low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low-flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow-duration ratio, which is the 20-percent flow duration divided by the 90-percent flow duration. Flow-duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from the plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow-duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low-flow characteristics at 82 gaging stations where flow-duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-year and 7-day, 10-year low flows are 19 and 28 percent. When flow-duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46 and 61 percent. However, when stations having drainage areas of less than 10 square miles are excluded from the test, the standard errors decrease to 38 and 49 percent. Standard errors increase when stations with small basins are included, probably because some of the flow-duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow-duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and central physiographic zones of the State. Low-flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low-flow characteristic can be adjusted. The method is most accurate for sites having drainage areas ranging from 10 to 1,000 square miles and for predictions of 7-day, 10-year low flows ranging from 0.5 to 340 cubic feet per second.

  3. Method for estimating low-flow characteristics of ungaged streams in Indiana

    USGS Publications Warehouse

    Arihood, L.D.; Glatfelter, D.R.

    1986-01-01

    Equations for estimating the 7-day, 2-yr and 7-day, 10-yr low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow duration ratio, which is the 20% flow duration divided by the 90% flow duration. Flow duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from this plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low flow characteristics at 82 gaging stations where flow duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-yr and 7-day, 10-yr low flows are 19% and 28%. When flow duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46% and 61%. However, when stations with drainage areas < 10 sq mi are excluded from the test, the standard errors reduce to 38% and 49%. Standard errors increase when stations with small basins are included, probably because some of the flow duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and the central physiographic zones of the state. Low flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low flow characteristic can be adjusted. The method is most accurate for sites with drainage areas ranging from 10 to 1,000 sq mi and for predictions of 7-day, 10-yr low flows ranging from 0.5 to 340 cu ft/sec. (Author 's abstract)

  4. Financing Higher Standards in Public Education: The Importance of Accounting for Educational Costs. Policy Brief, No. 10.

    ERIC Educational Resources Information Center

    Duncombe, William; Yinger, John

    This policy brief explains why performance focus and educational cost indexes must go hand in hand, discusses alternative methods for estimating educational cost indexes, and shows how these costs indexes can be incorporated into a performance-based state aid program. A shift to educational performance standards, whether these standards are…

  5. Status and analysis of test standard for on-board charger

    NASA Astrophysics Data System (ADS)

    Hou, Shuai; Liu, Haiming; Jiang, Li; Chen, Xichen; Ma, Junjie; Zhao, Bing; Wu, Zaiyuan

    2018-05-01

This paper analyzes the test standards for on-board chargers (OBC). During testing, we found several problems with the test methods and functional requirements, such as failure to follow the latest test standards, loose estimation requirements, and uncertainty and inconsistency in rectification. Finally, we put forward our own viewpoints on these problems.

  6. Some methods of computing platform transmitter terminal location estimates. [ARGOS system; whale tracking

    NASA Technical Reports Server (NTRS)

    Hoisington, C. M.

    1984-01-01

A position estimation algorithm was developed to track a humpback whale tagged with an ARGOS platform after a transmitter deployment failure and the whale's diving behavior precluded standard methods. The algorithm is especially useful where a transmitter location program exists; it determines the classical Keplerian elements from the ARGOS spacecraft position vectors included with the probationary file messages. A minimum of three distinct messages are required. Once the spacecraft orbit is determined, the whale is located using standard least squares regression techniques. Experience suggests that in instances where circumstances inherent in the experiment yield message data unsuitable for the standard ARGOS reduction (message data may be too sparse, span an insufficient period, or include variable-length messages), System ARGOS can still provide much valuable location information if the user is willing to accept the increased location uncertainties.

  7. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile, are repeatedly estimated which allows confidence statements to be made for each parameter estimated, and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, for the new LHS module within NESSUS it was found that it had a lower estimation error than MC when they were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required a smaller amount of calculations to obtain low error answers with a high amount of confidence than MC. It can therefore be stated that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ and the newest LHS module is a valuable new enhancement of the program.
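
    As a small illustration of the comparison being made (not of the NESSUS implementation), the snippet below estimates the mean, standard deviation, and 0.99 percentile of an arbitrary nonlinear response of two standard normal variables using plain Monte Carlo and Latin hypercube sampling; the response function and sample size are invented.

        import numpy as np
        from scipy.stats import norm, qmc

        def response(u):
            # Arbitrary nonlinear "structural response" of two standard normal variables.
            x = norm.ppf(u)                          # map uniforms to standard normals
            return x[:, 0] ** 2 + 0.5 * np.exp(0.3 * x[:, 1])

        n, d = 500, 2
        u_mc = np.random.default_rng(0).random((n, d))          # plain Monte Carlo sample
        u_lhs = qmc.LatinHypercube(d=d, seed=0).random(n)       # Latin hypercube sample

        for name, u in [("MC ", u_mc), ("LHS", u_lhs)]:
            r = response(u)
            print(f"{name}: mean = {r.mean():.3f}, sd = {r.std(ddof=1):.3f}, "
                  f"p99 = {np.quantile(r, 0.99):.3f}")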

  8. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using Powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method and the slit method. Among them, the knife-edge method is efficient, easy to use, and recommended in the ISO 12233 standard for acquiring the whole-frequency MTF curve. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function model is the most popular ESF fitting model, yet it is sensitive to the initial values of its parameters. Because of its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, with little sensitivity to the initial values. Numerical simulation results demonstrate the accuracy and robustness of the optimized algorithm under different SNR, edge direction, and tilt angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard ISO 12233 knife-edge method in MTF estimation.
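
    As a concrete illustration of the ESF-fitting step, the sketch below fits a Fermi (logistic) edge-spread model to a synthetic noisy edge profile using SciPy's Powell minimizer, then differentiates the fit and takes its Fourier magnitude as the MTF. The model form, noise level, grid, and starting values are illustrative assumptions, not the configuration used by the authors.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def fermi_esf(x, a, b, c, d):
    # Fermi-function model of the edge spread function.
    return a / (1.0 + np.exp(-(x - b) / c)) + d

# Synthetic noisy edge profile (illustrative ground truth: a=1, b=0, c=0.8, d=0.1).
x = np.linspace(-10, 10, 201)
y = fermi_esf(x, 1.0, 0.0, 0.8, 0.1) + rng.normal(0, 0.02, x.size)

def sse(params):
    return np.sum((fermi_esf(x, *params) - y) ** 2)

# Powell's method needs no gradients and tolerates rough starting values.
res = minimize(sse, x0=[0.5, 1.0, 2.0, 0.0], method="Powell")
a, b, c, d = res.x
print("fitted ESF parameters:", np.round(res.x, 3))

# The line spread function is the derivative of the ESF; its FFT magnitude,
# normalized at zero frequency, gives the MTF curve.
lsf = np.gradient(fermi_esf(x, a, b, c, d), x)
mtf = np.abs(np.fft.rfft(lsf))
print("MTF (first 5 normalized samples):", np.round(mtf[:5] / mtf[0], 3))
```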

  9. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
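
    A minimal sketch of the eigenvalue step follows: once Monte Carlo transport has tallied a fission matrix F, whose entry F[i, j] is (essentially) the expected number of fission neutrons produced in region i per fission neutron born in region j, power iteration on F yields k-effective and the critical fission source. The 8x8 matrix below is synthetic and only stands in for the tallied matrix of an 8-assembly pool.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a tallied fission matrix of 8 storage-pool assemblies:
# strong diagonal (in-assembly fission chains), weak coupling between neighbors.
n = 8
F = np.diag(np.full(n, 0.90)) + np.diag(np.full(n - 1, 0.03), 1) + np.diag(np.full(n - 1, 0.03), -1)
F += 0.002 * rng.random((n, n))              # small random background coupling

def power_iteration(F, tol=1e-10, max_iter=10_000):
    # Dominant eigenpair of the fission matrix: k_eff and the critical source shape.
    s = np.full(F.shape[0], 1.0 / F.shape[0])
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum()                  # neutrons produced per source neutron
        s_new /= k_new
        if abs(k_new - k) < tol and np.max(np.abs(s_new - s)) < tol:
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

k_eff, source = power_iteration(F)
print("k_eff =", round(k_eff, 5))
print("critical fission source (normalized):", np.round(source, 4))
```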

  10. An Approach to Addressing Selection Bias in Survival Analysis

    PubMed Central

    Carlin, Caroline S.; Solid, Craig A.

    2014-01-01

    This work proposes a frailty model that accounts for non-random treatment assignment in survival analysis. Using Monte Carlo simulation, we found that estimated treatment parameters from our proposed endogenous selection survival model (esSurv) closely parallel the consistent two-stage residual inclusion (2SRI) results, while offering computational and interpretive advantages. The esSurv method greatly enhances computational speed relative to 2SRI by eliminating the need for bootstrapped standard errors, and generally results in smaller standard errors than those estimated by 2SRI. In addition, esSurv explicitly estimates the correlation of unobservable factors contributing to both treatment assignment and the outcome of interest, providing an interpretive advantage over the residual parameter estimate in the 2SRI method. Comparisons with commonly used propensity score methods and with a model that does not account for non-random treatment assignment show clear bias in these methods that is not mitigated by increased sample size. We illustrate using actual dialysis patient data comparing mortality of patients with mature arteriovenous grafts for venous access to mortality of patients with grafts placed but not yet ready for use at the initiation of dialysis. We find strong evidence of endogeneity (with estimate of correlation in unobserved factors ρ̂ = 0.55), and estimate a mature-graft hazard ratio of 0.197 in our proposed method, with a similar 0.173 hazard ratio using 2SRI. The 0.630 hazard ratio from a frailty model without a correction for the non-random nature of treatment assignment illustrates the importance of accounting for endogeneity. PMID:24845211
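
    For readers unfamiliar with the 2SRI comparator mentioned above, the sketch below runs the two stages on simulated data: a first-stage treatment model produces a residual that is then included alongside treatment in a Cox model. The instrument, covariate, effect sizes, and the use of statsmodels and lifelines are illustrative assumptions; the esSurv frailty model itself is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000

# Simulated data with an unobserved confounder u affecting treatment and survival.
z = rng.normal(size=n)                       # instrument (affects treatment only)
x = rng.normal(size=n)                       # observed covariate
u = rng.normal(size=n)                       # unobserved confounder
p_treat = 1 / (1 + np.exp(-(0.8 * z + 0.5 * x + 0.7 * u)))
treat = rng.binomial(1, p_treat)
hazard = 0.1 * np.exp(-0.8 * treat + 0.4 * x + 0.6 * u)
time = rng.exponential(1 / hazard)
event = (time < 5).astype(int)
time = np.minimum(time, 5.0)                 # administrative censoring at t = 5

# Stage 1: model treatment assignment and keep the raw residual.
stage1 = sm.Logit(treat, sm.add_constant(np.column_stack([z, x]))).fit(disp=0)
residual = treat - stage1.predict()

# Stage 2: include the stage-1 residual in the survival model (2SRI).
df = pd.DataFrame({"time": time, "event": event, "treat": treat, "x": x, "residual": residual})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])
# In practice the stage-2 standard errors are bootstrapped to account for the
# estimated residual, which is the computational burden the esSurv model avoids.
```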

  11. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics. The results show that in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
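
    As a small illustration of the EM option in such a framework, the sketch below fits the simplest member of the class, a zero-inflated Poisson with a constant inflation probability, by EM on simulated counts. It is a toy version under stated assumptions, not the article's unified covariate-dependent implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated zero-inflated Poisson counts: true inflation pi = 0.3, rate lam = 2.5.
n = 5000
is_structural_zero = rng.random(n) < 0.3
y = np.where(is_structural_zero, 0, rng.poisson(2.5, n))

pi, lam = 0.5, 1.0                       # starting values
for _ in range(200):
    # E-step: posterior probability that an observed zero is a structural zero.
    z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
    # M-step: update the inflation probability and the Poisson rate.
    pi_new = z.mean()
    lam_new = np.sum((1 - z) * y) / np.sum(1 - z)
    if abs(pi_new - pi) < 1e-10 and abs(lam_new - lam) < 1e-10:
        break
    pi, lam = pi_new, lam_new

print(f"estimated pi = {pi:.3f}, lambda = {lam:.3f}")   # should be near 0.3 and 2.5
```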

  13. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models

    PubMed Central

    2011-01-01

    Background When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Methods Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. Results We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. The new method gives similar results to a standard cure model when that model is reliable, and a better fit when the standard cure model gives biased estimates. Conclusions Cure models within the framework of flexible parametric models enable cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible with parametric cure models. PMID:21696598

  14. Support vector machines for nuclear reactor state estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm that can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
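
    A generic illustration of SVM-based signal estimation is sketched below using scikit-learn's support vector regression on simulated correlated sensor channels. The RBF kernel, the simulated signals, and the hyperparameters are stand-ins; the MSET kernels developed at Argonne are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Simulated reactor-like signals: two noisy sensors tracking a common slow drift,
# used to predict a third sensor (the signal to be validated).
t = np.linspace(0, 10, 1000)
drift = np.sin(0.5 * t) + 0.1 * t
s1 = drift + rng.normal(0, 0.05, t.size)
s2 = 0.8 * drift + rng.normal(0, 0.05, t.size)
target = 1.2 * drift + rng.normal(0, 0.05, t.size)

X = np.column_stack([s1, s2])
train = t < 7                                   # train on the first 7 "seconds"

# Epsilon-SVR with an RBF kernel; the sparse support-vector solution is what
# gives the method its generalization from limited training data.
model = SVR(kernel="rbf", C=10.0, epsilon=0.02).fit(X[train], target[train])
pred = model.predict(X[~train])

rmse = np.sqrt(np.mean((pred - target[~train]) ** 2))
print(f"held-out RMSE of the SVR signal estimate: {rmse:.3f}")
print(f"support vectors used: {len(model.support_)} of {train.sum()} training points")
```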

  15. Vascular Disease, ESRD, and Death: Interpreting Competing Risk Analyses

    PubMed Central

    Coresh, Josef; Segev, Dorry L.; Kucirka, Lauren M.; Tighiouart, Hocine; Sarnak, Mark J.

    2012-01-01

    Summary Background and objectives Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. Design, setting, participants, & measurements This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989–1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. Results The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20–2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15–2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. Conclusions When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors. PMID:22859747

  16. Vascular disease, ESRD, and death: interpreting competing risk analyses.

    PubMed

    Grams, Morgan E; Coresh, Josef; Segev, Dorry L; Kucirka, Lauren M; Tighiouart, Hocine; Sarnak, Mark J

    2012-10-01

    Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989-1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20-2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15-2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors.
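
    The contrast between the two analyses can be reproduced with a short NumPy sketch: a naive Kaplan-Meier complement that censors subjects at the competing event versus an Aalen-Johansen-type cumulative incidence function that treats the other cause as a competing risk. The simulated event times below are illustrative, not the MDRD cohort data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000

# Simulated competing risks: cause 1 (e.g., ESRD), cause 2 (e.g., pre-ESRD death),
# plus independent censoring; whichever comes first is observed.
t1 = rng.exponential(10, n)
t2 = rng.exponential(15, n)
c = rng.uniform(0, 20, n)
time = np.minimum.reduce([t1, t2, c])
cause = np.select([(t1 <= t2) & (t1 <= c), (t2 < t1) & (t2 <= c)], [1, 2], default=0)

def naive_one_minus_km(time, cause, k, horizon):
    # "Standard" analysis: censor at the competing event and report 1 - Kaplan-Meier.
    order = np.argsort(time)
    t, d = time[order], (cause[order] == k).astype(float)
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - d / at_risk)
    return 1.0 - surv[t <= horizon][-1]

def cumulative_incidence(time, cause, k, horizon):
    # Competing-risks analysis: sum of S(t-) times the cause-k hazard increments.
    order = np.argsort(time)
    t, ev = time[order], cause[order]
    at_risk = len(t) - np.arange(len(t))
    overall_surv = np.cumprod(1.0 - (ev > 0) / at_risk)
    surv_before = np.concatenate(([1.0], overall_surv[:-1]))
    return np.cumsum(surv_before * (ev == k) / at_risk)[t <= horizon][-1]

for k in (1, 2):
    print(f"cause {k}: 1-KM = {naive_one_minus_km(time, cause, k, 15):.3f}, "
          f"CIF = {cumulative_incidence(time, cause, k, 15):.3f}")
```

    The naive complement overstates the absolute risk of each cause, which is the pattern reported in the cohort results above.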

  17. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
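
    The basic simulation recipe is short enough to show directly. The sketch below (Python with SciPy's t-test; the effect size, outcome SD, and sample sizes are arbitrary illustrations, and the article's own R and Stata code is not reproduced) estimates power for a simple two-arm, individually randomized design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def simulated_power(n_per_arm, effect, sd, n_sim=2000, alpha=0.05):
    # Simulate the trial n_sim times and count how often the test rejects.
    rejections = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / n_sim

# Example: detect a 0.4 SD improvement in a child-growth z-score.
for n in (50, 100, 150):
    print(f"n = {n:3d} per arm: simulated power = {simulated_power(n, 0.4, 1.0):.2f}")
```

    The same loop generalizes to clustered or otherwise complex designs by replacing the data-generating step and the analysis model, which is the extension the article walks through.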

  18. Assessment of in silico methods to estimate aquatic species sensitivity

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...

  19. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.

    PubMed

    Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing

    2018-03-07

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by the user and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. An online fault-detection mechanism is then adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current process and measurement noise covariances, respectively. Using a weighting factor, the filter combines the previous noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and the previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, as demonstrated by the simulation results.

  20. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance

    PubMed Central

    Zheng, Binqi; Yuan, Xiaobing

    2018-01-01

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by the user and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. An online fault-detection mechanism is then adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current process and measurement noise covariances, respectively. Using a weighting factor, the filter combines the previous noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and the previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, as demonstrated by the simulation results. PMID:29518960
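
    The covariance-update idea at the core of the adaptive step can be written compactly. The sketch below isolates the innovation-based and residual-based covariance estimates and the weighted blending; the full UKF recursion, the fault-detection test, and the value of the weighting factor are omitted or assumed for illustration.

```python
import numpy as np

def adapt_noise_covariances(Q, R, K, H, P_post, innovation, residual, weight=0.95):
    """One weighted update of the process (Q) and measurement (R) noise covariances.

    K          : Kalman gain used at this timestep
    H          : (linearized) measurement matrix, for the residual-based term
    P_post     : posterior state covariance
    innovation : z - z_predicted (before the update)
    residual   : z - h(x_posterior) (after the update)
    weight     : forgetting factor blending old and newly estimated covariances
    """
    innovation = innovation.reshape(-1, 1)
    residual = residual.reshape(-1, 1)
    Q_est = K @ (innovation @ innovation.T) @ K.T          # innovation-based Q estimate
    R_est = residual @ residual.T + H @ P_post @ H.T       # residual-based R estimate
    Q_new = weight * Q + (1.0 - weight) * Q_est
    R_new = weight * R + (1.0 - weight) * R_est
    return Q_new, R_new

if __name__ == "__main__":
    # Tiny 2-state, 1-measurement example with made-up filter quantities.
    Q = np.eye(2) * 0.01
    R = np.eye(1) * 0.1
    K = np.array([[0.3], [0.1]])
    H = np.array([[1.0, 0.0]])
    P_post = np.eye(2) * 0.05
    Qn, Rn = adapt_noise_covariances(Q, R, K, H, P_post,
                                     innovation=np.array([0.4]),
                                     residual=np.array([0.1]))
    print("updated Q:\n", Qn, "\nupdated R:\n", Rn)
```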

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishop, L.; Hill, W.J.

    A method is proposed to estimate the effect of long-term variations in total ozone on the error incurred in determining a trend in total ozone due to man-made effects. When this method is applied to data from Arosa, Switzerland over the years 1932-1980, a component of the standard error of the trend estimate equal to 0.6 percent per decade is obtained. If this estimate of long-term trend variability at Arosa is not too different from global long-term trend variability, then the threshold (±2 standard errors) for detecting an ozone trend in the 1970's that is outside of what could be expected from natural variation alone, and hence be man-made, would range from 1.35% (Reinsel et al., 1981) to 1.8%. The latter value is obtained by combining the Reinsel et al. result with the result here, assuming that the error variations that both studies measure are independent and additive. Estimates for long-term trend variation over other time periods are also derived. Simulations that measure the precision of the estimate of long-term variability are reported.

  2. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436

  3. Comparison of anchor-based and distributional approaches in estimating important difference in common cold.

    PubMed

    Barrett, Bruce; Brown, Roger; Mundt, Marlon

    2008-02-01

    Evaluative health-related quality-of-life instruments used in clinical trials should be able to detect small but important changes in health status. Several approaches to minimal important difference (MID) and responsiveness have been developed. The objective was to compare anchor-based and distributional approaches to important difference and responsiveness for the Wisconsin Upper Respiratory Symptom Survey (WURSS), an illness-specific quality-of-life outcomes instrument. Participants with community-acquired colds self-reported daily using the WURSS-44. Distribution-based methods calculated the standardized effect size (ES) and the standard error of measurement (SEM). Anchor-based methods compared daily interval changes to global ratings of change, using: (1) standard MID methods based on correspondence to ratings of "a little better" or "somewhat better," and (2) two-level multivariate regression models. About 150 adults were monitored throughout their colds (1,681 sick days): 88% were white, 69% were women, and 50% had completed college. The mean age was 35.5 years (SD = 14.7). WURSS scores increased 2.2 points from the first to the second day, and then dropped by an average of 8.2 points per day from days 2 to 7. The SEM averaged 9.1 during these 7 days. Standard methods yielded a between-day MID of 22 points. Regression models of MID projected 11.3-point daily changes. Dividing these estimates of small-but-important difference by pooled SDs yielded coefficients of .425 for standard MID, .218 for the regression model, .177 for SEM, and .157 for ES. These imply per-group sample sizes of 870 using ES, 616 for SEM, 302 for the regression model, and 89 for standard MID, assuming alpha = .05, beta = .20 (80% power), and two-tailed testing. Distribution- and anchor-based approaches provide somewhat different estimates of small but important difference, which in turn can have a substantial impact on trial design.
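
    For orientation, standardized difference coefficients translate into per-group sample sizes through the usual two-sample normal approximation, n ≈ 2(z(1-α/2) + z(1-β))²/d². The short check below applies that formula to the four reported coefficients; the published sample sizes were presumably derived with a related but not identical calculation, so the numbers here are indicative only.

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    # Two-sample, two-tailed normal approximation for a standardized difference d.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z ** 2 / d ** 2

for label, d in [("standard MID", 0.425), ("regression model", 0.218),
                 ("SEM", 0.177), ("effect size", 0.157)]:
    print(f"{label:17s} d = {d:.3f} -> n per group ~ {n_per_group(d):.0f}")
```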

  4. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.

  5. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, and sometimes even infeasible. In this work, we propose efficient Bayesian analysis of data worth using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimation obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, with up to 600 days of computing time saved when one processor is used.
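
    The core MLMC identity, E[P_L] = E[P_0] + Σ_l E[P_l − P_(l−1)], can be sketched in a few lines. The toy "simulator" below is a cheap analytic function whose bias shrinks with level, standing in for reservoir models of increasing grid resolution; the sample allocation is likewise invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def model(theta, level):
    # Toy "simulator": the exact quantity of interest is sin(theta); coarser
    # levels carry a resolution-dependent bias, as a stand-in for coarse grids.
    bias = 0.5 ** (level + 1)
    return np.sin(theta) + bias * np.cos(3 * theta)

def mlmc_estimate(samples_per_level):
    # E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)], with many cheap coarse samples
    # and few expensive fine samples.
    total = 0.0
    for level, n in enumerate(samples_per_level):
        theta = rng.normal(0.0, 1.0, n)           # uncertain input parameter
        if level == 0:
            total += model(theta, 0).mean()
        else:
            total += (model(theta, level) - model(theta, level - 1)).mean()
    return total

# Decreasing sample counts on increasingly fine (expensive) levels.
print("MLMC estimate of E[P]:", round(mlmc_estimate([20000, 4000, 800, 160]), 4))

# Standard MC reference using only the finest level, with far fewer samples
# affordable at the same nominal cost.
theta = rng.normal(0.0, 1.0, 160)
print("Single-level MC at finest level:", round(model(theta, 3).mean(), 4))
```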

  6. Estimation of the neural drive to the muscle from surface electromyograms

    NASA Astrophysics Data System (ADS)

    Hofmann, David

    Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The individual constituents of the sEMG are called motor unit action potentials, whose biphasic amplitudes can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in the estimated standard deviation with and without interference, in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high-density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.

  7. A method for the estimate of the wall diffusion for non-axisymmetric fields using rotating external fields

    NASA Astrophysics Data System (ADS)

    Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.

    2013-08-01

    A new method for the estimate of the wall diffusion time of non-axisymmetric fields is developed. The method based on rotating external fields and on the measurement of the wall frequency response is developed and tested in EXTRAP T2R. The method allows the experimental estimate of the wall diffusion time for each Fourier harmonic and the estimate of the wall diffusion toroidal asymmetries. The method intrinsically considers the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with a partial wall coverage and active coils of large toroidal extent. The comparison with the full coverage results shows good agreement if the effects of the relevant sidebands are considered.

  8. Spatial Estimation of Sub-Hour Global Horizontal Irradiance Based on Official Observations and Remote Sensors

    PubMed Central

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-01-01

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102

  9. Spatial estimation of sub-hour Global Horizontal Irradiance based on official observations and remote sensors.

    PubMed

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-04-11

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).

  10. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.

  11. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    PubMed Central

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization by body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sample as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. A preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947

  12. Noninvasive quantification of cerebral metabolic rate for glucose in rats using (18)F-FDG PET and standard input function.

    PubMed

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-10-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed (18)F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization by body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sample as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). A preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement.

  13. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.

  14. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546

  15. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

    The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
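
    The approximate parametric bootstrap for a curve minimum can be sketched generically: draw coefficient vectors from their estimated sampling distribution, locate the minimizer of each drawn curve on a temperature grid, and summarize the minimizers. The quadratic exposure-response curve and its coefficient covariance below are invented for illustration, not the spline model of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative fitted quadratic log-relative-risk curve in temperature t:
# f(t) = b0 + b1*t + b2*t^2, with an invented coefficient covariance matrix.
beta_hat = np.array([0.05, -0.012, 0.0004])
cov_beta = np.diag([1e-4, 4e-7, 1e-9])

t_grid = np.linspace(-5, 35, 801)              # candidate temperatures (deg C)
X = np.vander(t_grid, 3, increasing=True)      # [1, t, t^2] basis on the grid

def minimizer(beta):
    # Temperature at which the drawn curve attains its minimum.
    return t_grid[np.argmin(X @ beta)]

mmt_hat = minimizer(beta_hat)

# Approximate parametric bootstrap: resample coefficients, re-find the minimum.
draws = rng.multivariate_normal(beta_hat, cov_beta, size=5000)
mmt_boot = np.array([minimizer(b) for b in draws])

ci_low, ci_high = np.percentile(mmt_boot, [2.5, 97.5])
print(f"MMT estimate = {mmt_hat:.1f} C, 95% CI ({ci_low:.1f}, {ci_high:.1f}), "
      f"SE = {mmt_boot.std(ddof=1):.2f}")
```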

  16. Missing portion sizes in FFQ--alternatives to use of standard portions.

    PubMed

    Køster-Rasmussen, Rasmus; Siersma, Volkert; Halldorsson, Thorhallur I; de Fine Olivarius, Niels; Henriksen, Jan E; Heitmann, Berit L

    2015-08-01

    Standard portions or substitution of missing portion sizes with medians may generate bias when quantifying the dietary intake from FFQ. The present study compared four different methods of including portion sizes in FFQ. We evaluated three stochastic methods for imputation of portion sizes based on information about anthropometry, sex, physical activity and age. Energy intakes computed with standard portion sizes, defined as sex-specific medians (median), or with portion sizes estimated with multinomial logistic regression (MLR), 'comparable categories' (Coca) or k-nearest neighbours (KNN) were compared with a reference based on self-reported portion sizes (quantified by a photographic food atlas embedded in the FFQ). The Danish Health Examination Survey 2007-2008. The study included 3728 adults with complete portion size data. Compared with the reference, the root-mean-square errors of the mean daily total energy intake (in kJ) computed with portion sizes estimated by the four methods were (men; women): median (1118; 1061), MLR (1060; 1051), Coca (1230; 1146), KNN (1281; 1181). The corresponding biases (mean errors, in kJ) were: median (579; 469), MLR (248; 178), Coca (234; 188), KNN (-340; 218). The MLR and Coca methods provided the best agreement with the reference. The stochastic methods allowed estimation of meaningful portion sizes by conditioning on information about physiology, and they are suitable for multiple imputation. We propose using MLR or Coca to substitute missing portion size values or when portion sizes need to be included in FFQ without portion size data.

  17. Sensitivity of the ISO 6579:2002/Amd 1:2007 Standard Method for Detection of Salmonella spp. on Mesenteric Lymph Nodes from Slaughter Pigs

    PubMed Central

    Mainar-Jaime, R. C.; Andrés, S.; Vico, J. P.; San Román, B.; Garrido, V.

    2013-01-01

    The ISO 6579:2002/Amd 1:2007 (ISO) standard has been the bacteriological standard method used in the European Union for the detection of Salmonella spp. in pig mesenteric lymph nodes (MLN), but there are no published estimates of the diagnostic sensitivity (Se) of the method in this matrix. Here, the Se of the ISO (SeISO) was estimated on 675 samples selected from two populations with different Salmonella prevalences (14 farms with a ≥20% prevalence and 13 farms with a <20% prevalence) and through the use of latent-class models in concert with Bayesian inference, assuming 100% ISO specificity, and an invA-based PCR as the second diagnostic method. The SeISO was estimated to be close to 87%, while the sensitivity of the PCR reached up to 83.6% and its specificity was 97.4%. Interestingly, the bacteriological reanalysis of 33 potential false-negative (PCR-positive) samples allowed isolation of 19 (57.5%) new Salmonella strains, improving the overall diagnostic accuracy of the bacteriology. Considering the usual limitations of bacteriology regarding Se, these results support the adequacy of the ISO for the detection of Salmonella spp. from MLN and also that of the PCR-based method as an alternative or complementary (screening) test for the diagnosis of pig salmonellosis, particularly considering the cost and time benefits of the molecular procedure. PMID:23100334

  18. Estimating the Health and Economic Impacts of Changes in Local Air Quality

    PubMed Central

    Carvour, Martha L.; Hughes, Amy E.; Fann, Neal

    2018-01-01

    Objectives. To demonstrate the benefits-mapping software Environmental Benefits Mapping and Analysis Program-Community Edition (BenMAP-CE), which integrates local air quality data with previously published concentration–response and health–economic valuation functions to estimate the health effects of changes in air pollution levels and their economic consequences. Methods. We illustrate a local health impact assessment of ozone changes in the 10-county nonattainment area of the Dallas–Fort Worth region of Texas, estimating the short-term effects on mortality predicted by 2 scenarios for 3 years (2008, 2011, and 2013): an incremental rollback of the daily 8-hour maximum ozone levels of all area monitors by 10 parts per billion and a rollback-to-a-standard ambient level of 65 parts per billion at only monitors above that level. Results. Estimates of preventable premature deaths attributable to ozone air pollution obtained by the incremental rollback method varied little by year, whereas those obtained by the rollback-to-a-standard method varied by year and were sensitive to the choice of ordinality and the use of preloaded or imported data. Conclusions. BenMAP-CE allows local and regional public health analysts to generate timely, evidence-based estimates of the health impacts and economic consequences of potential policy options in their communities. PMID:29698094

  19. Research on Estimates of Xi’an City Life Garbage Pay-As-You-Throw Based on Two-part Tariff method

    NASA Astrophysics Data System (ADS)

    Yaobo, Shi; Xinxin, Zhao; Fuli, Zheng

    2017-05-01

    Domestic waste is a quasi-public good, so its pricing cannot be separated from the pricing framework of public economics. Based on the two-part tariff method used for urban public utilities, this paper designs a pricing model to match the charging method and estimates the pay-as-you-throw standard using data from the past five years in Xi'an. Finally, the paper summarizes the main results and proposes corresponding policy recommendations.

  20. An accurate computational method for the diffusion regime verification

    NASA Astrophysics Data System (ADS)

    Zhokh, Alexey A.; Strizhak, Peter E.

    2018-04-01

    The diffusion regime (sub-diffusive, standard, or super-diffusive) is defined by the order of the derivative in the corresponding transport equation. We develop an accurate computational method for the direct estimation of the diffusion regime. The method is based on estimating the derivative order using the asymptotic analytic solutions of the diffusion equation with integer-order and time-fractional derivatives. The robustness and low computational cost of the proposed method are verified using experimental methane and methyl alcohol transport kinetics through a catalyst pellet.

  1. Rapid microbiological assay of serum vitamin B12 by electronic counter

    PubMed Central

    Stuart, J.; Sklaroff, S. A.

    1966-01-01

    A new method of measuring the growth of Lactobacillus leichmannii is reported. Its adoption for the estimation of serum vitamin B12 levels shortens the incubation period required to five hours at 45°C. The method is compared statistically with a standard method of estimation, requiring incubation at 37°C., by duplicate determinations on 106 hospital patients. The significance of the apparently decreased accuracy of the new method at low serum levels is discussed, and a re-appraisal of the optimum growth temperature of Lactobacillus leichmannii suggested. PMID:5904982

  2. Rapid microbiological assay of serum vitamin B 12 by electronic counter.

    PubMed

    Stuart, J; Sklaroff, S A

    1966-01-01

    A new method of measuring the growth of Lactobacillus leichmannii is reported. Its adoption for the estimation of serum vitamin B(12) levels shortens the incubation period required to five hours at 45 degrees C. The method is compared statistically with a standard method of estimation, requiring incubation at 37 degrees C., by duplicate determinations on 106 hospital patients. The significance of the apparently decreased accuracy of the new method at low serum levels is discussed, and a re-appraisal of the optimum growth temperature of Lactobacillus leichmannii suggested.

  3. Method to improve accuracy of positioning object by eLoran system with applying standard Kalman filter

    NASA Astrophysics Data System (ADS)

    Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.

    2018-05-01

    This article reports a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. The effects of the filter parameters and the type of movement on the accuracy of the vehicle position estimate are investigated. The accuracy of the method was evaluated by separating data from a semi-empirical movement model into different types of movement.
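
    A minimal constant-velocity Kalman filter of the kind described, smoothing noisy 2-D position fixes, is sketched below in NumPy. The motion model, noise levels, and fix rate are illustrative assumptions rather than the eLoran configuration studied by the authors.

```python
import numpy as np

rng = np.random.default_rng(10)

dt = 1.0
# Constant-velocity state [x, y, vx, vy]; measurements are noisy (x, y) fixes.
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)          # process noise (manoeuvre uncertainty)
R = 25.0 * np.eye(2)          # measurement noise (position fix error, m^2)

# Simulated straight-line track with noisy fixes.
truth = np.array([[5.0 * k, 2.0 * k] for k in range(60)])
fixes = truth + rng.normal(0, 5.0, truth.shape)

x = np.array([fixes[0, 0], fixes[0, 1], 0.0, 0.0])
P = 100.0 * np.eye(4)
filtered = []
for z in fixes:
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new position fix.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    filtered.append(x[:2].copy())

rmse_raw = np.sqrt(np.mean((fixes - truth) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(filtered) - truth) ** 2))
print(f"raw fix RMSE = {rmse_raw:.2f} m, filtered RMSE = {rmse_kf:.2f} m")
```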

  4. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
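
    In the univariate case, a refinement of this kind replaces the conventional standard error of the pooled effect with a scaled version and uses a t reference distribution (in the spirit of the Hartung-Knapp adjustment). The sketch below shows that calculation on invented study-level data, with DerSimonian-Laird used for the between-study variance; it illustrates the idea rather than the authors' multivariate extension.

```python
import numpy as np
from scipy import stats

# Invented study effects (e.g., log odds ratios) and within-study variances.
y = np.array([0.30, 0.12, 0.55, -0.05, 0.40])
v = np.array([0.04, 0.09, 0.06, 0.12, 0.05])
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w_fixed = 1 / v
q = np.sum(w_fixed * (y - np.sum(w_fixed * y) / w_fixed.sum()) ** 2)
tau2 = max(0.0, (q - (k - 1)) / (w_fixed.sum() - np.sum(w_fixed ** 2) / w_fixed.sum()))

# Random-effects pooled estimate and its conventional standard error.
w = 1 / (v + tau2)
mu = np.sum(w * y) / w.sum()
se_conventional = np.sqrt(1 / w.sum())

# Refined (scaled) standard error with a t reference distribution on k-1 df.
se_refined = np.sqrt(np.sum(w * (y - mu) ** 2) / ((k - 1) * w.sum()))
t_crit = stats.t.ppf(0.975, k - 1)

print(f"pooled effect = {mu:.3f}")
print(f"conventional 95% CI: {mu - 1.96 * se_conventional:.3f} to {mu + 1.96 * se_conventional:.3f}")
print(f"refined 95% CI:      {mu - t_crit * se_refined:.3f} to {mu + t_crit * se_refined:.3f}")
```

    With few studies the refined interval is typically wider, which is the more accurate coverage behaviour the abstract describes.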

  5. Estimate of the critical exponents from the field-theoretical renormalization group: mathematical meaning of the 'Standard Values'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogorelov, A. A.; Suslov, I. M.

    2008-06-15

    New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that the usual field-theoretical estimates implicitly imply the smoothness of the coefficient functions. This assumption is open to discussion in view of the existence of an oscillating contribution to the coefficient functions. The appropriate interpretation of this contribution is necessary both for estimating the systematic errors of the standard values and for a further increase in accuracy.

  6. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  7. June and August median streamflows estimated for ungaged streams in southern Maine

    USGS Publications Warehouse

    Lombard, Pamela J.

    2010-01-01

    Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares regression analysis (WLS) was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics (drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast) are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent. Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.
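
    As a rough illustration of the weighted least-squares step described above, the sketch below regresses a log-transformed August median streamflow on the three basin characteristics, weighting stations by their record length. The station values, coefficients and weighting choice are synthetic assumptions for illustration only; they are not the report's data or final equations.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Synthetic stations for illustration only (not the Maine study data)
    rng = np.random.default_rng(0)
    n = 30
    drainage_area = rng.uniform(0.4, 74, n)   # square miles
    aquifer_pct = rng.uniform(0, 84, n)       # % underlain by sand/gravel aquifer
    dist_coast = rng.uniform(14, 94, n)       # miles to the coastal reference line
    years_record = rng.integers(4, 40, n)     # record length -> regression weights

    # Hypothetical relation used only to generate a response to fit
    log_q_august = (0.9 * np.log(drainage_area) + 0.01 * aquifer_pct
                    + 0.005 * dist_coast - 2.0 + rng.normal(0, 0.3, n))

    X = sm.add_constant(np.column_stack(
        [np.log(drainage_area), aquifer_pct, dist_coast]))
    # Weighted least squares: stations with longer records receive more weight
    fit = sm.WLS(log_q_august, X, weights=years_record).fit()
    print(fit.params)   # intercept and three coefficients
    print(fit.bse)      # their standard errors
    ```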

  8. Mutual information estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view onto sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this does not work when the functional relationship in a bivariate setting is non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrast our results to the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.

  9. AOAC SMPR 2015.009: Estimation of total phenolic content using Folin-C Assay

    USDA-ARS?s Scientific Manuscript database

    This AOAC Standard Method Performance Requirements (SMPR) is for estimation of total soluble phenolic content in dietary supplement raw materials and finished products using the Folin-C assay for comparison within same matrices. SMPRs describe the minimum recommended performance characteristics to b...

  10. A method for calibrating pH meters using standard solutions with low electrical conductivity

    NASA Astrophysics Data System (ADS)

    Rodionov, A. K.

    2011-07-01

    A procedure for obtaining standard solutions with low electrical conductivity that reproduce pH values both in acid and alkali regions is proposed. Estimates of the maximal possible error of reproducing the pH values of these solutions are obtained.

  11. Methods for estimating comparable prevalence rates of food insecurity experienced by adults in 147 countries and areas

    NASA Astrophysics Data System (ADS)

    Nord, Mark; Cafiero, Carlo; Viviani, Sara

    2016-11-01

    Statistical methods based on item response theory are applied to experiential food insecurity survey data from 147 countries, areas, and territories to assess data quality and develop methods to estimate national prevalence rates of moderate and severe food insecurity at equal levels of severity across countries. Data were collected from nationally representative samples of 1,000 adults in each country. A Rasch-model-based scale was estimated for each country, and data were assessed for consistency with model assumptions. A global reference scale was calculated based on item parameters from all countries. Each country's scale was adjusted to the global standard, allowing for up to 3 of the 8 scale items to be considered unique in that country if their deviance from the global standard exceeded a set tolerance. With very few exceptions, data from all countries were sufficiently consistent with model assumptions to constitute reasonably reliable measures of food insecurity and were adjustable to the global standard with fair confidence. National prevalence rates of moderate-or-severe food insecurity assessed over a 12-month recall period ranged from 3 percent to 92 percent. The correlations of national prevalence rates with national income, health, and well-being indicators provide external validation of the food security measure.

  12. Age-structured mark-recapture analysis: A virtual-population-analysis-based model for analyzing age-structured capture-recapture data

    USGS Publications Warehouse

    Coggins, L.G.; Pine, William E.; Walters, C.J.; Martell, S.J.D.

    2006-01-01

    We present a new model to estimate capture probabilities, survival, abundance, and recruitment using traditional Jolly-Seber capture-recapture methods within a standard fisheries virtual population analysis framework. This approach compares the numbers of marked and unmarked fish at age captured in each year of sampling with predictions based on estimated vulnerabilities and abundance in a likelihood function. Recruitment to the earliest age at which fish can be tagged is estimated by using a virtual population analysis method to back-calculate the expected numbers of unmarked fish at risk of capture. By using information from both marked and unmarked animals in a standard fisheries age structure framework, this approach is well suited to the sparse data situations common in long-term capture-recapture programs with variable sampling effort. © Copyright by the American Fisheries Society 2006.

  13. A photometric method for the estimation of the oil yield of oil shale

    USGS Publications Warehouse

    Cuttitta, Frank

    1951-01-01

    A method is presented for the distillation and photometric estimation of the oil yield of oil-bearing shales. The oil shale is distilled in a closed test tube and the oil extracted with toluene. The optical density of the toluene extract is used in the estimation of oil content and is converted to percentage of oil by reference to a standard curve. This curve is obtained by relating the oil yields determined by the Fischer assay method to the optical density of the toluene extract of the oil evolved by the new procedure. The new method gives results similar to those obtained by the Fischer assay method in a much shorter time. The applicability of the new method to oil-bearing shale and phosphatic shale has been tested.
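
    The conversion step described above (optical density of the toluene extract to oil yield via a standard curve) amounts to simple linear calibration. The sketch below shows that step with made-up calibration pairs; the numbers are not from the report, and a real curve would be built from Fischer assay reference samples.

    ```python
    import numpy as np

    # Hypothetical calibration pairs: optical density of the toluene extract
    # versus oil yield (%) determined by the Fischer assay for reference shales.
    od_std = np.array([0.05, 0.12, 0.25, 0.40, 0.55])
    oil_std = np.array([1.0, 2.5, 5.2, 8.1, 11.0])   # percent oil

    # Standard curve: oil percentage as a linear function of optical density
    slope, intercept = np.polyfit(od_std, oil_std, 1)

    def oil_percent(optical_density):
        """Convert a measured optical density to an oil-yield estimate (%)."""
        return slope * optical_density + intercept

    print(round(oil_percent(0.33), 2))   # estimate for an unknown shale sample
    ```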

  14. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.

  15. Determination of antenna factors using a three-antenna method at open-field test site

    NASA Astrophysics Data System (ADS)

    Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao

    1992-09-01

    Recently NIST has used the three-antenna method for calibration of the antenna factor of an antenna used for EMI measurements. This method does not require the specially designed standard antennas which are necessary in the standard field method or the standard antenna method, and can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of this method and evaluates the precision of the antenna-factor calibration. It is found that the main source of the error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
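
    In the three-antenna method, each pairwise measurement constrains the sum of two antenna factors, so three pairings determine all three factors. A minimal sketch of that combination step is given below, assuming the site and path terms have already been removed from each measurement; the measured values are hypothetical.

    ```python
    import numpy as np

    # Hypothetical pairwise sums of antenna factors (dB) after removing the
    # free-space path / site-attenuation terms from each two-antenna measurement.
    M_12, M_13, M_23 = 41.2, 39.8, 40.5

    # Solve AF1 + AF2 = M_12, AF1 + AF3 = M_13, AF2 + AF3 = M_23
    A = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]], float)
    af = np.linalg.solve(A, np.array([M_12, M_13, M_23]))
    print(dict(AF1=round(af[0], 2), AF2=round(af[1], 2), AF3=round(af[2], 2)))
    # Equivalent closed form: AF1 = (M_12 + M_13 - M_23) / 2, and so on.
    ```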

  16. A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.

    PubMed

    Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang

    2015-11-13

    Vehicle positioning technology has drawn more and more attention in vehicular wireless networks to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results, obtained for the International Telecommunication Union (ITU) vehicular multipath channel, show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
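
    A toy sketch of the three steps named in the abstract (preamble cross-correlation, group summation, and a skewness-driven dynamic threshold) is given below. The grouping length, the mapping from skewness to threshold, and the synthetic frame are illustrative assumptions, not the parameters of the published method.

    ```python
    import numpy as np
    from scipy.stats import skew

    def estimate_delay(rx, preamble, group=16, k=2.0):
        """Cross-correlate with the known preamble, sum the correlator output in
        groups, and pick the first group exceeding a dynamic threshold driven by
        the skewness of the grouped values (coarse offset in samples)."""
        corr = np.abs(np.correlate(rx, preamble, mode="valid"))
        n_groups = len(corr) // group
        grouped = corr[:n_groups * group].reshape(n_groups, group).sum(axis=1)
        s = skew(grouped)
        # More skewed output implies a sharper peak, so raise the threshold
        thr = grouped.mean() + k * max(s, 0.5) * grouped.std()
        above = np.where(grouped > thr)[0]
        idx = above[0] if above.size else int(np.argmax(grouped))
        return idx * group

    # usage with a synthetic received frame: noise, preamble, noise
    rng = np.random.default_rng(1)
    preamble = rng.choice([-1.0, 1.0], 64)
    rx = np.concatenate([rng.normal(0, 1, 500),
                         preamble + rng.normal(0, 0.3, 64),
                         rng.normal(0, 1, 500)])
    print(estimate_delay(rx, preamble))   # coarse estimate near sample 500
    ```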

  17. Adaptive quantification and longitudinal analysis of pulmonary emphysema with a hidden Markov measure field model.

    PubMed

    Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F

    2014-07-01

    The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied on a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.

  18. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations.

    PubMed

    Kunz, Cornelia U; Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2017-03-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under- or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re-assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one-sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. © 2016 The Authors. Biometrical Journal Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
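
    A small simulation along the lines described above can be set up as follows: the blinded (naive one-sample) estimator pools the paired measurements ignoring treatment labels, while the unblinded estimator averages the within-group correlations. The group sizes, means and correlation below are arbitrary illustrative choices, not the settings simulated in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_group(n, mean, rho):
        """Bivariate normal pairs of measurements with correlation rho."""
        cov = [[1.0, rho], [rho, 1.0]]
        return rng.multivariate_normal(mean, cov, size=n)

    # Two treatment groups; equal means is the setting in which the naive
    # (one-sample) blinded estimator is expected to work well.
    g1 = simulate_group(100, [0.0, 0.0], rho=0.6)
    g2 = simulate_group(100, [0.0, 0.0], rho=0.6)

    # Blinded (naive one-sample) estimate: ignore group labels entirely
    pooled = np.vstack([g1, g2])
    r_blinded = np.corrcoef(pooled[:, 0], pooled[:, 1])[0, 1]

    # Unblinded estimate: average the within-group correlations
    r_unblinded = np.mean([np.corrcoef(g[:, 0], g[:, 1])[0, 1] for g in (g1, g2)])

    print(round(r_blinded, 3), round(r_unblinded, 3))
    ```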

  19. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations

    PubMed Central

    Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2016-01-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under‐ or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re‐assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one‐sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. PMID:27886393

  20. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    PubMed

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. The aims of this study were to examine the correlation of the errors of the hand and third molar methods and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to +0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
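
    Because the two methods' errors are uncorrelated, the combined estimate is the inverse-variance weighted average, and its standard deviation follows from the usual formula. The short sketch below reproduces the 0.79-year figure quoted above; the two example ages fed into the combination are made up for illustration.

    ```python
    import math

    # Standard deviations of the single-method errors reported in the abstract
    sd_hand, sd_teeth = 0.97, 1.35   # years

    # With uncorrelated errors, inverse-variance weights minimise the variance
    w_hand = 1 / sd_hand ** 2
    w_teeth = 1 / sd_teeth ** 2
    sd_combined = math.sqrt(1 / (w_hand + w_teeth))
    print(round(sd_combined, 2))   # ~0.79 years, matching the reported value

    def combined_age(age_hand, age_teeth):
        """Weighted average of the two single-method age estimates
        (illustrative inputs; reference data determine the single estimates)."""
        return (w_hand * age_hand + w_teeth * age_teeth) / (w_hand + w_teeth)

    print(round(combined_age(16.0, 17.2), 2))
    ```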

  1. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.

  2. Comparison of mode estimation methods and application in molecular clock analysis

    NASA Technical Reports Server (NTRS)

    Hedges, S. Blair; Shah, Prachi

    2003-01-01

    BACKGROUND: Distributions of time estimates in molecular clock studies are sometimes skewed or contain outliers. In those cases, the mode is a better estimator of the overall time of divergence than the mean or median. However, different methods are available for estimating the mode. We compared these methods in simulations to determine their strengths and weaknesses and further assessed their performance when applied to real data sets from a molecular clock study. RESULTS: We found that the half-range mode and robust parametric mode methods have a lower bias than other mode methods under a diversity of conditions. However, the half-range mode suffers from a relatively high variance and the robust parametric mode is more susceptible to bias by outliers. We determined that bootstrapping reduces the variance of both mode estimators. Application of the different methods to real data sets yielded results that were concordant with the simulations. CONCLUSION: Because the half-range mode is a simple and fast method, and produced less bias overall in our simulations, we recommend the bootstrapped version of it as a general-purpose mode estimator and suggest a bootstrap method for obtaining the standard error and 95% confidence interval of the mode.
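
    The sketch below illustrates the general idea of a half-range mode estimator together with a bootstrap standard error and 95% confidence interval, as recommended in the conclusion. The windowing rule, the stopping point and the lognormal toy sample are simplifying assumptions and not the exact algorithm evaluated in the paper.

    ```python
    import numpy as np

    def half_range_mode(x, min_n=3):
        """Simple half-range mode: repeatedly keep the half-width window
        (anchored at either end of the current range) that contains the most
        observations, until few points remain; then return their mean."""
        x = np.sort(np.asarray(x, float))
        while len(x) > min_n and x[-1] > x[0]:
            half = (x[-1] - x[0]) / 2.0
            lower = x[x <= x[0] + half]
            upper = x[x >= x[-1] - half]
            x = lower if len(lower) >= len(upper) else upper
        return x.mean()

    def bootstrap_mode(x, n_boot=2000, seed=0):
        """Bootstrap the mode estimator to get its standard error and 95% CI."""
        rng = np.random.default_rng(seed)
        boots = np.array([half_range_mode(rng.choice(x, len(x), replace=True))
                          for _ in range(n_boot)])
        return boots.std(ddof=1), np.percentile(boots, [2.5, 97.5])

    # skewed toy sample of divergence-time estimates (Myr); values illustrative
    times = np.random.default_rng(3).lognormal(mean=4.5, sigma=0.3, size=200)
    print(half_range_mode(times))
    print(bootstrap_mode(times))
    ```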

  3. Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions.

    PubMed

    Thomas, Jaime; Avellar, Sarah A; Deke, John; Gleason, Philip

    2017-06-01

    Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study-Birth Cohort data to address Question 1. For Question 2, we used simulations to compare two methods-matching and regression adjustment-to account for preexisting differences between intervention and comparison groups. A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified, and preexisting differences between the intervention and the comparison groups exist.

  4. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  5. Battery Calendar Life Estimator Manual Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jon P. Christophersen; Ira Bloom; Ed Thomas

    2012-10-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  6. Battery Life Estimator Manual Linear Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jon P. Christophersen; Ira Bloom; Ed Thomas

    2009-08-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  7. Analytical evaluation of current starch methods used in the international sugar industry: Part I.

    PubMed

    Cole, Marsha; Eggleston, Gillian; Triplett, Alexa

    2017-08-01

    Several analytical starch methods exist in the international sugar industry to mitigate starch-related processing challenges and assess the quality of traded end-products. These methods use iodometric chemistry, mostly potato starch standards, and utilize similar solubilization strategies, but had not been comprehensively compared. In this study, industrial starch methods were compared to the USDA Starch Research method using simulated raw sugars. Type of starch standard, solubilization approach, iodometric reagents, and wavelength detection affected total starch determination in simulated raw sugars. Simulated sugars containing potato starch were more accurately detected by the industrial methods, whereas those containing corn starch, a better model for sugarcane starch, were only accurately measured by the USDA Starch Research method. Use of a potato starch standard curve over-estimated starch concentrations. Among the variables studied, starch standard, solubilization approach, and wavelength detection affected the sensitivity, accuracy/precision, and limited the detection/quantification of the current industry starch methods the most. Published by Elsevier Ltd.

  8. Estimating chlorophyll content and photochemical yield of photosystem II (ΦPSII) using solar-induced chlorophyll fluorescence measurements at different growing stages of attached leaves

    PubMed Central

    Tubuxin, Bayaer; Rahimzadeh-Bajgiran, Parinaz; Ginnan, Yusaku; Hosoi, Fumiki; Omasa, Kenji

    2015-01-01

    This paper illustrates the possibility of measuring chlorophyll (Chl) content and Chl fluorescence parameters by the solar-induced Chl fluorescence (SIF) method using the Fraunhofer line depth (FLD) principle, and compares the results with the standard measurement methods. A high-spectral resolution HR2000+ and an ordinary USB4000 spectrometer were used to measure leaf reflectance under solar and artificial light, respectively, to estimate Chl fluorescence. Using leaves of Capsicum annuum cv. ‘Sven’ (paprika), the relationships between the Chl content and the steady-state Chl fluorescence near oxygen absorption bands of O2B (686nm) and O2A (760nm), measured under artificial and solar light at different growing stages of leaves, were evaluated. The Chl fluorescence yields of ΦF 686nm/ΦF 760nm ratios obtained from both methods correlated well with the Chl content (steady-state solar light: R2 = 0.73; artificial light: R2 = 0.94). The SIF method was less accurate for Chl content estimation when Chl content was high. The steady-state solar-induced Chl fluorescence yield ratio correlated very well with the artificial-light-induced one (R2 = 0.84). A new methodology is then presented to estimate photochemical yield of photosystem II (ΦPSII) from the SIF measurements, which was verified against the standard Chl fluorescence measurement method (pulse-amplitude modulated method). The high coefficient of determination (R2 = 0.74) between the ΦPSII of the two methods shows that photosynthesis process parameters can be successfully estimated using the presented methodology. PMID:26071530

  9. Accuracy of visual assessments of proliferation indices in gastroenteropancreatic neuroendocrine tumours.

    PubMed

    Young, Helen T M; Carr, Norman J; Green, Bryan; Tilley, Charles; Bhargava, Vidhi; Pearce, Neil

    2013-08-01

    To compare the accuracy of eyeball estimates of the Ki-67 proliferation index (PI) with formal counting of 2000 cells as recommended by the Royal College of Pathologists. Sections from gastroenteropancreatic neuroendocrine tumours were immunostained for Ki-67. PI was calculated using three methods: (1) a manual tally count of 2000 cells from the area of highest nuclear labelling using a microscope eyepiece graticule; (2) eyeball estimates made by four pathologists within the same area of highest nuclear labelling; and (3) image analysis of microscope photographs taken from this area using the ImageJ 'cell counter' tool. ImageJ analysis was considered the gold standard for comparison. Levels of agreement between methods were evaluated using Bland-Altman plots. Agreement between the manual tally and ImageJ assessments was very high at low PIs. Agreement between eyeball assessments and ImageJ analysis varied between pathologists. Where data for low PIs alone were analysed, there was a moderate level of agreement between pathologists' estimates and the gold standard, but when all data were included, agreement was poor. Manual tally counts of 2000 cells exhibited similar levels of accuracy to the gold standard, especially at low PIs. Eyeball estimates were significantly less accurate than the gold standard. This suggests that tumour grades may be misclassified by eyeballing and that formal tally counting of positive cells produces more reliable results. Further studies are needed to identify accurate and clinically practical ways of calculating proliferation indices.

  10. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC that requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of the MLMC in the assessment can be outstanding. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.
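
    The computational saving described above comes from the telescoping multilevel Monte Carlo identity, E[P_L] = E[P_0] + sum over levels of E[P_l - P_{l-1}], where the correction terms are cheap to estimate because their variance shrinks as the levels are refined. The sketch below shows that structure with a stand-in analytic model; the model, the level count and the sample allocation are illustrative assumptions rather than the subsurface-flow setup of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def model(theta, level):
        """Stand-in for an expensive simulator at a given fidelity level:
        the discretisation-error term shrinks as the level increases
        (purely illustrative, not a subsurface-flow model)."""
        return np.sin(theta) + 0.5 ** (level + 1) * np.cos(3 * theta)

    def mlmc_expectation(max_level=4, n0=4000):
        """Telescoping MLMC estimate of E[model(theta, max_level)] for
        theta ~ N(0.8, 0.5). Sample counts decay geometrically with level
        instead of using the optimal variance-based allocation."""
        estimate = 0.0
        for level in range(max_level + 1):
            n = max(n0 // 4 ** level, 10)   # fewer samples at expensive levels
            thetas = rng.normal(0.8, 0.5, n)
            if level == 0:
                samples = model(thetas, 0)
            else:
                # correction term: the same inputs drive both fidelities (coupling)
                samples = model(thetas, level) - model(thetas, level - 1)
            estimate += samples.mean()
        return estimate

    print(mlmc_expectation())
    ```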

  11. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.

  12. Regional Variation in Use of Complementary Health Approaches by U.S. Adults

    MedlinePlus

    ... part of their yoga exercise. Data sources and methods Data from the 2012 NHIS were used for ... sampling design of NHIS. The Taylor series linearization method was chosen for estimation of standard errors. Differences ...

  13. A dynamic programming approach to estimate the capacity value of energy storage

    DOE PAGES

    Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul

    2013-09-17

    Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.

  14. Bone orientation and position estimation errors using Cosserat point elements and least squares methods: Application to gait.

    PubMed

    Solav, Dana; Camomilla, Valentina; Cereatti, Andrea; Barré, Arnaud; Aminian, Kamiar; Wolf, Alon

    2017-09-06

    The aim of this study was to analyze the accuracy of bone pose estimation based on sub-clusters of three skin-markers characterized by triangular Cosserat point elements (TCPEs) and to evaluate the capability of four instantaneous physical parameters, which can be measured non-invasively in vivo, to identify the most accurate TCPEs. Moreover, TCPE pose estimations were compared with the estimations of two least squares minimization methods applied to the cluster of all markers, using rigid body (RBLS) and homogeneous deformation (HDLS) assumptions. Analysis was performed on previously collected in vivo treadmill gait data composed of simultaneous measurements of the gold-standard bone pose by bi-plane fluoroscopy tracking the subjects' knee prosthesis and a stereophotogrammetric system tracking skin-markers affected by soft tissue artifact. Femur orientation and position errors estimated from skin-marker clusters were computed for 18 subjects using clusters of up to 35 markers. Results based on gold-standard data revealed that instantaneous subsets of TCPEs exist which estimate the femur pose with reasonable accuracy (median root mean square error during stance/swing: 1.4/2.8 deg for orientation, 1.5/4.2 mm for position). A non-invasive and instantaneous criterion to select accurate TCPEs for pose estimation (4.8/7.3 deg, 5.8/12.3 mm) was compared with RBLS (4.3/6.6 deg, 6.9/16.6 mm) and HDLS (4.6/7.6 deg, 6.7/12.5 mm). Accounting for homogeneous deformation, using HDLS or selected TCPEs, yielded more accurate position estimations than the RBLS method, which, conversely, yielded more accurate orientation estimations. Further investigation is required to devise effective criteria for cluster selection that could represent a significant improvement in bone pose estimation accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. On the Estimation of Standard Errors in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim

    2018-01-01

    Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…

  16. Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations

    ERIC Educational Resources Information Center

    Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.

    2016-01-01

    Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…

  17. Evaluating the Reliability, Validity, and Usefulness of Education Cost Studies

    ERIC Educational Resources Information Center

    Baker, Bruce D.

    2006-01-01

    Recent studies that purport to estimate the costs of constitutionally adequate education have been described as either a "gold standard" that should guide legislative school finance policy design and judicial evaluation, or as pure "alchemy." Methods for estimating the cost of constitutionally adequate education can be roughly…

  18. Intrajudge Consistency Using the Angoff Standard-Setting Method.

    ERIC Educational Resources Information Center

    Plake, Barbara S.; Impara, James C.

    This study investigated the intrajudge consistency of Angoff-based item performance estimates. The examination used was a certification examination in an emergency medicine specialty. Ten expert panelists rated the same 24 items twice during an operational standard setting study. Results indicate that the panelists were highly consistent, in terms…

  19. Real-time hydraulic interval state estimation for water transport networks: a case study

    NASA Astrophysics Data System (ADS)

    Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.

    2018-03-01

    Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.

  20. Comparison of direct and indirect methods of estimating health state utilities for resource allocation: review and empirical analysis.

    PubMed

    Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard

    2009-07-22

    Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods they were 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2) and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.

  1. A Comparative Study of the Applied Methods for Estimating Deflection of the Vertical in Terrestrial Geodetic Measurements

    PubMed Central

    Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien

    2016-01-01

    This paper compares three different methods capable of estimating the deflection of the vertical (DoV): one is based on the joint use of high precision spirit leveling and Global Navigation Satellite Systems (GNSS), a second uses astro-geodetic measurements and the third gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), these latter being excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with an operational supremacy of the astro-geodetic method, which is faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although well within the 1 arcsec level, which was assumed as the threshold. Finally, the geoid model based method, whose 2.5 arcsec standard deviations exceed this threshold, is also statistically consistent with the others and should be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544

  2. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
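
    For the simple randomized case mentioned above, the simulation approach reduces to repeatedly generating trials under an assumed effect and counting how often the planned test rejects. The article provides its code in R and Stata; the sketch below is a hedged Python rendering of the same idea, with arbitrary effect-size and sample-size choices.

    ```python
    import numpy as np
    from scipy import stats

    def simulated_power(n_per_arm, effect, sd, n_sim=2000, alpha=0.05, seed=0):
        """Estimate power for a two-arm, individually randomized design by
        simulating trials and counting how often a two-sample t-test rejects."""
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            control = rng.normal(0.0, sd, n_per_arm)
            treated = rng.normal(effect, sd, n_per_arm)
            rejections += stats.ttest_ind(treated, control).pvalue < alpha
        return rejections / n_sim

    # e.g. power to detect a 0.25 SD gain in a growth z-score with 150 per arm
    print(simulated_power(n_per_arm=150, effect=0.25, sd=1.0))
    ```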

  3. Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data

    NASA Astrophysics Data System (ADS)

    Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.

    2015-06-01

    In the analysis of dynamic PET data, compartmental kinetic analysis methods require an accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard of the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative method is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. With the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from the MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.

  4. Efficiency and precision for estimating timber and non-timber attributes using Landsat-based stratification methods in two-phase sampling in northwest California

    Treesearch

    Antti T. Kaartinen; Jeremy S. Fried; Paul A. Dunham

    2002-01-01

    Three Landsat TM-based GIS layers were evaluated as alternatives to conventional, photointerpretation-based stratification of FIA field plots. Estimates for timberland area, timber volume, and volume of down wood were calculated for California's North Coast Survey Unit of 2.5 million hectares. The estimates were compared on the basis of standard errors,...

  5. Estimating the Minimum Number of Judges Required for Test-Centred Standard Setting on Written Assessments. Do Discussion and Iteration Have an Influence?

    ERIC Educational Resources Information Center

    Fowell, S. L.; Fewtrell, R.; McLaughlin, P. J.

    2008-01-01

    Absolute standard setting procedures are recommended for assessment in medical education. Absolute, test-centred standard setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short answer question-based and extended matching question-based papers,…

  6. Titrimetric and photometric methods for determination of hypochlorite in commercial bleaches.

    PubMed

    Jonnalagadda, Sreekanth B; Gengan, Prabhashini

    2010-01-01

    Two methods for the determination of hypochlorite, a simple titration method and a photometric method, are developed, based on the reaction of hypochlorite with hydrogen peroxide and titration of the residual peroxide with acidic permanganate. In the titration method, the residual hydrogen peroxide is titrated with standard permanganate solution to estimate the hypochlorite concentration. The photometric method is devised to measure the concentration of the remaining permanganate after the reaction with residual hydrogen peroxide. It employs four ranges of calibration curves to enable accurate determination of hypochlorite. The new photometric method measures hypochlorite in the range 1.90 x 10^-3 to 1.90 x 10^-2 M, with high accuracy and low variance. The concentrations of hypochlorite in diverse commercial bleach samples and in seawater enriched with hypochlorite were estimated using the proposed method and compared with the arsenite method. The statistical analysis validates the superiority of the proposed method.

  7. Age estimation standards for a Western Australian population using the coronal pulp cavity index.

    PubMed

    Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel

    2013-09-10

    Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers, human trafficking and to ascertain age of criminal responsibility. Thus robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data was analyzed using paired sample t-tests to assess bilateral asymmetry followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimation based on simple linear regression model was with mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably and the most accurate model was with bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Transient Stability Output Margin Estimation Based on Energy Function Method

    NASA Astrophysics Data System (ADS)

    Miwa, Natsuki; Tanaka, Kazuyuki

    In this paper, a new method of estimating the critical generation margin (CGM) in power systems is proposed from the viewpoint of transient stability diagnostics. The proposed method can directly compute the stability limit output for a given contingency based on the transient energy function (TEF) method. Since the CGM can be obtained directly from the limit output using estimated P-θ curves and is easy to understand, it is more useful than the conventional critical clearing time (CCT) of the energy function method. The proposed method can also return a negative CGM, which indicates instability under the present load profile and can be used directly as a generator output restriction. The accuracy and fast solution capability of the proposed method are verified by applying it to a simple 3-machine model and the IEEJ EAST 10-machine standard model. Furthermore, the usefulness of the CGM for severity ranking of transient stability over a large number of contingency cases is discussed.

  9. An extension of the Saltykov method to quantify 3D grain size distributions in mylonites

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio

    2016-12-01

    The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be systematically used in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions with an absolute uncertainty of ±5 in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and providing a precision in the estimate typically better than 5%. The novel method provides a MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the inconvenience of using apparent GSDs for such tasks.
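    The multiplicative standard deviation (MSD) that the two-step method reports is simply the exponential of the standard deviation of log grain size in a lognormal fit. The sketch below is not the published two-step method (which first unfolds the 2D data stereologically); it only illustrates, under the assumption that 3D equivalent diameters are already available, how the lognormal shape is summarized by a single MSD value.

```python
# Minimal sketch, assuming a set of (already unfolded) 3D equivalent diameters.
import numpy as np

def lognormal_summary(diameters):
    """Geometric mean and multiplicative standard deviation (MSD) of a GSD."""
    logs = np.log(diameters)
    geo_mean = np.exp(logs.mean())
    msd = np.exp(logs.std(ddof=1))   # MSD > 1; shape parameter of the lognormal
    return geo_mean, msd

rng = np.random.default_rng(0)
d = rng.lognormal(mean=np.log(40.0), sigma=0.35, size=500)  # synthetic grain sizes (um)
print(lognormal_summary(d))  # ~ (40, exp(0.35) ~= 1.42)
```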

  10. PubMed Central

    TATARELLI, P.; LORENZI, I.; CAVIGLIA, I.; SACCO, R.A.; LA MASA, D.

    2016-01-01

    Summary Introduction. Hand decontamination with alcohol-based antiseptic agents is considered the best practise to reduce healthcare associated infections. We present a new method to monitor hand hygiene, introduced in a tertiary care pediatric hospital in Northern Italy, which estimates the mean number of daily hand decontamination procedures performed per patient. Methods. The total amount of isopropyl alcohol and chlorhexidine solution supplied in a trimester to each hospital ward was put in relation with the number of hospitalization days, and expressed as litres/1000 hospitalization-days (World Health Organization standard method). Moreover, the ratio between the total volume of hand hygiene products supplied and the effective amount of hand disinfection product needed for a correct procedure was calculated. Then, this number was divided by 90 (days in a quarter) and then by the mean number of bed active in each day in a Unit, resulting in the mean estimated number of hand hygiene procedures per patient per day (new method). Results. The two methods had similar performance for estimating the adherence to correct hand disinfection procedures. The new method identified wards and/or periods with high or low adherence to the procedure and indicated where to perform interventions and their effectiveness. The new method could result easy-to understand also for non-infection control experts. Conclusions. This method can help non-infection control experts to understand adherence to correct hand-hygiene procedures and improve quality standards. PMID:28167854
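    The "new method" above is plain arithmetic. The following is a minimal sketch of that calculation; the volume supplied, the millilitres assumed per correct procedure, and the number of active beds are all illustrative figures, not values from the paper.

```python
# Minimal sketch of the described arithmetic; all numbers are illustrative.
def hygiene_procedures_per_patient_day(litres_supplied_quarter,
                                        ml_per_procedure,
                                        mean_active_beds,
                                        days_in_quarter=90):
    """Estimated mean number of hand-hygiene procedures per patient per day."""
    total_procedures = litres_supplied_quarter * 1000.0 / ml_per_procedure
    return total_procedures / days_in_quarter / mean_active_beds

# Example: 45 L of product in a quarter, 3 mL per correct procedure, 20 occupied beds.
print(hygiene_procedures_per_patient_day(45.0, 3.0, 20.0))  # ~8.3 procedures/patient/day
```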

  11. Paediatric secondary intraocular lens estimation from the aphakic refraction alone: comparison with a standard biometric technique.

    PubMed

    Khan, A O; AlGaeed, A

    2006-12-01

    To compare the following two methods of paediatric secondary posterior chamber intraocular lens (PCIOL) determination with the Holladay formula: (1) estimation from the aphakic refraction alone (using assumed keratometry (K) of 44 diopters); and (2) calculation based on preoperative measured biometry. (1) Retrospective medical record review in a referral eye hospital of children with aphakia aged < or =12 years who underwent secondary PCIOL implantation with an Alcon MA60BM lens; (2) PCIOL determination for a plano refraction by the above two methods (estimation and calculation); and (3) prediction of pseudophakic refraction for the PCIOL actually implanted by the above two methods compared with the actual pseudophakic refraction. 50 eyes of 30 children with aphakia were studied. The estimated (mean, 95% confidence interval (CI)) secondary PCIOL values (25.81, +/-1.65 D) and the calculated secondary PCIOL values (26.35, +/-1.50 D) were not significantly different (mean absolute value of the difference 1.86 D, 95% CI +/-0.41 D) by the two-tailed paired t test at alpha = 0.05 (p = 0.11). For each eye, the pseudophakic refractions predicted by the two methods for the PCIOL that was actually implanted differed, both from each other and from the actual pseudophakic refraction (repeated-measures analysis of variance, p<0.001; Tukey test, p<0.01). The method of PCIOL estimation from the aphakic refraction alone provides values similar to those obtained by a standard technique and can be useful if biometry is unavailable. Targeting a pseudophakic refraction in paediatric aphakia is prone to error.

  12. Probabilistic Air Segmentation and Sparse Regression Estimated Pseudo CT for PET/MR Attenuation Correction

    PubMed Central

    Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David

    2015-01-01

    Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778

  13. Probabilistic analysis of the eight-hour-averaged CO impacts of highways.

    DOT National Transportation Integrated Search

    1980-01-01

    This report describes a method for estimating the probability that a highway facility will violate the eight hour National Ambient Air Quality Standard (NAAQS) for carbon monoxide (CO). The method is predicated on the assumption that overlapping eigh...

  14. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, the wild bootstrap was proposed, which can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like the wild bootstrap, the residual bootstrap is applicable to a single-acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
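    All of the variants above share the same core idea: resample, recompute the statistic, and take the spread of the recomputed values as the standard error. The sketch below is a generic nonparametric bootstrap of a standard error, not the repetition, residual, or bootknife schemes from the paper, which resample DTI data in more specific ways; the data are illustrative.

```python
# Generic nonparametric bootstrap of a standard error (illustration only).
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([statistic(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return stats.std(ddof=1)

x = np.random.default_rng(1).normal(loc=0.7, scale=0.1, size=30)  # e.g. repeated FA values
print(bootstrap_se(x, np.mean))   # close to 0.1/sqrt(30) ~= 0.018
```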

  15. Adaptation of the chevron-notch beam fracture toughness method to specimens harvested from diesel particulate filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wereszczak, Andrew; Jadaan, Osama; Modugno, Max

    In this paper, the apparent fracture toughness of a porous cordierite ceramic was estimated using a large specimen whose geometry was inspired by the ASTM-C1421-standardized chevron-notch beam. Using the same combination of experiment and analysis used to develop the standardized chevron-notch test for small, monolithic ceramic bend bars, apparent fracture toughnesses of 0.6 and 0.9 MPa√m were estimated for an unaged and an aged cordierite diesel particulate filter structure, respectively. Finally, the effectiveness and simplicity of this adapted specimen geometry and test method lend themselves to the evaluation of the (macroscopic) apparent fracture toughness of an entire porous-ceramic diesel particulate filter structure.

  16. Adaptation of the chevron-notch beam fracture toughness method to specimens harvested from diesel particulate filters

    DOE PAGES

    Wereszczak, Andrew; Jadaan, Osama; Modugno, Max; ...

    2017-01-18

    In this paper, the apparent fracture toughness of a porous cordierite ceramic was estimated using a large specimen whose geometry was inspired by the ASTM-C1421-standardized chevron-notch beam. Using the same combination of experiment and analysis used to develop the standardized chevron-notch test for small, monolithic ceramic bend bars, apparent fracture toughnesses of 0.6 and 0.9 MPa√m were estimated for an unaged and an aged cordierite diesel particulate filter structure, respectively. Finally, the effectiveness and simplicity of this adapted specimen geometry and test method lend themselves to the evaluation of the (macroscopic) apparent fracture toughness of an entire porous-ceramic diesel particulate filter structure.

  17. The magnitude of variability produced by methods used to estimate annual stormwater contaminant loads for highly urbanised catchments.

    PubMed

    Beck, H J; Birch, G F

    2013-06-01

    Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
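    For context, the event-mean-concentration approach mentioned above amounts to multiplying a representative concentration by an estimated runoff volume. The sketch below is a minimal illustration of that arithmetic under the common simple-method assumption (load = EMC × runoff volume, runoff volume from rainfall, catchment area, and a runoff coefficient); it is not the calculation or the MUSIC model used in the paper, and all numbers are illustrative.

```python
# Minimal sketch of an EMC-based annual load estimate; all numbers illustrative.
def annual_load_kg(emc_mg_per_l, annual_rainfall_mm, catchment_area_ha, runoff_coeff):
    # 1 mm of rain over 1 ha = 10 m^3 = 10,000 L, so mm * ha * 10,000 gives litres
    runoff_volume_l = annual_rainfall_mm * catchment_area_ha * 10_000 * runoff_coeff
    return emc_mg_per_l * runoff_volume_l / 1e6   # mg -> kg

# Example: total suspended solids at 150 mg/L, 1200 mm/yr, 50 ha, runoff coefficient 0.8
print(annual_load_kg(150.0, 1200.0, 50.0, 0.8))   # ~7.2e4 kg/yr
```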

  18. Habitat complexity and fish size affect the detection of Indo-Pacific lionfish on invaded coral reefs

    NASA Astrophysics Data System (ADS)

    Green, S. J.; Tamburello, N.; Miller, S. E.; Akins, J. L.; Côté, I. M.

    2013-06-01

    A standard approach to improving the accuracy of reef fish population estimates derived from underwater visual censuses (UVCs) is the application of species-specific correction factors, which assumes that a species' detectability is constant under all conditions. To test this assumption, we quantified detection rates for invasive Indo-Pacific lionfish ( Pterois volitans and P. miles), which are now a primary threat to coral reef conservation throughout the Caribbean. Estimates of lionfish population density and distribution, which are essential for managing the invasion, are currently obtained through standard UVCs. Using two conventional UVC methods, the belt transect and stationary visual census (SVC), we assessed how lionfish detection rates vary with lionfish body size and habitat complexity (measured as rugosity) on invaded continuous and patch reefs off Cape Eleuthera, the Bahamas. Belt transect and SVC surveys performed equally poorly, with both methods failing to detect the presence of lionfish in >50 % of surveys where thorough, lionfish-focussed searches yielded one or more individuals. Conventional methods underestimated lionfish biomass by ~200 %. Crucially, detection rate varied significantly with both lionfish size and reef rugosity, indicating that the application of a single correction factor across habitats and stages of invasion is unlikely to accurately characterize local populations. Applying variable correction factors that account for site-specific lionfish size and rugosity to conventional survey data increased estimates of lionfish biomass, but these remained significantly lower than actual biomass. To increase the accuracy and reliability of estimates of lionfish density and distribution, monitoring programs should use detailed area searches rather than standard visual survey methods. Our study highlights the importance of accounting for sources of spatial and temporal variation in detection to increase the accuracy of survey data from coral reef systems.

  19. An assessment of air pollutant exposure methods in Mexico City, Mexico.

    PubMed

    Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S

    2015-05-01

    Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
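    Of the four assessment methods compared above, inverse distance weighting is the simplest to state explicitly: each monitor's value is weighted by an inverse power of its distance to the estimation point. The sketch below is a minimal IDW illustration (the study used SAS and R's gstat); the monitor coordinates and concentrations are invented for the example.

```python
# Minimal inverse-distance-weighting (IDW) sketch; monitor data are illustrative.
import numpy as np

def idw(x, y, monitors_xy, monitor_values, power=2.0):
    """IDW estimate at (x, y) from monitor locations and daily concentrations."""
    d = np.hypot(monitors_xy[:, 0] - x, monitors_xy[:, 1] - y)
    if np.any(d == 0):                       # exactly on a monitor -> use its value
        return monitor_values[d == 0][0]
    w = 1.0 / d**power
    return np.sum(w * monitor_values) / np.sum(w)

monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # km
pm10 = np.array([45.0, 60.0, 52.0])                           # ug/m3 on one day
print(idw(3.0, 4.0, monitors, pm10))
```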

  20. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of 4 for the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved to achieve both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality for the proposed method were improved compared to the standard k-t SENSE method.

  1. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
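    The traditional procedure referred to above fits a mono-exponential decay to the early indocyanine green concentrations, extrapolates back to the injection time, and divides the dose by that extrapolated concentration. The sketch below illustrates that traditional back-extrapolation, not the authors' optimized variant; the sampling times, concentrations, and dose are illustrative.

```python
# Minimal sketch of traditional mono-exponential back-extrapolation: fit log-concentration
# vs time, extrapolate to t = 0, plasma volume = dose / C0. Numbers are illustrative.
import numpy as np

def plasma_volume_ml(times_min, icg_conc_mg_per_l, dose_mg):
    slope, intercept = np.polyfit(times_min, np.log(icg_conc_mg_per_l), 1)
    c0 = np.exp(intercept)                  # extrapolated concentration at t = 0 (mg/L)
    return dose_mg / c0 * 1000.0            # litres -> mL

# Illustrative samples taken 2-5 min after a 25 mg ICG bolus
t = np.array([2.0, 3.0, 4.0, 5.0])
c = np.array([9.0, 8.1, 7.3, 6.6])
print(plasma_volume_ml(t, c, 25.0))         # plasma volume estimate in mL (~2.3 L here)
```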

  2. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
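    The acceptance rule described above (accept a proposal with probability given by the posterior ratio to the current state) is the Metropolis step of a Markov chain Monte Carlo sampler. The sketch below shows a minimal random-walk Metropolis sampler in that spirit; the road-load form, flat positive priors, Gaussian noise model, proposal widths, and synthetic drive-cycle data are all illustrative assumptions, not the authors' implementation.

```python
# Minimal random-walk Metropolis sketch for road-load parameters; assumptions are illustrative.
import numpy as np

def road_load_n(v, a, mass, crr, cda, rho=1.2, g=9.81):
    return mass * a + mass * g * crr + 0.5 * rho * cda * v**2

def log_post(theta, v, a, f_meas, sigma=50.0):
    mass, crr, cda = theta
    if mass <= 0 or crr <= 0 or cda <= 0:
        return -np.inf                                   # flat prior on positive values
    resid = f_meas - road_load_n(v, a, mass, crr, cda)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(v, a, f_meas, theta0, steps=5000, widths=(100.0, 0.0005, 0.05), seed=0):
    rng = np.random.default_rng(seed)
    chain, theta, lp = [], np.array(theta0, float), log_post(theta0, v, a, f_meas)
    for _ in range(steps):                               # short chain, for illustration only
        prop = theta + rng.normal(0.0, widths)
        lp_prop = log_post(prop, v, a, f_meas)
        if np.log(rng.random()) < lp_prop - lp:          # accept via probability ratio
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)                               # distribution of parameter sets

# Synthetic drive-cycle data from a 15,000 kg truck (crr = 0.007, CdA = 6.0 m^2)
rng = np.random.default_rng(1)
v = rng.uniform(5, 25, 200); a = rng.normal(0, 0.3, 200)
f = road_load_n(v, a, 15000.0, 0.007, 6.0) + rng.normal(0, 50.0, 200)
print(metropolis(v, a, f, (12000.0, 0.01, 5.0))[2500:].mean(axis=0))
```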

  3. Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials

    PubMed Central

    Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.

    2013-01-01

    Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072

  4. The biologic error in gestational length related to the use of the first day of last menstrual period as a proxy for the start of pregnancy.

    PubMed

    Nakling, Jakob; Buhaug, Harald; Backe, Bjorn

    2005-10-01

    The aim was, in a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to the start of pregnancy, and the biologic variation of gestational length to delivery, and to estimate the random error of routine ultrasound assessment of gestational age in the mid-second trimester. Cohort study of 11,238 singleton pregnancies, with spontaneous onset of labour and a reliable last menstrual period. The day of delivery was predicted with two independent methods: according to the rule of Nägele and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between the observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from the last menstrual period to the start of pregnancy was estimated at 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated at 12.4 days. The standard deviation of the random error of ultrasound-assessed foetal age was estimated at 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from the last menstrual period to the real start of pregnancy is substantial and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent to reliable information about the start of pregnancy.

  5. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  6. Using rank-order geostatistics for spatial interpolation of highly skewed data in a heavy-metal contaminated site.

    PubMed

    Juang, K W; Lee, D Y; Ellsworth, T R

    2001-01-01

    The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and the geostatistical interpolation is hindered. In this study, rank-order geostatistics with standardized rank transformation was used for the spatial interpolation of pollutants with a highly skewed distribution in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-scored transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness in a contaminated site of Taiwan was used for illustration. The spatial dependence of ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of Cd concentration being higher than a given cutoff value also can be estimated by using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations being higher than the cutoff value can be simultaneously used for delineation of hazardous areas of contaminated soils.

  7. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    PubMed

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

    In order to eliminate the fixed-pattern noise (FPN) in the output image of time-delay-integration CMOS image sensor (TDI-CIS), a FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN are corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN are corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination with the proposed method, the standard-deviation of row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard-deviation of column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in the real images captured by TDI-CIS are eliminated effectively with the proposed method.
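    The correction pipeline above (estimate per-row and per-column offsets from flat-field frames, then compensate the image) can be sketched compactly with array operations. The code below is a simplified illustration of that row/column fixed-pattern-noise compensation; the sign conventions, frame counts, and noise levels are assumptions and may differ from the published TDI-CIS method.

```python
# Simplified sketch of row/column FPN compensation from flat-field frames (illustrative).
import numpy as np

def estimate_fpn(flat_frames):
    """flat_frames: (N, rows, cols) stack captured under uniform illumination."""
    mean_img = flat_frames.mean(axis=0)
    global_mean = mean_img.mean()
    rfpn = mean_img.mean(axis=1) - global_mean     # per-row offset (RFPN)
    cfpn = mean_img.mean(axis=0) - global_mean     # per-column offset (CFPN)
    return rfpn, cfpn

def correct_fpn(image, rfpn, cfpn):
    return image - rfpn[:, None] - cfpn[None, :]

rng = np.random.default_rng(0)
truth = np.full((128, 256), 100.0)
rfpn_true = rng.normal(0, 5, 128); cfpn_true = rng.normal(0, 2, 256)
frames = truth + rfpn_true[:, None] + cfpn_true[None, :] + rng.normal(0, 1, (100, 128, 256))
rfpn, cfpn = estimate_fpn(frames)
print(np.std(correct_fpn(frames[0], rfpn, cfpn).mean(axis=1)))  # row-mean spread shrinks
```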

  8. Noise estimation for hyperspectral imagery using spectral unmixing and synthesis

    NASA Astrophysics Data System (ADS)

    Demirkesen, C.; Leloglu, Ugur M.

    2014-10-01

    Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise ratio model in their formulation, which makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise. A very comprehensive comparative study on the subject was done by Gao et al. [1]. In a nutshell, most techniques are based on the idea of calculating the standard deviation from assumed-to-be homogeneous regions in the image. Some of these algorithms work on a regular grid parameterized with a window size w, while others make use of image segmentation in order to obtain homogeneous regions. This study focuses not only on the statistics of the noise but also on the estimation of the noise itself. A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed in this study. The de-noising algorithm is based on estimation of the end-members and their fractional abundances using a non-negative least squares method. The end-members are extracted using the well-known simplex volume optimization technique called NFINDR after manual selection of the number of end-members, and the image is reconstructed using the estimated end-members and abundances. Actually, image de-noising and noise estimation are two sides of the same coin: once we de-noise an image, we can estimate the noise by calculating the difference between the de-noised image and the original noisy image. In this study, the noise is estimated as described above. To assess the accuracy of this method, the methodology in [1] is followed, i.e., synthetic images are created by mixing end-member spectra and noise. Since the best performing method for noise estimation was spectral and spatial de-correlation (SSDC), originally proposed in [3], the proposed method is compared to SSDC. The results of the experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of mean and standard deviation of the absolute error of the estimated noise. Finally, it is shown that the proposed technique demonstrates robust behavior with respect to changes in its single parameter, namely the number of end-members.
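    The unmix-and-reconstruct idea above (fit abundances per pixel with non-negative least squares, rebuild the pixel spectrum, and take the residual as the noise estimate) can be sketched in a few lines. Below is a minimal illustration in which the endmembers are given directly; the paper extracts them with NFINDR, and the cube dimensions and noise level here are synthetic.

```python
# Minimal sketch of the unmix-and-reconstruct idea: noise = observed - reconstructed.
import numpy as np
from scipy.optimize import nnls

def estimate_noise(cube, endmembers):
    """cube: (rows, cols, bands); endmembers: (n_end, bands)."""
    rows, cols, bands = cube.shape
    recon = np.empty_like(cube)
    E = endmembers.T                                  # (bands, n_end)
    for i in range(rows):
        for j in range(cols):
            abundances, _ = nnls(E, cube[i, j])       # non-negative least squares
            recon[i, j] = E @ abundances
    return cube - recon                               # per-pixel noise estimate

rng = np.random.default_rng(0)
E_true = rng.uniform(0.1, 1.0, size=(3, 50))          # 3 endmembers, 50 bands
ab = rng.dirichlet(np.ones(3), size=(20, 20))         # abundance maps
clean = ab @ E_true
noisy = clean + rng.normal(0, 0.01, clean.shape)
print(np.std(estimate_noise(noisy, E_true)))          # ~0.01
```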

  9. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  10. Random forests of interaction trees for estimating individualized treatment effects in randomized trials.

    PubMed

    Su, Xiaogang; Peña, Annette T; Liu, Lei; Levine, Richard A

    2018-04-29

    Assessing heterogeneous treatment effects is a growing interest in advancing precision medicine. Individualized treatment effects (ITEs) play a critical role in such an endeavor. Concerning experimental data collected from randomized trials, we put forward a method, termed random forests of interaction trees (RFIT), for estimating ITE on the basis of interaction trees. To this end, we propose a smooth sigmoid surrogate method, as an alternative to greedy search, to speed up tree construction. The RFIT outperforms the "separate regression" approach in estimating ITE. Furthermore, standard errors for the estimated ITE via RFIT are obtained with the infinitesimal jackknife method. We assess and illustrate the use of RFIT via both simulation and the analysis of data from an acupuncture headache trial. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no golden standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts diagnosis, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.

  12. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    PubMed

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the mismatch between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here, and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  13. Analysis of vitamin K1 in fruits and vegetables using accelerated solvent extraction and liquid chromatography tandem mass spectrometry with atmospheric pressure chemical ionization.

    PubMed

    Jäpelt, Rie Bak; Jakobsen, Jette

    2016-02-01

    The objective of this study was to develop a rapid, sensitive, and specific analytical method to study vitamin K1 in fruits and vegetables. Accelerated solvent extraction and solid phase extraction were used for sample preparation. Quantification was done by liquid chromatography tandem mass spectrometry with atmospheric pressure chemical ionization in selected reaction monitoring mode, with deuterium-labeled vitamin K1 as an internal standard. The precision was estimated as the pooled estimate of three replicates performed on three different days for spinach, peas, apples, banana, and beetroot. The repeatability was 5.2% and the internal reproducibility was 6.2%. Recovery was in the range 90-120%. No significant difference was observed between the results obtained by the present method and by a method using the same principle as the CEN standard, i.e. liquid-liquid extraction and post-column zinc reduction with fluorescence detection. The limit of quantification was estimated at 0.05 μg/100 g fresh weight. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.

  15. Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants.

    PubMed

    Kesselmeier, Miriam; Lorenzo Bermejo, Justo

    2017-11-01

    Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber and extended the R package 'robustbase' with the re-descending Hampel functions to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Patient-specific lean body mass can be estimated from limited-coverage computed tomography images.

    PubMed

    Devriese, Joke; Beels, Laurence; Maes, Alex; van de Wiele, Christophe; Pottel, Hans

    2018-06-01

    In PET/CT, quantitative evaluation of tumour metabolic activity is possible through standardized uptake values, usually normalized for body weight (BW) or lean body mass (LBM). Patient-specific LBM can be estimated from whole-body (WB) CT images. As most clinical indications only warrant PET/CT examinations covering head to midthigh, the aim of this study was to develop a simple and reliable method to estimate LBM from limited-coverage (LC) CT images and test its validity. Head-to-toe PET/CT examinations were retrospectively retrieved and semiautomatically segmented into tissue types based on thresholding of CT Hounsfield units. LC was obtained by omitting image slices. Image segmentation was validated on the WB CT examinations by comparing CT-estimated BW with actual BW, and LBM estimated from LC images were compared with LBM estimated from WB images. A direct method and an indirect method were developed and validated on an independent data set. Comparing LBM estimated from LC examinations with estimates from WB examinations (LBMWB) showed a significant but limited bias of 1.2 kg (direct method) and nonsignificant bias of 0.05 kg (indirect method). This study demonstrates that LBM can be estimated from LC CT images with no significant difference from LBMWB.

  17. True versus Apparent Malaria Infection Prevalence: The Contribution of a Bayesian Approach

    PubMed Central

    Claes, Filip; Van Hong, Nguyen; Torres, Kathy; Mao, Sokny; Van den Eede, Peter; Thi Thinh, Ta; Gamboa, Dioni; Sochantha, Tho; Thang, Ngo Duc; Coosemans, Marc; Büscher, Philippe; D'Alessandro, Umberto; Berkvens, Dirk; Erhart, Annette

    2011-01-01

    Aims To present a new approach for estimating the “true prevalence” of malaria and apply it to datasets from Peru, Vietnam, and Cambodia. Methods Bayesian models were developed for estimating both the malaria prevalence using different diagnostic tests (microscopy, PCR & ELISA), without the need for a gold standard, and the tests' characteristics. Several sources of information, i.e. data, expert opinions and other sources of knowledge, can be integrated into the model. This approach, which results in an optimal and harmonized estimate of malaria infection prevalence with no conflict between the different sources of information, was tested on data from Peru, Vietnam and Cambodia. Results Malaria sero-prevalence was relatively low in all sites, with ELISA showing the highest estimates. The sensitivities of microscopy and ELISA were statistically lower in Vietnam than in the other sites. Similarly, the specificities of microscopy, ELISA and PCR were significantly lower in Vietnam than in the other sites. In Vietnam and Peru, microscopy was closer to the “true” estimate than the other two tests, while, as expected, ELISA, with its lower specificity, usually overestimated the prevalence. Conclusions Bayesian methods are useful for analyzing prevalence results when no gold standard diagnostic test is available. Though some results are expected, e.g. PCR more sensitive than microscopy, a standardized and context-independent quantification of the diagnostic tests' characteristics (sensitivity and specificity) and the underlying malaria prevalence may be useful for comparing different sites. Indeed, the use of a single diagnostic technique could strongly bias the prevalence estimation. This limitation can be circumvented by using a Bayesian framework taking into account the imperfect characteristics of the currently available diagnostic tests. As discussed in the paper, this approach may further support global malaria burden estimation initiatives. PMID:21364745
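    The core issue above, that apparent prevalence from an imperfect test differs from true prevalence, can be illustrated without the full Bayesian machinery. The sketch below is the classical Rogan-Gladen correction, shown only to make the sensitivity/specificity distortion concrete; it is not the authors' model, and the example sensitivity and specificity values are assumptions.

```python
# Not the authors' Bayesian model: the classical Rogan-Gladen correction, shown only to
# illustrate how imperfect sensitivity/specificity distort apparent prevalence.
def true_prevalence(apparent_prev, sensitivity, specificity):
    return (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Example: microscopy finds 8% positives with assumed Se = 0.85 and Sp = 0.98
print(true_prevalence(0.08, 0.85, 0.98))   # ~0.072
```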

  18. Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects

    PubMed Central

    Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose

    2017-01-01

    Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously and we also examine a challenging new dataset that is highly heterogenous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
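    For readers unfamiliar with the estimator being extended, the univariate Paule and Mandel method chooses the between-study variance so that the weighted heterogeneity statistic Q equals its expected value, k - 1. The sketch below implements that univariate estimator only (the network extension in the paper is more involved); the effect sizes and variances are illustrative.

```python
# Univariate Paule-Mandel sketch: iterate tau^2 until the weighted Q statistic equals k - 1.
import numpy as np

def paule_mandel(y, v, tol=1e-8, max_iter=200):
    """y: study effects, v: within-study variances. Returns (tau2, pooled_mean)."""
    k = len(y)
    def q(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)
    if q(0.0) <= k - 1:                     # no excess heterogeneity
        tau2 = 0.0
    else:
        lo, hi = 0.0, 10.0 * np.var(y)      # bisection bracket for tau^2
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            if q(mid) > k - 1:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        tau2 = 0.5 * (lo + hi)
    w = 1.0 / (v + tau2)
    return tau2, np.sum(w * y) / np.sum(w)

y = np.array([0.40, 0.15, 0.65, 0.10, 0.45])    # illustrative study effects (log odds ratios)
v = np.array([0.04, 0.05, 0.06, 0.03, 0.05])    # within-study variances
print(paule_mandel(y, v))
```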

  19. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components. Temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used approaches are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of the temporal noise of photo and video cameras. It is based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noises of cameras consistently in real time. The modified ASNT method is used. Estimation was performed for the following cameras: consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured. Using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
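    A common building block behind such measurements is the two-frame idea: the difference of two captures of the same scene cancels the fixed pattern, and its variance is twice the temporal noise. The sketch below illustrates only that generic EMVA-1288-style estimate at a single illumination level; it is not the ASNT method, and the frame size and noise level are synthetic assumptions.

```python
# Generic two-frame temporal-noise estimate (illustration, not the ASNT method):
# the variance of the difference of two frames of the same scene is twice the temporal noise.
import numpy as np

def temporal_noise(frame1, frame2):
    diff = frame1.astype(float) - frame2.astype(float)
    return np.sqrt(0.5 * np.var(diff))      # per-pixel temporal noise in DN

rng = np.random.default_rng(0)
scene = rng.uniform(100, 3000, size=(480, 640))        # fixed pattern / scene content
f1 = scene + rng.normal(0, 12.0, scene.shape)          # two captures with temporal noise
f2 = scene + rng.normal(0, 12.0, scene.shape)
print(temporal_noise(f1, f2))                          # ~12 DN
```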

  20. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    PubMed

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually deal with each single-RNA-seq sample, and ignore that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parameter model to capture the general tendency of non-uniformity read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples, and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.

  1. Estimating dermal transfer from PCB-contaminated porous surfaces.

    PubMed

    Slayton, T M; Valberg, P A; Wait, A D

    1998-06-01

    Health risks posed by dermal contact with PCB-contaminated porous surfaces have not been directly demonstrated and are difficult to estimate indirectly. Surface contamination by organic compounds is commonly assessed by collecting wipe samples with hexane as the solvent. However, for porous surfaces, hexane wipe characterization is of limited direct use when estimating potential human exposure. Particularly for porous surfaces, the relationship between the amount of organic material collected by hexane and the amount actually picked up by, for example, a person's hand touch is unknown. To better mimic PCB pickup by casual hand contact with contaminated concrete surfaces, we used alternate solvents and wipe application methods that more closely mimic casual dermal contact. Our sampling results were compared to PCB pickup using hexane-wetted wipes and the standard rubbing protocol. Dry and oil-wetted samples, applied without rubbing, picked up less than 1% of the PCBs picked up by the standard hexane procedure; with rubbing, they picked up about 2%. Without rubbing, saline-wetted wipes picked up 2.5%; with rubbing, they picked up about 12%. While the nature of dermal contact with a contaminated surface cannot be perfectly reproduced with a wipe sample, our results with alternate wiping solvents and rubbing methods more closely mimic hand contact than the standard hexane wipe protocol. The relative pickup estimates presented in this paper can be used in conjunction with site-specific PCB hexane wipe results to estimate dermal pickup rates at sites with PCB-contaminated concrete.

  2. Spectrophotometric estimation of ambroxol and cetirizine hydrochloride from tablet dosage form.

    PubMed

    Gowekar, N M; Pande, V V; Kasture, A V; Tekade, A R; Chandorkar, J G

    2007-07-01

    Fixed dose combination tablets containing ambroxol HCl and cetirizine HCl are clinically used as a mucolytic and an antiallergic. Several spectrophotometric and HPLC methods have been reported for simultaneous estimation of these drugs with other drugs. The drugs, individually and in mixture, obey Beer's law over the concentration ranges 1.2-4.4 μg/mL for cetirizine HCl and 15-52 μg/mL for ambroxol HCl at all five sampling wavelengths (correlation coefficients well above 0.995). The mean recoveries from tablets by the standard addition method were 100.18% (±2.4) and 100.66% (±2.31). The present work reports simple, accurate and precise spectrophotometric methods for their simultaneous estimation from the tablet dosage form.
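    Because both drugs obey Beer's law, a two-component mixture can be resolved from absorbances at two wavelengths by solving a 2×2 linear system (the simultaneous-equation, or Vierordt, approach). The sketch below illustrates that generic calculation; the absorptivity values and absorbances are invented for the example and are not taken from the paper, which uses five sampling wavelengths.

```python
# Minimal two-wavelength simultaneous-equation (Vierordt) sketch; numbers are illustrative.
import numpy as np

def two_component_conc(absorbances, absorptivity_matrix):
    """Solve A = E @ c for concentrations at two wavelengths (path length 1 cm).

    absorbances: [A_lambda1, A_lambda2]
    absorptivity_matrix: [[a11, a12], [a21, a22]], a_ij = absorptivity of drug j at lambda i.
    """
    return np.linalg.solve(np.asarray(absorptivity_matrix), np.asarray(absorbances))

E = [[0.045, 0.210],    # e.g. ambroxol, cetirizine absorptivities at lambda 1 (mL/ug/cm)
     [0.012, 0.380]]    # ... at lambda 2
A = [0.95, 0.62]        # measured absorbances of the mixture
print(two_component_conc(A, E))   # -> concentrations in ug/mL
```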

  3. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure, which are used to estimate the response function based on the HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that the apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter estimates minimise the estimation bias caused by the non-stationary characteristics of the MT data.
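
    The instantaneous-spectrum building block can be sketched with the analytic signal; the toy signal and sampling rate below are assumptions, and the full HHT pipeline (empirical mode decomposition before the Hilbert step) and the MT response estimation itself are not shown.

    ```python
    # Minimal sketch of the Hilbert-spectral building block: instantaneous
    # amplitude and frequency of a signal via the analytic signal.
    # Illustrative only; EMD and the MT response-function estimation are omitted.
    import numpy as np
    from scipy.signal import hilbert

    fs = 100.0                                   # sampling rate, Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * (1.0 + 0.1 * t) * t)  # toy non-stationary signal

    analytic = hilbert(x)
    inst_amplitude = np.abs(analytic)
    inst_phase = np.unwrap(np.angle(analytic))
    inst_frequency = np.diff(inst_phase) / (2 * np.pi) * fs  # Hz

    print(inst_amplitude[:5], inst_frequency[:5])
    ```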

  4. A standards-based method for compositional analysis by energy dispersive X-ray spectrometry using multivariate statistical analysis: application to multicomponent alloys.

    PubMed

    Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W

    2013-02-01

    Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
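
    A simplified way to see the standards-based idea is to express the unknown spectrum as a linear combination of reference-standard spectra and map the mixing weights to composition. The sketch below uses invented numbers and ordinary least squares as a stand-in for the paper's multivariate statistical analysis; absorption and thickness corrections are omitted.

    ```python
    # Minimal sketch: relate an unknown EDX spectrum to reference-standard spectra
    # by least squares, then convert the mixing weights to an elemental composition.
    # Hypothetical data; this is a simplified stand-in for the paper's method.
    import numpy as np

    # Rows: energy channels; columns: reference standards.
    S = np.array([[120.,  10.,  40.],
                  [ 80.,  90.,  30.],
                  [ 10., 150.,  60.],
                  [  5.,  20., 110.]])
    # Known atomic fractions (rows: elements) of each reference standard (columns).
    C_std = np.array([[0.6, 0.1, 0.3],
                      [0.3, 0.7, 0.2],
                      [0.1, 0.2, 0.5]])

    unknown = np.array([90., 70., 55., 35.])     # spectrum of the unknown alloy

    # Least-squares mixing weights, clipped and normalised to sum to one.
    w, *_ = np.linalg.lstsq(S, unknown, rcond=None)
    w = np.clip(w, 0, None)
    w /= w.sum()

    composition = C_std @ w
    print("estimated atomic fractions:", composition)
    ```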

  5. Molecular pathology and age estimation.

    PubMed

    Meissner, Christoph; Ritz-Timme, Stefanie

    2010-12-15

    Over the course of our lifetime a stochastic process leads to gradual alterations of biomolecules on the molecular level, a process that is called ageing. Important changes are observed on the DNA level as well as on the protein level and are the cause and/or consequence of our 'molecular clock', influenced by genetic as well as environmental parameters. These alterations on the molecular level may aid in forensic medicine to estimate the age of a living person, a dead body or even skeletal remains for identification purposes. Four such important alterations have become the focus of molecular age estimation in the forensic community over the last two decades. The age-dependent accumulation of the 4977 bp deletion of mitochondrial DNA and the attrition of telomeres along with ageing are two important processes at the DNA level. Among a variety of protein alterations, the racemisation of aspartic acid and advanced glycation end products have already been tested for forensic applications. At the moment the racemisation of aspartic acid represents the pinnacle of molecular age estimation for three reasons: an excellent standardization of sampling and methods, an evaluation of different variables in many published studies and the highest accuracy of results. The three other mentioned alterations often lack standardized procedures, published data are sparse and often have the character of pilot studies. Nevertheless it is important to evaluate molecular methods for their suitability in forensic age estimation, because supplementary methods will help to extend and refine the accuracy and reliability of such estimates. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  6. Estimating Achievement Gaps from Test Scores Reported in Ordinal "Proficiency" Categories

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Reardon, Sean F.

    2012-01-01

    Test scores are commonly reported in a small number of ordered categories. Examples of such reporting include state accountability testing, Advanced Placement tests, and English proficiency tests. This paper introduces and evaluates methods for estimating achievement gaps on a familiar standard-deviation-unit metric using data from these ordered…

  7. Relationships between acoustic variables and different measures of stiffness in standing Pinus taeda trees

    Treesearch

    Christian R. Mora; Laurence R. Schimleck; Fikret Isik; Jerry M. Mahon Jr.; Alexander Clark III; Richard F. Daniels

    2009-01-01

    Acoustic tools are increasingly used to estimate standing-tree (dynamic) stiffness; however, such techniques overestimate static stiffness, the standard measurement for determining modulus of elasticity (MOE) of wood. This study aimed to identify correction methods for standing-tree estimates making dynamic and static stiffness comparable. Sixty Pinus taeda L...

  8. A Standard Greenhouse Method for Assessing Soybean Cyst Nematode Resistance in Soybean: SCE08 (Standardized Cyst Evaluation 2008)

    USDA-ARS?s Scientific Manuscript database

    The soybean cyst nematode (SCN), Heterodera glycines Ichinohe, is distributed throughout the soybean (Glycine max [L.] Merr.) production areas of the United States and Canada. SCN remains the most economically important pathogen of soybean in North America; the most recent estimate of soybean yield...

  9. Premixed Digestion Salts for Kjeldahl Determination of Total Nitrogen in Selected Forest Soils

    Treesearch

    B. G. Blackmon

    1971-01-01

    Estimates of total soil nitrogen by a standard Kjeldahl procedure and a modified procedure employing packets of premixed digestion salts were closely correlated (r2 = 0.983). The modified procedure appears to be as reliable as the standard method for determining total nitrogen in southern alluvial forest soils.

  10. Bayesian averaging over Decision Tree models for trauma severity scoring.

    PubMed

    Schetinin, V; Jakaite, L; Krzanowski, W

    2018-01-01

    Health care practitioners analyse the possible risks of misleading decisions and need to estimate and quantify the uncertainty in predictions. We have examined the "gold" standard for screening a patient's condition to predict survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about the data and uncertainties. Models induced within such an approach have exposed a number of problems, including unexplained fluctuation of predicted survival and low accuracy of the uncertainty intervals within which predictions are made. A Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Gravimetric Analysis of Particulate Matter using Air Samplers Housing Internal Filtration Capsules.

    PubMed

    O'Connor, Sean; O'Connor, Paula Fey; Feng, H Amy; Ashley, Kevin

    2014-10-01

    An evaluation was carried out to investigate the suitability of polyvinyl chloride (PVC) internal capsules, housed within air sampling devices, for gravimetric analysis of airborne particles collected in workplaces. Experiments were carried out using blank PVC capsules and PVC capsules spiked with 0.1-4 mg of National Institute of Standards and Technology Standard Reference Material® (NIST SRM) 1648 (Urban Particulate Matter) and Arizona Road Dust (Air Cleaner Test Dust). The capsules were housed within plastic closed-face cassette samplers (CFCs). A method detection limit (MDL) of 0.075 mg per sample was estimated. Precision Sr at 0.5-4 mg per sample was 0.031 and the estimated bias was 0.058. Weight stability over 28 days was verified for both blanks and spiked capsules. Independent laboratory testing on blanks and field samples verified long-term weight stability as well as sampling and analysis precision and bias estimates. An overall precision estimate Ŝrt of 0.059 was obtained. An accuracy measure of ±15.5% was found for the gravimetric method using PVC internal capsules.

  12. Gravimetric Analysis of Particulate Matter using Air Samplers Housing Internal Filtration Capsules

    PubMed Central

    O'Connor, Sean; O'Connor, Paula Fey; Feng, H. Amy

    2015-01-01

    An evaluation was carried out to investigate the suitability of polyvinyl chloride (PVC) internal capsules, housed within air sampling devices, for gravimetric analysis of airborne particles collected in workplaces. Experiments were carried out using blank PVC capsules and PVC capsules spiked with 0.1-4 mg of National Institute of Standards and Technology Standard Reference Material® (NIST SRM) 1648 (Urban Particulate Matter) and Arizona Road Dust (Air Cleaner Test Dust). The capsules were housed within plastic closed-face cassette samplers (CFCs). A method detection limit (MDL) of 0.075 mg per sample was estimated. Precision Sr at 0.5-4 mg per sample was 0.031 and the estimated bias was 0.058. Weight stability over 28 days was verified for both blanks and spiked capsules. Independent laboratory testing on blanks and field samples verified long-term weight stability as well as sampling and analysis precision and bias estimates. An overall precision estimate Ŝrt of 0.059 was obtained. An accuracy measure of ±15.5% was found for the gravimetric method using PVC internal capsules. PMID:26435581

  13. Comparison of method using phase-sensitive motion estimator with speckle tracking method and application to measurement of arterial wall motion

    NASA Astrophysics Data System (ADS)

    Miyajo, Akira; Hasegawa, Hideyuki

    2018-07-01

    At present, the speckle tracking method is widely used as a two- or three-dimensional (2D or 3D) motion estimator for the measurement of cardiovascular dynamics. However, this method requires high-level interpolation of a function, which evaluates the similarity between ultrasonic echo signals in two frames, to estimate small subsample displacements in high-frame-rate ultrasound, which results in a high computational cost. To overcome this problem, a 2D motion estimator using the 2D Fourier transform, which does not require any interpolation process, was proposed by our group. In this study, we compared the accuracies of the speckle tracking method and our method using a 2D motion estimator, and applied the proposed method to the measurement of the motion of a human carotid arterial wall. The bias error and standard deviation in the lateral velocity estimates obtained by the proposed method were 0.048 and 0.282 mm/s, respectively, which were significantly better than those (-0.366 and 1.169 mm/s) obtained by the speckle tracking method. The calculation time of the proposed phase-sensitive method was 97% shorter than that of the speckle tracking method. Furthermore, the in vivo experimental results showed that a characteristic change in velocity around the carotid bifurcation could be detected by the proposed method.

  14. A wet chemical method for the estimation of carbon in uranium carbides.

    PubMed

    Chandramouli, V; Yadav, R B; Rao, P R

    1987-09-01

    A wet chemical method for the estimation of carbon in uranium carbides has been developed, based on oxidation with a saturated solution of sodium dichromate in 9M sulphuric acid, absorption of the evolved carbon dioxide in a known excess of barium hydroxide solution, and titration of the excess of barium hydroxide with standard potassium hydrogen phthalate solution. The carbon content obtained is in good agreement with that obtained by combustion and titration.

  15. Acetaminophen in serum and plasma estimated by high-pressure liquid chromatography: a micro-scale method.

    PubMed

    Blair, D; Rumack, B H

    1977-01-01

    We describe a capillary-sampling method for serum or plasma acetaminophen by cation-exchange chromatography. As little as 1.5 μl of plasma or serum and an equal volume of the internal standard (N-butyryl-p-aminophenol) were run, with a precision of +/- 5% between duplicates. Acetaminophen and the internal standard chromatographed in 32 and 50 min, respectively, distinct from intrinsic plasma peaks and peaks caused by other medications.

  16. A MIMO radar quadrature and multi-channel amplitude-phase error combined correction method based on cross-correlation

    NASA Astrophysics Data System (ADS)

    Yun, Lingtong; Zhao, Hongzhong; Du, Mengyuan

    2018-04-01

    Quadrature error and multi-channel amplitude-phase error have to be compensated in I/Q quadrature sampling and in signals passing through multiple channels. A new method that requires neither a filter nor a standard signal is presented in this paper, and it can jointly estimate the quadrature and multi-channel amplitude-phase errors. The method uses the cross-correlation and the amplitude ratio between the signals to estimate the two amplitude-phase errors simply and effectively. The advantages of this method are verified by computer simulation. Finally, the superiority of the method is also verified with measured data from field experiments.
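
    The cross-correlation idea can be illustrated with a two-channel toy example: the complex cross-correlation of the channel outputs, normalised by one channel's power, yields the relative gain and phase error. The signal model and numbers below are assumptions, and this is not the paper's combined quadrature/multi-channel estimator.

    ```python
    # Minimal sketch: estimating the relative amplitude and phase error between
    # two receiver channels from the complex cross-correlation of their outputs.
    # Hypothetical illustration of the cross-correlation idea only.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096
    s = rng.standard_normal(n) + 1j * rng.standard_normal(n)      # common signal

    true_gain, true_phase = 1.2, np.deg2rad(15.0)                  # channel-2 error
    noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    ch1 = s
    ch2 = true_gain * np.exp(1j * true_phase) * s + noise

    # Complex cross-correlation at lag zero, normalised by channel-1 power.
    g = np.vdot(ch1, ch2) / np.vdot(ch1, ch1)
    print("estimated gain:", abs(g), "estimated phase (deg):", np.degrees(np.angle(g)))
    ```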

  17. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background: Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed that are based on a 'generalised' Q statistic. Methods: We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results: Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions: Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
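
    For reference, the standard Q and I2 statistics mentioned above can be computed from study effect estimates and their standard errors as in the sketch below; the data are hypothetical and the paper's generalised Q statistic is not implemented.

    ```python
    # Minimal sketch: Cochran's Q and I^2 heterogeneity statistics for a
    # fixed-effect meta-analysis (illustrative data only).
    import numpy as np

    effects = np.array([0.10, 0.30, 0.25, 0.60, 0.15])   # study effect estimates
    se = np.array([0.12, 0.15, 0.10, 0.20, 0.18])        # their standard errors

    w = 1.0 / se**2                                       # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)              # fixed-effect estimate
    Q = np.sum(w * (effects - pooled) ** 2)               # Cochran's Q
    df = len(effects) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0                   # I^2 in percent

    print(f"pooled={pooled:.3f}  Q={Q:.2f}  I2={I2:.1f}%")
    ```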

  18. In Situ Catalytic Groundwater Treatment Using Pd-Catalysts and Horizontal Flow Treatment Wells

    DTIC Science & Technology

    2007-02-01

    Report table fragment: listed analytes included 1,2,4-trichlorobenzene, hexachlorobutadiene, naphthalene and 1,2,3-trichlorobenzene; quantification used an internal standard method with purge-and-trap, with fluorobenzene as the internal standard for the PID detector and 2-bromo-1-chloropropane for the HECD detector.

  19. A one-step method for modelling longitudinal data with differential equations.

    PubMed

    Hu, Yueqin; Treinen, Raymond

    2018-04-06

    Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimate the parameters in differential equation models. Instead of estimating derivatives from the observed data first and then fitting a differential equation to the derivatives, our new approach directly fits the analytic solution of a differential equation to the observed data, and therefore simplifies the procedure and avoids bias from derivative estimation. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with other approaches that estimate derivatives first, ASDE has smaller standard errors, greater statistical power and accurate Type I error rates. Although ASDE obtains biased estimates when the system has a sudden phase change, the bias is not serious and a solution to the phase problem is also provided. The ASDE method is illustrated and applied to a two-week study on consumers' shopping behaviour after a sale promotion, and to a set of public data tracking participants' grammatical facial expression in sign language. R codes for ASDE, recommendations for sample size and starting values are provided. Limitations and several possible expansions of ASDE are also discussed. © 2018 The British Psychological Society.
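
    The "fit the analytic solution directly" idea can be sketched with a logistic differential equation, whose closed-form solution is fitted to noisy observations by nonlinear least squares. The model, data and starting values are assumptions; this is not the authors' ASDE software.

    ```python
    # Minimal sketch of the ASDE idea: fit the analytic solution of an ODE
    # directly to the data instead of estimating derivatives first.
    # Here dy/dt = r*y*(1 - y/K) has the analytic logistic solution below.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic_solution(t, r, K, y0):
        """Analytic solution of the logistic differential equation."""
        return K / (1.0 + (K / y0 - 1.0) * np.exp(-r * t))

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 40)
    y = logistic_solution(t, r=0.8, K=10.0, y0=0.5) + rng.normal(0, 0.2, t.size)

    params, cov = curve_fit(logistic_solution, t, y, p0=[0.5, 8.0, 1.0])
    stderr = np.sqrt(np.diag(cov))
    print("estimates (r, K, y0):", params, "standard errors:", stderr)
    ```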

  20. What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries

    PubMed Central

    Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B

    2011-01-01

    Background: The 'lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality (the 'current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with 'gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated 'lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual 'current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
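
    One plausible reading of the corrected calculation is sketched below: incidence rates are applied only to the probability of still being alive without a previous cancer, accumulated over age bands. The rates, age bands and the simple exponential survival assumption are all hypothetical, not the registry data or the authors' exact life-table construction.

    ```python
    # Minimal sketch: lifetime risk of a *first* cancer, applying incidence rates
    # to the probability of being alive and cancer-free (hypothetical rates).
    import numpy as np

    age_width = 5                                            # 5-year age bands
    incidence = np.array([0.0001, 0.0002, 0.0005, 0.001,     # first-primary cancer
                          0.002, 0.004, 0.007, 0.010])       # rates per person-year
    other_mortality = np.array([0.0005, 0.0005, 0.001, 0.002,
                                0.004, 0.008, 0.02, 0.05])   # other-cause death rates

    alive_cancer_free = 1.0
    lifetime_risk = 0.0
    for inc, mort in zip(incidence, other_mortality):
        # Expected new first cancers in this age band among those still at risk.
        lifetime_risk += alive_cancer_free * (1.0 - np.exp(-inc * age_width))
        # Survive the band without cancer and without dying of other causes.
        alive_cancer_free *= np.exp(-(inc + mort) * age_width)

    print(f"estimated lifetime risk of a first cancer: {lifetime_risk:.3f}")
    ```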

  1. Credit risk migration rates modeling as open systems: A micro-simulation approach

    NASA Astrophysics Data System (ADS)

    Landini, S.; Uberti, M.; Casellina, S.

    2018-05-01

    The last financial crisis of 2008 stimulated the development of new Regulatory Criteria (commonly known as Basel III) that pushed banking activity to become more prudent, in both the short and the long run. As is well known, in 2014 the International Accounting Standards Board (IASB) promulgated the new International Financial Reporting Standard 9 (IFRS 9) for financial instruments, which will become effective in January 2018. Since the delayed recognition of credit losses on loans was identified as a weakness in existing accounting standards, the IASB has introduced an Expected Loss model that requires more timely recognition of credit losses. Specifically, the new standards require entities to account for expected losses both from when the impairments are first recognized and over the full loan lifetime; moreover, a clear preference for forward-looking models is expressed. In this new framework, a re-thinking of the widespread standard theoretical approach on which the well-known prudential model is founded is necessary. The aim of this paper is therefore to define an original methodological approach to migration rates modeling for credit risk that is innovative with respect to the standard method, from the point of view of a bank as well as from a regulatory perspective. Accordingly, the proposed non-standard approach treats a portfolio as an open sample, allowing for entries and exits as well as migrations of stayers. While being consistent with the empirical observations, this open-sample approach contrasts with the standard closed-sample method. In particular, this paper offers a methodology to integrate the outcomes of the standard closed-sample method within the open-sample perspective while removing some of the assumptions of the standard method. Three main conclusions can be drawn in terms of economic capital provision: (a) the standard closed-sample method, based on the Markovian hypothesis with an a priori absorbing state at default, should be abandoned, as it predicts lenders' bankruptcy by construction; (b) to obtain more reliable estimates in line with the new regulatory standards, the sample used to estimate migration rate matrices for credit risk should include both entries and exits; (c) the standard static eigen-decomposition procedure for forecasting migration rates should be replaced with a stochastic-process dynamics methodology, with forecasts conditioned on macroeconomic scenarios.

  2. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. Linear mixed effect models are the backbone of small area estimation. In this article, we consider area-level data and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and the operating characteristics of the method are judged using finite sample simulation studies.

  3. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on such an approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
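
    The first-order Taylor error-propagation idea can be sketched generically: the covariance of a derived quantity f(x) is approximated by J Cov(x) J^T, with J the Jacobian of f at the measured values. The toy distance-between-points example below is an assumption for illustration, not the LAL trilateration equations.

    ```python
    # Minimal sketch of first-order (Taylor) error propagation via a numerical
    # Jacobian: Cov(f) ≈ J Cov(x) J^T. Toy 2D example, not the LAL equations.
    import numpy as np

    def f(x):
        """Derived quantity: distance between two estimated 2D points."""
        p1, p2 = x[:2], x[2:]
        return np.array([np.linalg.norm(p1 - p2)])

    x = np.array([0.0, 0.0, 3.0, 4.0])            # measured coordinates
    cov_x = np.diag([0.01, 0.01, 0.02, 0.02])     # their covariance (assumed)

    # Central-difference numerical Jacobian of f at x.
    eps = 1e-6
    J = np.zeros((1, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)

    cov_f = J @ cov_x @ J.T
    print("propagated standard deviation:", np.sqrt(cov_f[0, 0]))
    ```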

  4. A Synthetic Comparator Approach to Local Evaluation of School-Based Substance Use Prevention Programming.

    PubMed

    Hansen, William B; Derzon, James H; Reese, Eric L

    2014-06-01

    We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups that have high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. The proximity of all groups using these variables is calculated as standardized proximities having values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness. © The Author(s) 2012.

  5. Groundwater Evapotranspiration from Diurnal Water Table Fluctuation: a Modified White Based Method Using Drainable and Fillable Porosity

    NASA Astrophysics Data System (ADS)

    Acharya, S.; Mylavarapu, R.; Jawitz, J. W.

    2012-12-01

    In shallow unconfined aquifers, the water table usually shows a distinct diurnal fluctuation pattern corresponding to the twenty-four hour solar radiation cycle. This diurnal water table fluctuation (DWTF) signal can be used to estimate groundwater evapotranspiration (ETg) by vegetation, a method known as the White [1932] method. Water table fluctuations in shallow phreatic aquifers are controlled by two distinct storage parameters, the drainable porosity (or specific yield) and the fillable porosity. Yet it is implicitly assumed in most studies that these two parameters are equal, unless the hysteresis effect is considered. The White-based method available in the literature likewise relies on a single drainable porosity parameter to estimate ETg. In this study, we present a modification of the White-based method to estimate ETg from the DWTF using separate drainable (λd) and fillable (λf) porosity parameters. Separate analytical expressions based on successive steady-state moisture profiles are used to estimate λd and λf, instead of the commonly employed hydrostatic moisture profile approach. The modified method is then applied to estimate ETg using DWTF data observed in a field in northeast Florida, and the results are compared with ET estimates from the standard Penman-Monteith equation. It is found that the modified method produced significantly better estimates of ETg than the previously available method that used only a single, hydrostatic-moisture-profile based λd. Furthermore, the modified method was also used to estimate ETg during rainfall events, where it likewise produced significantly better estimates of ETg than the single-λd-parameter method.
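
    A White-type daily calculation can be sketched as follows, under the explicit assumption that the night-time recovery (water-table rise) is scaled by the fillable porosity and the net daily decline by the drainable porosity; the numbers and this particular split are illustrative guesses, not the authors' exact formulation.

    ```python
    # Minimal sketch of a White (1932)-type ETg calculation with separate storage
    # parameters. The split between fillable (rise) and drainable (decline)
    # porosity and all numbers are illustrative assumptions.
    lambda_f = 0.08      # fillable porosity (applied to rise)      [-]
    lambda_d = 0.12      # drainable porosity (applied to decline)  [-]

    r = 0.0008           # water-table recovery rate, midnight-4 a.m. [m/h]
    delta_s = 0.006      # net water-table decline over 24 h          [m]

    # Groundwater evapotranspiration over the day [m/day]:
    ETg = 24.0 * lambda_f * r + lambda_d * delta_s
    print(f"ETg ~ {ETg * 1000:.2f} mm/day")
    ```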

  6. Quantifying Abdominal Adipose Tissue and Thigh Muscle Volume and Hepatic Proton Density Fat Fraction: Repeatability and Accuracy of an MR Imaging-based, Semiautomated Analysis Method.

    PubMed

    Middleton, Michael S; Haufe, William; Hooker, Jonathan; Borga, Magnus; Dahlqvist Leinhard, Olof; Romu, Thobias; Tunón, Patrik; Hamilton, Gavin; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B

    2017-05-01

    Purpose To determine the repeatability and accuracy of a commercially available magnetic resonance (MR) imaging-based, semiautomated method to quantify abdominal adipose tissue and thigh muscle volume and hepatic proton density fat fraction (PDFF). Materials and Methods This prospective study was institutional review board-approved and HIPAA compliant. All subjects provided written informed consent. Inclusion criteria were age of 18 years or older and willingness to participate. The exclusion criterion was contraindication to MR imaging. Three-dimensional T1-weighted dual-echo body-coil images were acquired three times. Source images were reconstructed to generate water and calibrated fat images. Abdominal adipose tissue and thigh muscle were segmented, and their volumes were estimated by using a semiautomated method and, as a reference standard, a manual method. Hepatic PDFF was estimated by using a confounder-corrected chemical shift-encoded MR imaging method with hybrid complex-magnitude reconstruction and, as a reference standard, MR spectroscopy. Tissue volume and hepatic PDFF intra- and interexamination repeatability were assessed by using intraclass correlation and coefficient of variation analysis. Tissue volume and hepatic PDFF accuracy were assessed by means of linear regression with the respective reference standards. Results Adipose and thigh muscle tissue volumes of 20 subjects (18 women; age range, 25-76 years; body mass index range, 19.3-43.9 kg/m2) were estimated by using the semiautomated method. Intra- and interexamination intraclass correlation coefficients were 0.996-0.998 and coefficients of variation were 1.5%-3.6%. For hepatic MR imaging PDFF, intra- and interexamination intraclass correlation coefficients were greater than or equal to 0.994 and coefficients of variation were less than or equal to 7.3%. In the regression analyses of manual versus semiautomated volume and spectroscopy versus MR imaging PDFF, slopes and intercepts were close to the identity line, and coefficients of determination (R2) ranged from 0.744 to 0.994. Conclusion This MR imaging-based, semiautomated method provides high repeatability and accuracy for estimating abdominal adipose tissue and thigh muscle volumes and hepatic PDFF. © RSNA, 2017.

  7. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  8. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    PubMed

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Robust Coefficients Alpha and Omega and Confidence Intervals With Outlying Observations and Missing Data: Methods and Software.

    PubMed

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-06-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.
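
    As background, the conventional (non-robust, complete-data) Cronbach's alpha that these robust procedures improve upon is easily computed from an items-by-respondents matrix, as in the sketch below with simulated data; the robust and missing-data-aware estimators of the paper and the R package coefficientalpha are not reproduced.

    ```python
    # Minimal sketch: conventional Cronbach's alpha from a respondents-by-items
    # data matrix (simulated data; not the paper's robust estimator).
    import numpy as np

    rng = np.random.default_rng(3)
    true_score = rng.normal(size=(200, 1))
    items = true_score + rng.normal(scale=0.8, size=(200, 5))   # 200 people, 5 items

    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)

    alpha = (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha = {alpha:.3f}")
    ```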

  10. Nonlinear features for classification and pose estimation of machined parts from single views

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-10-01

    A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.

  11. A new class of methods for functional connectivity estimation

    NASA Astrophysics Data System (ADS)

    Lin, Wutu

    Measuring functional connectivity from neural recordings is important for understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between pair-wise correlations and the physiological connections inside the neural network is unclear. Therefore, the power of inferring a physiological basis from functional connectivity estimation is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking; (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, (1) I tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide a theoretical proof of their superiority, and (3) apply them to simulated fMRI data as an application.

  12. Standards for documenting and monitoring bird reintroduction projects

    USGS Publications Warehouse

    Sutherland, W.J.; Armstrong, D.; Butchart, S.H.M.; Earnhardt, J.M.; Ewen, J.; Jamieson, I.; Jones, C.G.; Lee, R.; Newbery, P.; Nichols, J.D.; Parker, K.A.; Sarrazin, F.; Seddon, P.J.; Shah, N.; Tatayah, V.

    2010-01-01

    It would be much easier to assess the effectiveness of different reintroduction methods, and so improve the success of reintroductions, if there was greater standardization in the documentation of methods and outcomes. We suggest a series of standards for documenting and monitoring the methods and outcomes associated with reintroduction projects for birds. Key suggestions are: documenting the planned release before it occurs, specifying the information required on each release, post-release monitoring occurring at standard intervals of 1 and 5 years (and 10 for long-lived species), carrying out a population estimate unless impractical, distinguishing restocked and existing individuals when supplementing populations, and documenting the results. We suggest these principles would apply, largely unchanged, to other vertebrate classes. Similar methods could be adopted for invertebrates and plants with appropriate modification. We suggest that organizations publicly state whether they will adopt these approaches when undertaking reintroductions. Similar standardization would be beneficial for a wide range of topics in environmental monitoring, ecological studies, and practical conservation. ©2010 Wiley Periodicals, Inc.

  13. An estimator for the standard deviation of a natural frequency. II.

    NASA Technical Reports Server (NTRS)

    Schiff, A. J.; Bogdanoff, J. L.

    1971-01-01

    A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. It has also been established for the systems studied by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.

  14. Evaluating Sleep Disturbance: A Review of Methods

    NASA Technical Reports Server (NTRS)

    Smith, Roy M.; Oyung, R.; Gregory, K.; Miller, D.; Rosekind, M.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    There are three general approaches to evaluating sleep disturbance with regard to noise: subjective, behavioral, and physiological. Subjective methods range from standardized questionnaires and scales to self-report measures designed for specific research questions. There are two behavioral methods that provide useful sleep disturbance data. One behavioral method is actigraphy, a motion detector that provides an empirical estimate of sleep quantity and quality. An actigraph, worn on the non-dominant wrist, provides a 24-hr estimate of the rest/activity cycle. The other method involves a behavioral response, either to a specific probe or stimulus or subject-initiated (e.g., indicating wakefulness). The classic gold standard for evaluating sleep disturbance is continuous physiological monitoring of brain, eye, and muscle activity. This allows detailed distinctions of the states and stages of sleep, awakenings, and sleep continuity. Physiological data can be obtained in controlled laboratory settings and in natural environments. Current ambulatory physiological recording equipment allows evaluation in home and work settings. These approaches will be described and the relative strengths and limitations of each method will be discussed.

  15. Standard setting: comparison of two methods.

    PubMed

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods, and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that with the Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff method and the norm-reference method was 78% (95% CI 69%-87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
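
    The two cut-score rules can be sketched side by side: a norm-referenced cutoff at the mean minus one standard deviation of candidate scores, and an Angoff-style cutoff taken as the grand mean of judges' estimated item pass probabilities for a borderline candidate. The scores and judge ratings below are hypothetical, not the study's data.

    ```python
    # Minimal sketch of the two standard-setting rules compared in the study
    # (hypothetical scores and judge ratings).
    import numpy as np

    rng = np.random.default_rng(4)
    scores = rng.normal(65, 10, size=78)          # candidates' percentage scores

    # Norm-referenced cutoff: mean minus 1 SD.
    cutoff_norm = scores.mean() - scores.std(ddof=1)

    # Angoff-style cutoff: mean of judges' per-item probabilities that a
    # borderline candidate answers correctly, expressed as a percentage.
    judge_ratings = rng.uniform(0.4, 0.8, size=(8, 50))   # 8 judges, 50 items
    cutoff_angoff = judge_ratings.mean() * 100

    for name, cut in [("norm-reference", cutoff_norm), ("Angoff", cutoff_angoff)]:
        print(f"{name}: cutoff={cut:.1f}, pass rate={(scores >= cut).mean():.0%}")
    ```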

  16. Measurement properties of gingival biotype evaluation methods.

    PubMed

    Alves, Patrick Henry Machado; Alves, Thereza Cristina Lira Pacheco; Pegoraro, Thiago Amadei; Costa, Yuri Martins; Bonfante, Estevam Augusto; de Almeida, Ana Lúcia Pompéia Fraga

    2018-06-01

    There are numerous methods to measure the dimensions of the gingival tissue, but few studies have compared the effectiveness of one method against another. This study aimed to describe a new method and to estimate the validity of gingival biotype assessment with the aid of computed tomography scanning (CTS). In each patient, different methods of evaluating gingival thickness were used: transparency of the periodontal probe, the transgingival method, photography, and the new CTS method. Intrarater and interrater reliability considering the categorical classification of the gingival biotype were estimated with Cohen's kappa coefficient, the intraclass correlation coefficient (ICC), and ANOVA (P < .05). The criterion validity of CTS was determined using the transgingival method as the reference standard. Sensitivity and specificity values were computed along with their 95% CIs. Twelve patients were subjected to assessment of their gingival thickness. The highest agreement was found between the transgingival method and CTS (86.1%). The comparison between the categorical classifications of CTS and the transgingival method (reference standard) showed high specificity (94.92%) and low sensitivity (53.85%) for definition of a thin biotype. The new CTS assessment method for classifying gingival tissue thickness can be considered reliable and clinically useful for diagnosing a thick biotype. © 2018 Wiley Periodicals, Inc.

  17. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of the fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by the light and heavy rain rates. This is useful in estimating the fraction of rainfall contributed by the rain rates that go undetected by the radar. The results at high rain rates provide a cross-check on the usual attenuation correction methods that are applied at the highest resolution of the instrument.
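
    The one-parameter ATI idea can be sketched by assuming a lognormal conditional rain-rate distribution with a known shape parameter: the observed fraction of area above a threshold then fixes the remaining parameter and hence the mean. The threshold, shape parameter and fraction below are illustrative assumptions, not TRMM values.

    ```python
    # Minimal sketch of the area-time-integral (ATI) idea: with a lognormal
    # conditional rain-rate distribution and known shape parameter sigma, the
    # fractional area above a threshold determines the full distribution.
    import numpy as np
    from scipy.stats import norm

    tau = 5.0        # rain-rate threshold [mm/h] (assumed)
    sigma = 1.0      # lognormal shape parameter (assumed known)
    F = 0.20         # observed fraction of raining area with rate > tau

    # P(R > tau) = 1 - Phi((ln tau - mu)/sigma)  =>  solve for mu.
    mu = np.log(tau) - sigma * norm.ppf(1.0 - F)

    mean_rate = np.exp(mu + sigma**2 / 2.0)      # mean of the lognormal
    print(f"estimated conditional mean rain rate: {mean_rate:.2f} mm/h")
    ```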

  18. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626

  19. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.

  20. A geographic information system-based method for estimating cancer rates in non-census defined geographical areas.

    PubMed

    Freeman, Vincent L; Boylan, Emma E; Pugach, Oksana; Mclafferty, Sara L; Tossas-Milligan, Katherine Y; Watson, Karriem S; Winn, Robert A

    2017-10-01

    To address locally relevant cancer-related health issues, health departments frequently need data beyond those contained in standard census area-based statistics. We describe a geographic information system-based method for calculating age-standardized cancer incidence rates in non-census-defined geographical areas using publicly available data. Aggregated records of cancer cases diagnosed from 2009 through 2013 in each of Chicago's 77 census-defined community areas were obtained from the Illinois State Cancer Registry. Areal interpolation through dasymetric mapping of census blocks was used to redistribute populations and case counts from community areas to Chicago's 50 politically defined aldermanic wards, and ward-level age-standardized 5-year cumulative incidence rates were calculated. Potential errors in redistributing populations between geographies were limited to <1.5% of the total population, and agreement between our ward population estimates and those from a frequently cited reference set of estimates was high (Pearson correlation r = 0.99, mean difference = -4 persons). A map overlay of safety-net primary care clinic locations and ward-level incidence rates for advanced-stage cancers revealed potential pathways for prevention. Areal interpolation through dasymetric mapping can estimate cancer rates in non-census-defined geographies. This can address gaps in local cancer-related health data, inform health resource advocacy, and guide community-centered cancer prevention and control.
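
    Once case counts and populations have been redistributed to the new geography, the direct age-standardization step is a weighted sum of age-specific rates, as in the sketch below; all counts, populations and standard weights are hypothetical.

    ```python
    # Minimal sketch: directly age-standardized incidence rate for one ward,
    # from age-specific case counts and interpolated populations (hypothetical).
    import numpy as np

    cases = np.array([2, 5, 12, 30, 45])                     # cases per age group
    population = np.array([8000, 7500, 6000, 4000, 2500])    # person-years at risk
    std_weights = np.array([0.28, 0.25, 0.22, 0.15, 0.10])   # standard population weights

    age_specific_rates = cases / population
    standardized_rate = np.sum(age_specific_rates * std_weights)
    print(f"age-standardized rate: {standardized_rate * 1e5:.1f} per 100,000")
    ```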

  1. Lower limb estimation from sparse landmarks using an articulated shape model.

    PubMed

    Zhang, Ju; Fernandez, Justin; Hislop-Jambrich, Jacqui; Besier, Thor F

    2016-12-08

    Rapid generation of lower limb musculoskeletal models is essential for clinically applicable patient-specific gait modeling. Estimation of muscle and joint contact forces requires accurate representation of bone geometry and pose, as well as their muscle attachment sites, which define muscle moment arms. Motion-capture is a routine part of gait assessment but contains relatively sparse geometric information. Standard methods for creating customized models from motion-capture data scale a reference model without considering natural shape variations. We present an articulated statistical shape model of the left lower limb with embedded anatomical landmarks and muscle attachment regions. This model is used in an automatic workflow, implemented in an easy-to-use software application, that robustly and accurately estimates realistic lower limb bone geometry, pose, and muscle attachment regions from seven commonly used motion-capture landmarks. Estimated bone models were validated on noise-free marker positions to have a lower (p = 0.001) surface-to-surface root-mean-squared error of 4.28 mm, compared to 5.22 mm using standard isotropic scaling. Errors at a variety of anatomical landmarks were also lower (8.6 mm versus 10.8 mm, p = 0.001). We improve upon standard lower limb model scaling methods with shape model-constrained realistic bone geometries, regional muscle attachment sites, and higher accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.

  3. Can children with Type 1 diabetes and their caregivers estimate the carbohydrate content of meals and snacks?

    PubMed

    Smart, C E; Ross, K; Edge, J A; King, B R; McElduff, P; Collins, C E

    2010-03-01

    Carbohydrate (CHO) counting allows children with Type 1 diabetes to adjust mealtime insulin dose to carbohydrate intake. Little is known about the ability of children to count CHO and whether a particular method for assessing CHO quantity is better than others. We investigated how accurately children and their caregivers estimate carbohydrate, and whether counting in gram increments improves accuracy compared with CHO portions or exchanges. One hundred and two children and adolescents (age range 8.3-18.1 years) on intensive insulin therapy and 110 caregivers independently estimated the CHO content of 17 standardized meals (containing 8-90 g CHO), using whichever method of carbohydrate quantification they had been taught (gram increments, 10-g portions or 15-g exchanges). Seventy-three per cent (n = 2530) of all estimates were within 10-15 g of actual CHO content. There was no relationship between the mean percentage error and method of carbohydrate counting or glycated haemoglobin (HbA1c) (P > 0.05). Mean gram error and meal size were negatively correlated (r = -0.70, P < 0.0001). The longer children had been CHO counting, the greater the mean percentage error (r = 0.173, P = 0.014). Core foods in non-standard quantities were most frequently inaccurately estimated, while individually labelled foods were most often accurately estimated. Children with Type 1 diabetes and their caregivers can estimate the carbohydrate content of meals with reasonable accuracy. Teaching CHO counting in gram increments did not improve accuracy compared with CHO portions or exchanges. Large meals tended to be underestimated and snacks overestimated. Repeated age-appropriate education appears necessary to maintain accuracy in carbohydrate estimations.

  4. Measuring food intake with digital photography

    USDA-ARS?s Scientific Manuscript database

    The Digital Photography of Foods Method accurately estimates the food intake of adults and children in cafeterias. With this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared with images of 'standard' portions of food using computer...

  5. A refined method for multivariate meta-analysis and meta-regression

    PubMed Central

    Jackson, Daniel; Riley, Richard D

    2014-01-01

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351
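
    The univariate version of the scaling-factor idea can be sketched briefly. The snippet below is an illustration of a Hartung-Knapp/Sidik-Jonkman-type adjustment under an assumed, already-estimated between-study variance; it is not the paper's multivariate method.

        import numpy as np
        from scipy import stats

        # Pooled random-effects estimate with a quadratic-form scaling factor
        # applied to its standard error, and a t-based confidence interval.
        def refined_meta(y, v, tau2):
            """y: study effect estimates; v: within-study variances; tau2: between-study variance."""
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1.0 / (v + tau2)                      # random-effects weights
            mu = np.sum(w * y) / np.sum(w)            # pooled estimate
            k = len(y)
            q = np.sum(w * (y - mu) ** 2) / (k - 1)   # scaling factor
            se = np.sqrt(q / np.sum(w))               # rescaled standard error
            half = stats.t.ppf(0.975, k - 1) * se
            return mu, (mu - half, mu + half)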

  6. Standardization of HPTLC method for the estimation of oxytocin in edibles.

    PubMed

    Rani, Roopa; Medhe, Sharad; Raj, Kumar Rohit; Srivastava, Manmohan

    2013-12-01

    Adulteration in food stuff has been regarded as a major social evil and is a mind-boggling problem in society. In this study, a rapid, reliable and cost-effective high-performance thin-layer chromatography (HPTLC) method has been established for the estimation of oxytocin (adulterant) in vegetables, fruits and milk samples. Oxytocin is one of the adulterants most frequently added to vegetables and fruits to increase the growth rate and to enhance milk production from lactating animals. The standardization of the method was based on simulation parameters of the mobile phase, stationary phase and saturation time. The mobile phase used was MeOH: Ammonia (pH 6.8), the optimized stationary phase was silica gel, and the saturation time was 5 min. The method was validated by testing its linearity, accuracy, precision, repeatability and limits of detection and quantification. Thus, the proposed method is simple, rapid and specific and was successfully employed for quality and quantity monitoring of oxytocin content in edible products.

  7. Endoscope field of view measurement.

    PubMed

    Wang, Quanzeng; Khanicheh, Azadeh; Leiner, Dennis; Shafer, David; Zobel, Jurgen

    2017-03-01

    The current International Organization for Standardization (ISO) standard (ISO 8600-3: 1997 including Amendment 1: 2003) for determining endoscope field of view (FOV) does not accurately characterize some novel endoscopic technologies such as endoscopes with a close focus distance and capsule endoscopes. We evaluated the endoscope FOV measurement method (the FOV WS method) in the current ISO 8600-3 standard and proposed a new method (the FOV EP method). We compared the two methods by measuring the FOV of 18 models of endoscopes (one device for each model) from seven key international manufacturers. We also estimated the device to device variation of two models of colonoscopes by measuring several hundreds of devices. Our results showed that the FOV EP method was more accurate than the FOV WS method, and could be used for all endoscopes. We also found that the labelled FOV values of many commercial endoscopes are significantly overstated. Our study can help endoscope users understand endoscope FOV and identify a proper method for FOV measurement. This paper can be used as a reference to revise the current endoscope FOV measurement standard.

  8. Endoscope field of view measurement

    PubMed Central

    Wang, Quanzeng; Khanicheh, Azadeh; Leiner, Dennis; Shafer, David; Zobel, Jurgen

    2017-01-01

    The current International Organization for Standardization (ISO) standard (ISO 8600-3: 1997 including Amendment 1: 2003) for determining endoscope field of view (FOV) does not accurately characterize some novel endoscopic technologies such as endoscopes with a close focus distance and capsule endoscopes. We evaluated the endoscope FOV measurement method (the FOV WS method) in the current ISO 8600-3 standard and proposed a new method (the FOV EP method). We compared the two methods by measuring the FOV of 18 models of endoscopes (one device for each model) from seven key international manufacturers. We also estimated the device to device variation of two models of colonoscopes by measuring several hundreds of devices. Our results showed that the FOV EP method was more accurate than the FOV WS method, and could be used for all endoscopes. We also found that the labelled FOV values of many commercial endoscopes are significantly overstated. Our study can help endoscope users understand endoscope FOV and identify a proper method for FOV measurement. This paper can be used as a reference to revise the current endoscope FOV measurement standard. PMID:28663840

  9. Correlation between cystatin C-based formulas, Schwartz formula and urinary creatinine clearance for glomerular filtration rate estimation in children with kidney disease.

    PubMed

    Safaei-Asl, Afshin; Enshaei, Mercede; Heydarzadeh, Abtin; Maleknejad, Shohreh

    2016-01-01

    Assessment of glomerular filtration rate (GFR) is an important tool for monitoring renal function. Given the limitations of the available methods, we aimed to calculate GFR using cystatin C (Cys C)-based formulas and to determine their correlation with current methods. We studied 72 children (38 boys and 34 girls) with renal disorders. The 24-hour urinary creatinine (Cr) clearance was the gold standard method. GFR was estimated with the Schwartz formula and with Cys C-based formulas (Grubb, Hoek, Larsson and Simple), and the correlations of these formulas with the standard method were determined. Using the Pearson correlation coefficient, a significant positive correlation between all formulas and the standard method was seen (R² for the Schwartz, Hoek, Larsson, Grubb and Simple formulas was 0.639, 0.722, 0.705, 0.712 and 0.722, respectively) (P<0.001). The Cys C-based formulas predicted the variance of the standard method results with high power. These formulas correlated with the Schwartz formula with R² of 0.62-0.65 (intermediate correlation). Linear regression analysis including the constant (y-intercept) revealed that the Larsson, Hoek and Grubb formulas can estimate GFR with no statistically significant difference from the standard method, whereas the Schwartz and Simple formulas overestimate GFR. This study shows that Cys C-based formulas have a strong relationship with 24-hour urinary Cr clearance. Hence, they can determine GFR in children with kidney injury more easily and with sufficient accuracy, helping the physician to diagnose renal disease in its early stages and improving the prognosis.
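
    For illustration, two of the equations compared above can be written out. The constants shown are those commonly published (the bedside Schwartz and Hoek forms) and may differ from the exact forms used in this study.

        # Commonly published forms of two of the compared equations (illustrative).
        def schwartz_gfr(height_cm, serum_creatinine_mg_dl, k=0.413):
            """Bedside Schwartz: eGFR (mL/min/1.73 m^2) = k * height / serum Cr."""
            return k * height_cm / serum_creatinine_mg_dl

        def hoek_gfr(cystatin_c_mg_l):
            """Hoek: eGFR (mL/min/1.73 m^2) = -4.32 + 80.35 / cystatin C."""
            return -4.32 + 80.35 / cystatin_c_mg_l

        # Example: a 120 cm child with serum creatinine 0.6 mg/dL -> ~82.6
        # print(schwartz_gfr(120, 0.6))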

  10. Methods for estimating streamflow at mountain fronts in southern New Mexico

    USGS Publications Warehouse

    Waltemeyer, S.D.

    1994-01-01

    The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage areas and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.

  11. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr flood, 5-yr flood, and 10-yr flood, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) The equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) The measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) Reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances, and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
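
    The inverse-variance weighting mentioned in the closing sentences can be sketched directly; the numbers in the example are hypothetical.

        # Combine two independent peak-discharge estimates with weights
        # inversely proportional to their variances; the combined variance is
        # smaller than either input variance.
        def weighted_average(est1, var1, est2, var2):
            w1, w2 = 1.0 / var1, 1.0 / var2
            combined = (w1 * est1 + w2 * est2) / (w1 + w2)
            combined_var = 1.0 / (w1 + w2)
            return combined, combined_var

        # Example: 450 cfs (variance 1600) and 520 cfs (variance 2500)
        # -> roughly 477 cfs with variance ~976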

  12. A Procedure for Calculating the Vertical Space Height of the Sacrum When Determining Skeletal Height for Use in the Anatomical Method of Adult Stature Estimation.

    PubMed

    Hayashi, Atsuko; Emanovsky, Paul D; Pietrusewsky, Michael; Holland, Thomas D

    2016-03-01

    Estimating stature from skeletonized remains is one of the essential parameters in the development of a biological profile. A new procedure for determining skeletal height (SKH) incorporating the vertical space height (VSH) from the anterior margin of the sacral promontory to the superior margins of the acetabulae for use in the anatomical method of stature estimation is introduced. Regression equations for stature estimation were generated from measurements of 38 American males of European ancestry from the William M. Bass Donated Skeletal Collection. The modification to the procedure results in a SKH that is highly correlated with stature (r = 0.925-0.948). Stature estimates have low standard errors of the estimate ranging from 21.79 to 25.95 mm, biases from 0.50 to 0.94 mm, and accuracy rates from 17.71 mm to 19.45 mm. The procedure for determining the VSH, which replaces "S1 height" in traditional anatomical method models, is a key improvement to the method. © 2016 American Academy of Forensic Sciences.

  13. Practical considerations for estimating clinical trial accrual periods: application to a multi-center effectiveness study

    PubMed Central

    Carter, Rickey E; Sonne, Susan C; Brady, Kathleen T

    2005-01-01

    Background Adequate participant recruitment is vital to the conduct of a clinical trial. Projected recruitment rates are often over-estimated, and the time to recruit the target population (accrual period) is often under-estimated. Methods This report illustrates three approaches to estimating the accrual period and applies the methods to a multi-center, randomized, placebo controlled trial undergoing development. Results Incorporating known sources of accrual variation can yield a more justified estimate of the accrual period. Simulation studies can be incorporated into a clinical trial's planning phase to provide estimates for key accrual summaries including the mean and standard deviation of the accrual period. Conclusion The accrual period of a clinical trial should be carefully considered, and the allocation of sufficient time for participant recruitment is a fundamental aspect of planning a clinical trial. PMID:15796782
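
    A simulation-based accrual estimate of the kind described can be sketched as follows; the site enrollment rates and recruitment target are hypothetical, and enrollment is assumed to follow a homogeneous Poisson process.

        import numpy as np

        # Simulate the time needed to enroll target_n participants when each
        # site enrolls as an independent Poisson process (rates per month).
        def simulate_accrual(site_rates, target_n, n_sims=10000, seed=0):
            rng = np.random.default_rng(seed)
            total_rate = float(sum(site_rates))
            times = np.empty(n_sims)
            for i in range(n_sims):
                gaps = rng.exponential(1.0 / total_rate, size=target_n)
                times[i] = gaps.sum()          # months to reach the target
            return times.mean(), times.std()   # mean and SD of accrual period

        # e.g., simulate_accrual([2.0, 1.5, 1.0, 0.5], target_n=120)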

  14. Motion direction estimation based on active RFID with changing environment

    NASA Astrophysics Data System (ADS)

    Jie, Wu; Minghua, Zhu; Wei, He

    2018-05-01

    A gate system is used to estimate the direction of motion of RFID tag carriers as they pass through the gate. Normally, it is difficult to achieve and maintain high accuracy in estimating the motion direction of RFID tags because the received signal strength of a tag changes sharply with the changing electromagnetic environment. In this paper, a method for motion direction estimation of RFID tags is presented. To improve estimation accuracy, a machine learning algorithm is used to obtain a fitting function of the data received by readers deployed inside and outside the gate, respectively. The fitted data are then sampled to obtain a standard vector, which is compared with template vectors to produce the motion direction estimate, and the corresponding template vector is updated according to the surrounding environment. We conducted simulations and an implementation of the proposed method, and the results show that it can achieve and maintain high accuracy under constantly changing environmental conditions.

  15. Improved localisation of neoclassical tearing modes by combining multiple diagnostic estimates

    NASA Astrophysics Data System (ADS)

    Rapson, C. J.; Fischer, R.; Giannone, L.; Maraschek, M.; Reich, M.; Treutterer, W.; The ASDEX Upgrade Team

    2017-07-01

    Neoclassical tearing modes (NTMs) strongly degrade confinement in tokamaks, and are a leading cause of disruptions. They can be stabilised by targeted electron cyclotron current drive (ECCD), however the effectiveness of ECCD depends strongly on the misalignment between the ECCD deposition location and the NTM. The first step to ensure minimal misalignment is a good estimate of the NTM location. In previous NTM control experiments, three methods have been used independently to estimate the NTM location: the magnetic equilibrium, correlation between magnetic and spatially-resolved temperature fluctuations, and the amplitude response of the NTM to nearby ECCD. This submission describes an algorithm which has been designed to fuse these three estimates into one, taking into account many of the characteristics of each diagnostic. Although the method diverges from standard data fusion methods, results from simulation and experiment confirm that the algorithm achieves its stated goal of providing an estimate that is more reliable and accurate than any of the individual estimates.

  16. Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.

    PubMed

    Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat

    2017-07-01

    Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Quantifying the measurement uncertainty of results from environmental analytical methods.

    PubMed

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.

  18. Linking pesticides and human health: a geographic information system (GIS) and Landsat remote sensing method to estimate agricultural pesticide exposure.

    PubMed

    VoPham, Trang; Wilson, John P; Ruddell, Darren; Rashed, Tarek; Brooks, Maria M; Yuan, Jian-Min; Talbott, Evelyn O; Chang, Chung-Chou H; Weissfeld, Joel L

    2015-08-01

    Accurate pesticide exposure estimation is integral to epidemiologic studies elucidating the role of pesticides in human health. Humans can be exposed to pesticides via residential proximity to agricultural pesticide applications (drift). We present an improved geographic information system (GIS) and remote sensing method, the Landsat method, to estimate agricultural pesticide exposure through matching pesticide applications to crops classified from temporally concurrent Landsat satellite remote sensing images in California. The image classification method utilizes Normalized Difference Vegetation Index (NDVI) values in a combined maximum likelihood classification and per-field (using segments) approach. Pesticide exposure is estimated according to pesticide-treated crop fields intersecting 500 m buffers around geocoded locations (e.g., residences) in a GIS. Study results demonstrate that the Landsat method can improve GIS-based pesticide exposure estimation by matching more pesticide applications to crops (especially temporary crops) classified using temporally concurrent Landsat images compared to the standard method that relies on infrequently updated land use survey (LUS) crop data. The Landsat method can be used in epidemiologic studies to reconstruct past individual-level exposure to specific pesticides according to where individuals are located.
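
    The vegetation index at the core of the classification step is straightforward to compute. The sketch below assumes generic red and near-infrared reflectance arrays rather than any specific Landsat product.

        import numpy as np

        # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
        def ndvi(nir, red):
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            denom = nir + red
            return np.where(denom != 0, (nir - red) / denom, 0.0)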

  19. Validation of Ion Chromatographic Method for Determination of Standard Inorganic Anions in Treated and Untreated Drinking Water

    NASA Astrophysics Data System (ADS)

    Ivanova, V.; Surleva, A.; Koleva, B.

    2018-06-01

    An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated 850 IC Professional, Metrohm system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. Validation of the method was performed for simultaneous determination of all studied analytes, and the results showed that the validated method meets the requirements of the current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was estimated by analysis of a certified reference material for soft drinking water. A recovery test was performed on spiked drinking water samples. The measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.

  20. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).

  1. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
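
    The water-balance residual referred to above reduces to a one-line calculation once precipitation and discharge are expressed in the same units; the example values are hypothetical and storage change is assumed negligible.

        # Basin-scale ET (mm) = precipitation (mm) - discharge expressed as a
        # depth over the basin area, assuming negligible storage change.
        def water_balance_et(precip_mm, discharge_m3, basin_area_km2):
            discharge_mm = discharge_m3 / (basin_area_km2 * 1e6) * 1000.0
            return precip_mm - discharge_mm

        # Example: 800 mm precipitation, 3.0e8 m^3 discharge, 1000 km^2 basin
        # -> ET of roughly 500 mm per year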

  2. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
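
    The traditional mono-exponential back-extrapolation that the paper improves on can be sketched in a few lines; the sampling times, concentrations and dose below are illustrative.

        import numpy as np

        # Fit log-concentration versus time, extrapolate to the injection time
        # to obtain C0, and take plasma volume = dose / C0.
        def plasma_volume_backextrap(times_min, conc_mg_per_l, dose_mg):
            slope, intercept = np.polyfit(times_min, np.log(conc_mg_per_l), 1)
            c0 = np.exp(intercept)          # back-extrapolated concentration at t = 0
            return dose_mg / c0             # plasma volume in litres

        # e.g., plasma_volume_backextrap([2, 3, 4, 5], [8.2, 7.4, 6.7, 6.1], 25)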

  3. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  4. Non-structural carbohydrates in woody plants compared among laboratories.

    PubMed

    Quentin, Audrey G; Pinkard, Elizabeth A; Ryan, Michael G; Tissue, David T; Baggett, L Scott; Adams, Henry D; Maillard, Pascale; Marchand, Jacqueline; Landhäusser, Simon M; Lacointe, André; Gibon, Yves; Anderegg, William R L; Asao, Shinichi; Atkin, Owen K; Bonhomme, Marc; Claye, Caroline; Chow, Pak S; Clément-Vidal, Anne; Davies, Noel W; Dickman, L Turin; Dumbur, Rita; Ellsworth, David S; Falk, Kristen; Galiano, Lucía; Grünzweig, José M; Hartmann, Henrik; Hoch, Günter; Hood, Sharon; Jones, Joanna E; Koike, Takayoshi; Kuhlmann, Iris; Lloret, Francisco; Maestro, Melchor; Mansfield, Shawn D; Martínez-Vilalta, Jordi; Maucourt, Mickael; McDowell, Nathan G; Moing, Annick; Muller, Bertrand; Nebauer, Sergio G; Niinemets, Ülo; Palacio, Sara; Piper, Frida; Raveh, Eran; Richter, Andreas; Rolland, Gaëlle; Rosas, Teresa; Saint Joanis, Brigitte; Sala, Anna; Smith, Renee A; Sterck, Frank; Stinziano, Joseph R; Tobias, Mari; Unda, Faride; Watanabe, Makoto; Way, Danielle A; Weerasinghe, Lasantha K; Wild, Birgit; Wiley, Erin; Woodruff, David R

    2015-11-01

    Non-structural carbohydrates (NSC) in plant tissue are frequently quantified to make inferences about plant responses to environmental conditions. Laboratories publishing estimates of NSC of woody plants use many different methods to evaluate NSC. We asked whether NSC estimates in the recent literature could be quantitatively compared among studies. We also asked whether any differences among laboratories were related to the extraction and quantification methods used to determine starch and sugar concentrations. These questions were addressed by sending sub-samples collected from five woody plant tissues, which varied in NSC content and chemical composition, to 29 laboratories. Each laboratory analyzed the samples with their laboratory-specific protocols, based on recent publications, to determine concentrations of soluble sugars, starch and their sum, total NSC. Laboratory estimates differed substantially for all samples. For example, estimates for Eucalyptus globulus leaves (EGL) varied from 23 to 116 (mean = 56) mg g⁻¹ for soluble sugars, 6-533 (mean = 94) mg g⁻¹ for starch and 53-649 (mean = 153) mg g⁻¹ for total NSC. Mixed model analysis of variance showed that much of the variability among laboratories was unrelated to the categories we used for extraction and quantification methods (method category R² = 0.05-0.12 for soluble sugars, 0.10-0.33 for starch and 0.01-0.09 for total NSC). For EGL, the difference between the highest and lowest least squares means for categories in the mixed model analysis was 33 mg g⁻¹ for total NSC, compared with the range of laboratory estimates of 596 mg g⁻¹. Laboratories were reasonably consistent in their ranks of estimates among tissues for starch (r = 0.41-0.91), but less so for total NSC (r = 0.45-0.84) and soluble sugars (r = 0.11-0.83). Our results show that NSC estimates for woody plant tissues cannot be compared among laboratories. The relative changes in NSC between treatments measured within a laboratory may be comparable within and between laboratories, especially for starch. To obtain comparable NSC estimates, we suggest that users can either adopt the reference method given in this publication, or report estimates for a portion of samples using the reference method, and report estimates for a standard reference material. Researchers interested in NSC estimates should work to identify and adopt standard methods. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Method for estimating power outages and restoration during natural and man-made events

    DOEpatents

    Omitaomu, Olufemi A.; Fernandez, Steven J.

    2016-01-05

    A method of modeling electric supply and demand with a data processor in combination with a recordable medium, and for estimating spatial distribution of electric power outages and affected populations. A geographic area is divided into cells to form a matrix. Within the matrix, supply cells are identified as containing electric substations and demand cells are identified as including electricity customers. Demand cells of the matrix are associated with the supply cells as a function of the capacity of each of the supply cells and the proximity and/or electricity demand of each of the demand cells. The method includes estimating a power outage by applying disaster event prediction information to the matrix, and estimating power restoration using the supply and demand cell information of the matrix and standardized and historical restoration information.

  6. Estimation of Qualitative and Quantitative Parameters of Air Cleaning by a Pulsed Corona Discharge Using Multicomponent Standard Mixtures

    NASA Astrophysics Data System (ADS)

    Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.

    2018-05-01

    The efficiency of removal of volatile organic impurities in air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of the "toluene coefficient" characterizing the relative reactivity of a component as compared to toluene is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of diversified yield of the removal process. Such an approach makes it possible to substantially intensify the determination of energy parameters of removal of impurities and can also serve as a criterion for estimating the effectiveness of various methods in which a nonequilibrium plasma is used for air cleaning from volatile impurities.

  7. Comparison of GPS receiver DCB estimation methods using a GPS network

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Kyu; Park, Jong-Uk; Min Roh, Kyoung; Lee, Sang-Jeong

    2013-07-01

    Two approaches for receiver differential code biases (DCB) estimation using the GPS data obtained from the Korean GPS network (KGN) in South Korea are suggested: the relative and single (absolute) methods. The relative method uses a GPS network, while the single method determines DCBs from a single station only. Their performance was assessed by comparing the receiver DCB values obtained from the relative method with those estimated by the single method. The daily averaged receiver DCBs obtained from the two different approaches showed good agreement for 7 days. The root mean square (RMS) value of those differences is 0.83 nanoseconds (ns). The standard deviation of the receiver DCBs estimated by the relative method was smaller than that of the single method. From these results, it is clear that the relative method can obtain more stable receiver DCBs compared with the single method over a short-term period. Additionally, the comparison between the receiver DCBs obtained by the Korea Astronomy and Space Science Institute (KASI) and those of the IGS Global Ionosphere Maps (GIM) showed a good agreement at 0.3 ns. As the accuracy of DCB values significantly affects the accuracy of ionospheric total electron content (TEC), more studies are needed to ensure the reliability and stability of the estimated receiver DCBs.

  8. Development of a qualification standard for adhesives used in hybrid microcircuits

    NASA Technical Reports Server (NTRS)

    Licari, J. J.; Weigand, B. L.; Soykin, C. A.

    1981-01-01

    Improved qualification standards and test procedures for adhesives used in microelectronic packaging are developed. The test methods in the specification for the Selection and Use of Organic Adhesives in Hybrid Microcircuits are reevaluated against industry and government requirements. Four electrically insulative and four electrically conductive adhesives used in the assembly of hybrid microcircuits are selected to evaluate the proposed revised test methods. An estimate of the cost to perform qualification testing of an adhesive to the requirements of the revised specification is also prepared.

  9. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves unsolved the reliability of those methods in setting TMS doses. The present work aims at fulfilling this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene threshold according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. How far is it? Distance measurements and their consequences

    NASA Astrophysics Data System (ADS)

    Krełowski, Jacek

    2017-08-01

    Methods of measuring distances to objects in our Milky Way are briefly discussed. They are generally based on three principles: the standard rod, the standard candle, and the column density of interstellar matter. Weak and strong points of these methods are presented. The presence of gray extinction towards some objects is suggested, which makes the most universal method, that of the standard candle (spectroscopic parallax), very uncertain. It is hard to say whether gray extinction appears only in the form of circumstellar debris discs or is also present in the general interstellar medium. The application of the method of measuring column densities of interstellar gases suggests that the rotation curve of our Milky Way system is Keplerian rather than flat, which creates doubts as to whether any Dark Matter halo is present around our Galaxy. It is emphasized that the most universal method, i.e. that of the standard candle, used to estimate distances to cosmological objects, may suffer serious errors because of improper subtraction of extinction effects.

  11. Estimating canopy cover from standard forest inventory measurements in western Oregon

    Treesearch

    Anne McIntosh; Andrew Gray; Steven. Garman

    2012-01-01

    Reliable measures of canopy cover are important in the management of public and private forests. However, direct sampling of canopy cover is both labor- and time-intensive. More efficient methods for estimating percent canopy cover could be empirically derived relationships between more readily measured stand attributes and canopy cover or, alternatively, the use of...

  12. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    ERIC Educational Resources Information Center

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…

  13. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  14. Quantitative PCR Method for Diagnosis of Citrus Bacterial Canker†

    PubMed Central

    Cubero, J.; Graham, J. H.; Gottwald, T. R.

    2001-01-01

    For diagnosis of citrus bacterial canker by PCR, an internal standard is employed to ensure the quality of the DNA extraction and that proper requisites exist for the amplification reaction. The ratio of PCR products from the internal standard and bacterial target is used to estimate the initial bacterial concentration in citrus tissues with lesions. PMID:11375206

  15. Field Evaluation of Portable and Central Site PM Samplers Emphasizing Additive and Differential Mass Concentration Estimates

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) published a National Ambient Air Quality Standard (NAAQS) and the accompanying Federal Reference Method (FRM) for PM10 in 1987. The EPA revised the particle standards and FRM in 1997 to include PM2.5. In 2005, EPA...

  16. Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of soil's resistance to electrical flow. For a particular site, usually only limited N value data are available. In contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between the N value and the resistivity value, yet no existing method is able to interpret resistivity data for estimation of the N value. Thus, the aim is to develop a method for estimating the N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate the N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination, R², and the mean absolute error, MAE. Analysis of the results found that this method can estimate the N value (R²_best = 0.85 and MAE_best = 0.54) given that the constraint Δl̄_ref is satisfied. The results suggest that the ANN-PSO method can be used to estimate the N value with good accuracy.

  17. Estimation of surface area concentration of workplace incidental nanoparticles based on number and mass concentrations

    NASA Astrophysics Data System (ADS)

    Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.

    2011-10-01

    Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SA_REF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SA_PSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SA_INV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SA_INV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SA_PSD was 0.7-1.8 times higher and SA_INV1 and SA_INV2 were 2.2-8 times higher than SA_REF in the restaurant and diesel engine laboratory. In the die-casting facility, all estimated surface area concentrations were lower than SA_REF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SA_REF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SA_PSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
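
    The first approach (surface area from a measured size distribution) amounts to summing spherical surface areas over size bins; the bin diameters and number concentrations below are illustrative.

        import math

        # Surface area concentration (um^2 per cm^3) from a binned size
        # distribution, treating particles in each bin as spheres at the
        # bin midpoint diameter.
        def surface_area_conc(bin_diameters_um, bin_number_conc_cm3):
            return sum(n * math.pi * d ** 2
                       for d, n in zip(bin_diameters_um, bin_number_conc_cm3))

        # e.g., surface_area_conc([0.05, 0.1, 0.3, 1.0], [5e4, 2e4, 3e3, 1e2])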

  18. Risk Assessment Methodology Based on the NISTIR 7628 Guidelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R

    2013-01-01

    Earlier work describes computational models of critical infrastructure that allow an analyst to estimate the security of a system in terms of the impact of loss per stakeholder resulting from security breakdowns. Here, we consider how to identify, monitor and estimate risk impact and probability for different smart grid stakeholders. Our constructive method leverages currently available standards and defined failure scenarios. We utilize the National Institute of Standards and Technology (NIST) Interagency or Internal Reports (NISTIR) 7628 as a basis to apply Cyberspace Security Econometrics system (CSES) for comparing design principles and courses of action in making security-related decisions.

  19. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
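
    The simulation-and-counting idea can be sketched with a first-order autoregressive stand-in for the stationary Gaussian process; the autocorrelation parameter, crossing level and duration threshold are illustrative.

        import numpy as np

        # Simulate a unit-variance stationary Gaussian AR(1) series, find
        # excursions above a crossing level, and estimate the probability that
        # an overshoot lasts at least min_len consecutive steps.
        def overshoot_duration_prob(phi, level, min_len, n=100000, seed=0):
            rng = np.random.default_rng(seed)
            noise = rng.normal(0.0, np.sqrt(1.0 - phi ** 2), n)
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + noise[t]
            durations, run = [], 0
            for flag in (x > level):
                if flag:
                    run += 1
                elif run:
                    durations.append(run)
                    run = 0
            durations = np.asarray(durations)
            return float(np.mean(durations >= min_len)) if durations.size else 0.0

        # e.g., overshoot_duration_prob(phi=0.9, level=1.0, min_len=5)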

  20. Sources of Biased Inference in Alcohol and Drug Services Research: An Instrumental Variable Approach

    PubMed Central

    Schmidt, Laura A.; Tam, Tammy W.; Larson, Mary Jo

    2012-01-01

    Objective: This study examined the potential for biased inference due to endogeneity when using standard approaches for modeling the utilization of alcohol and drug treatment. Method: Results from standard regression analysis were compared with those that controlled for endogeneity using instrumental variables estimation. Comparable models predicted the likelihood of receiving alcohol treatment based on the widely used Aday and Andersen medical care–seeking model. Data were from the National Epidemiologic Survey on Alcohol and Related Conditions and included a representative sample of adults in households and group quarters throughout the contiguous United States. Results: Findings suggested that standard approaches for modeling treatment utilization are prone to bias because of uncontrolled reverse causation and omitted variables. Compared with instrumental variables estimation, standard regression analyses produced downwardly biased estimates of the impact of alcohol problem severity on the likelihood of receiving care. Conclusions: Standard approaches for modeling service utilization are prone to underestimating the true effects of problem severity on service use. Biased inference could lead to inaccurate policy recommendations, for example, by suggesting that people with milder forms of substance use disorder are more likely to receive care than is actually the case. PMID:22152672
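
    The contrast drawn above can be illustrated with a bare-bones two-stage least squares estimator. This is a textbook linear sketch with hypothetical variable names, not the study's actual model, and it ignores the binary nature of the outcome.

        import numpy as np

        # Two-stage least squares: regress the endogenous predictor (problem
        # severity) on the instrument, then regress the outcome (service use)
        # on the fitted values from stage 1.
        def two_stage_least_squares(y, x_endog, z_instrument):
            y = np.asarray(y, float)
            x = np.asarray(x_endog, float)
            z = np.asarray(z_instrument, float)
            Z = np.column_stack([np.ones_like(z), z])
            b1, *_ = np.linalg.lstsq(Z, x, rcond=None)
            x_hat = Z @ b1                                   # stage 1 fitted values
            X = np.column_stack([np.ones_like(x_hat), x_hat])
            b2, *_ = np.linalg.lstsq(X, y, rcond=None)
            return b2[1]       # IV estimate of the effect of severity on use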

  1. Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images

    NASA Astrophysics Data System (ADS)

    Sohrabi, H.

    2012-07-01

    In the mixed old-growth broadleaf stands of the Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that stand volume can be estimated from the variation of tree heights within a plot; in other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected with a systematic random design in the Tonekaon forest located in the Hyrcanian zone. A digital surface model (DSM) gives the height values of the first surface on the ground, including terrain features, trees, buildings, etc., and thus provides a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes for the extracted DSMs varying from 1 to 10 m in 1 m increments. DSMs were checked manually for probable errors. For the pixels corresponding to each ground sample, the standard deviation and range of DSM values were calculated. Non-linear regression was used for modeling. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor for modeling. The relative bias and RMSE of the estimates were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaf forests, these results are encouraging. One major problem with this method occurs when the tree canopy cover is completely closed: in this situation, the standard deviation of height is low while stand volume is high. In future studies, applying forest stratification could be investigated.

  2. Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.

    PubMed

    Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves

    2016-09-01

    This paper develops a localization method to estimate the depth of a target in the context of active sonar, at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position, in range and depth, are derived for a bilinear sound-speed profile. The influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte-Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. This method is finally validated on data from an experimental tank.

  3. Economic evaluation of posaconazole versus fluconazole or itraconazole in the prevention of invasive fungal infection in high-risk neutropenic patients in Sweden.

    PubMed

    Lundberg, Johan; Höglund, Martin; Björkholm, Magnus; Åkerborg, Örjan

    2014-07-01

    In patients undergoing induction chemotherapy for acute myeloid leukemia (AML) or myelodysplastic syndromes (MDS), posaconazole has been proven more effective in the prevention of invasive fungal infection (IFI) than fluconazole or itraconazole (standard azoles). The current analysis seeks to estimate the cost effectiveness of prophylactic posaconazole compared with standard azoles in AML or MDS patients with severe chemotherapy-induced neutropenia in Sweden. A decision-analytic model was used to estimate life expectancy, costs, and quality-adjusted life-years (QALYs). Efficacy data were derived from a phase III clinical trial. Life expectancy and quality of life data were collected from the literature. A modified Delphi method was used to gather expert opinion on resource use for an IFI. Unit costs were captured from hospital and pharmacy pricelists. A probabilistic sensitivity analysis (PSA) was used to investigate the impact of uncertainty in the model parameters on the cost-effectiveness results. The estimated mean direct cost per patient with posaconazole prophylaxis was 46,893 Swedish kronor (SEK) (€5,387) and SEK50,017 (€5,746) with standard azoles. Prophylaxis with posaconazole resulted in 0.075 QALYs gained compared with standard azoles. At a cost-effectiveness threshold of SEK500,000/QALY the PSA demonstrated a more than 95 % probability that posaconazole is cost effective versus standard azoles for the prevention of IFI in high-risk neutropenic patients in Sweden. Given the assumptions, methods, and data used, posaconazole is expected to be cost effective compared with standard azoles when used as antifungal prophylaxis in AML or MDS patients with chemotherapy-induced prolonged neutropenia in Sweden.

  4. A method for direct measurement of the first-order mass moments of human body segments.

    PubMed

    Fujii, Yusaku; Shimada, Kazuhito; Maru, Koichi; Ozawa, Junichi; Lu, Rong-Sheng

    2010-01-01

    We propose a simple and direct method for measuring the first-order mass moment of a human body segment. With the proposed method, the first-order mass moment of the body segment can be directly measured by using only one precision scale and one digital camera. In the dummy mass experiment, the relative standard uncertainty of a single set of measurements of the first-order mass moment is estimated to be 1.7%. The measured value will be useful as a reference for evaluating the uncertainty of the body segment inertial parameters (BSPs) estimated using an indirect method.

  5. Design and Weighting Methods for a Nationally Representative Sample of HIV-infected Adults Receiving Medical Care in the United States-Medical Monitoring Project

    PubMed Central

    Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek

    2016-01-01

    Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
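
    To make the weighting logic concrete, the sketch below builds weights as the inverse of the product of stage-wise selection probabilities, then applies class-based nonresponse and multiplicity adjustments. The table columns and values are hypothetical and only illustrate the general approach described, not MMP's actual weighting procedure.

        # Hedged sketch of multistage survey weighting with nonresponse and
        # multiplicity adjustments, using made-up records.
        import pandas as pd

        patients = pd.DataFrame({
            "p_area":     [0.5, 0.5, 0.8, 0.8],      # hypothetical selection probabilities
            "p_facility": [0.2, 0.2, 0.4, 0.4],
            "p_patient":  [0.1, 0.1, 0.05, 0.05],
            "resp_class": ["A", "A", "B", "B"],      # nonresponse-adjustment class
            "responded":  [1, 0, 1, 1],
            "n_facilities_attended": [1, 1, 2, 1],   # multiplicity
        })

        # Base weight: inverse of the product of the selection probabilities.
        patients["base_w"] = 1.0 / (patients.p_area * patients.p_facility * patients.p_patient)

        # Nonresponse adjustment: respondents absorb the weight of nonrespondents
        # in the same weighting class.
        class_total = patients.groupby("resp_class")["base_w"].transform("sum")
        resp_total = (patients.base_w * patients.responded).groupby(patients.resp_class).transform("sum")
        patients["nr_w"] = patients.base_w * patients.responded * class_total / resp_total

        # Multiplicity adjustment: divide by the number of HIV care facilities at
        # which the patient could have been sampled.
        patients["final_w"] = patients.nr_w / patients.n_facilities_attended
        print(patients[["base_w", "nr_w", "final_w"]])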

  6. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length and foot breadth), taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimating stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The stature estimated from the multiplication factors and from regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in stature estimation with the regression analysis method is smaller than with the multiplication factor method, confirming that regression analysis is better than multiplication factor analysis for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
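
    The sketch below contrasts the two estimation strategies named above on synthetic data: a multiplication factor (mean stature-to-dimension ratio) versus simple linear regression. The measurements and coefficients are invented for illustration and are not the study's North Indian sample.

        # Hedged sketch: multiplication factor vs. linear regression for stature
        # estimation from foot length, with synthetic data.
        import numpy as np

        rng = np.random.default_rng(2)
        foot_len = rng.normal(24.5, 1.4, 200)                        # cm, hypothetical
        stature = 60.0 + 4.1 * foot_len + rng.normal(0, 3.5, 200)    # cm, hypothetical

        # Multiplication factor: mean per-subject stature/foot-length ratio.
        mf = np.mean(stature / foot_len)
        est_mf = mf * foot_len

        # Least-squares regression: stature = a + b * foot length.
        b, a = np.polyfit(foot_len, stature, 1)
        est_reg = a + b * foot_len

        for name, est in [("multiplication factor", est_mf), ("regression", est_reg)]:
            err = est - stature
            print(f"{name:22s} mean abs error {np.abs(err).mean():.2f} cm, "
                  f"error range [{err.min():.2f}, {err.max():.2f}] cm")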

  7. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, capturing color images requires either a high-gain setting or a long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. In one recent method, the luminance component and the chroma component of the improved color image are estimated from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. Estimating the luminance component, however, is challenging: the method requires generating learning data pairs, and the processing and algorithm are complex, which makes practical application difficult. In order to reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in terms of color fidelity and texture as the method of [1], while the algorithm is simpler and more practical.
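
    The sketch below shows one plausible reading of the weighted-luminance idea: fuse the NIR luminance and the denoised visible luminance with weights derived from each image's mean and standard deviation. The weighting rule and the random arrays standing in for real images are assumptions; the paper's exact formula may differ.

        # Hedged sketch of mean/std-based weighted luminance fusion on stand-in data.
        import numpy as np

        rng = np.random.default_rng(3)
        nir = rng.uniform(0.0, 1.0, (256, 256))       # stand-in for the NIR image
        vis_lum = rng.uniform(0.0, 1.0, (256, 256))   # luminance of the denoised color image

        def weight(img):
            """Assumed weighting score: higher contrast (std) relative to
            brightness (mean) contributes more to the fused luminance."""
            return img.std() / (img.mean() + 1e-6)

        w_nir, w_vis = weight(nir), weight(vis_lum)
        fused_lum = (w_nir * nir + w_vis * vis_lum) / (w_nir + w_vis)
        print(fused_lum.shape, fused_lum.min(), fused_lum.max())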

  8. Novel methods to estimate the enantiomeric ratio and the kinetic parameters of enantiospecific enzymatic reactions.

    PubMed

    Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.

    2001-03-08

    The enantiomeric ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of chiral resolution processes. In the present work, two novel methods, hereby called Method I and Method II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed by using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.

  9. A Study on the Development of Service Quality Index for Incheon International Airport

    NASA Technical Reports Server (NTRS)

    Lee, Kang Seok; Lee, Seung Chang; Hong, Soon Kil

    2003-01-01

    The main purpose of this study is to develop an Ominibus Monitors System (OMS) for internal management, enabling the establishment of standards, the identification of matters to be improved, and the systematic assessment of how they are addressed. This is pursued by developing subjective and objective evaluation tools based on importance of use, perceived service level, and a composite index for each principal service item at the international airport. The study is directed toward developing a metric analysis tool that utilizes quantitative secondary data, analyzing perceived data from airport user surveys, systematizing the data collection-input-analysis process, visualizing the results in graphs, planning service encounters and assigning control responsibility, and ensuring competitiveness against minimum international standards. A pre-investigation plan was set up on the basis of the existing foreign literature and on-site inspection of international airports. Two tasks were then carried out on the basis of this pre-investigation: developing subjective evaluation standards for departing passengers, arriving passengers, and airport residents, and developing objective standards as a complementary method. The study proceeded with the aim of monitoring airport services regularly and irregularly through the development of a software system for operating the standards, after ensuring the reliability and feasibility of the evaluation standards by substantive and statistical means.

  10. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  11. Standardization of enterococci density estimates by EPA qPCR methods and comparison of beach action value exceedances in river waters with culture methods

    EPA Science Inventory

    The U.S.EPA has published recommendations for calibrator cell equivalent (CCE) densities of enterococci in recreational waters determined by a qPCR method in its 2012 Recreational Water Quality Criteria (RWQC). The CCE quantification unit stems from the calibration model used to ...

  12. Numerical method for high accuracy index of refraction estimation for spectro-angular surface plasmon resonance systems.

    PubMed

    Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G

    2008-11-24

    An eigenvector analysis based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method results in a much higher signal to noise ratio than the standard 1-D weighted centroid dip finding algorithm.

  13. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    PubMed

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
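
    The sketch below illustrates the representation described: radiance modeled as a low-order polynomial in wavelength, with the coefficients recovered from the three RGB counts by a linear transformation built from the camera's spectral responses. The Gaussian responses and test spectrum are invented for illustration; the paper's estimated responses and polynomial order may differ.

        # Hedged sketch: recover polynomial radiance coefficients from RGB counts
        # via a 3x3 linear transform, using assumed spectral responses.
        import numpy as np

        wl = np.linspace(0.43, 0.68, 251)            # wavelength grid, micrometers

        def gaussian_response(center, width):
            return np.exp(-0.5 * ((wl - center) / width) ** 2)

        # Assumed (illustrative) camera spectral responses for R, G and B.
        R = np.vstack([gaussian_response(c, 0.03) for c in (0.60, 0.54, 0.46)])

        # Radiance model: L(wl) = c0 + c1*wl + c2*wl**2, so rgb = M @ c.
        basis = np.vstack([wl ** k for k in range(3)])       # shape (3, n)
        M = R @ basis.T * (wl[1] - wl[0])                    # 3x3 linear transform

        true_c = np.array([4.0, -9.0, 6.0])                  # arbitrary test spectrum
        rgb = M @ true_c                                     # simulated camera counts
        est_c = np.linalg.solve(M, rgb)                      # coefficients from RGB
        print(np.round(est_c, 6), np.round(true_c, 6))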

  14. What did you drink yesterday? Public health relevance of a recent recall method used in the 2004 Australian National Drug Strategy Household Survey

    PubMed Central

    Stockwell, Tim; Zhao, Jinhui; Chikritzhs, Tanya; Greenfield, Tom K.

    2009-01-01

    Aim To (i) compare the Yesterday method with other methods of assessing alcohol use applied in the 2004 Australian National Drug Strategy Household Survey (NDSHS) in terms of extent of underreporting of actual consumption assessed from sales data and (ii) illustrate applications of the Yesterday method as a means of variously measuring the size of an Australian “standard drink”, extent of risky/high risk alcohol use, unrecorded alcohol consumption and beverage specific patterns of risk in the general population. Setting The homes of respondents who were eligible and willing to participate. Participants 24,109 Australians aged 12 years and over. Design The 2004 NDSHS assessed drug use, experiences and attitudes using a “drop and collect” self completion questionnaire with random sampling and geographic (State and Territory) and demographic (age and gender) stratification. Measures Self-completion questionnaire using Quantity-Frequency (QF) and Graduated-Frequency (GF) methods plus two questions about consumption ‘yesterday’: one in standard drinks, another with empirically-based estimates of drink size and strength. Results The Yesterday method yielded an estimate of 12.8 g as the amount of ethanol in a typical Australian standard drink (vs. official 10 g). Estimated coverage of the 2003-2004 age 12+ years per capita alcohol consumption in Australia (9.33ml of ethanol) was 69.17% for GF and 64.63% for the QF when assuming a 12.8 g standard drink. Highest coverage of 80.71% was achieved by the detailed Yesterday method. The detailed Yesterday method found that 60.1% of Australian alcohol consumption was above low risk guidelines; 81.5% for 12 to 17-year-olds, 84.8% for 18 to 24-year-olds and 88.8% for Indigenous respondents. Spirit-based drinks and regular strength beer were most likely to be drunk this way, low and mid-strength beer least likely. Conclusions Compared to more widely used methods, the Yesterday method minimized underreporting of overall consumption and provided unique data of public health significance. It also provides an empirical basis for taxing alcoholic beverages in accordance with their contributions to harm and can be used to complement individual level measures such as Quantity Frequency and Graduated Frequency. PMID:18482414

  15. Quantification of amine functional groups and their influence on OM/OC in the IMPROVE network

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Mohammed; Takahama, Satoshi; Dillner, Ann M.

    2018-01-01

    Recently, we developed a method using FT-IR spectroscopy coupled with partial least squares (PLS) regression to measure the four most abundant organic functional groups, aliphatic C-H, alcohol OH, carboxylic acid OH and carbonyl C=O, in atmospheric particulate matter. These functional groups are summed to estimate organic matter (OM) while the carbon from the functional groups is summed to estimate organic carbon (OC). With this method, OM and OM/OC can be estimated for each sample rather than relying on one assumed value to convert OC measurements to OM. This study continues the development of the FT-IR and PLS method for estimating OM and OM/OC by including the amine functional group. Amines are ubiquitous in the atmosphere and come from motor vehicle exhaust, animal husbandry, biomass burning, and vegetation among other sources. In this study, calibration standards for amines are produced by aerosolizing individual amine compounds and collecting them on PTFE filters using an IMPROVE sampler, thereby mimicking the filter media and collection geometry of ambient standards. The moles of amine functional group on each standard and a narrow range of amine-specific wavenumbers in the FT-IR spectra (1550-1500 cm-1) are used to develop a PLS calibration model. The PLS model is validated using three methods: prediction of a set of laboratory standards not included in the model, a peak height analysis and a PLS model with a broader wavenumber range. The model is then applied to the ambient samples collected throughout 2013 from 16 IMPROVE sites in the USA. Urban sites have higher amine concentrations than most rural sites, but amine functional groups account for a lower fraction of OM at urban sites. Amine concentrations, contributions to OM and seasonality vary by site and sample. Amine has a small impact on the annual average OM/OC for urban sites, but for some rural sites including amine in the OM/OC calculations increased OM/OC by 0.1 or more.
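
    The calibration step can be sketched as below: a PLS model mapping absorbances in an amine-specific wavenumber window to the moles of amine on each standard. The synthetic spectra, band shape, and number of components are assumptions for illustration, not the IMPROVE calibration data.

        # Hedged sketch of a PLS calibration on synthetic amine-band spectra.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        wavenumbers = np.linspace(1500, 1550, 60)        # cm-1, amine window
        moles_amine = rng.uniform(0.1, 2.0, 40)          # hypothetical standards (umol)

        # Synthetic spectra: one amine band scaled by amount, plus noise.
        band = np.exp(-0.5 * ((wavenumbers - 1525) / 8.0) ** 2)
        X = np.outer(moles_amine, band) + rng.normal(0, 0.02, (40, 60))

        pls = PLSRegression(n_components=2)
        pls.fit(X, moles_amine)
        pred = pls.predict(X).ravel()
        rmse = np.sqrt(np.mean((pred - moles_amine) ** 2))
        print(f"calibration R^2 = {pls.score(X, moles_amine):.3f}, RMSE = {rmse:.3f} umol")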

  16. Improving size estimates of open animal populations by incorporating information on age

    USGS Publications Warehouse

    Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.

    2003-01-01

    Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.

  17. Persons Camp Using Interpolation Method

    NASA Astrophysics Data System (ADS)

    Tawfiq, Luma Naji Mohammed; Najm Abood, Israa

    2018-05-01

    The aim of this paper is to estimate the rate of soil contamination by using a suitable interpolation method as an accurate alternative tool: the concentrations of heavy metals in the soil are evaluated and then compared with standard universal values to determine the rate of contamination. In particular, interpolation methods are extensively applied in models of different phenomena where experimental data must be used in computer studies and expressions of those data are required. In this paper the extended divided difference method in two dimensions is used to solve the suggested problem. The modified method is then applied to estimate the rate of contaminated soils at a displaced persons camp in Diyala Governorate, Iraq.
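
    As a reminder of the underlying tool, the sketch below implements Newton's divided-difference interpolation in one dimension (the paper extends the idea to two dimensions). The transect positions and lead concentrations are hypothetical.

        # Hedged sketch: 1-D Newton divided-difference interpolation, the building
        # block of the extended 2-D method described above.
        import numpy as np

        def divided_differences(x, y):
            """Return the divided-difference coefficients of Newton's form."""
            n = len(x)
            coef = np.array(y, dtype=float)
            for j in range(1, n):
                coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
            return coef

        def newton_eval(x, coef, x_new):
            """Evaluate the Newton interpolating polynomial at x_new (Horner form)."""
            result = coef[-1]
            for c, xi in zip(coef[-2::-1], x[-2::-1]):
                result = result * (x_new - xi) + c
            return result

        # Hypothetical: lead concentration (mg/kg) measured along a transect (m).
        x = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
        conc = np.array([35.0, 80.0, 140.0, 120.0, 60.0])

        coef = divided_differences(x, conc)
        print(newton_eval(x, coef, 25.0))    # interpolated concentration at 25 m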

  18. Alternative Methods of Accounting for Underreporting and Overreporting When Measuring Dietary Intake-Obesity Relations

    PubMed Central

    Mendez, Michelle A.; Popkin, Barry M.; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R.; Sánchez, María-José; González, Carlos A

    2011-01-01

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29–65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = −0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302

  19. Alternative methods of accounting for underreporting and overreporting when measuring dietary intake-obesity relations.

    PubMed

    Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A

    2011-02-15

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes.

  20. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.
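
    The sketch below contrasts K-means with a Gaussian mixture model for intensity-based segmentation on a synthetic one-dimensional "uptake" distribution; a kernel density estimate of the intensity histogram is included only to suggest starting component locations. All data and parameter choices are illustrative assumptions, not the rat-brain microPET pipeline.

        # Hedged sketch: K-means vs. Gaussian mixture segmentation of synthetic
        # intensities, with a KDE used to locate candidate modes.
        import numpy as np
        from scipy.stats import gaussian_kde
        from sklearn.cluster import KMeans
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)
        background = rng.normal(0.2, 0.05, 8000)
        target = rng.normal(0.8, 0.10, 2000)                  # hypothetical hot region
        intensities = np.concatenate([background, target]).reshape(-1, 1)

        # KDE of the intensity distribution; local maxima suggest component means.
        grid = np.linspace(0.0, 1.2, 200)
        dens = gaussian_kde(intensities.ravel())(grid)
        modes = grid[1:-1][(dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])]
        print("KDE modes:", np.round(modes, 2))

        labels_km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(intensities)
        labels_gm = GaussianMixture(n_components=2, random_state=0).fit_predict(intensities)

        def target_fraction(labels, data):
            """Fraction of voxels assigned to the higher-intensity cluster."""
            hot = max(np.unique(labels), key=lambda k: data[labels == k].mean())
            return np.mean(labels == hot)

        print("K-means target fraction:", target_fraction(labels_km, intensities.ravel()))
        print("GMM target fraction:    ", target_fraction(labels_gm, intensities.ravel()))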

  1. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration. PMID:28469385

  2. Comparison of spectral estimators for characterizing fractionated atrial electrograms

    PubMed Central

    2013-01-01

    Background: Complex fractionated atrial electrograms (CFAE) acquired during atrial fibrillation (AF) are commonly assessed using the discrete Fourier transform (DFT), but this can lead to inaccuracy. In this study, spectral estimators derived by averaging the autocorrelation function at lags were compared to the DFT. Method: Bipolar CFAE of at least 16 s duration were obtained from pulmonary vein ostia and left atrial free wall sites (9 paroxysmal and 10 persistent AF patients). Power spectra were computed using the DFT and three other methods: 1. a novel spectral estimator based on signal averaging (NSE), 2. the NSE with harmonic removal (NSH), and 3. the autocorrelation function average at lags (AFA). Three spectral parameters were calculated: 1. the largest fundamental spectral peak, known as the dominant frequency (DF), 2. the DF amplitude (DA), and 3. the mean spectral profile (MP), which quantifies noise floor level. For each spectral estimator and parameter, the significance of the difference between paroxysmal and persistent AF was determined. Results: For all estimators, mean DA and mean DF values were higher in persistent AF, while the mean MP value was higher in paroxysmal AF. The differences in means between paroxysmals and persistents were highly significant for 3/3 NSE and NSH measurements and for 2/3 DFT and AFA measurements (p<0.001). For all estimators, the standard deviations in DA and MP values were higher in persistent AF, while the standard deviation in DF values was higher in paroxysmal AF. Differences in standard deviations between paroxysmals and persistents were highly significant in 2/3 NSE and NSH measurements, in 1/3 AFA measurements, and in 0/3 DFT measurements. Conclusions: Measurements made from all four spectral estimators were in agreement as to whether the means and standard deviations in the three spectral parameters were greater in CFAEs acquired from paroxysmal or from persistent AF patients. Since the measurements were consistent, use of two or more of these estimators for power spectral analysis can assist in evaluating CFAE more objectively and accurately, which may lead to improved clinical outcome. Since the most significant differences overall were achieved using the NSE and NSH estimators, parameters measured from their spectra will likely be the most useful for detecting and discerning electrophysiologic differences in the AF substrate based upon frequency analysis of CFAE. PMID:23855345
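
    For orientation, the sketch below computes the three spectral parameters named above (DF, DA, MP) from a DFT power spectrum of a synthetic signal. The sampling rate, frequency band, and test signal are assumptions; real CFAE processing and the NSE/NSH/AFA estimators are more involved.

        # Hedged sketch: dominant frequency, its amplitude, and a mean spectral
        # profile from a windowed DFT of a synthetic electrogram-like signal.
        import numpy as np

        fs = 1000.0                                   # sampling rate (Hz), assumed
        t = np.arange(0, 16.0, 1.0 / fs)              # 16 s record, as in the study
        rng = np.random.default_rng(6)
        signal = np.sin(2 * np.pi * 6.5 * t) + 0.5 * rng.standard_normal(t.size)

        spec = np.abs(np.fft.rfft(signal * np.hanning(t.size))) ** 2
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        band = (freqs >= 3.0) & (freqs <= 12.0)       # assumed physiologic AF band
        i_peak = np.argmax(spec[band])
        dominant_frequency = freqs[band][i_peak]      # DF
        dominant_amplitude = spec[band][i_peak]       # DA
        mean_profile = spec[band].mean()              # MP (noise-floor proxy)
        print(dominant_frequency, dominant_amplitude / mean_profile)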

  3. Emergency Physician Estimation of Blood Loss

    PubMed Central

    Ashburn, Jeffery C.; Harrison, Tamara; Ham, James J.; Strote, Jared

    2012-01-01

    Introduction Emergency physicians (EP) frequently estimate blood loss, which can have implications for clinical care. The objectives of this study were to examine EP accuracy in estimating blood loss on different surfaces and compare attending physician and resident performance. Methods A sample of 56 emergency department (ED) physicians (30 attending physicians and 26 residents) were asked to estimate the amount of moulage blood present in 4 scenarios: 500 mL spilled onto an ED cot; 25 mL spilled onto a 10-pack of 4 × 4-inch gauze; 100 mL on a T-shirt; and 150 mL in a commode filled with water. Standard estimate error (the absolute value of (estimated volume − actual volume)/actual volume × 100) was calculated for each estimate. Results The mean standard error for all estimates was 116% with a range of 0% to 1233%. Only 8% of estimates were within 20% of the true value. Estimates were most accurate for the sheet scenario and worst for the commode scenario. Residents and attending physicians did not perform significantly differently (P > 0.05). Conclusion Emergency department physicians do not estimate blood loss well in a variety of scenarios. Such estimates could potentially be misleading if used in clinical decision making. Clinical experience does not appear to improve estimation ability in this limited study. PMID:22942938

  4. The "covariation method" for estimating the parameters of the standard Dynamic Energy Budget model II: Properties and preliminary patterns

    NASA Astrophysics Data System (ADS)

    Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

    The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ are reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.

  5. Prognostic score–based balance measures for propensity score methods in comparative effectiveness research

    PubMed Central

    Stuart, Elizabeth A.; Lee, Brian K.; Leacy, Finbarr P.

    2013-01-01

    Objective Examining covariate balance is the prescribed method for determining when propensity score methods are successful at reducing bias. This study assessed the performance of various balance measures, including a proposed balance measure based on the prognostic score (also known as the disease-risk score), to determine which balance measures best correlate with bias in the treatment effect estimate. Study Design and Setting The correlations of multiple common balance measures with bias in the treatment effect estimate produced by weighting by the odds, subclassification on the propensity score, and full matching on the propensity score were calculated. Simulated data were used, based on realistic data settings. Settings included both continuous and binary covariates and continuous covariates only. Results The standardized mean difference in prognostic scores, the mean standardized mean difference, and the mean t-statistic all had high correlations with bias in the effect estimate. Overall, prognostic scores displayed the highest correlations of all the balance measures considered. Prognostic score measure performance was generally not affected by model misspecification and performed well under a variety of scenarios. Conclusion Researchers should consider using prognostic score–based balance measures for assessing the performance of propensity score methods for reducing bias in non-experimental studies. PMID:23849158
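
    The sketch below computes the standardized mean difference, the type of balance measure discussed above, before and after a simple weighting-by-the-odds adjustment. The simulated covariate, toy propensity score, and cutoffs are assumptions for illustration, not the study's simulation settings.

        # Hedged sketch: standardized mean difference as a balance diagnostic,
        # before and after weighting by the odds, on simulated data.
        import numpy as np

        def standardized_mean_diff(x, treated, weights=None):
            """|weighted mean difference| divided by the pooled unweighted SD."""
            if weights is None:
                weights = np.ones_like(x, dtype=float)
            m1 = np.average(x[treated == 1], weights=weights[treated == 1])
            m0 = np.average(x[treated == 0], weights=weights[treated == 0])
            pooled_sd = np.sqrt((x[treated == 1].var(ddof=1) + x[treated == 0].var(ddof=1)) / 2)
            return abs(m1 - m0) / pooled_sd

        rng = np.random.default_rng(7)
        treated = rng.integers(0, 2, 500)
        covariate = rng.normal(0.3 * treated, 1.0)          # imbalanced by construction
        ps = 1 / (1 + np.exp(-(0.3 * covariate)))           # toy propensity score
        w = np.where(treated == 1, 1.0, ps / (1 - ps))      # weighting by the odds

        print("SMD before weighting:", standardized_mean_diff(covariate, treated))
        print("SMD after weighting: ", standardized_mean_diff(covariate, treated, w))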

  6. Population viability analysis with species occurrence data from museum collections.

    PubMed

    Skarpaas, Olav; Stabbetorp, Odd E

    2011-06-01

    The most comprehensive data on many species come from scientific collections. Thus, we developed a method of population viability analysis (PVA) in which this type of occurrence data can be used. In contrast to classical PVA, our approach accounts for the inherent observation error in occurrence data and allows the estimation of the population parameters needed for viability analysis. We tested the sensitivity of the approach to spatial resolution of the data, length of the time series, sampling effort, and detection probability with simulated data and conducted PVAs for common, rare, and threatened species. We compared the results of these PVAs with results of standard method PVAs in which observation error is ignored. Our method provided realistic estimates of population growth terms and quasi-extinction risk in cases in which the standard method without observation error could not. For low values of any of the sampling variables we tested, precision decreased, and in some cases biased estimates resulted. The results of our PVAs with the example species were consistent with information in the literature on these species. Our approach may facilitate PVA for a wide range of species of conservation concern for which demographic data are lacking but occurrence data are readily available. ©2011 Society for Conservation Biology.

  7. Determining the slag fraction, water/binder ratio and degree of hydration in hardened cement pastes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yio, M.H.N., E-mail: marcus.yio11@imperial.ac.uk; Phelan, J.C.; Wong, H.S.

    2014-02-15

    A method for determining the original mix composition of hardened slag-blended cement-based materials based on analysis of backscattered electron images combined with loss on ignition measurements is presented. The method does not require comparison to reference standards or prior knowledge of the composition of the binders used. Therefore, it is well-suited for application to real structures. The method is also able to calculate the degrees of reaction of slag and cement. Results obtained from an experimental study involving sixty samples with a wide range of water/binder (w/b) ratios (0.30 to 0.50), slag/binder ratios (0 to 0.6) and curing ages (3 days to 1 year) show that the method is very promising. The mean absolute errors for the estimated slag, water and cement contents (kg/m³), w/b and s/b ratios were 9.1%, 1.5%, 2.5%, 4.7% and 8.7%, respectively. 91% of the estimated w/b ratios were within 0.036 of the actual values. -- Highlights: •A new method for estimating w/b ratio and slag content in cement pastes is proposed. •The method is also able to calculate the degrees of reaction of slag and cement. •Reference standards or prior knowledge of the binder composition are not required. •The method was tested on samples with varying w/b ratios and slag content.

  8. Evaluation of Alternative Difference-in-Differences Methods

    ERIC Educational Resources Information Center

    Yu, Bing

    2013-01-01

    Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…

  9. A method for estimating operability and location of the timber resource.

    Treesearch

    John S. Jr. Spencer; Mark H. Hansen; Pamela J. Jakes

    1986-01-01

    Operability is the relative ease or difficulty of managing or harvesting timber because of physical conditions in the stand or on the site. Using data collected during standard statewide forest inventories, we developed a method for classifying timber by operability class based on seven operability components.

  10. Calibration-free assays on standard real-time PCR devices

    PubMed Central

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-01-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. Although a gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not marked by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within the standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We list practical recipes for how to design assays for required parameters and how to analyze signals to estimate concentration. PMID:28327545

  11. Calibration-free assays on standard real-time PCR devices

    NASA Astrophysics Data System (ADS)

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-03-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. Although a gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not marked by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within the standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We list practical recipes for how to design assays for required parameters and how to analyze signals to estimate concentration.

  12. Adaptive Quantification and Longitudinal Analysis of Pulmonary Emphysema with a Hidden Markov Measure Field Model

    PubMed Central

    Häme, Yrjö; Angelini, Elsa D.; Hoffman, Eric A.; Barr, R. Graham; Laine, Andrew F.

    2014-01-01

    The extent of pulmonary emphysema is commonly estimated from CT images by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions in the lung and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the present model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was used to quantify emphysema on a cohort of 87 subjects, with repeated CT scans acquired over a time period of 8 years using different imaging protocols. The scans were acquired approximately annually, and the data set included a total of 365 scans. The results show that the emphysema estimates produced by the proposed method have very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. In addition, the generated emphysema delineations promise great advantages for regional analysis of emphysema extent and progression, possibly advancing disease subtyping. PMID:24759984
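
    The sketch below shows the baseline threshold approach mentioned in the first sentence (proportional area of voxels below a fixed attenuation threshold), which the proposed model improves on. The Hounsfield-unit distributions and the -950 HU cutoff are illustrative assumptions, not data from the cohort.

        # Hedged sketch: percent low-attenuation area (%LAA) below a fixed HU
        # threshold, the standard emphysema quantification baseline.
        import numpy as np

        rng = np.random.default_rng(8)
        # Hypothetical lung voxel attenuations (HU): normal parenchyma plus an
        # emphysematous component.
        voxels = np.concatenate([rng.normal(-860, 40, 90_000),
                                 rng.normal(-975, 15, 10_000)])

        threshold_hu = -950
        percent_low_attenuation = 100.0 * np.mean(voxels < threshold_hu)
        print(f"%LAA(-950) = {percent_low_attenuation:.1f}%")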

  13. Generalizing Evidence From Randomized Clinical Trials to Target Populations

    PubMed Central

    Cole, Stephen R.; Stuart, Elizabeth A.

    2010-01-01

    Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results. PMID:20547574

  14. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  15. [Application of robustness test for assessment of the measurement uncertainty at the end of development phase of a chromatographic method for quantification of water-soluble vitamins].

    PubMed

    Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T

    2016-05-01

    We propose in this work an efficient way to evaluate the measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The uncertainty is estimated through a robustness test applying a Plackett-Burman design, investigating six parameters that influence the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into standard uncertainty values at each concentration level. The relative uncertainty values obtained do not exceed the acceptance limit of 5%, showing that the procedure development was properly carried out. In addition, a statistical comparison of the standard uncertainties after the development stage with those of the validation step indicates that the estimated uncertainties are equivalent. The results clearly show the performance and capacity of the chromatographic method to simultaneously assay the five vitamins and its suitability for routine application. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
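
    The sketch below estimates factor effects from an 8-run Plackett-Burman robustness design and combines them into a standard uncertainty, here treating each half-effect as a rectangular contribution (one common convention; the paper's translation rule may differ). The simulated responses and effect sizes are invented for illustration.

        # Hedged sketch: Plackett-Burman effects turned into a combined standard
        # uncertainty on simulated assay responses.
        import numpy as np

        # 8-run Plackett-Burman design (cyclic generator plus a row of -1);
        # 6 columns are real factors, the seventh can serve as a dummy.
        first_row = np.array([1, 1, 1, -1, 1, -1, -1])
        design = np.array([np.roll(first_row, i) for i in range(7)] + [[-1] * 7])

        rng = np.random.default_rng(9)
        true_effects = np.array([0.4, 0.1, 0.0, 0.2, 0.0, 0.05, 0.0])   # hypothetical
        responses = 100.0 + design @ (true_effects / 2) + rng.normal(0, 0.05, 8)

        effects = design.T @ responses / 4                   # contrast / (N/2)
        u_contrib = np.abs(effects[:6]) / (2 * np.sqrt(3))   # rectangular assumption
        u_combined = np.sqrt(np.sum(u_contrib ** 2))
        rel_u = 100 * u_combined / responses.mean()
        print(f"combined standard uncertainty = {u_combined:.3f} ({rel_u:.2f}% of mean)")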

  16. Evaluation of scaling invariance embedded in short time series.

    PubMed

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale-invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.

  17. Extracting concrete thermal characteristics from temperature time history of RC column exposed to standard fire.

    PubMed

    Kim, Jung J; Youm, Kwang-Soo; Reda Taha, Mahmoud M

    2014-01-01

    A numerical method to identify thermal conductivity from time history of one-dimensional temperature variations in thermal unsteady-state is proposed. The numerical method considers the change of specific heat and thermal conductivity with respect to temperature. Fire test of reinforced concrete (RC) columns was conducted using a standard fire to obtain time history of temperature variations in the column section. A thermal equilibrium model in unsteady-state condition was developed. The thermal conductivity of concrete was then determined by optimizing the numerical solution of the model to meet the observed time history of temperature variations. The determined thermal conductivity with respect to temperature was then verified against standard thermal conductivity measurements of concrete bricks. It is concluded that the proposed method can be used to conservatively estimate thermal conductivity of concrete for design purpose. Finally, the thermal radiation properties of concrete for the RC column were estimated from the thermal equilibrium at the surface of the column. The radiant heat transfer ratio of concrete representing absorptivity to emissivity ratio of concrete during fire was evaluated and is suggested as a concrete criterion that can be used in fire safety assessment.

  18. Extracting Concrete Thermal Characteristics from Temperature Time History of RC Column Exposed to Standard Fire

    PubMed Central

    2014-01-01

    A numerical method to identify thermal conductivity from time history of one-dimensional temperature variations in thermal unsteady-state is proposed. The numerical method considers the change of specific heat and thermal conductivity with respect to temperature. Fire test of reinforced concrete (RC) columns was conducted using a standard fire to obtain time history of temperature variations in the column section. A thermal equilibrium model in unsteady-state condition was developed. The thermal conductivity of concrete was then determined by optimizing the numerical solution of the model to meet the observed time history of temperature variations. The determined thermal conductivity with respect to temperature was then verified against standard thermal conductivity measurements of concrete bricks. It is concluded that the proposed method can be used to conservatively estimate thermal conductivity of concrete for design purpose. Finally, the thermal radiation properties of concrete for the RC column were estimated from the thermal equilibrium at the surface of the column. The radiant heat transfer ratio of concrete representing absorptivity to emissivity ratio of concrete during fire was evaluated and is suggested as a concrete criterion that can be used in fire safety assessment. PMID:25180197

  19. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale-invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records. PMID:25549356

  20. Method to Estimate the Dissolved Air Content in Hydraulic Fluid

    NASA Technical Reports Server (NTRS)

    Hauser, Daniel M.

    2011-01-01

    In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen that is in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated temperatures.
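
    The sketch below illustrates the Henry's-law calculation described: dissolved oxygen is computed from the measured oxygen partial pressure, and dissolved nitrogen is inferred from an assumed solubility ratio. The solubility coefficients and the measured partial pressure are placeholder values, not data from the instrument.

        # Hedged sketch: dissolved air content from an oxygen partial pressure via
        # Henry's law, with assumed solubility coefficients for a hydraulic fluid.
        P_ATM = 101.325                  # atmospheric pressure, kPa
        X_O2, X_N2 = 0.21, 0.79          # approximate mole fractions of O2 and N2 in air
        H_O2 = 2.8e-3                    # assumed O2 solubility, mL gas (STP)/(mL fluid * kPa)
        H_N2 = 1.5e-3                    # assumed N2 solubility, same units

        def dissolved_air_fraction(p_o2_measured_kpa):
            """Volume of dissolved air (O2 + N2) per unit volume of fluid."""
            v_o2 = H_O2 * p_o2_measured_kpa
            # Nitrogen assumed to follow the air-saturation ratio of the two gases.
            v_n2 = v_o2 * (H_N2 * X_N2) / (H_O2 * X_O2)
            return v_o2 + v_n2

        # Example: fluid equilibrated at half of the atmospheric O2 partial pressure.
        p_o2 = 0.5 * X_O2 * P_ATM
        print(f"dissolved air ~ {100 * dissolved_air_fraction(p_o2):.2f} vol%")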

  1. Sampling theory and automated simulations for vertical sections, applied to human brain.

    PubMed

    Cruz-Orive, L M; Gelšvartas, J; Roberts, N

    2014-02-01

    In recent years, there have been substantial developments in both magnetic resonance imaging techniques and automatic image analysis software. The purpose of this paper is to develop stereological image sampling theory (i.e. unbiased sampling rules) that can be used by image analysts for estimating geometric quantities such as surface area and volume, and to illustrate its implementation. The methods will ideally be applied automatically on segmented, properly sampled 2D images - although convenient manual application is always an option - and they are of wide applicability in many disciplines. In particular, the vertical sections design to estimate surface area is described in detail and applied to estimate the area of the pial surface and of the boundary between cortex and underlying white matter (i.e. subcortical surface area). For completeness, cortical volume and mean cortical thickness are also estimated. The aforementioned surfaces were triangulated in 3D with the aid of FreeSurfer software, which provided accurate surface area measures that served as gold standards. Furthermore, software was developed to produce digitized trace curves of the triangulated target surfaces automatically from virtual sections. From such traces, a new method (called the 'lambda method') is presented to estimate surface area automatically. In addition, with the new software, intersections could be counted automatically between the relevant surface traces and a cycloid test grid for the classical design. This capability, together with the aforementioned gold standard, enabled us to thoroughly check the performance and the variability of the different estimators by Monte Carlo simulations for studying the human brain. In particular, new methods are offered to split the total error variance into the orientations, sectioning and cycloid components. The latter prediction was hitherto unavailable; one is proposed here and checked by way of simulations on a given set of digitized vertical sections with automatically superimposed cycloid grids of three different sizes. Concrete and detailed recommendations are given to implement the methods. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
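    The classical vertical-sections estimator referenced above rests on the stereological identity S_V = 2 * I_L: surface density equals twice the number of intersections between the surface trace and a cycloid test system per unit length of test line. The sketch below shows only that bookkeeping; it is not the authors' lambda method or their software, and the intersection counts, test-line lengths, and reference volume are placeholders.

    def surface_area_vertical_sections(intersections_per_section,
                                       cycloid_length_per_section,
                                       reference_volume):
        """Estimate total surface area from vertical sections with cycloid grids.

        Uses the classical stereological identity S_V = 2 * I_L, where I_L is
        the number of surface/cycloid intersections per unit length of test
        line falling within the reference space.
        """
        total_intersections = sum(intersections_per_section)
        total_test_length = sum(cycloid_length_per_section)
        surface_density = 2.0 * total_intersections / total_test_length  # S_V, area per volume
        return surface_density * reference_volume

    # Placeholder example: 5 vertical sections with made-up counts and lengths.
    I = [42, 38, 51, 45, 40]                   # intersection counts per section
    L = [210.0, 205.0, 220.0, 215.0, 208.0]    # cycloid test-line length per section (mm)
    V_ref = 1.2e3                              # reference volume, e.g. from Cavalieri point counting (mm^3)
    print(f"estimated surface area: {surface_area_vertical_sections(I, L, V_ref):.1f} mm^2")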

  2. A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.

    2011-11-02

    Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
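    One concrete way to implement the Bayesian screen described above (our reading, not necessarily the authors' exact formulation) is a gamma-Poisson model: historical counts from clean detectors give a gamma posterior for the background rate, whose posterior predictive for a new count is negative binomial, and a detector is flagged when its observed count sits far in the upper tail. The prior parameters and counts below are placeholders.

    from scipy import stats

    def background_predictive(historical_counts, prior_shape=0.5, prior_rate=0.0):
        """Gamma-Poisson model for detector background counts.

        Returns the negative-binomial posterior predictive for a single new
        counting interval, given historical counts from uncontaminated detectors.
        """
        shape = prior_shape + sum(historical_counts)   # gamma posterior shape
        rate = prior_rate + len(historical_counts)     # gamma posterior rate
        # Posterior predictive: NegBinom(n = shape, p = rate / (rate + 1)).
        return stats.nbinom(shape, rate / (rate + 1.0))

    # Placeholder historical background counts (per counting period) from clean detectors.
    history = [0, 1, 0, 2, 1, 0, 0, 1, 3, 0]
    pred = background_predictive(history)

    observed = 6   # count from the detector being screened (placeholder)
    p_tail = pred.sf(observed - 1)   # P(count >= observed) under the clean-detector model
    print(f"P(count >= {observed} | clean detector) = {p_tail:.4f}")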

  3. Risk of Anal Cancer in Women With a Human Papillomavirus-Related Gynecological Neoplasm: Puerto Rico 1987-2013.

    PubMed

    Acevedo-Fontánez, Adrianna I; Suárez, Erick; Torres Cintrón, Carlos R; Ortiz, Ana P

    2018-04-11

    The aim of the study was to estimate the magnitude of the association between HPV-related gynecological neoplasms and secondary anal cancer among women in Puerto Rico (PR). We identified 9,489 women who had been diagnosed with a primary cervical, vaginal, or vulvar tumor during 1987-2013. To describe the trends of invasive cervical, vulvar, vaginal, and anal cancer, the age-adjusted incidence rates were estimated using the direct method (2000 US as Standard Population). Standardized incidence ratios (observed/expected) were computed using the indirect method; expected cases were calculated using 2 methods based on age-specific rates of anal cancer in PR. A Poisson regression model was used to estimate the ratio of standardized incidence ratios of anal cancer, quantifying the magnitude of the association between HPV-related gynecological neoplasms and secondary anal cancer. A significant increase in the incidence trend for anal cancer was observed from 1987 to 2013 (annual percent change = 1.1, p < .05), whereas from 2004 to 2013, an increase was observed for cervical cancer incidence (annual percent change = 3.3, p < .05). The risk of secondary anal cancer among women with HPV-related gynecological cancers was approximately 3 times this risk among women with non-HPV-related gynecological cancers (relative risk = 3.27, 95% CI = 1.37 to 7.79). Anal cancer is increasing among women in PR. Women with gynecological HPV-related tumors are at higher risk of secondary anal cancer as compared with women from the general population and with those with non-HPV-related gynecological cancers. Appropriate anal cancer screening guidelines for high-risk populations are needed, including women with HPV-related gynecological malignancies and potentially other cancer survivors.
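    For reference, the indirect-method standardized incidence ratio used above is simply observed over expected cases, with the expected count built from age-specific reference rates applied to the cohort's person-years, and an exact Poisson interval for the confidence limits. The sketch below is a generic illustration with placeholder strata, rates, and counts, not the study's data.

    from scipy import stats

    def standardized_incidence_ratio(observed, person_years, reference_rates, alpha=0.05):
        """SIR = observed / expected, with an exact (Garwood) Poisson CI.

        person_years and reference_rates are aligned lists over age strata;
        rates are events per person-year.
        """
        expected = sum(py * rate for py, rate in zip(person_years, reference_rates))
        sir = observed / expected
        lo = stats.chi2.ppf(alpha / 2, 2 * observed) / 2 / expected if observed > 0 else 0.0
        hi = stats.chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
        return sir, (lo, hi)

    # Placeholder example with 3 age strata.
    py = [12000.0, 9000.0, 4000.0]    # cohort person-years by age group
    ref = [1e-5, 4e-5, 9e-5]          # reference anal-cancer rates per person-year
    sir, ci = standardized_incidence_ratio(observed=4, person_years=py, reference_rates=ref)
    print(f"SIR = {sir:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")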

  4. US County-Level Trends in Mortality Rates for Major Causes of Death, 1980-2014.

    PubMed

    Dwyer-Lindgren, Laura; Bertozzi-Villa, Amelia; Stubbs, Rebecca W; Morozoff, Chloe; Kutz, Michael J; Huynh, Chantal; Barber, Ryan M; Shackelford, Katya A; Mackenbach, Johan P; van Lenthe, Frank J; Flaxman, Abraham D; Naghavi, Mohsen; Mokdad, Ali H; Murray, Christopher J L

    2016-12-13

    County-level patterns in mortality rates by cause have not been systematically described but are potentially useful for public health officials, clinicians, and researchers seeking to improve health and reduce geographic disparities. The objective was to demonstrate the use of a novel method for county-level estimation and to estimate annual mortality rates by US county for 21 mutually exclusive causes of death from 1980 through 2014. Redistribution methods for garbage codes (implausible or insufficiently specific cause of death codes) and small area estimation methods (statistical methods for estimating rates in small subpopulations) were applied to death registration data from the National Vital Statistics System to estimate annual county-level mortality rates for 21 causes of death. These estimates were raked (scaled along multiple dimensions) to ensure consistency between causes and with existing national-level estimates. Geographic patterns in the age-standardized mortality rates in 2014 and in the change in the age-standardized mortality rates between 1980 and 2014 for the 10 highest-burden causes were determined. The exposure was county of residence, and the main outcome was cause-specific age-standardized mortality rates. A total of 80 412 524 deaths were recorded from January 1, 1980, through December 31, 2014, in the United States. Of these, 19.4 million deaths were assigned garbage codes. Mortality rates were analyzed for 3110 counties or groups of counties. Large between-county disparities were evident for every cause, with the gap in age-standardized mortality rates between counties in the 90th and 10th percentiles varying from 14.0 deaths per 100 000 population (cirrhosis and chronic liver diseases) to 147.0 deaths per 100 000 population (cardiovascular diseases). Geographic regions with elevated mortality rates differed among causes: for example, cardiovascular disease mortality tended to be highest along the southern half of the Mississippi River, while mortality rates from self-harm and interpersonal violence were elevated in southwestern counties, and mortality rates from chronic respiratory disease were highest in counties in eastern Kentucky and western West Virginia. Counties also varied widely in terms of the change in cause-specific mortality rates between 1980 and 2014. For most causes (eg, neoplasms, neurological disorders, and self-harm and interpersonal violence), both increases and decreases in county-level mortality rates were observed. In this analysis of US cause-specific county-level mortality rates from 1980 through 2014, there were large between-county differences for every cause of death, although geographic patterns varied substantially by cause of death. The approach to county-level analyses with small area models used in this study has the potential to provide novel insights into US disease-specific mortality time trends and their differences across geographic regions.

  5. Validation of a Method To Screen for Pulmonary Hypertension in Advanced Idiopathic Pulmonary Fibrosis*

    PubMed Central

    Zisman, David A.; Karlamangla, Arun S.; Kawut, Steven M.; Shlobin, Oksana A.; Saggar, Rajeev; Ross, David J.; Schwarz, Marvin I.; Belperio, John A.; Ardehali, Abbas; Lynch, Joseph P.; Nathan, Steven D.

    2008-01-01

    Background: We have developed a method to screen for pulmonary hypertension (PH) in idiopathic pulmonary fibrosis (IPF) patients, based on a formula to predict mean pulmonary artery pressure (MPAP) from standard lung function measurements. The objective of this study was to validate this method in a separate group of IPF patients. Methods: Cross-sectional study of 60 IPF patients from two institutions. The accuracy of the MPAP estimation was assessed by examining the correlation between the predicted and measured MPAPs and the magnitude of the estimation error. The discriminatory ability of the method for PH was assessed using the area under the receiver operating characteristic curve (AUC). Results: There was strong correlation in the expected direction between the predicted and measured MPAPs (r = 0.72; p < 0.0001). The estimated MPAP was within 5 mm Hg of the measured MPAP 72% of the time. The AUC for predicting PH was 0.85, and did not differ by institution. A formula-predicted MPAP > 21 mm Hg was associated with a sensitivity, specificity, positive predictive value, and negative predictive value of 95%, 58%, 51%, and 96%, respectively, for PH defined as MPAP from right-heart catheterization > 25 mm Hg. Conclusions: A prediction formula for MPAP using standard lung function measurements can be used to screen for PH in IPF patients. PMID:18198245
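    The operating characteristics quoted above come straight from a 2x2 table of the formula-based screen against right-heart catheterization. The sketch below shows that arithmetic with placeholder counts chosen only to be of the same order as the reported values; it is not the study's actual table.

    def screening_performance(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, and NPV from a 2x2 screening table."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        return sensitivity, specificity, ppv, npv

    # Placeholder counts: predicted MPAP > 21 mm Hg vs. catheterization-defined PH.
    sens, spec, ppv, npv = screening_performance(tp=19, fp=18, fn=1, tn=22)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")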

  6. The best alternative for estimating reference crop evapotranspiration in different sub-regions of mainland China.

    PubMed

    Peng, Lingling; Li, Yi; Feng, Hao

    2017-07-14

    Reference crop evapotranspiration (ETo) is a critically important parameter for climatological, hydrological and agricultural management. The FAO56 Penman-Monteith (PM) equation has been recommended as the standardized ETo (ETo,s) equation, but it has high requirements for climatic data. There is a practical need to find the best alternative method for estimating ETo in regions where full climatic data are lacking. A comprehensive comparison was made of the spatiotemporal variations, relative errors, standard deviations and Nash-Sutcliffe efficiency coefficients of monthly and annual ETo,s and ETo,i (i = 1, 2, …, 10) values estimated by 10 selected methods (including Irmak et al., Makkink, Priestley-Taylor, Hargreaves-Samani, Droogers-Allen, Berti et al., Doorenbos-Pruitt, Wright and Valiantzas), using data from 552 sites in mainland China over 1961-2013. The method proposed by Berti et al. (2014) was selected as the best alternative to FAO56-PM because it is computationally simple, uses only temperature data, describes the spatiotemporal characteristics of ETo,s with generally good accuracy in the different sub-regions and in mainland China as a whole, and correlates linearly with the FAO56-PM method very well. The parameters of the linear correlations between the ETo values of the two methods were calibrated for each site, with the smallest coefficient of determination being 0.87.
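    Several of the alternatives compared above are temperature-only formulas. As a concrete illustration (the Hargreaves-Samani form, not the calibrated Berti et al. variant the study finally recommends), the sketch below computes ETo = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), with extraterrestrial radiation Ra expressed in equivalent evaporation (mm/day); the example inputs are placeholders.

    import math

    def hargreaves_samani_et0(t_min_c, t_max_c, ra_mm_per_day):
        """Reference evapotranspiration (mm/day) from the Hargreaves-Samani formula.

        ra_mm_per_day is extraterrestrial radiation already converted to mm/day
        of equivalent evaporation (divide MJ m-2 day-1 by 2.45 to convert).
        """
        t_mean = 0.5 * (t_min_c + t_max_c)
        return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(t_max_c - t_min_c)

    # Placeholder mid-latitude summer day.
    print(f"ETo = {hargreaves_samani_et0(t_min_c=16.0, t_max_c=30.0, ra_mm_per_day=16.3):.2f} mm/day")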

  7. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
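    The sandwich-type covariance referred to above has the generic form A^-1 B A^-T, with A the observed information (negative Hessian of the log-likelihood) and B the empirical cross-product of per-respondent score vectors, both evaluated at the estimates. The sketch below is a model-agnostic illustration of that assembly, not code for any particular CDM package; the score and information inputs are simulated placeholders.

    import numpy as np

    def sandwich_covariance(score_contributions, observed_information):
        """Sandwich (robust) covariance: A^-1 B A^-T.

        score_contributions: (n, p) array of per-observation score vectors at the MLE.
        observed_information: (p, p) negative Hessian of the log-likelihood.
        """
        A_inv = np.linalg.inv(observed_information)
        B = score_contributions.T @ score_contributions   # empirical cross-product matrix
        return A_inv @ B @ A_inv.T

    # Placeholder example with simulated inputs.
    rng = np.random.default_rng(1)
    scores = rng.standard_normal((500, 3)) * 0.2          # per-respondent scores (placeholder)
    info = np.cov(scores, rowvar=False) * 500             # stand-in for observed information
    cov = sandwich_covariance(scores, info)
    print("robust standard errors:", np.sqrt(np.diag(cov)))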

  8. An aerial survey method to estimate sea otter abundance

    USGS Publications Warehouse

    Bodkin, James L.; Udevitz, Mark S.; Garner, Gerald W.; Amstrup, Steven C.; Laake, Jeffrey L.; Manly, Bryan F.J.; McDonald, Lyman L.; Robertson, Donna G.

    1999-01-01

    Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
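    The core arithmetic of a detection-corrected strip-transect estimate (a generic illustration of this class of design, not the exact Bodkin et al. estimator) is: estimate a detection probability from the intensively searched subunits, divide the strip counts by it, and expand by the fraction of the study area covered by the strips. The counts, areas, and the assumption that intensive searches are essentially complete are all placeholders.

    def detection_probability(standard_count, intensive_count):
        """Detection probability from comparing a standard pass with an intensive
        search of the same subunits (intensive search assumed near-complete)."""
        return standard_count / intensive_count

    def corrected_abundance(strip_count, detection_prob, strip_area, study_area):
        """Detection-corrected, area-expanded abundance estimate from strip transects."""
        corrected_in_strips = strip_count / detection_prob      # adjust for missed animals
        return corrected_in_strips * (study_area / strip_area)  # expand to the whole study area

    # Placeholder numbers.
    p_hat = detection_probability(standard_count=130, intensive_count=200)
    n_hat = corrected_abundance(strip_count=1480, detection_prob=p_hat,
                                strip_area=210.0, study_area=2400.0)
    print(f"detection probability = {p_hat:.2f}, abundance estimate = {n_hat:.0f}")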

  9. Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds

    PubMed Central

    Deeks, J.J.; Martin, E.C.; Riley, R.D.

    2017-01-01

    Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results: Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
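    Rubin's rules, used above to combine results across imputations, are short enough to state directly: the pooled estimate is the mean of the per-imputation estimates, and its variance is the mean within-imputation variance plus (1 + 1/m) times the between-imputation variance. The sketch below is a generic implementation; the per-imputation estimates and variances are placeholders.

    import numpy as np

    def rubins_rules(estimates, variances):
        """Combine estimates from m imputed analyses using Rubin's rules.

        Returns the pooled estimate and its total variance
        T = W + (1 + 1/m) * B, where W is the mean within-imputation variance
        and B the between-imputation variance of the estimates.
        """
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(variances, dtype=float)
        m = len(estimates)
        q_bar = estimates.mean()
        within = variances.mean()
        between = estimates.var(ddof=1)
        total_var = within + (1.0 + 1.0 / m) * between
        return q_bar, total_var

    # Placeholder per-imputation estimates and variances at one threshold.
    est = [0.82, 0.90, 0.78, 0.88, 0.85]
    var = [0.010, 0.012, 0.011, 0.009, 0.010]
    pooled, tvar = rubins_rules(est, var)
    print(f"pooled estimate = {pooled:.3f}, standard error = {tvar ** 0.5:.3f}")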

  10. Modeling Freedom From Progression for Standard-Risk Medulloblastoma: A Mathematical Tumor Control Model With Multiple Modes of Failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodin, N. Patrik, E-mail: nils.patrik.brodin@rh.dk; Niels Bohr Institute, University of Copenhagen, Copenhagen; Vogelius, Ivan R.

    2013-10-01

    Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%), respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provide a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk-stratification, for example based on molecular biomarkers, when the necessary data become available.

  11. Robust Huber-based iterated divided difference filtering with application to cooperative localization of autonomous underwater vehicles.

    PubMed

    Gao, Wei; Liu, Yalong; Xu, Bo

    2014-12-19

    A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness as well as validity of the algorithm is demonstrated through experiment results.
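    The robustness claimed above comes from Huber's M-estimation idea: residuals beyond a threshold are penalized linearly rather than quadratically, which amounts to down-weighting them in the filter update. The sketch below shows only the standard Huber weight function applied to normalized residuals, one generic ingredient of such a filter rather than the full HIDDF recursion; the threshold and residual values are placeholders.

    import numpy as np

    def huber_weights(residuals, threshold=1.345):
        """Huber M-estimation weights for normalized residuals.

        Residuals within the threshold get weight 1 (quadratic cost); larger
        residuals get weight threshold/|r| (linear cost), limiting outlier influence.
        """
        r = np.abs(np.asarray(residuals, dtype=float))
        return np.where(r <= threshold, 1.0, threshold / r)

    # Placeholder normalized acoustic-range residuals, one of them an outlier.
    res = np.array([0.3, -0.8, 1.1, 6.5, -0.2])
    print(huber_weights(res))   # the 6.5-sigma residual is strongly down-weighted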

  12. A systematic review of the incidence and prevalence of comorbidity in multiple sclerosis: Overview

    PubMed Central

    Cohen, Jeffrey; Stuve, Olaf; Trojano, Maria; Sørensen, Per Soelberg; Reingold, Stephen; Cutter, Gary; Reider, Nadia

    2015-01-01

    Background: Comorbidity is an area of increasing interest in multiple sclerosis (MS). Objective: The objective of this review is to estimate the incidence and prevalence of comorbidity in people with MS and assess the quality of included studies. Methods: We searched the PubMed, SCOPUS, EMBASE and Web of Knowledge databases, conference proceedings, and reference lists of retrieved articles. Two reviewers independently screened abstracts. One reviewer abstracted data using a standardized form and the abstraction was verified by a second reviewer. We assessed study quality using a standardized approach. We quantitatively assessed population-based studies using the I2 statistic, and conducted random-effects meta-analyses. Results: We included 249 articles. Study designs were variable with respect to source populations, case definitions, methods of ascertainment and approaches to reporting findings. Prevalence was reported more frequently than incidence; estimates for prevalence and incidence varied substantially for all conditions. Heterogeneity was high. Conclusion: This review highlights substantial gaps in the epidemiological knowledge of comorbidity in MS worldwide. Little is known about comorbidity in Central or South America, Asia or Africa. Findings in North America and Europe are inconsistent. Future studies should report age-, sex- and ethnicity-specific estimates of incidence and prevalence, and standardize findings to a common population. PMID:25623244
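    The I2 statistic and random-effects pooling mentioned in the Methods have simple closed forms: Cochran's Q is the weighted sum of squared deviations from the fixed-effect pooled estimate, I2 = max(0, (Q - df)/Q), and the DerSimonian-Laird between-study variance feeds the random-effects weights. The sketch below is a generic illustration with placeholder study-level prevalence estimates and variances, not data from the review.

    import numpy as np

    def random_effects_meta(estimates, variances):
        """DerSimonian-Laird random-effects pooling with Cochran's Q and I^2."""
        y = np.asarray(estimates, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1.0 / v                                   # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
        df = len(y) - 1
        i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                 # between-study variance
        w_star = 1.0 / (v + tau2)                     # random-effects weights
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, i2, tau2

    # Placeholder study-level prevalence estimates (proportions) and variances.
    est = [0.12, 0.18, 0.09, 0.21, 0.15]
    var = [0.0004, 0.0009, 0.0003, 0.0012, 0.0006]
    pooled, se, i2, tau2 = random_effects_meta(est, var)
    print(f"pooled = {pooled:.3f} (SE {se:.3f}), I^2 = {100 * i2:.0f}%, tau^2 = {tau2:.5f}")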

  13. Estimating snag and large tree densities and distributions on a landscape for wildlife management.

    Treesearch

    Lisa J. Bate; Edward O. Garton; Michael J. Wisdom

    1999-01-01

    We provide efficient and accurate methods for sampling snags and large trees on a landscape to conduct compliance and effectiveness monitoring for wildlife in relation to the habitat standards and guidelines on National Forests. Included online are the necessary spreadsheets, macros, and instructions to conduct all surveys and analyses pertaining to estimation of snag...

  14. Dynamic RSA: Examining parasympathetic regulatory dynamics via vector-autoregressive modeling of time-varying RSA and heart period.

    PubMed

    Fisher, Aaron J; Reeves, Jonathan W; Chi, Cyrus

    2016-07-01

    Expanding on recently published methods, the current study presents an approach to estimating the dynamic, regulatory effect of the parasympathetic nervous system on heart period on a moment-to-moment basis. We estimated second-to-second variation in respiratory sinus arrhythmia (RSA) in order to estimate the contemporaneous and time-lagged relationships among RSA, interbeat interval (IBI), and respiration rate via vector autoregression. Moreover, we modeled these relationships at lags of 1 s to 10 s, in order to evaluate the optimal latency for estimating dynamic RSA effects. The IBI (t) on RSA (t-n) regression parameter was extracted from individual models as an operationalization of the regulatory effect of RSA on IBI, referred to as dynamic RSA (dRSA). Dynamic RSA positively correlated with standard averages of heart rate and negatively correlated with standard averages of RSA. We propose that dRSA reflects the active downregulation of heart period by the parasympathetic nervous system and thus represents a novel metric that provides incremental validity in the measurement of autonomic cardiac control; specifically, it offers a method by which parasympathetic regulatory effects can be measured in process. © 2016 Society for Psychophysiological Research.
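    Computationally, the dRSA parameter described above is a lagged regression coefficient from a vector-autoregressive equation fit to the second-by-second series. The sketch below is a minimal ordinary-least-squares stand-in for one such equation at a single chosen lag, not the authors' full modeling pipeline; the simulated series and the 3-s lag are placeholders.

    import numpy as np

    def lagged_regression_coefficient(ibi, rsa, resp, lag):
        """Coefficient of RSA(t - lag) in a regression of IBI(t) on lagged IBI,
        RSA, and respiration (a minimal VAR-style equation, fit by OLS)."""
        y = ibi[lag:]
        X = np.column_stack([
            np.ones(len(y)),
            ibi[:-lag],    # IBI(t - lag)
            rsa[:-lag],    # RSA(t - lag): the dRSA-style parameter of interest
            resp[:-lag],   # respiration rate(t - lag)
        ])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[2]

    # Simulated placeholder series (one sample per second).
    rng = np.random.default_rng(2)
    n = 300
    rsa = rng.standard_normal(n).cumsum() * 0.05
    resp = 15 + rng.standard_normal(n)
    ibi = 800 - 20 * np.roll(rsa, 3) + rng.standard_normal(n) * 5
    print(f"dRSA-style coefficient at lag 3: {lagged_regression_coefficient(ibi, rsa, resp, 3):.2f}")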

  15. Preliminary estimates of annual agricultural pesticide use for counties of the conterminous United States, 2010-11

    USGS Publications Warehouse

    Baker, Nancy T.; Stone, Wesley W.

    2013-01-01

    This report provides preliminary estimates of annual agricultural use of 374 pesticide compounds in counties of the conterminous United States in 2010 and 2011, compiled by means of methods described in Thelin and Stone (2013). U.S. Department of Agriculture (USDA) county-level data for harvested-crop acreage were used in conjunction with proprietary Crop Reporting District (CRD)-level pesticide-use data to estimate county-level pesticide use. Estimated pesticide use (EPest) values were calculated with both the EPest-high and EPest-low methods. The distinction between the EPest-high method and the EPest-low method is that there are more counties with estimated pesticide use for EPest-high compared to EPest-low, owing to differing assumptions about missing survey data (Thelin and Stone, 2013). Preliminary estimates in this report will be revised upon availability of updated crop acreages in the 2012 Agricultural Census, to be published by the USDA in 2014. In addition, estimates for 2008 and 2009 previously published by Stone (2013) will be updated subsequent to the 2012 Agricultural Census release. Estimates of annual agricultural pesticide use are provided as downloadable, tab-delimited files, which are organized by compound, year, state Federal Information Processing Standard (FIPS) code, county FIPS code, and kg (amount in kilograms).

  16. Improving the accuracy of burn-surface estimation.

    PubMed

    Nichter, L S; Williams, J; Bryant, C A; Edlich, R F

    1985-09-01

    A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Despite the ability of physicians at all levels of training to accurately sketch TBSAB, significant burn size overestimation (p < 0.01) and large, potentially consequential interrater variability were noted. Direct benefits of a computerized system are many. These include the need for minimal user experience and the ability for wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements, hemodynamic parameters, and the ability to compare meaningfully the different treatment protocols.

  17. On protecting the planet against cosmic attack: Ultrafast real-time estimate of the asteroid's radial velocity

    NASA Astrophysics Data System (ADS)

    Zakharchenko, V. D.; Kovalenko, I. G.

    2014-05-01

    A new method for the line-of-sight velocity estimation of a high-speed near-Earth object (asteroid, meteorite) is suggested. The method is based on the use of the fractional, one-half order derivative of the Doppler signal. The algorithm suggested is much simpler and more economical than the classical one, and it appears preferable for use in orbital weapon systems for threat response. Application of fractional differentiation to rapid evaluation of the mean frequency of the reflected Doppler signal is justified. The method allows an assessment of the mean frequency in the time domain without spectral analysis. An algorithm structure for the real-time estimation is presented. Velocity resolution estimates are made for typical asteroids in the X-band. It is shown that the wait time can be shortened by orders of magnitude compared with the corresponding value for standard spectral processing.
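    For an analytic signal s(t), the time-domain identity mean(|D^{1/2} s|^2) / mean(|s|^2) = 2*pi*f_mean links the half-order derivative to the mean Doppler frequency, which is the kind of spectral-analysis-free estimate the abstract describes. The sketch below is our illustration of that idea using a truncated Grünwald-Letnikov approximation of the half-order derivative; it is not the authors' algorithm, and the sampling rate, truncation length, and test signal are placeholders.

    import numpy as np

    def gl_half_derivative(x, fs, n_weights=200):
        """Truncated Grünwald-Letnikov approximation of the half-order derivative."""
        alpha = 0.5
        w = np.empty(n_weights)
        w[0] = 1.0
        for k in range(1, n_weights):
            w[k] = w[k - 1] * (k - 1 - alpha) / k   # GL binomial weights
        h = 1.0 / fs
        return (h ** -alpha) * np.convolve(x, w)[: len(x)]

    def mean_frequency(s, fs, n_weights=200):
        """Mean Doppler frequency of an analytic signal via the half-order
        derivative: mean(|D^{1/2} s|^2) / mean(|s|^2) = 2*pi*f_mean."""
        d = gl_half_derivative(s, fs, n_weights)
        d, s = d[n_weights:], s[n_weights:]   # drop the truncated start-up transient
        return np.mean(np.abs(d) ** 2) / (2 * np.pi * np.mean(np.abs(s) ** 2))

    # Placeholder analytic test signal: a 3 kHz Doppler tone sampled at 50 kHz.
    fs, f0 = 50_000.0, 3_000.0
    t = np.arange(0, 0.05, 1 / fs)
    s = np.exp(2j * np.pi * f0 * t)
    print(f"estimated mean Doppler frequency: {mean_frequency(s, fs):.0f} Hz")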

  18. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  19. Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles

    NASA Astrophysics Data System (ADS)

    Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey

    2013-09-01

    Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and the U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self-consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the general validity of the performance standard approach and suggested potential updates to improve the accuracy of each of the example methods, especially to address reliability growth.

  20. Is there another major constituent in the atmosphere of Mars?. [radiogenic argon

    NASA Technical Reports Server (NTRS)

    Wood, G. P.

    1974-01-01

    In view of the possible finding of several tens of percent of inert gas in the atmosphere of Mars by an instrument on the descent module of the USSR's Mars 6 spacecraft, the likelihood of the correctness of this result was examined. The basis for the well-known fact that the most likely candidate is radiogenic argon is described. It is shown that, for the two important methods of investigating the atmosphere, earth-based CO2 infrared absorption spectroscopy and S-band occultation, about 20% argon can be accommodated within the estimated 1-standard-deviation uncertainties of these methods. Within the estimated 3-standard-deviation uncertainties, more than 35% is possible. It is also stated that even with 35% argon the maximum value of the heat transfer rate on the Viking 75 entry vehicle does not exceed the design value.
