Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
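A minimal sketch of the bootstrap variance estimator the abstract recommends, assuming hypothetical column names ('time', 'event', 'treat') and a list of confounder columns; this is not the authors' code. The propensity model is re-fit inside every bootstrap replicate, which is what lets the bootstrap capture the uncertainty from estimating the weights.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

def iptw_log_hr(df, covs):
    """ATE log hazard ratio from an IPTW-weighted Cox model."""
    ps = LogisticRegression(max_iter=1000).fit(df[covs], df['treat']).predict_proba(df[covs])[:, 1]
    w = np.where(df['treat'] == 1, 1.0 / ps, 1.0 / (1.0 - ps))  # ATE weights
    fit_df = df[['time', 'event', 'treat']].assign(iptw=w)
    cph = CoxPHFitter()
    cph.fit(fit_df, duration_col='time', event_col='event',
            weights_col='iptw', robust=True)
    return cph.params_['treat']

def bootstrap_se(df, covs, n_boot=200, seed=1):
    """Bootstrap SE: resample subjects, re-estimate weights, refit the model."""
    rng = np.random.default_rng(seed)
    reps = [iptw_log_hr(df.iloc[rng.integers(0, len(df), len(df))]
                          .reset_index(drop=True), covs)
            for _ in range(n_boot)]
    return np.std(reps, ddof=1)
```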
Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.
Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng
2018-04-15
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.
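A hedged sketch of the ratio-of-mediator-probability weighting idea for a binary mediator, assuming a randomized treatment T (so only the mediator model needs propensity-score adjustment); the column setup is illustrative, not the authors' implementation. As the abstract stresses, naive standard errors for these estimates ignore the step-1 sampling uncertainty; in practice one would use the stacked-score sandwich variance or the bootstrap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rmpw_decomposition(X, T, M, Y):
    # Step 1: mediator propensity models, fit separately within each arm.
    pm1 = LogisticRegression(max_iter=1000).fit(X[T == 1], M[T == 1])
    pm0 = LogisticRegression(max_iter=1000).fit(X[T == 0], M[T == 0])
    p1 = pm1.predict_proba(X)[:, 1]  # P(M=1 | T=1, X)
    p0 = pm0.predict_proba(X)[:, 1]  # P(M=1 | T=0, X)

    # Step 2: re-weight treated units so their mediator distribution matches
    # the control arm's; E[Y(1, M(0))] is then a weighted treated-arm mean.
    w = np.where(M == 1, p0 / p1, (1 - p0) / (1 - p1))
    y1_m0 = np.average(Y[T == 1], weights=w[T == 1])

    total = Y[T == 1].mean() - Y[T == 0].mean()
    indirect = Y[T == 1].mean() - y1_m0  # transmitted through the mediator
    direct = y1_m0 - Y[T == 0].mean()    # bypassing the mediator
    return total, direct, indirect
```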
Particulate air pollution and panel studies in children: a systematic review
Ward, D; Ayres, J
2004-01-01
Aims: To systematically review the results of such studies in children, estimate summary measures of effect, and investigate potential sources of heterogeneity. Methods: Studies were identified by searching electronic databases to June 2002, including those where outcomes and particulate level measurements were made at least daily for ⩾8 weeks, and analysed using an appropriate regression model. Study results were compared using forest plots, and fixed and random effects summary effect estimates obtained. Publication bias was considered using a funnel plot. Results: Twenty-two studies were identified, all except two reporting PM₁₀ (24 hour mean) >50 µg·m⁻³. Reported effects of PM₁₀ on PEF were widely spread and smaller than those for PM₂.₅ (fixed effects summary: −0.012 vs −0.063 l·min⁻¹ per µg·m⁻³ rise). A similar pattern was evident for symptoms. Random effects models produced larger estimates. Overall, in between-study comparisons, panels of children with diagnosed asthma or pre-existing respiratory symptoms appeared less affected by PM₁₀ levels than those without, and effect estimates were larger where studies were conducted in higher ozone conditions. Larger PM₁₀ effect estimates were obtained from studies using generalised estimating equations to model autocorrelation and where results were derived by pooling subject specific regression coefficients. A funnel plot of PM₁₀ results for PEF was markedly asymmetrical. Conclusions: The majority of identified studies indicate an adverse effect of particulate air pollution that is greater for PM₂.₅ than PM₁₀. However, results show considerable heterogeneity and there is evidence consistent with publication bias, so limited confidence may be placed on summary estimates of effect. The possibility of interaction between particle and ozone effects merits further investigation, as does variability due to analytical differences that alter the interpretation of final estimates. PMID:15031404
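For readers wanting the pooling step made concrete: a generic inverse-variance fixed-effect and DerSimonian-Laird random-effects summary of per-study slope estimates. The values are hypothetical; this sketches the standard calculation, not the review's code.

```python
import numpy as np

def pool(effects, ses):
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                      # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed)**2)  # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (q - (len(effects) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DerSimonian-Laird
    wr = 1.0 / (ses**2 + tau2)            # random-effects weights
    return fixed, np.sum(wr * effects) / np.sum(wr), tau2

# hypothetical per-study PEF slopes (l/min per ug/m3) and standard errors
fixed, random_, tau2 = pool([-0.01, -0.03, -0.08], [0.01, 0.02, 0.03])
```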
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Dechartres, Agnes; Bond, Elizabeth G; Scheer, Jordan; Riveros, Carolina; Atal, Ignacio; Ravaud, Philippe
2016-11-30
Publication bias and other reporting biases have been well documented for journal articles, but no study has evaluated the nature of results posted at ClinicalTrials.gov. We aimed to assess how many randomized controlled trials (RCTs) with results posted at ClinicalTrials.gov report statistically significant results and whether the proportion of trials with significant results differs when no treatment effect estimate or p-value is posted. We searched ClinicalTrials.gov in June 2015 for all studies with results posted. We included completed RCTs with a superiority hypothesis and considered results for the first primary outcome with results posted. For each trial, we assessed whether a treatment effect estimate and/or p-value was reported at ClinicalTrials.gov and, if so, whether results were statistically significant. If no treatment effect estimate or p-value was reported, we calculated the treatment effect and corresponding p-value using results per arm posted at ClinicalTrials.gov when sufficient data were reported. From the 17,536 studies with results posted at ClinicalTrials.gov, we identified 2823 completed phase 3 or 4 randomized trials with a superiority hypothesis. Of these, 1400 (50%) reported a treatment effect estimate and/or p-value. Results were statistically significant for 844 trials (60%), with a median p-value of 0.01 (Q1-Q3: 0.001-0.26). For the 1423 trials with no treatment effect estimate or p-value posted, we could calculate the treatment effect and corresponding p-value using results reported per arm for 929 (65%). For 494 trials (35%), p-values could not be calculated mainly because of insufficient reporting, censored data, or repeated measurements over time. For the 929 trials for which we could calculate p-values, we found statistically significant results for 342 (37%), with a median p-value of 0.19 (Q1-Q3: 0.005-0.59). Half of the trials with results posted at ClinicalTrials.gov reported a treatment effect estimate and/or p-value, with significant results for 60% of these. p-values could be calculated from results reported per arm at ClinicalTrials.gov for only 65% of the other trials. The proportion of significant results was much lower for these trials, which suggests a selective posting of treatment effect estimates and/or p-values when results are statistically significant.
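For a binary outcome, the per-arm recalculation described above amounts to a standard two-proportion test; a sketch with hypothetical counts:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

events = np.array([45, 30])  # events posted for treatment and control arms
n = np.array([250, 248])     # participants analyzed per arm
stat, pval = proportions_ztest(events, n)
risk_diff = events[0] / n[0] - events[1] / n[1]
print(f"risk difference = {risk_diff:.3f}, p = {pval:.3f}")
```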
Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.
Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim
2017-12-01
The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
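To make the transformation step concrete, here is a rough frequentist illustration of Box-Cox-transforming observed effect estimates (the paper's actual model is Bayesian and handles the within-study sampling distributions jointly, so this shows only the core idea). Box-Cox requires strictly positive values, hence the shift; all numbers are hypothetical.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

y = np.array([0.10, 0.15, 0.22, 0.35, 0.90, 1.80])  # hypothetical estimates
shift = 1e-6 - min(0.0, y.min())        # make all values strictly positive
z, lam = stats.boxcox(y + shift)        # maximum-likelihood choice of lambda
overall_median = inv_boxcox(np.median(z), lam) - shift  # back-transformed summary
```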
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
ERIC Educational Resources Information Center
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
Estimating and Testing Mediation Effects with Censored Data
ERIC Educational Resources Information Center
Wang, Lijuan; Zhang, Zhiyong
2011-01-01
This study investigated influences of censored data on mediation analysis. Mediation effect estimates can be biased and inefficient with censoring on any one of the input, mediation, and output variables. A Bayesian Tobit approach was introduced to estimate and test mediation effects with censored data. Simulation results showed that the Bayesian…
Generalizations and Extensions of the Probability of Superiority Effect Size Estimator
ERIC Educational Resources Information Center
Ruscio, John; Gera, Benjamin Lee
2013-01-01
Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…
Estimation of Effect Size from a Series of Experiments Involving Paired Comparisons.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
1993-01-01
A distribution theory is derived for a G. V. Glass-type (1976) estimator of effect size from studies involving paired comparisons. The possibility of combining effect sizes from studies involving a mixture of related and unrelated samples is also explored. Resulting estimates are illustrated using data from previous psychiatric research. (SLD)
The effect of heart motion on parameter bias in dynamic cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, S.G.; Gullberg, G.T.; Huesman, R.H.
1996-12-31
Dynamic cardiac SPECT can be used to estimate kinetic rate parameters which describe the wash-in and wash-out of tracer activity between the blood and the myocardial tissue. These kinetic parameters can in turn be correlated to myocardial perfusion. There are, however, many physical aspects associated with dynamic SPECT which can introduce errors into the estimates. This paper describes a study which investigates the effect of heart motion on kinetic parameter estimates. Dynamic SPECT simulations are performed using a beating version of the MCAT phantom. The results demonstrate that cardiac motion has a significant effect on the blood, tissue, and background content of regions of interest. This in turn affects estimates of wash-in, while it has very little effect on estimates of wash-out. The effect of cardiac motion on parameter estimates appears not to be as great as effects introduced by photon noise and geometric collimator response. It is also shown that cardiac motion results in little extravascular contamination of the left ventricle blood region of interest.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
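A small simulation sketch of the special case described above: a main-terms Poisson working model in a randomized trial, with a sandwich (robust) covariance for the standard errors. The data-generating values are hypothetical, and the working model deliberately omits the true treatment-covariate interaction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({'treat': rng.integers(0, 2, n),   # randomized treatment
                   'x1': rng.normal(size=n)})
# True model has an interaction; marginal log rate ratio is ~0.48 here.
df['y'] = rng.poisson(np.exp(0.2 * df.treat + 0.5 * df.x1
                             + 0.4 * df.treat * df.x1))

# Misspecified main-terms working model; the 'treat' coefficient still targets
# the marginal log rate ratio, with robust (HC0) standard errors.
fit = smf.poisson('y ~ treat + x1', data=df).fit(cov_type='HC0')
log_rr, se = fit.params['treat'], fit.bse['treat']
```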
Estimation of the Continuous and Discontinuous Leverage Effects
Aït-Sahalia, Yacine; Fan, Jianqing; Laeven, Roger J. A.; Wang, Christina Dan; Yang, Xiye
2017-01-01
This paper examines the leverage effect, or the generally negative covariation between asset returns and their changes in volatility, under a general setup that allows the log-price and volatility processes to be Itô semimartingales. We decompose the leverage effect into continuous and discontinuous parts and develop statistical methods to estimate them. We establish the asymptotic properties of these estimators. We also extend our methods and results (for the continuous leverage) to the situation where there is market microstructure noise in the observed returns. We show in Monte Carlo simulations that our estimators have good finite sample performance. When applying our methods to real data, our empirical results provide convincing evidence of the presence of the two leverage effects, especially the discontinuous one. PMID:29606780
Robustness of Value-Added Analysis of School Effectiveness. Research Report. ETS RR-08-22
ERIC Educational Resources Information Center
Braun, Henry; Qu, Yanxuan
2008-01-01
This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…
A Novel Methodology to Estimate the Treatment Effect in Presence of Highly Variable Placebo Response
Gomeni, Roberto; Goyal, Navin; Bressolle, Françoise; Fava, Maurizio
2015-01-01
One of the main reasons for the inefficiency of multicenter randomized clinical trials (RCTs) in depression is the excessively high level of placebo response. The aim of this work was to propose a novel methodology to analyze RCTs based on the assumption that centers with high placebo response are less informative than the other centers for estimating the 'true' treatment effect (TE). A linear mixed-effect modeling approach for repeated measures (MMRM) was used as a reference approach. The new method for estimating TE was based on a nonlinear longitudinal modeling of clinical scores (NLMMRM). NLMMRM estimates TE by associating a weighting factor to the data collected in each center. The weight was defined by the posterior probability of detecting a clinically relevant difference between active treatment and placebo at that center. Data from five RCTs in depression were used to compare the performance of MMRM with NLMMRM. The results of the analyses showed an average improvement of ~15% in the TE estimated with NLMMRM when the center effect was included in the analyses. Opposite results were observed with MMRM: the TE estimate was reduced by ~4% when the center effect was considered as a covariate in the analysis. The novel NLMMRM approach provides a tool for controlling the confounding effect of high placebo response, to increase signal detection and to provide a more reliable estimate of the 'true' TE by controlling false negative results associated with excessively high placebo response. PMID:25895454
NASA Astrophysics Data System (ADS)
Ponte Castañeda, Pedro
2016-11-01
This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.
Phantom-derived estimation of effective dose equivalent from X rays with and without a lead apron.
Mateya, C F; Claycamp, H G
1997-06-01
Organ dose equivalents were measured in a humanoid phantom in order to estimate effective dose equivalent (H(E)) and effective dose (E) from low-energy x rays in the presence or absence of a protective lead apron. Plane-parallel irradiation conditions were approximated using direct x-ray beams of 76 and 104 kVp, and the resulting dosimetry data were adjusted to model exposure conditions in fluoroscopy settings. Values of H(E) and E estimated under shielded conditions were compared to the results of several recent studies that used combinations of measured and calculated dosimetry to model exposures to radiologists. While the estimates of H(E) and E without the lead apron were within 0.2 to 20% of expected values, estimates based on personal monitors worn at the (phantom) waist (underneath the apron) underestimated either H(E) or E, while monitors placed at the neck (above the apron) significantly overestimated both quantities. Also, the experimentally determined H(E) and E were 1.4 to 3.3 times greater than might be estimated using recently reported "two-monitor" algorithms for the estimation of effective dose quantities. The results suggest that accurate estimation of either H(E) or E from personal monitors under conditions of partial body exposures remains problematic and is likely to require the use of multiple monitors.
The performance of different propensity score methods for estimating marginal hazard ratios.
Austin, Peter C
2013-07-20
Propensity score methods are increasingly being used to reduce or minimize the effects of confounding when estimating the effects of treatments, exposures, or interventions when using observational or non-randomized data. Under the assumption of no unmeasured confounders, previous research has shown that propensity score methods allow for unbiased estimation of linear treatment effects (e.g., differences in means or proportions). However, in biomedical research, time-to-event outcomes occur frequently. There is a paucity of research into the performance of different propensity score methods for estimating the effect of treatment on time-to-event outcomes. Furthermore, propensity score methods allow for the estimation of marginal or population-average treatment effects. We conducted an extensive series of Monte Carlo simulations to examine the performance of propensity score matching (1:1 greedy nearest-neighbor matching within propensity score calipers), stratification on the propensity score, inverse probability of treatment weighting (IPTW) using the propensity score, and covariate adjustment using the propensity score to estimate marginal hazard ratios. We found that both propensity score matching and IPTW using the propensity score allow for the estimation of marginal hazard ratios with minimal bias. Of these two approaches, IPTW using the propensity score resulted in estimates with lower mean squared error when estimating the effect of treatment in the treated. Stratification on the propensity score and covariate adjustment using the propensity score result in biased estimation of both marginal and conditional hazard ratios. Applied researchers are encouraged to use propensity score matching and IPTW using the propensity score when estimating the relative effect of treatment on time-to-event outcomes. Copyright © 2012 John Wiley & Sons, Ltd.
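For concreteness, a simplified sketch of one compared approach: 1:1 greedy nearest-neighbor matching within calipers on the logit of the propensity score (0.2 SD is a commonly used caliper width). This is illustrative, not the paper's simulation code.

```python
import numpy as np

def greedy_caliper_match(logit_ps, treat, caliper_sd=0.2, seed=0):
    caliper = caliper_sd * np.std(logit_ps)
    rng = np.random.default_rng(seed)
    controls = list(np.where(treat == 0)[0])
    pairs = []
    for i in rng.permutation(np.where(treat == 1)[0]):  # greedy, random order
        if not controls:
            break
        d = np.abs(logit_ps[controls] - logit_ps[i])
        j = int(np.argmin(d))
        if d[j] <= caliper:
            pairs.append((i, controls.pop(j)))  # match without replacement
    return pairs  # treated units with no control inside the caliper are dropped
```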
ERIC Educational Resources Information Center
Yang, Ji Seung; Cai, Li
2014-01-01
The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…
Revised estimates for direct-effect recreational jobs in the interior Columbia River basin.
Lisa K. Crone; Richard W. Haynes
1999-01-01
This paper reviews the methodology used to derive the original estimates for direct employment associated with recreation on Federal lands in the interior Columbia River basin (the basin), and details the changes in methodology and data used to derive new estimates. The new analysis resulted in an estimate of 77,655 direct-effect jobs associated with recreational...
On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra
NASA Technical Reports Server (NTRS)
Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.
2007-01-01
This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and single event effect rates to estimate radiation effects on electronics. The errors in the estimations of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in interplanetary quiet-time environment regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor especially if a spherical detector is used. For single event estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable and the use of lineal energy spectra for single event rate estimation should be avoided.
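As a sketch of the dose-equivalent calculation underlying these comparisons: folding a spectrum into H = ∫ Q(L) D(L) dL with the ICRP-60 quality factor. Feeding a lineal energy spectrum through the same code is exactly the substitution whose error the paper quantifies; the grid and dose density would come from the measured or modeled spectrum.

```python
import numpy as np
from scipy.integrate import trapezoid

def q_icrp60(L):
    """ICRP-60 quality factor Q(L), with L in keV/um (L > 0)."""
    L = np.asarray(L, dtype=float)
    return np.where(L < 10, 1.0,
                    np.where(L <= 100, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

def dose_equivalent(L_grid, dose_density):
    """H = integral of Q(L) * D(L) dL, via the trapezoidal rule."""
    return trapezoid(q_icrp60(L_grid) * dose_density, L_grid)
```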
Risk estimation using probability machines
2014-01-01
Background: Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results: We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions: The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
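A sketch of the counterfactual effect-size calculation the abstract describes, using a random forest as the probability machine; the column names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_risk_difference(df, outcome, treatment, covariates):
    X = df[[treatment] + covariates]
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X, df[outcome])
    X1, X0 = X.copy(), X.copy()
    X1[treatment], X0[treatment] = 1, 0  # toggle exposure for everyone
    # Average counterfactual risk difference over the observed covariates
    return np.mean(rf.predict_proba(X1)[:, 1] - rf.predict_proba(X0)[:, 1])
```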
Small Family, Smart Family? Family Size and the IQ Scores of Young Men
ERIC Educational Resources Information Center
Black, Sandra E.; Devereux, Paul J.; Salvanes, Kjell G.
2010-01-01
This paper uses Norwegian data to estimate the effect of family size on IQ scores of men. Instrumental variables (IV) estimates using sex composition as an instrument show no significant negative effect of family size; however, IV estimates using twins imply that family size has a negative effect on IQ scores. Our results suggest that the effect…
Fretheim, Atle; Soumerai, Stephen B; Zhang, Fang; Oxman, Andrew D; Ross-Degnan, Dennis
2013-08-01
We reanalyzed the data from a cluster-randomized controlled trial (C-RCT) of a quality improvement intervention for prescribing antihypertensive medication. Our objective was to estimate the effectiveness of the intervention using both interrupted time-series (ITS) and RCT methods, and to compare the findings. We first conducted an ITS analysis using data only from the intervention arm of the trial because our main objective was to compare the findings from an ITS analysis with the findings from the C-RCT. We used segmented regression methods to estimate changes in level or slope coincident with the intervention, controlling for baseline trend. We analyzed the C-RCT data using generalized estimating equations. Last, we estimated the intervention effect by including data from both study groups and by conducting a controlled ITS analysis of the difference between the slope and level changes in the intervention and control groups. The estimates of absolute change resulting from the intervention were ITS analysis, 11.5% (95% confidence interval [CI]: 9.5, 13.5); C-RCT, 9.0% (95% CI: 4.9, 13.1); and the controlled ITS analysis, 14.0% (95% CI: 8.6, 19.4). ITS analysis can provide an effect estimate that is concordant with the results of a cluster-randomized trial. A broader range of comparisons from other RCTs would help to determine whether these are generalizable results. Copyright © 2013 Elsevier Inc. All rights reserved.
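The segmented regression in the ITS arm has a simple form: a baseline trend plus level-change and slope-change terms at the intervention point. A generic sketch (handling of autocorrelation, e.g. HAC standard errors, omitted for brevity):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def its_fit(y, intervention_at):
    t = np.arange(len(y))
    d = pd.DataFrame({
        'y': y,
        'time': t,                                    # baseline trend
        'level': (t >= intervention_at).astype(int),  # immediate step change
        'slope': np.maximum(0, t - intervention_at),  # post-intervention trend change
    })
    return smf.ols('y ~ time + level + slope', data=d).fit()
```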
Baird, Rachel; Maxwell, Scott E
2016-06-01
Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Grafova, Irina; Freedman, Vicki; Lurie, Nicole; Kumar, Rizie; Rogowski, Jeannette
2013-01-01
This paper uses the difference-in-difference estimation approach to explore the self-selection bias in estimating the effect of neighborhood economic environment on self-assessed health among older adults. The results indicate that there is evidence of downward bias in the conventional estimates of the effect of neighborhood economic disadvantage on self-reported health, representing a lower bound of the true effect. PMID:23623818
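A minimal difference-in-difference sketch on simulated data: the coefficient on the interaction term is the DiD estimate, valid under the parallel-trends assumption. All names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({'disadvantaged': rng.integers(0, 2, n),  # neighborhood type
                   'post': rng.integers(0, 2, n)})          # before/after period
df['health'] = (3.0 - 0.4 * df.disadvantaged - 0.2 * df.post
                - 0.3 * df.post * df.disadvantaged + rng.normal(0, 1, n))

fit = smf.ols('health ~ post * disadvantaged', data=df).fit()
did = fit.params['post:disadvantaged']  # difference-in-difference estimate
```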
Estimated collective effective dose to the population from nuclear medicine examinations in Slovenia
Skrk, Damijan; Zontar, Dejan
2013-01-01
Background: A national survey of patient exposure from nuclear medicine diagnostic procedures was performed by the Slovenian Radiation Protection Administration in order to estimate their contribution to the collective effective dose to the population of Slovenia. Methods: A set of 36 examinations with the highest contributions to the collective effective dose was identified. Data about frequencies and average administered activities of radioisotopes used for those examinations were collected from all nuclear medicine departments in Slovenia. A collective effective dose to the population and an effective dose per capita were estimated from the collected data using dose conversion factors. Results: The total collective effective dose to the population from nuclear medicine diagnostic procedures in 2011 was estimated at 102 manSv, giving an effective dose per capita of 0.05 mSv. Conclusions: The comparison of results of this study with studies performed in other countries indicates that nuclear medicine providers in Slovenia are well aware of the importance of patient protection measures and of optimisation of procedures. PMID:24133396
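The survey arithmetic is straightforward: per-exam effective dose is administered activity times a dose conversion factor, and the collective dose sums frequency-weighted doses over exam types. The frequencies, activities, and coefficients below are illustrative stand-ins, not the survey's data.

```python
exams = [
    # (annual frequency, mean administered activity in MBq, mSv per MBq)
    (12000, 700.0, 0.0079),  # hypothetical Tc-99m cardiac perfusion entry
    (4000, 150.0, 0.0057),   # hypothetical Tc-99m thyroid entry
]
collective_mansv = sum(n * a * f for n, a, f in exams) / 1000.0  # mSv -> manSv
per_capita_msv = 1000.0 * collective_mansv / 2.05e6  # approx. population of Slovenia
```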
Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.
2015-01-01
A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data could result in better informed management decisions and assist in guidance for more effective estuarine restoration projects.
A New Method for Assessing How Sensitivity and Specificity of Linkage Studies Affects Estimation
Moore, Cecilia L.; Amin, Janaki; Gidding, Heather F.; Law, Matthew G.
2014-01-01
Background: While the importance of record linkage is widely recognised, few studies have attempted to quantify how linkage errors may have impacted on their own findings and outcomes. Even where authors of linkage studies have attempted to estimate sensitivity and specificity based on subjects with known status, the effects of false negatives and positives on event rates and estimates of effect are not often described. Methods: We present quantification of the effect of sensitivity and specificity of the linkage process on event rates and incidence, as well as the resultant effect on relative risks. Formulae to estimate the true number of events and estimated relative risk adjusted for given linkage sensitivity and specificity are then derived and applied to data from a prisoner mortality study. The implications of false positive and false negative matches are also discussed. Discussion: Comparisons of the effect of sensitivity and specificity on incidence and relative risks indicate that it is more important for linkages to be highly specific than sensitive, particularly if true incidence rates are low. We would recommend that, where possible, some quantitative estimates of the sensitivity and specificity of the linkage process be performed, allowing the effect of these quantities on observed results to be assessed. PMID:25068293
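The correction the authors derive can be written in a few lines: with linkage sensitivity se and specificity sp, the observed number of linked events O among N records satisfies O = se·T + (1 − sp)(N − T), which inverts to an estimate of the true count T; applying it to both cohorts yields an adjusted relative risk. The numbers below are hypothetical.

```python
def true_event_count(observed, n, se, sp):
    """Invert O = se*T + (1-sp)*(N-T) for the true event count T."""
    return (observed - (1.0 - sp) * n) / (se + sp - 1.0)

t_exp = true_event_count(120, 10_000, se=0.95, sp=0.999)
t_unexp = true_event_count(300, 50_000, se=0.95, sp=0.999)
rr_adjusted = (t_exp / 10_000) / (t_unexp / 50_000)
```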
Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.
Austin, Peter C
2017-02-01
Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
NASA Astrophysics Data System (ADS)
Ojeda, Germán Y.; Whitman, Dean
2002-11-01
The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ~5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.
Effects of Blood-Alcohol Concentration (BAC) Feedback on BAC Estimates Over Time
ERIC Educational Resources Information Center
Bullers, Susan; Ennis, Melissa
2006-01-01
This study examines the effects of self-tested blood alcohol concentration (BAC) feedback, from personal hand-held breathalyzers, on the accuracy of BAC estimation. Using an e-mail prompted web-based questionnaire, 19 participants were asked to report both BAC estimates and subsequently measured BAC levels over the course of 27 days. Results from…
Evaluating disease management programme effectiveness: an introduction to instrumental variables.
Linden, Ariel; Adams, John L
2006-04-01
This paper introduces the concept of instrumental variables (IVs) as a means of providing an unbiased estimate of treatment effects in evaluating disease management (DM) programme effectiveness. Model development is described using zip codes as the IV. Three diabetes DM outcomes were evaluated: annual diabetes costs, emergency department (ED) visits and hospital days. Both ordinary least squares (OLS) and IV estimates showed a significant treatment effect for diabetes costs (P = 0.011) but neither model produced a significant treatment effect for ED visits. However, the IV estimate showed a significant treatment effect for hospital days (P = 0.006) whereas the OLS model did not. These results illustrate the utility of IV estimation when the OLS model is sensitive to the confounding effect of hidden bias.
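A transparent two-stage least squares sketch of the IV idea on simulated data (instrument: a zip-code group, as in the paper). Note that the naive second-stage standard errors are not valid; a packaged IV estimator would be used for inference in practice. All values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
z = rng.integers(0, 2, n)                 # instrument: zip-code group
u = rng.normal(size=n)                    # unmeasured confounder
enrolled = (0.5 * z + u + rng.normal(size=n) > 0.5).astype(int)
cost = 10_000 - 1_500 * enrolled + 2_000 * u + rng.normal(0, 500, n)
df = pd.DataFrame({'z': z, 'enrolled': enrolled, 'cost': cost})

stage1 = smf.ols('enrolled ~ z', data=df).fit()         # first stage
df['enrolled_hat'] = stage1.fittedvalues
stage2 = smf.ols('cost ~ enrolled_hat', data=df).fit()  # second stage
iv_effect = stage2.params['enrolled_hat']               # approx. -1500 here
```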
Estimating the cost-effectiveness of 54 weeks of infliximab for rheumatoid arthritis.
Wong, John B; Singh, Gurkirpal; Kavanaugh, Arthur
2002-10-01
To estimate the cost-effectiveness of infliximab plus methotrexate for active, refractory rheumatoid arthritis. We projected the 54-week results from a randomized controlled trial of infliximab into lifetime economic and clinical outcomes using a Markov computer simulation model. Direct and indirect costs, quality of life, and disability estimates were based on trial results; Arthritis, Rheumatism, and Aging Medical Information System (ARAMIS) database outcomes; and published data. Results were discounted using the standard 3% rate. Because most well-accepted medical therapies have cost-effectiveness ratios below $50,000 to $100,000 per quality-adjusted life-year (QALY) gained, results below this range were considered to be "cost-effective." At 3 mg/kg, each infliximab infusion would cost $1393. When compared with methotrexate alone, 54 weeks of infliximab plus methotrexate decreased the likelihood of having advanced disability from 23% to 11% at the end of 54 weeks, which projected to a lifetime marginal cost-effectiveness ratio of $30,500 per discounted QALY gained, considering only direct medical costs. When applying a societal perspective and including indirect or productivity costs, the marginal cost-effectiveness ratio for infliximab was $9100 per discounted QALY gained. The results remained relatively unchanged with variation of model estimates over a broad range of values. Infliximab plus methotrexate for 54 weeks for rheumatoid arthritis should be cost-effective with its clinical benefit providing good value for the drug cost, especially when including productivity losses. Although infliximab beyond 54 weeks will likely be cost-effective, the economic and clinical benefit remains uncertain and will depend on long-term results of clinical trials.
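The headline figures reduce to standard cost-effectiveness arithmetic: discounted incremental cost divided by discounted QALYs gained, at the stated 3% rate. The streams below are short hypothetical stand-ins for the model's lifetime projections, not the study's inputs.

```python
def discounted(stream, rate=0.03):
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(stream))

costs_infliximab = [24_000, 3_000, 3_000]  # hypothetical annual direct costs
costs_mtx = [1_500, 1_500, 1_500]
qalys_infliximab = [0.70, 0.72, 0.72]      # hypothetical annual QALYs
qalys_mtx = [0.60, 0.58, 0.56]

icer = ((discounted(costs_infliximab) - discounted(costs_mtx))
        / (discounted(qalys_infliximab) - discounted(qalys_mtx)))  # $ per QALY
```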
An examination of effect estimation in factorial and standardly-tailored designs
Allore, Heather G; Murphy, Terrence E
2012-01-01
Background: Many clinical trials are designed to test an intervention arm against a control arm wherein all subjects are equally eligible for all interventional components. Factorial designs have extended this to test multiple intervention components and their interactions. A newer design, referred to as a 'standardly-tailored' design, is a multicomponent interventional trial that applies individual interventional components to modify risk factors identified a priori and tests whether health outcomes differ between treatment arms. Standardly-tailored designs do not require that all subjects be eligible for every interventional component. Although standardly-tailored designs yield an estimate for the net effect of the multicomponent intervention, it has not yet been shown if they permit separate, unbiased estimation of individual component effects. The ability to estimate the most potent interventional components has direct bearing on conducting second stage translational research. Purpose: We present statistical issues related to the estimation of individual component effects in trials of geriatric conditions using factorial and standardly-tailored designs. The medical community is interested in second stage translational research involving the transfer of results from a randomized clinical trial to a community setting. Before such research is undertaken, main effects and synergistic and/or antagonistic interactions between them should be identified. Knowledge of the relative strength and direction of the effects of the individual components and their interactions facilitates the successful transfer of clinically significant findings and may potentially reduce the number of interventional components needed. Therefore the current inability of the standardly-tailored design to provide unbiased estimates of individual interventional components is a serious limitation in their applicability to second stage translational research. Methods: We discuss estimation of individual component effects from the family of factorial designs and this limitation for standardly-tailored designs. We use the phrase 'factorial designs' to describe full-factorial designs and their derivatives including the fractional factorial, partial factorial, incomplete factorial and modified reciprocal designs. We suggest two potential directions for designing multicomponent interventions to facilitate unbiased estimates of individual interventional components. Results: Full factorial designs and their variants are the most common multicomponent trial design described in the literature and differ meaningfully from standardly-tailored designs. Factorial and standardly-tailored designs result in similar estimates of net effect with different levels of precision. Unbiased estimation of individual component effects from a standardly-tailored design will require new methodology. Limitations: Although clinically relevant in geriatrics, previous applications of standardly-tailored designs have not provided unbiased estimates of the effects of individual interventional components. Discussion: Future directions to estimate individual component effects from standardly-tailored designs include applying D-optimal designs and creating independent linear combinations of risk factors analogous to factor analysis. Conclusion: Methods are needed to extract unbiased estimates of the effects of individual interventional components from standardly-tailored designs. PMID:18375650
Structural nested mean models for assessing time-varying effect moderation.
Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A
2010-03-01
This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: The first is a proposed two-stage regression estimator. The second is Robins' G-estimator. The results of a small simulation study that begins to shed light on the small versus large sample performance of the estimators, and on the bias-variance trade-off between the two estimators are presented. The methodology is illustrated using longitudinal data from a depression study.
Estimating Effects with Rare Outcomes and High Dimensional Covariates: Knowledge is Power
Ahern, Jennifer; Galea, Sandro; van der Laan, Mark
2016-01-01
Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect or association of an exposure on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional mean of the outcome, given the exposure and measured confounders. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite sample simulations, the proposed estimator performed as well, if not better, than alternative estimators, including a propensity score matching estimator, inverse probability of treatment weighted (IPTW) estimator, augmented-IPTW and the standard TMLE algorithm. The new estimator yielded consistent estimates if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed the point estimates were within the parameter range. We applied the estimator to investigate the association between permissive neighborhood drunkenness norms and alcohol use disorder. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events and high dimensional covariates. PMID:28529839
Pressler, Taylor R.; Kaizar, Eloise E.
2014-01-01
While randomized controlled trials (RCT) are considered the “gold standard” for clinical studies, the use of exclusion criteria may impact the external validity of the results. It is unknown whether estimators of effect size are biased by excluding a portion of the target population from enrollment. We propose to use observational data to estimate the bias due to enrollment restrictions, which we term generalizability bias. In this paper we introduce a class of estimators for the generalizability bias and use simulation to study its properties in the presence of non-constant treatment effects. We find the surprising result that our estimators can be unbiased for the true generalizability bias even when all potentially confounding variables are not measured. In addition, our proposed doubly robust estimator performs well even for mis-specified models. PMID:23553373
Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.
2016-01-01
Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
Cost-Effectiveness of Routine Screening for Critical Congenital Heart Disease in US Newborns
Peterson, Cora; Grosse, Scott D.; Oster, Matthew E.; Olney, Richard S.; Cassell, Cynthia H.
2015-01-01
OBJECTIVES Clinical evidence indicates newborn critical congenital heart disease (CCHD) screening through pulse oximetry is lifesaving. In 2011, CCHD was added to the US Recommended Uniform Screening Panel for newborns. Several states have implemented or are considering screening mandates. This study aimed to estimate the cost-effectiveness of routine screening among US newborns unsuspected of having CCHD. METHODS We developed a cohort model with a time horizon of infancy to estimate the inpatient medical costs and health benefits of CCHD screening. Model inputs were derived from new estimates of hospital screening costs and inpatient care for infants with late-detected CCHD, defined as no diagnosis at the birth hospital. We estimated the number of newborns with CCHD detected at birth hospitals and life-years saved with routine screening compared with no screening. RESULTS Screening was estimated to incur an additional cost of $6.28 per newborn, with incremental costs of $20 862 per newborn with CCHD detected at birth hospitals and $40 385 per life-year gained (2011 US dollars). We estimated 1189 more newborns with CCHD would be identified at birth hospitals and 20 infant deaths averted annually with screening. Another 1975 false-positive results not associated with CCHD were estimated to occur, although these results had a minimal impact on total estimated costs. CONCLUSIONS This study provides the first US cost-effectiveness analysis of CCHD screening and suggests that routine screening in the United States could be reasonably cost-effective. We anticipate data from states that have recently approved or initiated CCHD screening will become available over the next few years to refine these projections. PMID:23918890
Austin, Peter C
2018-05-20
Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G-computation. All methods resulted in essentially unbiased estimation of the population dose-response function. However, GPS-based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
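As a rough sketch of the GPS workflow described above (not the authors' code), the propensity function can be estimated by ordinary least squares and converted to a conditional normal density, with stabilized inverse-GPS weights feeding a weighted outcome model. The simulated data, variable names, and the normal-density assumption below are all illustrative:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                                  # confounders
a = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(size=n)     # continuous exposure
logit_p = -1 + 0.4 * a + x @ np.array([0.6, 0.4])
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))              # binary outcome

# Step 1: propensity function by OLS; GPS = conditional normal density of a
ols = sm.OLS(a, sm.add_constant(x)).fit()
gps = stats.norm.pdf(a, loc=ols.fittedvalues, scale=np.sqrt(ols.scale))

# Stabilized inverse-GPS weights: marginal density over conditional density
marginal = stats.norm.pdf(a, loc=a.mean(), scale=a.std())
w = marginal / gps

# Weighted outcome regression approximates the dose-response in a
# (point estimates only; valid inference would need robust/bootstrap SEs)
fit = sm.GLM(y, sm.add_constant(a), family=sm.families.Binomial(),
             freq_weights=w).fit()
print(fit.params)
```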
The manuscript reviews the issues concerning the use of results on pesticide effects from laboratory avian reproduction tests for estimating potential impacts of pesticides on fecundity rates in avian population models.
Brooks, John M; Chapman, Cole G; Schroeder, Mary C
2018-06-01
Patient-centred care requires evidence of treatment effects across many outcomes. Outcomes can be beneficial (e.g. increased survival or cure rates) or detrimental (e.g. adverse events, pain associated with treatment, treatment costs, time required for treatment). Treatment effects may also be heterogeneous across outcomes and across patients. Randomized controlled trials are usually insufficient to supply evidence across outcomes. Observational data analysis is an alternative, with the caveat that the treatments observed are choices. Real-world treatment choice often involves complex assessment of expected effects across the array of outcomes. Failure to account for this complexity when interpreting treatment effect estimates could lead to clinical and policy mistakes. Our objective was to assess the properties of treatment effect estimates based on choice when treatments have heterogeneous effects on both beneficial and detrimental outcomes across patients. Simulation methods were used to highlight the sensitivity of treatment effect estimates to the distributions of treatment effects across patients across outcomes. Scenarios with alternative correlations between beneficial and detrimental treatment effects across patients were used. Regression and instrumental variable estimators were applied to the simulated data for both outcomes. True treatment effect parameters are sensitive to the relationships of treatment effectiveness across outcomes in each study population. In each simulation scenario, treatment effect estimate interpretations for each outcome are aligned with results shown previously in single outcome models, but these estimates vary across simulated populations with the correlations of treatment effects across patients across outcomes. If estimator assumptions are valid, estimates across outcomes can be used to assess the optimality of treatment rates in a study population. However, because true treatment effect parameters are sensitive to correlations of treatment effects across outcomes, decision makers should be cautious about generalizing estimates to other populations.
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.
Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H
2010-02-01
Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group, number of daily milkings and age of cow at calving as covariate (linear and quadratic effect). In addition, residual variances were considered to be heterogeneous with 6 classes of variance. Models were selected based on the residual mean square error, weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
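For readers unfamiliar with the random regression setup, the fixed- and random-effect design columns are Legendre polynomials evaluated on standardized weeks of lactation. A minimal sketch; the polynomial order and week range are assumptions for illustration, not the paper's exact specification:

```python
import numpy as np

def legendre_basis(week, order, wmin=1, wmax=44):
    """Evaluate Legendre polynomials P_0..P_order at weeks standardized to [-1, 1]."""
    t = 2.0 * (np.asarray(week, dtype=float) - wmin) / (wmax - wmin) - 1.0
    # legval with a unit coefficient vector e_k returns the k-th polynomial P_k(t)
    return np.column_stack([
        np.polynomial.legendre.legval(t, np.eye(order + 1)[k])
        for k in range(order + 1)
    ])

Z = legendre_basis(np.arange(1, 45), order=4)   # design columns for a 4th-order fit
print(Z.shape)                                  # (44, 5)
```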
A Portuguese value set for the SF-6D.
Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Brazier, John; Rowen, Donna
2010-08-01
The SF-6D is a preference-based measure of health derived from the SF-36 that can be used for cost-effectiveness analysis using cost-per-quality adjusted life-year analysis. This study seeks to estimate a system of weights for the SF-6D for Portugal and to compare the results with the UK system weights. A sample of 55 health states defined by the SF-6D has been valued by a representative random sample of the Portuguese population, stratified by sex and age (n = 140), using the Standard Gamble (SG). Several models are estimated at both the individual and aggregate levels for predicting health-state valuations. Models with main effects, with interaction effects and with the constant forced to unity are presented. Random effects (RE) models are estimated using generalized least squares (GLS) regressions. Generalized estimation equations (GEE) are used to estimate RE models with the constant forced to unity. Estimations at the individual level were performed using 630 health-state valuations. Alternative functional forms are considered to account for the skewed distribution of health-state valuations. The models are analyzed in terms of their coefficients, overall fit, and their ability to predict the SG values. The RE models estimated using GLS and through GEE produce significant coefficients, which are robust across model specification. However, there are concerns regarding some inconsistent estimates, and so parsimonious consistent models were estimated. There is evidence of underprediction in some states assigned to poor health. The results are consistent with the UK results. The models estimated provide preference-based quality of life weights for the Portuguese population when health status data have been collected using the SF-36. Although the sample was randomly drawn, findings should be treated with caution given the small sample size, even though estimates were obtained at the individual level.
Stenroos, Matti; Hauk, Olaf
2013-01-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
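As context for the sensitivity analysis, a distributed minimum-norm estimate is a Tikhonov-regularized linear inverse of the leadfield. A minimal numpy sketch; the random placeholder leadfield, identity noise covariance, and regularization value stand in for the BEM forward models and whitening used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 64, 500
L = rng.normal(size=(n_sensors, n_sources))   # leadfield (placeholder forward model)
C = np.eye(n_sensors)                         # noise covariance (pre-whitened data)
lam = 0.1                                     # regularization parameter

# MN spatial filters: W = L^T (L L^T + lam^2 C)^(-1)
W = L.T @ np.linalg.solve(L @ L.T + lam**2 * C, np.eye(n_sensors))

b = rng.normal(size=n_sensors)                # one measurement vector
s_hat = W @ b                                 # distributed source estimate
```

Comparing the rows of W across forward models built with different skull conductivities is essentially the spatial-filter comparison the abstract describes.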
2SLS versus 2SRI: Appropriate methods for rare outcomes and/or rare exposures.
Basu, Anirban; Coe, Norma B; Chapman, Cole G
2018-06-01
This study used Monte Carlo simulations to examine the ability of the two-stage least squares (2SLS) estimator and two-stage residual inclusion (2SRI) estimators with varying forms of residuals to estimate the local average and population average treatment effect parameters in models with binary outcome, endogenous binary treatment, and single binary instrument. The rarity of the outcome and the treatment was varied across simulation scenarios. Results showed that 2SLS generated consistent estimates of the local average treatment effects (LATE) and biased estimates of the average treatment effects (ATE) across all scenarios. 2SRI approaches, in general, produced biased estimates of both LATE and ATE under all scenarios. 2SRI using generalized residuals minimized the bias in ATE estimates. Use of 2SLS and 2SRI is illustrated in an empirical application estimating the effects of long-term care insurance on a variety of binary health care utilization outcomes among the near-elderly using the Health and Retirement Study. Copyright © 2018 John Wiley & Sons, Ltd.
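A minimal two-stage least squares sketch for the binary-outcome, binary-treatment, binary-instrument setting described above. The simulated data are illustrative; a real analysis would use a dedicated IV routine so that second-stage standard errors are valid:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 10000
z = rng.binomial(1, 0.5, n)                                # binary instrument
u = rng.normal(size=n)                                     # unobserved confounder
d = rng.binomial(1, 1/(1 + np.exp(-(0.8*z + u))))          # endogenous treatment
y = rng.binomial(1, 1/(1 + np.exp(-(-1 + 0.5*d + u))))     # binary outcome

# Stage 1: treatment on instrument; Stage 2: outcome on fitted treatment.
# Manual two-stage OLS reproduces the 2SLS point estimate, but its reported
# standard errors are wrong; packaged IV estimators correct this.
d_hat = sm.OLS(d, sm.add_constant(z)).fit().fittedvalues
second = sm.OLS(y, sm.add_constant(d_hat)).fit()
print(second.params)   # slope is the LATE under the usual IV assumptions
```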
Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.
Ryan, Andrew M; Burgess, James F; Dimick, Justin B
2015-08-01
To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
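The core DID specification with clustered standard errors can be sketched as follows. The panel, policy timing, and effect size are simulated stand-ins, and the propensity score matching step the authors recommend is omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_units, n_years = 200, 7
df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), n_years),
    "year": np.tile(np.arange(2003, 2003 + n_years), n_units),
})
treated = rng.binomial(1, 0.5, n_units)
df["treated"] = treated[df["unit"]]
df["post"] = (df["year"] >= 2006).astype(int)
df["quality"] = (0.5 * df["treated"] + 0.3 * df["post"]
                 + 0.15 * df["treated"] * df["post"]      # true effect = 0.15
                 + rng.normal(0, 1, len(df)))

# DID interaction term, with standard errors clustered on the unit
m = smf.ols("quality ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]})
print(m.params["treated:post"], m.bse["treated:post"])
```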
Cost-effectiveness of community-based practitioner programmes in Ethiopia, Indonesia and Kenya.
McPake, Barbara; Edoka, Ijeoma; Witter, Sophie; Kielmann, Karina; Taegtmeyer, Miriam; Dieleman, Marjolein; Vaughan, Kelsey; Gama, Elvis; Kok, Maryse; Datiko, Daniel; Otiso, Lillian; Ahmed, Rukhsana; Squires, Neil; Suraratdecha, Chutima; Cometto, Giorgio
2015-09-01
To assess the cost-effectiveness of community-based practitioner programmes in Ethiopia, Indonesia and Kenya. Incremental cost-effectiveness ratios for the three programmes were estimated from a government perspective. Cost data were collected for 2012. Life years gained were estimated based on coverage of reproductive, maternal, neonatal and child health services. For Ethiopia and Kenya, estimates of coverage before and after the implementation of the programme were obtained from empirical studies. For Indonesia, coverage of health service interventions was estimated from routine data. We used the Lives Saved Tool to estimate the number of lives saved from changes in reproductive, maternal, neonatal and child health-service coverage. Gross domestic product per capita was used as the reference willingness-to-pay threshold value. The estimated incremental cost per life year gained was 82 international dollars ($) in Kenya, $999 in Ethiopia and $3396 in Indonesia. The results were most sensitive to uncertainty in the estimates of life-years gained. Based on the results of probabilistic sensitivity analysis, there was greater than 80% certainty that each programme was cost-effective. Community-based approaches are likely to be cost-effective for delivery of some essential health interventions where community-based practitioners operate within an integrated team supported by the health system. Community-based practitioners may be most appropriate in rural poor communities that have limited access to more qualified health professionals. Further research is required to understand which programmatic design features are critical to effectiveness.
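The headline figures are incremental cost-effectiveness ratios, i.e., incremental cost divided by incremental life-years gained. A toy calculation with hypothetical inputs, not the study's data:

```python
# Hypothetical programme inputs (illustrative only, not the study's values)
incremental_cost = 1_000_000        # international dollars (I$)
life_years_gained = 12_200

icer = incremental_cost / life_years_gained
print(f"ICER = I${icer:.0f} per life-year gained")   # ~I$82, a Kenya-like magnitude
```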
Mixed effects versus fixed effects modelling of binary data with inter-subject variability.
Murphy, Valda; Dunne, Adrian
2005-04-01
The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
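The contrast between the Laplace approximation and adaptive Gaussian quadrature concerns how the random-intercept integral in the marginal likelihood is approximated. A minimal (non-adaptive) Gauss-Hermite sketch for one subject's two binary responses; the model form and parameter values are illustrative:

```python
import numpy as np
from scipy.special import expit

nodes, weights = np.polynomial.hermite.hermgauss(15)   # Gauss-Hermite rule

def marginal_loglik(beta, sigma, y_subj, x_subj):
    """Marginal log-likelihood of one subject's binary responses, integrating
    a N(0, sigma^2) random intercept by Gauss-Hermite quadrature:
    integral f(b) phi(b) db ~ (1/sqrt(pi)) * sum_k w_k f(sqrt(2)*sigma*t_k)."""
    b = np.sqrt(2.0) * sigma * nodes                    # change of variables
    p = expit(beta * x_subj[None, :] + b[:, None])      # node-by-observation probs
    lik_nodes = np.prod(np.where(y_subj[None, :] == 1, p, 1 - p), axis=1)
    return np.log(np.sum(weights * lik_nodes) / np.sqrt(np.pi))

print(marginal_loglik(0.5, 1.0, np.array([1, 0]), np.array([0.0, 1.0])))
```

With only two observations per subject, as in the abstract's scenario, the integrand is flat and the Laplace approximation can fail badly in extreme regions, which quadrature with enough nodes avoids.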
Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey
Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.
2014-01-01
We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time of day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time of day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year to year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight and midday counts for all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallards and American black ducks and significant increases in wood ducks and Canada geese, trends that had not been significant for 3 of these 4 species in the prior analysis. We recommend using hierarchical models for analysis of the Atlantic Flyway Breeding Waterfowl Survey.
Investigation of the Specht density estimator
NASA Technical Reports Server (NTRS)
Speed, F. M.; Rydl, L. M.
1971-01-01
The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.
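The Specht estimator is a Parzen-type kernel density estimator, so the smoothing-parameter and sample-size effects studied in the report can be reproduced in a few lines of modern code. This Gaussian-kernel sketch is an assumption-laden stand-in for the original IBM 360/44 implementation:

```python
import numpy as np

def specht_density(x, sample, sigma):
    """Parzen-type Gaussian kernel density estimate at points x,
    with smoothing parameter sigma (the quantity varied in the report)."""
    x = np.asarray(x)[:, None]
    k = np.exp(-0.5 * ((x - sample[None, :]) / sigma) ** 2)
    return k.sum(axis=1) / (len(sample) * sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
sample = rng.normal(size=200)                 # sample from N(0, 1)
grid = np.linspace(-4, 4, 9)
for sigma in (0.05, 0.3, 1.0):                # smoothing parameter sweep
    print(sigma, specht_density(grid, sample, sigma).round(3))
```

Too small a sigma reproduces sampling noise; too large a sigma oversmooths, which is the bias-variance behavior the report examines for normal and uniform distributions.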
Effect of Visual Field Presentation on Action Planning (Estimating Reach) in Children
ERIC Educational Resources Information Center
Gabbard, Carl; Cordova, Alberto
2012-01-01
In this article, the authors examined the effects of target information presented in different visual fields (lower, upper, central) on estimates of reach via use of motor imagery in children (5-11 years old) and young adults. Results indicated an advantage for estimating reach movements for targets placed in lower visual field (LoVF), with all…
Lin, Feng; Lasry, Arielle; Sansom, Stephanie L; Wolitski, Richard J
2013-01-01
In the wake of a national economic downturn, the state of California, in 2009-2010, implemented budget cuts that eliminated state funding of HIV prevention and testing. To mitigate the effect of these cuts, remaining federal funds were redirected. This analysis estimates the impact of these budget cuts and reallocation of resources on HIV transmission and associated HIV treatment costs. We estimated the effect of the budget cuts and reallocation for California county health departments (excluding Los Angeles and San Francisco) on the number of individuals living with or at risk for HIV who received HIV prevention services. We used a Bernoulli model to estimate the number of new infections that would occur each year as a result of the changes, and assigned lifetime treatment costs to those new infections. We explored the effect of redirecting federal funds to more cost-effective programs, as well as the potential effect of allocating funds proportionately by transmission category. We estimated that cutting HIV prevention resulted in 55 new infections that were associated with $20 million in lifetime treatment costs. The redirection of federal funds to more cost-effective programs averted 15 HIV infections. If HIV prevention funding were allocated proportionately to transmission categories, we estimated that HIV infections could be reduced below the number that occurred annually before the state budget cuts. Reducing funding for HIV prevention may result in short-term savings at the expense of additional HIV infections and increased HIV treatment costs. Existing HIV prevention funds would likely have a greater impact on the epidemic if they were allocated to the more cost-effective programs and the populations most likely to acquire and transmit the infection.
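A Bernoulli transmission model of the general kind described combines a per-contact transmission probability with a number of contacts; prevention programs enter as a reduction in that probability. All parameter values below are hypothetical placeholders, not the study's inputs:

```python
def annual_infections(n_at_risk, n_contacts, p_contact, prevention_effect):
    """Expected new infections under a simple Bernoulli-process model:
    per-person risk = 1 - (1 - effective per-contact probability)^n_contacts."""
    risk = 1.0 - (1.0 - p_contact * (1.0 - prevention_effect)) ** n_contacts
    return n_at_risk * risk

# Illustrative funded vs. defunded comparison (all inputs hypothetical)
with_prevention = annual_infections(50_000, 60, 0.001, prevention_effect=0.3)
without = annual_infections(50_000, 60, 0.001, prevention_effect=0.0)
extra_infections = without - with_prevention
lifetime_costs = extra_infections * 380_000    # hypothetical lifetime cost per case
print(round(extra_infections), round(lifetime_costs))
```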
McNew, Lance B.; Handel, Colleen M.
2015-01-01
Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods ignore spatial and temporal variation in species detection probabilities related to survey or site conditions that may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multi-species occupancy models when estimating species richness to evaluate whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multispecies occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real dataset of bird observations in northwestern Alaska, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness. Overall, our results indicate that neglecting the effects of site covariates on species detection probabilities may lead to significant bias in estimation of species richness, as well as the inferred relationships between community size and environmental covariates.
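For reference, the incidence-based (single-visit) jackknife and Chao estimators discussed above have simple closed forms; a sketch, assuming a sites-by-species detection matrix:

```python
import numpy as np

def richness_estimates(detections):
    """detections: sites x species 0/1 incidence matrix from single visits."""
    det = np.asarray(detections)
    m = det.shape[0]                        # number of sites
    freq = det.sum(axis=0)                  # sites at which each species was seen
    s_obs = int(np.sum(freq > 0))           # raw species count (biased low)
    q1 = int(np.sum(freq == 1))             # "uniques"
    q2 = int(np.sum(freq == 2))             # "duplicates"
    jack1 = s_obs + q1 * (m - 1) / m        # first-order jackknife
    chao2 = s_obs + ((m - 1) / m) * (q1 * (q1 - 1)) / (2.0 * (q2 + 1))  # bias-corrected Chao2
    return s_obs, jack1, chao2
```

As the abstract notes, none of these closed forms can absorb site covariates on detection, which is what the multi-species occupancy models provide.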
Contrasting Causal Effects of Workplace Interventions.
Izano, Monika A; Brown, Daniel M; Neophytou, Andreas M; Garcia, Erika; Eisen, Ellen A
2018-07-01
Occupational exposure guidelines are ideally based on estimated effects of static interventions that assign constant exposure over a working lifetime. Static effects are difficult to estimate when follow-up extends beyond employment because their identifiability requires additional assumptions. Effects of dynamic interventions that assign exposure while at work, allowing subjects to leave and become unexposed thereafter, are more easily identifiable but result in different estimates. Given the practical implications of exposure limits, we explored the drivers of the differences between static and dynamic interventions in a simulation study where workers could terminate employment because of an intermediate adverse health event that functions as a time-varying confounder. The two effect estimates became more similar with increasing strength of the health event and outcome relationship and with increasing time between health event and employment termination. Estimates were most dissimilar when the intermediate health event occurred early in employment, providing an effective screening mechanism.
Observed galaxy number counts on the lightcone up to second order: I. Main result
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Maartens, Roy; Clarkson, Chris, E-mail: daniele.bertacca@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: chris.clarkson@gmail.com
2014-09-01
We present the galaxy number overdensity up to second order in redshift space on cosmological scales for a concordance model. The result contains all general relativistic effects up to second order that arise from observing on the past light cone, including all redshift effects, lensing distortions from convergence and shear, and contributions from velocities, Sachs-Wolfe, integrated SW and time-delay terms. This result will be important for accurate calculation of the bias on estimates of non-Gaussianity and on precision parameter estimates, introduced by nonlinear projection effects.
Park, Tae-Ryong; Brooks, John M; Chrischilles, Elizabeth A; Bergus, George
2008-01-01
Contrast methods to assess the health effects of a treatment rate change when treatment benefits are heterogeneous across patients. Antibiotic prescribing for children with otitis media (OM) in Iowa Medicaid is the empirical example. Instrumental variable (IV) and linear probability model (LPM) are used to estimate the effect of antibiotic treatments on cure probabilities for children with OM in Iowa Medicaid. Local area physician supply per capita is the instrument in the IV models. Estimates are contrasted in terms of their ability to make inferences for patients whose treatment choices may be affected by a change in population treatment rates. The instrument was positively related to the probability of being prescribed an antibiotic. LPM estimates showed a positive effect of antibiotics on OM patient cure probability while IV estimates showed no relationship between antibiotics and patient cure probability. Linear probability model estimation yields the average effects of the treatment on patients that were treated. IV estimation yields the average effects for patients whose treatment choices were affected by the instrument. As antibiotic treatment effects are heterogeneous across OM patients, our estimates from these approaches are aligned with clinical evidence and theory. The average estimate for treated patients (higher severity) from the LPM model is greater than estimates for patients whose treatment choices are affected by the instrument (lower severity) from the IV models. Based on our IV estimates it appears that lowering antibiotic use in OM patients in Iowa Medicaid did not result in lost cures.
The Demand for Cigarettes in Tanzania and Implications for Tobacco Taxation Policy
Kidane, Asmerom; Mduma, John; Naho, Alexis; Ngeh, Ernest Tingum; Hu, Teh-wei
2016-01-01
The study attempts to estimate the demand for cigarettes in Tanzania and presents simulation results on the effect of the cigarette excise tax on smoking participation, government revenue, and related topics. After briefly summarizing the magnitude and spread of cigarette consumption in the country, the paper reviews some empirical estimates from African and other countries. The 2008 Tanzanian household budget survey was used to estimate the demand for cigarettes in Tanzania. The descriptive statistics suggest that the smoking prevalence for Tanzania is 15.35 percent with low variability across expenditure (income) groups. Smoking intensity and per capita consumption were estimated at 7.08 cigarettes and 1.33 cigarettes, respectively, a relatively low value. A two-part demand equation model was used to estimate various elasticities. For the overall equation, the price elasticities of smoking participation, smoking intensity, and total elasticity were estimated at −0.879, −0.853, and −1.732, respectively. Compared to similar results in other developing countries, the estimates appear quite high. When estimated by expenditure (income) groups, the magnitude of the elasticity appears higher among high expenditure groups than among low expenditure groups. Two simulation exercises were undertaken. First, the effect of different excise rates on smoking participation rate, cigarette consumption, tax revenue, and related responses was estimated and highlighted. Second, the same exercise was undertaken to determine the effect of a given increase in the cigarette excise tax on various expenditure groups. The overall results suggest that an increase in the excise tax on cigarettes in Tanzania would reduce cigarette consumption and increase government tax revenue. PMID:27358905
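A two-part demand model couples a participation equation with a conditional-consumption equation, and the total price elasticity is (approximately) the sum of the two parts. A hedged sketch on simulated household data; the functional forms, variable names, and parameter values are assumptions, not the study's specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 4000
df = pd.DataFrame({"lnprice": rng.normal(0.0, 0.2, n),
                   "lninc": rng.normal(8.0, 0.5, n)})
latent = -0.9 * df.lnprice + 0.2 * (df.lninc - 8.0) + rng.normal(size=n)
df["smokes"] = (latent > 0.8).astype(int)
df["lncigs"] = np.where(df.smokes == 1,
                        2.0 - 0.85 * df.lnprice + rng.normal(0.0, 0.3, n),
                        np.nan)

# Part 1: smoking participation; Part 2: consumption among smokers
part = smf.probit("smokes ~ lnprice + lninc", data=df).fit(disp=0)
cond = smf.ols("lncigs ~ lnprice + lninc", data=df[df.smokes == 1]).fit()

# Participation elasticity: average marginal effect of log-price on P(smoke),
# scaled by prevalence; total elasticity adds the conditional elasticity.
ame_lnprice = part.get_margeff().margeff[0]    # lnprice is the first regressor
e_participation = ame_lnprice / df.smokes.mean()
e_total = e_participation + cond.params["lnprice"]
print(e_participation, cond.params["lnprice"], e_total)
```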
Effect of Body Composition Methodology on Heritability Estimation of Body Fatness
Elder, Sonya J.; Roberts, Susan B.; McCrory, Megan A.; Das, Sai Krupa; Fuss, Paul J.; Pittas, Anastassios G.; Greenberg, Andrew S.; Heymsfield, Steven B.; Dawson-Hughes, Bess; Bouchard, Thomas J.; Saltzman, Edward; Neale, Michael C.
2014-01-01
Heritability estimates of human body fatness vary widely and the contribution of body composition methodology to this variability is unknown. The effect of body composition methodology on estimations of genetic and environmental contributions to body fatness variation was examined in 78 adult male and female monozygotic twin pairs reared apart or together. Body composition was assessed by six methods – body mass index (BMI), dual energy x-ray absorptiometry (DXA), underwater weighing (UWW), total body water (TBW), bioelectric impedance (BIA), and skinfold thickness. Body fatness was expressed as percent body fat, fat mass, and fat mass/height² to assess the effect of body fatness expression on heritability estimates. Model-fitting multivariate analyses were used to assess the genetic and environmental components of variance. Mean BMI was 24.5 kg/m² (range of 17.8–43.4 kg/m²). There was a significant effect of body composition methodology (p<0.001) on heritability estimates, with UWW giving the highest estimate (69%) and BIA giving the lowest estimate (47%) for fat mass/height². Expression of body fatness as percent body fat resulted in significantly higher heritability estimates (on average 10.3% higher) compared to expression as fat mass/height² (p=0.015). DXA and TBW methods expressing body fatness as fat mass/height² gave the least biased heritability assessments, based on the small contribution of specific genetic factors to their genetic variance. A model combining DXA and TBW methods resulted in a relatively low FM/ht² heritability estimate of 60%, and significant contributions of common and unique environmental factors (22% and 18%, respectively). The body fatness heritability estimate of 60% indicates a smaller contribution of genetic variance to total variance than many previous studies using less powerful research designs have indicated. The results also highlight the importance of environmental factors and possibly genotype by environmental interactions in the etiology of weight gain and the obesity epidemic. PMID:25067962
NASA Astrophysics Data System (ADS)
Dion, Lukas; Gaboury, Simon; Picard, Frédéric; Kiss, Laszlo I.; Poncsak, Sandor; Morais, Nadia
2018-04-01
Recent investigations of aluminum electrolysis cells demonstrated limitations of the commonly used tier-3 slope methodology for estimating perfluorocarbon (PFC) emissions from high-voltage anode effects (HVAEs). These limitations are greater for smelters with a reduced HVAE frequency. A novel approach is proposed that estimates the specific emissions resulting from individual HVAEs using a tier-2 model, instead of estimating monthly emissions for pot lines with the slope methodology. This approach considers the nonlinear behavior of PFC emissions as a function of the polarized anode effect duration and also integrates the change in behavior attributed to cell productivity. Validation was performed by comparing the new approach and the slope methodology against measurement campaigns from different smelters. The results demonstrate good agreement between measured and estimated emissions and more accurately reflect individual HVAE dynamics occurring over time. Finally, the possible impact of this approach for the aluminum industry is discussed.
The estimation of measurement results using statistical methods
NASA Astrophysics Data System (ADS)
Velychko, O.; Gordiyenko, T.
2015-02-01
A number of international standards and guides describe statistical methods that apply to the management, control, and improvement of processes for the purpose of analyzing technical measurement results. An analysis of these international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results were constructed.
Doubly robust matching estimators for high dimensional confounding adjustment.
Antonelli, Joseph; Cefalu, Matthew; Palmer, Nathan; Agniel, Denis
2018-05-11
Valid estimation of treatment effects from observational data requires proper control of confounding. If the number of covariates is large relative to the number of observations, then controlling for all available covariates is infeasible. In cases where a sparsity condition holds, variable selection or penalization can reduce the dimension of the covariate space in a manner that allows for valid estimation of treatment effects. In this article, we propose matching on both the estimated propensity score and the estimated prognostic scores when the number of covariates is large relative to the number of observations. We derive asymptotic results for the matching estimator and show that it is doubly robust in the sense that only one of the two score models need be correct to obtain a consistent estimator. We show via simulation its effectiveness in controlling for confounding and highlight its potential to address nonlinear confounding. Finally, we apply the proposed procedure to analyze the effect of gender on prescription opioid use using insurance claims data. © 2018, The International Biometric Society.
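A simplified version of matching on both scores (not the authors' estimator, which comes with derived asymptotic theory): fit a penalized propensity model and a prognostic model, then 1:1 nearest-neighbor match treated subjects to controls in the two-dimensional score space. Data, penalty choice, and the unscaled matching metric are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
n, p = 1000, 50
X = rng.normal(size=(n, p))                         # high-dimensional covariates
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # treatment assignment
y = X[:, 0] + 0.5 * t + rng.normal(size=n)          # outcome (true effect 0.5)

# Score 1: L1-penalized propensity score. Score 2: prognostic score fit on controls.
ps = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, t)
prog = LinearRegression().fit(X[t == 0], y[t == 0])
scores = np.column_stack([ps.predict_proba(X)[:, 1], prog.predict(X)])

# 1:1 nearest-neighbor matching of treated to controls on both scores
nn = NearestNeighbors(n_neighbors=1).fit(scores[t == 0])
_, idx = nn.kneighbors(scores[t == 1])
att = np.mean(y[t == 1] - y[t == 0][idx.ravel()])
print(att)
```

The double robustness intuition: a match that is close on either correctly specified score balances what matters for the outcome, so only one of the two models needs to be right.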
Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Wahi, A. K.
2003-12-01
Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
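The basic three-point estimator itself is just a plane fit through three head measurements; a sketch with made-up well coordinates and heads:

```python
import numpy as np

def gradient_from_three_points(xy, heads):
    """Fit the plane h = a + b*x + c*y through three head measurements;
    the hydraulic gradient is -(b, c)."""
    A = np.column_stack([np.ones(3), xy[:, 0], xy[:, 1]])
    a, b, c = np.linalg.solve(A, heads)
    grad = -np.array([b, c])
    magnitude = np.hypot(*grad)
    orientation = np.degrees(np.arctan2(grad[1], grad[0]))
    return magnitude, orientation

xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])   # well locations (m)
heads = np.array([10.0, 9.8, 9.9])                        # measured heads (m)
print(gradient_from_three_points(xy, heads))
```

Adding noise to the heads and re-running for triangles of different base-to-height ratios reproduces, in miniature, the Monte Carlo design described above.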
Gilardelli, Carlo; Orlando, Francesca; Movedi, Ermes; Confalonieri, Roberto
2018-03-29
Digital hemispherical photography (DHP) has been widely used to estimate leaf area index (LAI) in forestry. Despite the advancement in the processing of hemispherical images with dedicated tools, several steps are still manual and thus easily affected by the user's experience and sensibility. The purpose of this study was to quantify the impact of the user's subjectivity on DHP LAI estimates for broad-leaved woody canopies using the software Can-Eye. Following the ISO 5725 protocol, we quantified the repeatability and reproducibility of the method, thus defining its precision for a wide range of broad-leaved canopies markedly differing in their structure. To get a complete evaluation of the method accuracy, we also quantified its trueness using artificial canopy images with known canopy cover. Moreover, the effect of the segmentation method was analysed. The best results for precision (restrained limits of repeatability and reproducibility) were obtained for high LAI values (>5), with limits corresponding to a variation of 22% in the estimated LAI values. Poorer results were obtained for medium and low LAI values, with a variation of the estimated LAI values that exceeded 40%. Regardless of the LAI range explored, satisfactory results were achieved for trees in row-structured plantations (limits almost equal to 30% of the estimated LAI). Satisfactory results were achieved for trueness, regardless of the canopy structure. The paired t-test revealed that the effect of the segmentation method on LAI estimates was significant. Despite a non-negligible user effect, the accuracy metrics for DHP are consistent with those determined for other indirect methods for LAI estimates, confirming the overall reliability of DHP in broad-leaved woody canopies.
How Much Can Remotely-Sensed Natural Resource Inventories Benefit from Finer Spatial Resolutions?
NASA Astrophysics Data System (ADS)
Hou, Z.; Xu, Q.; McRoberts, R. E.; Ståhl, G.; Greenberg, J. A.
2017-12-01
For remote sensing facilitated natural resource inventories, the effects of spatial resolution in the form of pixel size and the effects of subpixel information on estimates of population parameters were evaluated by comparing results obtained using Landsat 8 and RapidEye auxiliary imagery. The study area was in Burkina Faso, and the variable of interest was the stem volume (m³/ha), convertible to woodland aboveground biomass. A sample consisting of 160 field plots was selected and measured from the population following a two-stage sampling design. Models were fit using weighted least squares; the population mean, μ, and the variance of the estimator of the population mean, Var(μ̂), were estimated in two inferential frameworks, model-based and model-assisted, and compared; for each framework, Var(μ̂) was estimated both analytically and empirically. Empirical variances were estimated with bootstrapping that takes clustering effects into account when resampling. The primary results were twofold. First, for the effects of spatial resolution and subpixel information, four conclusions are relevant: (1) finer spatial resolution imagery indeed contributes to greater precision for estimators of population parameters, but this increase is slight, at a maximum rate of 20%, considering that RapidEye data are 36 times finer in resolution than Landsat 8 data; (2) subpixel information on texture is marginally beneficial when it comes to making inference for populations of large areas; (3) cost-effectiveness is more favorable for the free-of-charge Landsat 8 imagery than RapidEye imagery; and (4) for a given plot size, candidate remote sensing auxiliary datasets are more cost-effective when their spatial resolutions are similar to the plot size than with much finer alternatives. Second, for the comparison between estimators, three conclusions are relevant: (1) model-based variance estimates are consistent with each other and about half as large as stabilized model-assisted estimates, suggesting superior effectiveness of model-based inference over model-assisted inference; (2) bootstrapping is an effective alternative to analytical variance estimators; and (3) prediction accuracy expressed by RMSE is useful for screening candidate models to be used for population inferences.
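The empirical variance estimation described, a bootstrap that respects the two-stage design, can be sketched by resampling first-stage clusters rather than individual plots; the cluster summaries below are placeholders:

```python
import numpy as np

def cluster_bootstrap_var(cluster_means, n_boot=2000, seed=8):
    """Variance of the sample mean by resampling first-stage clusters
    with replacement, so within-cluster dependence is preserved."""
    rng = np.random.default_rng(seed)
    m = len(cluster_means)
    boots = [np.mean(rng.choice(cluster_means, size=m, replace=True))
             for _ in range(n_boot)]
    return np.var(boots, ddof=1)

# Placeholder first-stage cluster means of plot-level stem volume (m³/ha)
cluster_means = np.random.default_rng(0).normal(50.0, 12.0, size=40)
print(cluster_bootstrap_var(cluster_means))
```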
van Assen, Marcel A L M; van Aert, Robbie C M; Nuijten, Michèle B; Wicherts, Jelte M
2014-01-01
De Winter and Happee examined whether science based on selective publishing of significant results may be effective in accurate estimation of population effects, and whether this is even more effective than a science in which all results are published (i.e., a science without publication bias). Based on their simulation study they concluded that "selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective" (p.4). Using their scenario with a small to medium population effect size, we show that publishing everything is more effective for the scientific collective than selective publishing of significant results. Additionally, we examined a scenario with a null effect, which provides a more dramatic illustration of the superiority of publishing everything over selective publishing. Publishing everything is more effective than only reporting significant outcomes.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
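For orientation, the standard Fourier-domain baseline that partial deconvolution builds on is Wiener deconvolution; the paper's contribution, masking unreliable Fourier entries of the estimated kernel, is not reproduced in this sketch:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Classic Wiener deconvolution; nsr is an assumed noise-to-signal ratio.
    Partial deconvolution would additionally down-weight Fourier entries of an
    inaccurately estimated kernel that are judged unreliable."""
    H = np.fft.fft2(kernel, s=blurred.shape)     # kernel zero-padded to image size
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```

When the kernel itself contains estimation error, exactly the ringing artifacts described above appear, which motivates restricting the inversion to reliable Fourier entries.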
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to adjust for baseline characteristics in order to increase the precision of estimated average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as existing efficient adjusted estimators when separate treatment-specific working regression models are correctly specified, and remains at least as efficient for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study comparing the finite-sample performance of various methods, along with results from the analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
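As a rough illustration of decoupling treatment effect evaluation from outcome regression modeling, the sketch below uses a generic augmented (AIPW-style) estimator with treatment-specific linear working regressions; it is not Zhang's empirical likelihood estimator, and all names and data are invented.

```python
import numpy as np

def adjusted_ate(y, t, X):
    """Covariate-adjusted ATE with treatment-specific working regressions.

    Fit a linear working model within each arm, predict both potential
    outcomes for every subject, and combine with the randomization
    probability. Randomization keeps the estimator consistent even if
    the working models are misspecified.
    """
    X1 = np.column_stack([np.ones(len(y)), X])           # add intercept
    def arm_predictions(mask):
        beta, *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
        return X1 @ beta                                  # predictions for everyone
    mu1, mu0 = arm_predictions(t == 1), arm_predictions(t == 0)
    p = t.mean()                                          # randomization probability
    return np.mean(t * (y - mu1) / p - (1 - t) * (y - mu0) / (1 - p) + mu1 - mu0)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
t = rng.binomial(1, 0.5, 500)
y = 1.0 * t + X @ np.array([0.8, -0.5]) + rng.normal(size=500)
print(adjusted_ate(y, t, X))   # close to the true effect of 1.0
```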
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables (EIV) regression allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by means of Monte Carlo simulations, results obtained using EIV regression models with those obtained by standard procedures. Results of the simulations show that ordinary least squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS for estimating the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
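A minimal sketch of the underlying idea: turn reported measurement-error variances into a reliability coefficient and divide out the attenuation of the naive OLS slope (toy data, not KXRF output).

```python
import numpy as np

def eiv_slope(x_obs, y, err_var):
    """Correct an OLS slope for measurement error in the predictor.

    err_var: measurement-error variance for each x observation, e.g. the
    squared uncertainties an instrument reports (values here are invented).
    reliability = var(true x) / var(observed x); dividing the naive OLS
    slope by it undoes the attenuation bias.
    """
    var_obs = np.var(x_obs, ddof=1)
    reliability = (var_obs - np.mean(err_var)) / var_obs
    b_ols = np.cov(x_obs, y, ddof=1)[0, 1] / var_obs
    return b_ols / reliability

rng = np.random.default_rng(0)
x_true = rng.normal(0, 1, 2000)
err_var = np.full(2000, 0.5)                     # known error variance
x_obs = x_true + rng.normal(0, np.sqrt(0.5), 2000)
y = 2.0 * x_true + rng.normal(0, 1, 2000)
print(eiv_slope(x_obs, y, err_var))              # ~2.0; naive OLS gives ~1.33
```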
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buhl, T.E.; Hansen, W.R.
1984-05-01
Estimators for calculating the risk of cancer and genetic disorders induced by exposure to ionizing radiation have been recommended by the US National Academy of Sciences Committee on the Biological Effects of Ionizing Radiations, the UN Scientific Committee on the Effects of Atomic Radiation, and the International Committee on Radiological Protection. These groups have also considered the risks of somatic effects other than cancer. The US National Council on Radiation Protection and Measurements has discussed risk estimate procedures for radiation-induced health effects. The recommendations of these national and international advisory committees are summarized and compared in this report. Based on this review, two procedures for risk estimation are presented for use in radiological assessments performed by the US Department of Energy under the National Environmental Policy Act of 1969 (NEPA). In the first procedure, age- and sex-averaged risk estimators calculated with US average demographic statistics would be used with estimates of radiation dose to calculate the projected risk of cancer and genetic disorders that would result from the operation being reviewed under NEPA. If more site-specific risk estimators are needed, and the demographic information is available, a second procedure is described that would involve direct calculation of the risk estimators using recommended risk-rate factors. The computer program REPCAL has been written to perform this calculation and is described in this report. 25 references, 16 tables.
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates.
Kostich, Mitchell S; Flick, Robert W; Batt, Angela L; Mash, Heath E; Boone, J Scott; Furlong, Edward T; Kolpin, Dana W; Glassmeyer, Susan T
2017-02-01
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of one-tenth of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting these analytes merit more detailed characterization. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Tew, W. L.
2008-02-01
The sensitivities of melting temperatures to isotopic variations in monatomic and diatomic atmospheric gases are estimated using both theoretical and semi-empirical methods. The current state of knowledge of the vapor-pressure isotope effects (VPIE) and triple-point isotope effects (TPIE) is briefly summarized for the noble gases (except He) and for selected diatomic molecules including oxygen. An approximate expression is derived to estimate the relative shift in the melting temperature with isotopic substitution. In general, the magnitude of the effects diminishes with increasing molecular mass and increasing temperature. Knowledge of the VPIE, molar volumes, and heat of fusion is sufficient to estimate the temperature shift or isotopic sensitivity coefficient via the derived expression. The usefulness of this approach is demonstrated in the estimation of isotopic sensitivities and uncertainties for the triple points of xenon and molecular oxygen, for which few documented estimates were previously available. The calculated sensitivities from this study are considerably higher than previous estimates for Xe, and lower than other estimates in the case of oxygen. In both cases, the predicted sensitivities are small and the resulting variations in triple-point temperatures due to mass fractionation effects are less than 20 μK.
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.
2011-12-01
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
ERIC Educational Resources Information Center
Olneck, Michael R.
Sibling data drawn from the Kalamazoo Brothers sample are used in this report to assess the adequacy of conventional sociological variables for measuring family background, to estimate the overall effects of family background on test scores, education, occupational status, and earning, and to control family background when estimating the effects…
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques that can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. Eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
The effects of clutter-rejection filtering on estimating weather spectrum parameters
NASA Technical Reports Server (NTRS)
Davis, W. T.
1989-01-01
The effects of clutter-rejection filtering on estimating the weather parameters from pulse Doppler radar measurement data are investigated. The pulse-pair method of estimating the spectrum mean and spectrum width of the weather is emphasized. The loss of sensitivity, a measure of the signal power lost due to filtering, is also considered. A flexible software tool developed to investigate these effects is described. It allows for simulated weather radar data, in which the user specifies an underlying truncated Gaussian spectrum, as well as for externally generated data which may be real or simulated. The filter may be implemented in either the time or the frequency domain. The software tool is validated by comparing unfiltered spectrum mean and width estimates to their true values, and by reproducing previously published results. The effects on the weather parameter estimates using simulated weather-only data are evaluated for five filters: an ideal filter, two infinite impulse response filters, and two finite impulse response filters. Results considering external data, consisting of weather and clutter data, are evaluated on a range cell by range cell basis. Finally, it is shown theoretically and by computer simulation that a linear phase response is not required for a clutter rejection filter preceding pulse-pair parameter estimation.
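For context, the pulse-pair method estimates the spectrum mean and width from the lag-1 autocorrelation of the complex I/Q signal; a minimal sketch using the textbook Gaussian-spectrum formulas (interface and sign convention are assumptions, not taken from the report).

```python
import numpy as np

def pulse_pair(iq, prt, wavelength):
    """Pulse-pair estimates of Doppler spectrum mean velocity and width.

    iq: complex I/Q samples from one range cell; prt: pulse repetition
    time (s); wavelength: radar wavelength (m).
    """
    r0 = np.mean(np.abs(iq) ** 2)                    # lag-0 power
    r1 = np.mean(np.conj(iq[:-1]) * iq[1:])          # lag-1 autocorrelation
    v_mean = -wavelength / (4 * np.pi * prt) * np.angle(r1)
    ratio = np.clip(np.abs(r1) / r0, 1e-12, 1.0)
    width = wavelength / (2 * np.sqrt(2) * np.pi * prt) * np.sqrt(np.log(1.0 / ratio))
    return v_mean, width

rng = np.random.default_rng(0)
t = np.arange(64) * 1e-3
iq = np.exp(2j * np.pi * 100 * t) + 0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64))
print(pulse_pair(iq, prt=1e-3, wavelength=0.1))      # mean velocity near -5 m/s
```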
Chapman, Cole G; Brooks, John M
2016-12-01
To examine the settings of simulation evidence supporting use of nonlinear two-stage residual inclusion (2SRI) instrumental variable (IV) methods for estimating average treatment effects (ATE) using observational data and investigate potential bias of 2SRI across alternative scenarios of essential heterogeneity and uniqueness of marginal patients. Potential bias of linear and nonlinear IV methods for ATE and local average treatment effects (LATE) is assessed using simulation models with a binary outcome and binary endogenous treatment across settings varying by the relationship between treatment effectiveness and treatment choice. Results show that nonlinear 2SRI models produce estimates of ATE and LATE that are substantially biased when the relationships between treatment and outcome for marginal patients are unique from relationships for the full population. Bias of linear IV estimates for LATE was low across all scenarios. Researchers are increasingly opting for nonlinear 2SRI to estimate treatment effects in models with binary and otherwise inherently nonlinear dependent variables, believing that it produces generally unbiased and consistent estimates. This research shows that positive properties of nonlinear 2SRI rely on assumptions about the relationships between treatment effect heterogeneity and choice. © Health Research and Educational Trust.
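A bare-bones sketch of the two-stage residual inclusion procedure the abstract evaluates, on simulated data; raw first-stage residuals are used for simplicity, where applied work often uses generalized residuals.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
z = rng.binomial(1, 0.5, n)                       # instrument
u = rng.normal(0, 1, n)                           # unobserved confounder
t = (0.8 * z + u + rng.normal(0, 1, n) > 0.5).astype(float)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * t + u))))

# Stage 1: model treatment on the instrument and keep the residuals
X1 = sm.add_constant(z)
stage1 = sm.Probit(t, X1).fit(disp=0)
resid = t - stage1.predict(X1)

# Stage 2: include the stage-1 residual as an extra regressor to absorb
# the endogenous component of treatment choice
X2 = sm.add_constant(np.column_stack([t, resid]))
stage2 = sm.Probit(y, X2).fit(disp=0)
print(stage2.params[1])    # index-scale treatment coefficient from 2SRI
```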
Pham, Ba'; Klassen, Terry P; Lawson, Margaret L; Moher, David
2005-08-01
To assess whether language of publication restrictions impact the estimates of an intervention's effectiveness, whether such impact is similar for conventional medicine and complementary medicine interventions, and whether the results are influenced by publication bias and statistical heterogeneity. We set out to examine the extent to which including reports of randomized controlled trials (RCTs) in languages other than English (LOE) influences the results of systematic reviews, using a broad dataset of 42 language-inclusive systematic reviews, involving 662 RCTs, including both conventional medicine (CM) and complementary and alternative medicine (CAM) interventions. For CM interventions, language-restricted systematic reviews, compared with language-inclusive ones, did not introduce biased results in terms of estimates of intervention effectiveness (random-effects ratio of odds ratios [ROR] = 1.02; 95% CI = 0.83-1.26). For CAM interventions, however, language-restricted systematic reviews resulted in a 63% smaller protective effect estimate than language-inclusive reviews (random-effects ROR = 1.63; 95% CI = 1.03-2.60). Language restrictions do not change the results of CM systematic reviews but do substantially alter the results of CAM systematic reviews. These findings are robust even after sensitivity analyses, and do not appear to be influenced by statistical heterogeneity or publication bias.
ERIC Educational Resources Information Center
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2014-01-01
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
ERIC Educational Resources Information Center
Igra, Amnon
1980-01-01
Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)
Revisiting the Table 2 fallacy: A motivating example examining preeclampsia and preterm birth.
Bandoli, Gretchen; Palmsten, Kristin; Chambers, Christina D; Jelliffe-Pawlowski, Laura L; Baer, Rebecca J; Thompson, Caroline A
2018-05-21
A "Table 2 Fallacy," as coined by Westreich and Greenland, reports multiple adjusted effect estimates from a single model. This practice, which remains common in the published literature, can be problematic when different types of effect estimates are presented together in a single table. The purpose of this paper is to quantitatively illustrate this potential for misinterpretation with an example estimating the effects of preeclampsia on preterm birth (PTB). We analysed a retrospective population-based cohort of 2 963 888 singleton births in California between 2007 and 2012. We performed a modified Poisson regression to calculate the total effect of preeclampsia on the risk of PTB, adjusting for previous preterm birth, pregnancy alcohol abuse, maternal education, and maternal socio-demographic factors (Model 1). In subsequent models, we report the total effects of previous preterm birth, alcohol abuse, and education on the risk of PTB, comparing and contrasting the controlled direct effects, total effects, and confounded effect estimates resulting from Model 1. The effect estimate for previous preterm birth (a controlled direct effect in Model 1) increased 10% when estimated as a total effect. The risk ratio for alcohol abuse, biased due to an uncontrolled confounder in Model 1, was reduced by 23% when adjusted for drug abuse. The risk ratio for maternal education, solely a predictor of the outcome, was essentially unchanged. Reporting multiple effect estimates from a single model may lead to misinterpretation and lack of reproducibility. This example highlights the need for careful consideration of the types of effects estimated in statistical models. © 2018 John Wiley & Sons Ltd.
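For reference, a modified Poisson regression of the kind described is a Poisson GLM on a binary outcome with robust standard errors, so that exponentiated coefficients are risk ratios; the data and variable names below are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 10000
df = pd.DataFrame({
    "preeclampsia": rng.binomial(1, 0.05, n),
    "prior_ptb": rng.binomial(1, 0.08, n),
})
p = 1 / (1 + np.exp(-(-2.5 + 1.2 * df.preeclampsia + 0.9 * df.prior_ptb)))
df["ptb"] = rng.binomial(1, p.to_numpy())

# Modified Poisson: Poisson GLM on the binary outcome, HC0 robust errors
X = sm.add_constant(df[["preeclampsia", "prior_ptb"]])
fit = sm.GLM(df.ptb, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))     # risk ratios for each covariate
```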
Estimating Hydraulic Parameters When Poroelastic Effects Are Significant
Berg, S.J.; Hsieh, P.A.; Illman, W.A.
2011-01-01
For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahbaee, Pooyan, E-mail: psahbae@ncsu.edu; Segars, W. Paul; Samei, Ehsan
2014-07-15
Purpose: This study aimed to provide a comprehensive patient-specific organ dose estimation across a multiplicity of computed tomography (CT) examination protocols. Methods: A validated Monte Carlo program was employed to model a common CT system (LightSpeed VCT, GE Healthcare). The organ and effective doses were estimated from 13 commonly used body and neurological CT examinations. The dose estimation was performed on 58 adult computational extended cardiac-torso phantoms (35 male, 23 female, mean age 51.5 years, mean weight 80.2 kg). The organ dose normalized by CTDIvol (h factor) and the effective dose normalized by the dose-length product (DLP) (k factor) were calculated from the results. A mathematical model was derived for the correlation between the h and k factors and patient size across the protocols. Based on this mathematical model, a dose estimation iOS application was designed and developed to be used as a tool to estimate dose to patients for a variety of routinely used CT examinations. Results: The organ dose results across all the protocols showed an exponential decrease with patient body size. The correlation was generally strong for organs that were fully or partially located inside the scan coverage (Pearson sample correlation coefficient (r) of 0.49). The correlation was weaker for organs outside the scan coverage, for which the distance between the organ and the irradiation area was a stronger predictor of dose to the organ. For body protocols, the effective dose before and after normalization by DLP decreased exponentially with increasing patient body diameter (r > 0.85). The exponential relationship between effective dose and patient body diameter was significantly weaker for neurological protocols (r < 0.41), where the trunk length was a slightly stronger predictor of effective dose (0.15 < r < 0.46). Conclusions: While the most accurate estimation of a patient dose requires specific modeling of the patient anatomy, a first-order approximation of organ and effective doses from routine CT scan protocols can be reasonably estimated using size-specific factors. Estimation accuracy is generally poor for organs outside the scan range and for neurological protocols. The dose calculator designed in this study can be used to conveniently estimate and report dose values for a patient across a multiplicity of CT scan protocols.
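To make the size dependence concrete, the sketch below fits an exponential h-factor model against body diameter; all numbers are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical h-factor data: organ dose / CTDIvol vs body diameter (cm)
diam = np.array([22, 26, 30, 34, 38, 42], dtype=float)
h = np.array([1.45, 1.18, 0.96, 0.78, 0.64, 0.52])

expo = lambda d, a, b: a * np.exp(-b * d)        # h(d) = a * exp(-b d)
(a, b), _ = curve_fit(expo, diam, h, p0=(3.0, 0.05))
print(a, b, expo(np.array([32.0]), a, b))        # predicted h for a 32 cm patient
```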
Exercise and insulin resistance in youth: a meta-analysis.
Fedewa, Michael V; Gist, Nicholas H; Evans, Ellen M; Dishman, Rod K
2014-01-01
The prevalence of obesity and diabetes is increasing among children, adolescents, and adults. Although estimates of the efficacy of exercise training on fasting insulin and insulin resistance have been provided for adults, similar estimates have not been provided for youth. This systematic review and meta-analysis provides a quantitative estimate of the effectiveness of exercise training on fasting insulin and insulin resistance in children and adolescents. Potential sources were limited to peer-reviewed articles published before June 25, 2013, and gathered from the PubMed, SPORTDiscus, Physical Education Index, and Web of Science online databases. Analysis was limited to randomized controlled trials by using combinations of the terms adolescent, child, pediatric, youth, exercise training, physical activity, diabetes, insulin, randomized trial, and randomized controlled trial. The authors assessed 546 sources, of which 4.4% (24 studies) were eligible for inclusion. Thirty-two effects were used to estimate the effect of exercise training on fasting insulin, with 15 effects measuring the effect on insulin resistance. Estimated effects were independently calculated by multiple authors, and conflicts were resolved before calculating the overall effect. Based on the cumulative results from these studies, a small to moderate effect was found for exercise training on fasting insulin and improving insulin resistance in youth (Hedges' d effect size = 0.48 [95% confidence interval: 0.22-0.74], P < .001 and 0.31 [95% confidence interval: 0.06-0.56], P < .05, respectively). These results support the use of exercise training in the prevention and treatment of type 2 diabetes.
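The effect size used here, Hedges' d, is the pooled-SD standardized mean difference with a small-sample correction; a minimal sketch with invented numbers follows.

```python
import numpy as np

def hedges_d(m1, sd1, n1, m2, sd2, n2):
    """Hedges' d: standardized mean difference with small-sample correction."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)     # Hedges' correction factor
    return j * d

# e.g. exercise vs control change in fasting insulin (made-up numbers)
print(hedges_d(-2.1, 3.0, 25, -0.4, 3.2, 24))
```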
Assessment of ecologic regression in the study of lung cancer and indoor radon.
Stidley, C A; Samet, J M
1994-02-01
Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
Foppa, Ivo M; Ferdinands, Jill M; Chaves, Sandra S; Haber, Michael J; Reynolds, Sue B; Flannery, Brendan; Fry, Alicia M
2016-12-01
The test-negative design (TND) to evaluate influenza vaccine effectiveness is based on patients seeking care for acute respiratory infection, with those who test positive for influenza as cases and the test-negatives serving as controls. This design has not been validated for the inpatient setting, where selection bias might differ from that in an outpatient setting. We derived mathematical expressions for vaccine effectiveness (VE) against laboratory-confirmed influenza hospitalizations and used numerical simulations to verify theoretical results exploring expected biases under various scenarios. We explored meaningful interpretations of VE estimates from inpatient TND studies. VE estimates from inpatient TND studies capture the vaccine-mediated protection of the source population against laboratory-confirmed influenza hospitalizations. If vaccination does not modify disease severity, these estimates are equivalent to VE against influenza virus infection. If individuals with chronic cardiopulmonary disease are enrolled because of non-infectious exacerbation, biased (too high) VE estimates will result. If chronic cardiopulmonary disease status is adjusted for accurately, the VE estimates will be unbiased. If chronic cardiopulmonary illness cannot be adequately characterized, excluding these individuals may provide unbiased VE estimates. The inpatient TND offers logistic advantages and can provide valid estimates of influenza VE. If highly vaccinated patients with respiratory exacerbation of chronic cardiopulmonary conditions are eligible for study inclusion, biased VE estimates will result unless this group is well characterized and the analysis can adequately adjust for it. Otherwise, such groups of subjects should be excluded from the analysis. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.
ERIC Educational Resources Information Center
Rowell, R. Kevin
In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weights of the state quantities so that the matrix dimension remains constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed method is demonstrated and validated on a modified IEEE 14-bus system.
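As background, one linearized weighted-least-squares step of power system state estimation shows why changing weights, rather than equations, preserves matrix dimensions; this is a generic sketch, not the paper's algorithm.

```python
import numpy as np

def wls_state_estimate(H, z, w):
    """One WLS step for a linearized measurement model z = H x + e.

    Re-weighting a measurement (e.g., down-weighting quantities handled
    sequentially) changes w without changing the dimensions of H -- the
    property the weight modification idea exploits.
    """
    W = np.diag(w)
    G = H.T @ W @ H                      # gain matrix, dimension fixed by H
    return np.linalg.solve(G, H.T @ W @ z)

H = np.array([[1.0, 0.0], [1.0, -1.0], [0.0, 1.0]])
z = np.array([1.02, 0.05, 0.98])
print(wls_state_estimate(H, z, w=np.array([1e4, 1e4, 1e-8])))
```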
Price elasticity reconsidered: Panel estimation of an agricultural water demand function
NASA Astrophysics Data System (ADS)
Schoengold, Karina; Sunding, David L.; Moreno, Georgina
2006-09-01
Using panel data from a period of water rate reform, this paper estimates the price elasticity of irrigation water demand. Price elasticity is decomposed into the direct effect of water management and the indirect effect of water price on choice of output and irrigation technology. The model is estimated using an instrumental variables strategy to account for the endogeneity of technology and output choices in the water demand equation. Estimation results indicate that the price elasticity of agricultural water demand is -0.79, which is greater than that found in previous studies.
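A compact sketch of the instrumental-variables logic used to handle endogenous technology and output choices: a hand-rolled two-stage least squares on simulated data, with the true elasticity set to the paper's -0.79 purely for illustration.

```python
import numpy as np

def tsls(y, x_endog, z):
    """Two-stage least squares with one endogenous regressor and one
    instrument (intercept included). Generic sketch, not the paper's
    specification."""
    n = len(y)
    Z = np.column_stack([np.ones(n), z])
    # Stage 1: project the endogenous regressor onto the instrument space
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    # Stage 2: OLS of the outcome on the fitted values
    X2 = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]   # slope coefficient

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=n)                        # instrument (e.g., rate reform)
e = rng.normal(size=n)                        # unobserved demand shock
x = 0.7 * z + 0.5 * e + rng.normal(size=n)    # endogenous log price/choice
y = -0.79 * x + e + rng.normal(size=n)        # log water demand
print(tsls(y, x, z))                          # near -0.79; naive OLS is biased
```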
Thompson, W.L.
2003-01-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled stream units. Violations of these assumptions may produce suspect results. To determine possible sources of the assumption violations, I used data on the abundance of steelhead Oncorhynchus mykiss from Hankin and Reeves (1988) in a simulation composed of 50,000 repeated, stratified systematic random samples from a spatially clustered distribution. The simulation was used to investigate effects of a range of removal estimates, from 75% to 100% of true fish abundance, on overall stream fish population estimates. The effects of various categories of removal-estimate-to-snorkel-count correlation levels (r = 0.75-1.0) on fish population estimates were also explored. Simulation results indicated that Hankin and Reeves' approach may produce poor results unless removal estimates exceed at least 85% of the true number of fish within sampled units and unless correlations between removal estimates and snorkel counts are at least 0.90. A potential modification to Hankin and Reeves' approach is the inclusion of environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative approach is to use snorkeling combined with line transect sampling to estimate fish densities within stream units. As with any method of population estimation, a pilot study should be conducted to evaluate its usefulness, which requires a known (or nearly so) population of fish to serve as a benchmark for evaluating bias and precision of estimators.
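For context, the simplest building block of such removal-based abundance estimation is the textbook two-pass (Zippin/Seber-type) estimator; this sketch is generic, not the paper's simulation model.

```python
def removal_estimate(c1, c2):
    """Two-pass removal estimate of fish abundance in a stream unit.

    With equal capture probability p on both passes, the abundance
    estimate is N = c1**2 / (c1 - c2) and p = (c1 - c2) / c1, where c1
    and c2 are the first- and second-pass catches.
    """
    if c1 <= c2:
        raise ValueError("second-pass catch must be smaller than the first")
    p_hat = (c1 - c2) / c1
    return c1 ** 2 / (c1 - c2), p_hat

print(removal_estimate(40, 12))   # N ~ 57 fish, p ~ 0.70
```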
Burgess, Stephen; Daniel, Rhian M; Butterworth, Adam S; Thompson, Simon G
2015-01-01
Background: Mendelian randomization uses genetic variants, assumed to be instrumental variables for a particular exposure, to estimate the causal effect of that exposure on an outcome. If the instrumental variable criteria are satisfied, the resulting estimator is consistent even in the presence of unmeasured confounding and reverse causation. Methods: We extend the Mendelian randomization paradigm to investigate more complex networks of relationships between variables, in particular where some of the effect of an exposure on the outcome may operate through an intermediate variable (a mediator). If instrumental variables for the exposure and mediator are available, direct and indirect effects of the exposure on the outcome can be estimated, for example using either a regression-based method or structural equation models. The direction of effect between the exposure and a possible mediator can also be assessed. Methods are illustrated in an applied example considering causal relationships between body mass index, C-reactive protein and uric acid. Results: These estimators are consistent in the presence of unmeasured confounding if, in addition to the instrumental variable assumptions, the effects of both the exposure on the mediator and the mediator on the outcome are homogeneous across individuals and linear without interactions. Nevertheless, a simulation study demonstrates that even considerable heterogeneity in these effects does not lead to bias in the estimates. Conclusions: These methods can be used to estimate direct and indirect causal effects in a mediation setting, and have potential for the investigation of more complex networks between multiple interrelated exposures and disease outcomes. PMID:25150977
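A toy sketch of the regression-based logic: with instruments for the exposure and the mediator, each causal link is a Wald ratio (the gene-outcome association divided by the gene-exposure association), and the indirect effect is the product of links. All coefficients below are invented for illustration.

```python
def wald_ratio(beta_gy, beta_gx):
    """IV (Wald ratio) causal effect estimate: instrument-outcome
    association divided by instrument-exposure association."""
    return beta_gy / beta_gx

# Network-MR decomposition with separate instruments G_X and G_M
total    = wald_ratio(beta_gy=0.30, beta_gx=0.50)   # X -> Y, via G_X
x_on_m   = wald_ratio(beta_gy=0.20, beta_gx=0.50)   # X -> M, via G_X
m_on_y   = wald_ratio(beta_gy=0.15, beta_gx=0.60)   # M -> Y, via G_M
indirect = x_on_m * m_on_y                          # product of coefficients
print(total, indirect, total - indirect)            # total, indirect, direct
```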
May, Peter; Garrido, Melissa M; Cassel, J Brian; Morrison, R Sean; Normand, Charles
2016-10-01
To evaluate the sensitivity of treatment effect estimates when length of stay (LOS) is used to control for unobserved heterogeneity when estimating treatment effect on cost of hospital admission with observational data. We used data from a prospective cohort study on the impact of palliative care consultation teams (PCCTs) on direct cost of hospital care. Adult patients with an advanced cancer diagnosis admitted to five large medical and cancer centers in the United States between 2007 and 2011 were eligible for this study. Costs were modeled using generalized linear models with a gamma distribution and a log link. We compared variability in estimates of PCCT impact on hospitalization costs when LOS was used as a covariate, as a sample parameter, and as an outcome denominator. We used propensity scores to account for patient characteristics associated with both PCCT use and total direct hospitalization costs. We analyzed data from hospital cost databases, medical records, and questionnaires. Our propensity score weighted sample included 969 patients who were discharged alive. In analyses of hospitalization costs, treatment effect estimates are highly sensitive to methods that control for LOS, complicating interpretation. Both the magnitude and significance of results varied widely with the method of controlling for LOS. When we incorporated intervention timing into our analyses, results were robust to LOS-controls. Treatment effect estimates using LOS-controls are not only suboptimal in terms of reliability (given concerns over endogeneity and bias) and usefulness (given the need to validate the cost-effectiveness of an intervention using overall resource use for a sample defined at baseline) but also in terms of robustness (results depend on the approach taken, and there is little evidence to guide this choice). To derive results that minimize endogeneity concerns and maximize external validity, investigators should match and analyze treatment and comparison arms on baseline factors only. Incorporating intervention timing may deliver results that are more reliable, more robust, and more useful than those derived using LOS-controls. © Health Research and Educational Trust.
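A minimal sketch of the cost model described, a gamma GLM with log link for right-skewed hospitalization costs; the data and variable names are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
pcct = rng.binomial(1, 0.4, n)                 # palliative care consult flag
mu = np.exp(9.5 - 0.25 * pcct)                 # true mean-cost model
cost = rng.gamma(shape=2.0, scale=mu / 2.0)    # skewed costs with mean mu

X = sm.add_constant(pcct)
fit = sm.GLM(cost, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
print(np.exp(fit.params[1]))   # multiplicative effect of the consult on mean cost
```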
Adjustment to time of use pricing: Persistence of habits or change
NASA Astrophysics Data System (ADS)
Rebello, Derrick Michael
1999-11-01
Generally, the dynamics related to residential electricity consumption under time-of-use (TOU) rates have not been analyzed completely. A habit persistence model is proposed to account for the dynamics that may be present as a result of recurring habits or a lack of information about the effects of shifting load across TOU periods. In addition, the presence of attrition bias necessitated a two-step estimation approach: the decision to remain in the program was modeled in the first step, while demand for electricity was estimated in the second step. Results show that own-price effects and habit persistence are the most significant effects in the model. The habit effects, while small in absolute terms, are significant. Elasticity estimates show that electricity consumption is inelastic during all periods of the day. Estimates of the long-run elasticities were nearly identical to short-run estimates, showing little or no adjustment across time. Cross-price elasticities indicate a willingness to substitute consumption across periods, implying that TOU goods are weak substitutes. The most significant substitution occurs during the period of 5:00 PM to 9:00 PM, when most individuals are likely to be home and active.
ERIC Educational Resources Information Center
Mueller, Christoph Emanuel; Gaus, Hansjoerg
2015-01-01
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Tom P.
In its 2012 report NHTSA simulated the effect four fleetwide mass reduction scenarios would have on the change in annual fatalities. NHTSA estimated that the most aggressive of these scenarios (reducing mass 5.2% in heavier light trucks and 2.6% in all other vehicle types except lighter cars) would result in a small reduction in societal fatalities. LBNL replicated the methodology NHTSA used to simulate six mass reduction scenarios, including the mass reductions recommended in the 2015 NRC committee report and those estimated for 2021 and 2025 by EPA in the TAR, using the updated data through 2012. The analysis indicates that the estimated change in fatalities under each scenario based on the updated analysis is comparable to that in the 2012 analysis, but less beneficial or more detrimental than that in the 2016 analysis. For example, an across-the-board 100-lb reduction in mass would result in an estimated 157 additional annual fatalities based on the 2012 analysis, but would result in only an estimated 91 additional annual fatalities based on the 2016 analysis, and an additional 87 fatalities based on the current analysis. The mass reductions recommended by the 2015 NRC committee report would result in a 224 increase in annual fatalities in the 2012 analysis, a 344 decrease in annual fatalities in the 2016 analysis, and a 141 increase in fatalities in the current analysis. The mass reductions EPA estimated for 2025 in the TAR would result in a 203 decrease in fatalities based on the 2016 analysis, but an increase of 39 fatalities based on the current analysis. These results support NHTSA’s conclusion from its 2012 study that, when footprint is held fixed, “no judicious combination of mass reductions in the various classes of vehicles results in a statistically significant fatality increase and many potential combinations are safety-neutral as point estimates.” Like the previous NHTSA studies, this updated report concludes that the estimated effect of mass reduction while maintaining footprint on societal U.S. fatality risk is small, and not statistically significant at the 95% or 90% confidence level for all vehicle types based on the jack-knife method NHTSA used. This report also finds that the estimated effects of other control variables, such as vehicle type, specific safety technologies, and crash conditions such as whether the crash occurred at night, in a rural county, or on a high-speed road, on risk are much larger, in some cases two orders of magnitude larger, than the estimated effect of mass or footprint reduction on risk. Finally, this report shows that after accounting for the many vehicle, driver, and crash variables NHTSA used in its regression analyses, there remains a wide variation in risk by vehicle make and model, and this variation is unrelated to vehicle mass. Although the purpose of the NHTSA and LBNL reports is to estimate the effect of vehicle mass reduction on societal risk, this is not exactly what the regression models are estimating. Rather, they are estimating the recent historical relationship between mass and risk, after accounting for most measurable differences between vehicles, drivers, and crash times and locations. In essence, the regression models are comparing the risk of a 2600-lb Dodge Neon with that of a 2500-lb Honda Civic, after attempting to account for all other differences between the two vehicles. The models are not estimating the effect of literally removing 100 pounds from the Neon, leaving everything else unchanged.
In addition, the analyses are based on the relationship of vehicle mass and footprint on risk for recent vehicle designs (model year 2004 to 2011). These relationships may or may not continue into the future as manufacturers utilize new vehicle designs and incorporate new technologies, such as more extensive use of strong lightweight materials and specific safety technologies. Therefore, throughout this report we use the phrase “the estimated effect of mass (or footprint) reduction on risk” as shorthand for “the estimated change in risk as a function of its relationship to mass (or footprint) for vehicle models of recent design.”
Do children perceive postural constraints when estimating reach or action planning?
Gabbard, Carl; Cordova, Alberto; Lee, Sunghan
2009-03-01
Estimation of whether an object is reachable from a specific body position constitutes an important aspect in effective motor planning. Researchers who estimate reachability by way of motor imagery with adults consistently report the tendency to overestimate, with some evidence of a postural effect (postural stability hypothesis). This idea suggests that perceived reaching limits depend on an individual's perceived postural constraints. Based on previous work with adults, the authors expected a significant postural effect with the Reach 2 condition, as evidenced by reduced overestimation. Furthermore, the authors hypothesized that the postural effect would be greater in younger children. They then tested these propositions among children aged 7, 9, and 11 years by asking them to estimate reach while seated (Reach 1) and in the more demanding posture of standing on 1 foot and leaning forward (Reach 2). Results indicated no age or condition difference, therefore providing no support for a postural effect. When the authors compared these data to a published report of adults, a developmental difference emerged. That is, adults recognize the perceived postural constraint of the standing position resulting in under- rather than overestimation, as displayed in the seated condition. Although preliminary, these observations suggest that estimates of reach (action planning) continue to be refined between late childhood and young adulthood.
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effect estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of health effect parameters the uncertainty in estimated source contributions that previous studies ignored. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
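The decomposition at the heart of multivariate receptor modeling can be imitated with plain non-negative matrix factorization, X ≈ GF, setting aside the Bayesian model-uncertainty machinery of the paper; a toy sketch with simulated data follows.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy receptor problem: ambient concentrations X (days x species) factored
# into source contributions G and source profiles F, X ~ G F.
rng = np.random.default_rng(4)
F_true = rng.uniform(0, 1, (3, 10))           # 3 sources, 10 species
G_true = rng.gamma(2.0, 1.0, (365, 3))        # daily source contributions
X = G_true @ F_true + rng.normal(0, 0.01, (365, 10)).clip(0)

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G_hat = model.fit_transform(X)                # estimated source contributions
F_hat = model.components_                     # estimated source profiles
print(G_hat.shape, F_hat.shape)
```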
How BenMAP-CE Estimates the Health and Economic Effects of Air Pollution
The BenMAP-CE tool estimates the number and economic value of health impacts resulting from changes in air quality - specifically, ground-level ozone and fine particles. Learn what data BenMAP-CE uses and how the estimates are calculated.
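Many such calculations use a log-linear health impact function; the sketch below shows that general form with illustrative parameter values, which are not BenMAP-CE defaults.

```python
import numpy as np

def health_impact(y0, beta, delta_c, pop):
    """Log-linear health impact function of the kind used in air quality
    benefits assessment: attributable cases = y0 * (1 - exp(-beta * delta_c)) * pop,
    where y0 is baseline incidence rate, beta the concentration-response
    coefficient, delta_c the air quality change, and pop the exposed population.
    """
    return y0 * (1.0 - np.exp(-beta * delta_c)) * pop

# e.g. a 5 ug/m3 PM2.5 reduction over a population of one million
print(health_impact(y0=0.008, beta=0.0058, delta_c=5.0, pop=1_000_000))
```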
Robbins, A; Robbins, G
1992-01-01
Cost estimates of health care policy changes are extremely important. Historically, however, the US government has done a poor job in projecting the actual cost of new health care programmes. These projections have been inaccurate primarily because government forecasters use 'static' methods that fail to incorporate the change in people's behaviour as a direct result of a new policy. In contrast, 'dynamic' forecasts incorporate the behavioural effects of policy changes on individuals and the economy. Static and dynamic estimates can lead to different results for 4 areas of US health policy: (a) the Medicare Catastrophic Coverage Act; (b) mandated health benefits; (c) health insurance tax subsidies; and (d) national health insurance. Improving health care policy requires the adoption of dynamic estimation practices, periodic appraisals evaluating the accuracy of official estimates in relation to actual experience, and clear presentation of proposed policy changes and estimates to policymakers and the general public.
Myers, Teresa A; Maibach, Edward; Peters, Ellen; Leiserowitz, Anthony
2015-01-01
Human-caused climate change is happening; nearly all climate scientists are convinced of this basic fact according to surveys of experts and reviews of the peer-reviewed literature. Yet, among the American public, there is widespread misunderstanding of this scientific consensus. In this paper, we report results from two experiments, conducted with national samples of American adults, that tested messages designed to convey the high level of agreement in the climate science community about human-caused climate change. The first experiment tested hypotheses about providing numeric versus non-numeric assertions concerning the level of scientific agreement. We found that numeric statements resulted in higher estimates of the scientific agreement. The second experiment tested the effect of eliciting respondents' estimates of scientific agreement prior to presenting them with a statement about the level of scientific agreement. Participants who estimated the level of agreement prior to being shown the corrective statement gave higher estimates of the scientific consensus than respondents who were not asked to estimate in advance, indicating that incorporating an "estimation and reveal" technique into public communication about scientific consensus may be effective. The interaction of messages with political ideology was also tested, and demonstrated that messages were approximately equally effective among liberals and conservatives. Implications for theory and practice are discussed.
Quantification of effective plant rooting depth: advancing global hydrological modelling
NASA Astrophysics Data System (ADS)
Yang, Y.; Donohue, R. J.; McVicar, T.
2017-12-01
Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.
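The hydrological core of the BCP model is a Budyko-type curve; a minimal sketch of the Choudhury form, with an assumed catchment parameter n, is shown below.

```python
def choudhury_e(p, e0, n=1.8):
    """Mean annual actual evapotranspiration from the Budyko-type
    Choudhury equation: E = P * E0 / (P**n + E0**n)**(1/n), with P
    precipitation, E0 potential evapotranspiration, and n a catchment
    parameter (n = 1.8 here is just a common default, not the paper's
    calibrated value).
    """
    return p * e0 / (p**n + e0**n) ** (1.0 / n)

print(choudhury_e(p=800.0, e0=1200.0))   # mm/yr, illustrative inputs
```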
Surface term effects on mass estimators
NASA Astrophysics Data System (ADS)
Membrado, M.; Pacheco, A. F.
2016-05-01
Context. We propose a way of estimating the mass contained in the volume occupied by a sample of galaxies in a virialized system. Aims: We analyze the influence of surface effects and the contribution of the cosmological constant terms on our mass estimations of galaxy systems. Methods: We propose two equations that contain surface terms to estimate galaxy sample masses. When the surface terms are neglected, these equations provide the so-called virial and projected masses. Both equations lead to a single equation that allows sample masses to be estimated without the need for calculating surface terms. Sample masses for some of the nearest galaxy groups are estimated and compared with virialized masses determined from turn-around radii and results of a spherical infall model. Results: Surface effects have a considerable effect on the mass estimations of the studied galaxy groups. According to our results, they lead sample masses of some groups to being less than half the virial mass estimations and even less than 10% of projected mass estimations. However, the contributions of cosmological constant terms to mass estimations are smaller than 2% for the majority of the virialized groups studied. Our estimations are in agreement with virialized masses calculated from turn-around radii. Virialized masses for complexes were found to be: (8.9 ± 2.8) × 10^11 M⊙ for the Milky Way - M 31; (12.5 ± 2.5) × 10^11 M⊙ for M 81 - NGC 2403; (21.5 ± 7.7) × 10^11 M⊙ for Centaurus A - M 83; and (7.9 ± 2.6) × 10^11 M⊙ for IC 342 - Maffei. Conclusions: The nearest galaxy groups located inside a sphere of 5 Mpc have been addressed to explore the performance of our mass estimator. We have seen that surface effects make mass estimations of galaxy groups rather smaller than both virial and projected masses. In mass calculations, cosmological constant terms can be neglected; nevertheless, the collapse of cold dark matter leading to virialized structures is strongly affected by the cosmological constant. We have also seen that, if mass density were proportional to luminosity density on different scales in the Universe, the 5 Mpc sphere would have a mean density close to that of the sphere region containing galaxies and systems of galaxies; thus, the rest of the sphere could contain regions of low-mass dark halos with similar mass density. This mass density would be about 4.5 times greater than that of the matter background of the Universe at present.
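For reference, the snippet below gives the textbook (surface-term-free) forms of the two estimators named above; these are generic virial and projected mass estimators, not the authors' surface-corrected equations, and the example data are invented.

```python
import numpy as np

G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

def virial_mass(v_los, r_pairs):
    """Virial mass estimator: M_V = (3*pi*N / (2*G)) * sum(v_i^2) / sum_{i<j} 1/R_ij,
    with line-of-sight velocities relative to the group mean and projected
    pairwise separations R_ij."""
    n = len(v_los)
    return 3 * np.pi * n / (2 * G) * np.sum(v_los**2) / np.sum(1.0 / r_pairs)

def projected_mass(v_los, r_proj, alpha=32 / np.pi):
    """Projected mass estimator: M_P = (alpha / (G*N)) * sum(v_i^2 * R_i);
    alpha depends on the assumed orbit distribution (32/pi for isotropic
    orbits in the Heisler-Tremaine-Bahcall form)."""
    return alpha / (G * len(v_los)) * np.sum(v_los**2 * r_proj)

v = np.array([85., -60., 40., -95., 30.])       # km/s, relative to group mean
r = np.array([120., 300., 210., 150., 260.])    # projected distances, kpc
rp = np.array([200., 310., 180., 420., 260., 350., 150., 290., 330., 240.])
print(virial_mass(v, rp), projected_mass(v, r)) # masses in Msun
```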
Commentary: Are Three Waves of Data Sufficient for Assessing Mediation?
ERIC Educational Resources Information Center
Reichardt, Charles S.
2011-01-01
Maxwell, Cole, and Mitchell (2011) demonstrated that simple structural equation models, when used with cross-sectional data, generally produce biased estimates of mediated effects. I extend those results by showing how simple structural equation models can produce biased estimates of mediated effects even when used with longitudinal data. Even…
Moss, Brian G; Yeaton, William H
2013-10-01
Annually, American colleges and universities provide developmental education (DE) to millions of underprepared students; however, evaluation estimates of DE benefits have been mixed. Using a prototypic exemplar of DE, our primary objective was to investigate the utility of a replicative evaluative framework for assessing program effectiveness. Within the context of the regression discontinuity (RD) design, this research examined the effectiveness of a DE program for five sequential cohorts of first-time college students. Discontinuity estimates were generated for individual terms and cumulatively, across terms. Participants were 3,589 first-time community college students. DE program effects were measured by contrasting both college-level English grades and a dichotomous measure of pass/fail for DE and non-DE students. Parametric and nonparametric estimates of overall effect were positive for continuous and dichotomous measures of achievement (grade and pass/fail). The variability of program effects over time was determined by tracking results within individual terms and cumulatively, across terms. Applying this replication strategy, DE's overall impact was modest (an effect size of approximately .20) but quite consistent, based on parametric and nonparametric estimation approaches. A meta-analysis of five RD results yielded virtually the same estimate as the overall, parametric findings. Subset analysis, though tentative, suggested that males benefited more than females, while academic gains were comparable across ethnicities. The cumulative, within-study comparison, replication approach offers considerable potential for the evaluation of new and existing policies, particularly when effects are relatively small, as is often the case in applied settings.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for a ground-truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
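As background to how such techniques work, here is a minimal sketch of an Eulerian material acceleration estimate from three consecutive planar velocity snapshots, the quantity from which the pressure gradient is formed before spatial integration; the function and grid values are hypothetical, not the authors' implementation.

```python
import numpy as np

def eulerian_material_acceleration(u_prev, u_now, u_next,
                                   v_prev, v_now, v_next, dx, dy, dt):
    """Central-difference Eulerian estimate of the planar material acceleration
    Du/Dt = du/dt + u du/dx + v du/dy (likewise for v), from three consecutive
    velocity snapshots indexed [y, x]."""
    dudt = (u_next - u_prev) / (2.0 * dt)
    dvdt = (v_next - v_prev) / (2.0 * dt)
    dudy, dudx = np.gradient(u_now, dy, dx)
    dvdy, dvdx = np.gradient(v_now, dy, dx)
    ax = dudt + u_now * dudx + v_now * dudy
    ay = dvdt + u_now * dvdx + v_now * dvdy
    # The in-plane pressure gradient then follows from the (inviscid)
    # momentum equation, grad(p) ~ -rho * Du/Dt, before spatial integration.
    return ax, ay

# Uniformly accelerating field: Du/Dt should be 1 everywhere
z = np.zeros((8, 8))
ax, _ = eulerian_material_acceleration(z, z + 1e-3, z + 2e-3, z, z, z,
                                       dx=1e-2, dy=1e-2, dt=1e-3)
print(ax.mean())  # 1.0
```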
Adaptive control of theophylline therapy: importance of blood sampling times.
D'Argenio, D Z; Khakmahd, K
1983-10-01
A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
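The pharmacokinetic setting can be made concrete with a generic sketch (not Chiou's method or the Bayesian procedure evaluated in the study): a one-compartment model under constant-rate infusion, with clearance and volume fitted by least squares to two samples. All names and values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

R0 = 40.0  # constant infusion rate (mg/h), assumed known

def conc_during_infusion(t, cl, v):
    """One-compartment concentration under constant-rate IV infusion:
    C(t) = (R0/CL) * (1 - exp(-(CL/V) * t)); cl in L/h, v in L, t in h."""
    return (R0 / cl) * (1.0 - np.exp(-(cl / v) * t))

# Two-sample design: the information content depends heavily on how far
# apart the samples sit relative to the elimination half-life
t_obs = np.array([2.0, 10.0])
c_obs = np.array([2.64, 6.92])  # mg/L, illustrative

(cl_hat, v_hat), _ = curve_fit(conc_during_infusion, t_obs, c_obs, p0=[3.0, 30.0])
print(cl_hat, v_hat)  # ~5 L/h and ~25 L for these illustrative values
```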
Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie
2017-09-01
Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all phases of drug design. Their first step is estimating the binding pocket based on protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and for developing pharmacological profiling models. We found that pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results and can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlight the importance of the choice of pocket-estimation method in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Potocki, J K; Tharp, H S
1993-01-01
Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.
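As a generic illustration of the multiple-model idea (not the authors' hyperthermia-specific EKF bank), the sketch below performs one Bayesian update of model probabilities from the filters' innovations; all names and numbers are hypothetical.

```python
import numpy as np

def update_model_probabilities(priors, innovations, innov_vars):
    """One Bayesian measurement update for a bank of filters: each model's
    posterior probability is its prior times the Gaussian likelihood of its
    innovation (measurement residual), renormalized across models."""
    lik = np.exp(-0.5 * innovations**2 / innov_vars) / np.sqrt(2*np.pi*innov_vars)
    post = priors * lik
    return post / post.sum()

# Three candidate perfusion models; the second explains the data best
p = update_model_probabilities(np.array([1/3, 1/3, 1/3]),
                               innovations=np.array([1.2, 0.1, 2.5]),
                               innov_vars=np.array([0.5, 0.5, 0.5]))
print(p)  # posterior mass concentrates on model 2
```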
LANDSAT (MSS): Image demographic estimations
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Foresti, C.
1977-01-01
The author has identified the following significant results. Two sets of urban test sites, one with 35 cities and one with 70 cities, were selected in the State of São Paulo. A high degree of collinearity (0.96) was found between urban area measurements taken from aerial photographs and those taken from LANDSAT MSS imagery. High coefficients were also observed when census data were regressed against aerial information (0.95) and LANDSAT data (0.92). The validity of population estimations was tested by regressing three urban variables against three classes of cities. Results supported the effectiveness of LANDSAT for estimating large city populations, with diminishing effectiveness as urban areas decrease in size.
Image enhancement and advanced information extraction techniques for ERTS-1 data
NASA Technical Reports Server (NTRS)
Malila, W. A. (Principal Investigator); Nalepka, R. F.; Sarno, J. E.
1975-01-01
The author has identified the following significant results. It was demonstrated and concluded that: (1) the atmosphere has significant effects on ERTS MSS data which can seriously degrade recognition performance; (2) the application of selected signature extension techniques serves to reduce the deleterious effects of both the atmosphere and changing ground conditions on recognition performance; and (3) a proportion estimation algorithm for overcoming problems in acreage estimation accuracy resulting from the coarse spatial resolution of the ERTS MSS was able to significantly improve acreage estimation accuracy over that achievable by conventional techniques, especially for high contrast targets such as lakes and ponds.
Aquatic concentrations of chemical analytes compared to ...
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes may be warranted. Purpose: to provide sc
Overcoming bias in estimating the volume-outcome relationship.
Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D
2006-02-01
To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.
Effective record length for the T-year event
Tasker, Gary D.
1983-01-01
The effect of serial dependence on the reliability of an estimate of the T-yr. event is of importance in hydrology because design decisions are based upon the estimate. In this paper the reliability of estimates of the T-yr. event from two common distributions is given as a function of the number of observations and the lag-one serial correlation coefficient for T = 2, 10, 20, 50, and 100 yr. A lag-one autoregressive model is assumed with either a normal or Pearson Type-III disturbance term. Results indicate that, if observations are serially correlated, the effective record length should be used to estimate the discharge associated with the expected exceedance probability. © 1983.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years of life lost was also predicted, so that the relative importance of the uncertainties related to monetary valuation could be compared with that of the health effect uncertainties. Results The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, the discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without any major effect on the results. PMID:17714598
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, K; Wong, M; Ng, Y
Purpose: Interventional cardiac procedures utilize frequent fluoroscopy and cineangiography, which impose considerable radiation risk to patients, especially pediatric patients. Accurate calculation of effective dose is important in order to estimate cancer risk over the rest of their lifetime. This study evaluates the differences between effective doses calculated by Monte Carlo simulation and those estimated by locally-derived conversion factors (CF-local) and by commonly quoted conversion factors from Karambatsakidou et al (CF-K). Methods: Effective doses (E) of 12 pediatric patients, aged between 2.5 and 19 years, who had undergone interventional cardiac procedures, were calculated using PCXMC-2.0 software. Tube spectrum, irradiation geometry, exposure parameters, and dose-area product (DAP) of each projection were included in the software calculation. Effective doses for each patient were also estimated by two methods: 1) CF-local: a conversion factor derived locally by generalizing the results of the 12 patients, multiplied by the DAP of each patient, gives E-local. 2) CF-K: the selected factor from the above-mentioned literature, multiplied by the DAP of each patient, gives E-K. Results: Means of E, E-local and E-K were 16.01 mSv, 16.80 mSv and 22.25 mSv respectively. A deviation of −29.35% to +34.85% between E and E-local, and a greater deviation of −28.96% to +60.86% between E and E-K, were observed. E-K overestimated the effective dose for patients aged 7.5–19. Conclusion: Estimating the radiation risk of pediatric patients from conversion factors is simple and quick. This study showed that estimation by CF-local may bear an error of 35% when compared with the Monte Carlo calculation. Using conversion factors derived in other studies may result in an even greater error, of up to 60%, due to factors that are not catered for in the estimation, including patient size, projection angles, exposure parameters, tube filtration, etc. Users must be aware of these potential inaccuracies when a simple conversion method is employed.
Evaluation of some random effects methodology applicable to bird ringing data
Burnham, K.P.; White, Gary C.
2002-01-01
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S_1, ..., S_k; random effects can then be a useful model: S_i = E(S) + ε_i. Here, the temporal variation in survival probability is treated as random, with mean E(S) and process variance E(ε_i²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling component var(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates, S̃_i, as improved (in mean square error) estimators of S_i compared with the MLE, Ŝ_i, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃_i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about S_i based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: S_i ≡ S (no effects), S_i = E(S) + ε_i (random effects), and S_1, ..., S_k (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLE for the S_i.
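A minimal sketch of the shrinkage step described above, assuming the process variance σ² and the per-year sampling standard errors are already available (program MARK computes these internally; this is not MARK's algorithm, and all values are illustrative):

```python
import numpy as np

def shrink_survival_estimates(s_hat, se, sigma2):
    """Shrinkage of yearly survival MLEs toward their mean:
    S~_i = E(S) + w_i * (S^_i - E(S)), with w_i = sigma^2 / (sigma^2 + se_i^2),
    so years with noisier MLEs are pulled harder toward the overall mean."""
    es = np.mean(s_hat)                 # crude stand-in for the estimate of E(S)
    w = sigma2 / (sigma2 + se**2)
    return es + w * (s_hat - es)

s_hat = np.array([0.55, 0.72, 0.48, 0.66])  # yearly MLEs from a CJS-type fit
se = np.array([0.08, 0.05, 0.10, 0.06])     # their sampling standard errors
print(shrink_survival_estimates(s_hat, se, sigma2=0.004))
```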
Risk estimation using probability machines.
Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D
2014-03-01
Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by specifying only the predictors involved and not any particular model structure. They therefore do not run the same risks of model mis-specification, and the resultant estimation biases, as a logistic model. This methodology, which we call a "risk machine", inherits the properties of the statistical learning machine from which it is derived.
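A minimal sketch of the probability machine idea using a random forest on simulated data; the counterfactual contrast below is an illustrative risk-difference analogue of the effect sizes discussed, not the authors' exact procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simulated data: binary outcome depends on exposure t and covariate x
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
t = rng.integers(0, 2, size=n)
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * t + 0.6 * x)))
y = rng.binomial(1, p)

# Random forest as a probability machine: estimates of P(Y=1 | t, x)
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25, random_state=0)
rf.fit(np.column_stack([t, x]), y)

# Counterfactual contrast: set everyone's exposure, average the difference
p1 = rf.predict_proba(np.column_stack([np.ones(n), x]))[:, 1]
p0 = rf.predict_proba(np.column_stack([np.zeros(n), x]))[:, 1]
print("estimated risk difference:", (p1 - p0).mean())
```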
Updated estimates of HAL n and RN- effects on pork quality: fresh and processed loin and ham.
Cherel, P; Glénisson, J; Figwer, P; Pires, J; Damon, M; Franck, M; Le Roy, P
2010-12-01
A 1000-pig F2 intercross QTL detection experimental population was generated using two commercial sire lines. Independent carriers of HAL n and RN- mutations (10% and 14%, respectively) were included in this population as control genotypes. The effects of HAL n and RN- heterozygous genotypes on fresh and transformed loins and hams were estimated using a mixed model methodology. The results document the unfavorable effects of both mutations on meat quality. Smaller effects of HAL Nn genotype compared to HAL nn or RN-rn+ genotypes were estimated. Interestingly, effects of HAL Nn genotype on meat pH and loin color could be insignificant at 24-h postmortem, but translate into higher water losses on storage and cooking, and result in tougher cooked loin. Using the same methodology, significant effects of the PRKAG3 (RN) I199 allele on ultimate pH values but not on glycolytic potential were observed. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
Bergmann, Julie N; Legins, Kenneth; Sint, Tin Tin; Snidal, Sarah; Amor, Yanis Ben; McCord, Gordon C
2017-03-01
This paper provides the first estimates of impact and cost-effectiveness for integrated HIV and nutrition service delivery in sub-Saharan Africa. HIV and undernutrition are synergistic co-epidemics impacting millions of children throughout the region. To alleviate this co-epidemic, UNICEF supported small-scale pilot programs in Malawi and Mozambique that integrated HIV and nutrition service delivery. We use trends from integration sites and comparison sites to estimate the number of lives saved, infections averted and/or undernutrition cases cured due to programmatic activities, and to estimate cost-effectiveness. Results suggest that Malawi's program had a cost-effectiveness of $11-29/DALY, while Mozambique's was $16-59/DALY. Some components were more effective than others ($1-4/DALY for Malawi's Male motivators vs. $179/DALY for Mozambique's One stop shops). These results suggest that integrating HIV and nutrition programming leads to a positive impact on health outcomes and should motivate additional work to evaluate impact and determine cost-effectiveness using an appropriate research design.
Universal HIV screening of pregnant women in England: cost effectiveness analysis.
Postma, M J; Beck, E J; Mandalia, S; Sherr, L; Walters, M D; Houweling, H; Jager, J C
1999-06-19
To estimate the cost effectiveness of universal, voluntary HIV screening of pregnant women in England. Cost effectiveness analysis. Cost estimates of caring for HIV positive children were based on the stage of HIV infection and calculated using data obtained from a London hospital between 1986 and 1996. These were combined with estimates of the health benefits and costs of antenatal screening so that the cost effectiveness of universal, voluntary antenatal screening for HIV infection in England could be estimated. Lifetime, direct costs of medical care of childhood HIV infection; life years gained as a result of the screening programme; net cost per life year gained for different pretest counselling costs; and different prevalence rates of pregnant women who were unaware that they were HIV positive. Estimated direct lifetime medical and social care costs of childhood HIV infection were £178,300 using a 5% discount rate for time preference (1995-6 prices). In high prevalence areas screening pregnant women for HIV is estimated to be a cost effective intervention with a net cost of less than £4,000 for each life year gained. For areas with comparatively low prevalence rates, cost effectiveness could be less than £20,000 per life year gained, depending on the number of pregnant women who are unaware that they are infected and local screening costs. Our results confirm recent recommendations that universal, voluntary antenatal HIV screening should be implemented in the London area. Serious consideration of the policy should be given for other areas in England depending on local prevalence and screening costs.
Ertefaie, Ashkan; Flory, James H; Hennessy, Sean; Small, Dylan S
2017-06-15
Instrumental variable (IV) methods provide unbiased treatment effect estimation in the presence of unmeasured confounders under certain assumptions. To provide valid estimates of treatment effect, treatment effect confounders that are associated with the IV (IV-confounders) must be included in the analysis, and not including observations with missing values may lead to bias. Missing covariate data are particularly problematic when the probability that a value is missing is related to the value itself, which is known as nonignorable missingness. In such cases, imputation-based methods are biased. Using health-care provider preference as an IV method, we propose a 2-step procedure with which to estimate a valid treatment effect in the presence of baseline variables with nonignorable missing values. First, the provider preference IV value is estimated by performing a complete-case analysis using a random-effects model that includes IV-confounders. Second, the treatment effect is estimated using a 2-stage least squares IV approach that excludes IV-confounders with missing values. Simulation results are presented, and the method is applied to an analysis comparing the effects of sulfonylureas versus metformin on body mass index, where the variables baseline body mass index and glycosylated hemoglobin have missing values. Our result supports the association of sulfonylureas with weight gain. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
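The second step can be illustrated with a textbook two-stage least squares sketch (the paper's first step, estimating the provider preference IV from a random-effects model, is not reproduced here); all data and names are simulated assumptions:

```python
import numpy as np

def two_stage_least_squares(y, d, z, covars):
    """Textbook 2SLS: first regress treatment d on instrument z and covariates,
    then regress outcome y on the fitted treatment and the same covariates.
    Returns the coefficient on the (instrumented) treatment."""
    X1 = np.column_stack([np.ones_like(z), z, covars])    # first-stage design
    d_hat = X1 @ np.linalg.lstsq(X1, d, rcond=None)[0]
    X2 = np.column_stack([np.ones_like(z), d_hat, covars])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[1]

rng = np.random.default_rng(1)
n = 2000
u = rng.normal(size=n)                         # unmeasured confounder
z = rng.integers(0, 2, size=n).astype(float)   # e.g. a provider-preference IV
c = rng.normal(size=n)                         # a measured IV-confounder
d = (0.8*z + 0.5*c + u + rng.normal(size=n) > 0.5).astype(float)
y = 1.0*d + 0.4*c + u + rng.normal(size=n)     # true effect = 1.0
print(two_stage_least_squares(y, d, z, c.reshape(-1, 1)))  # ~1.0 despite u
```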
The Flynn Effect: A Meta-analysis
Trahan, Lisa; Stuebing, Karla K.; Hiscock, Merril K.; Fletcher, Jack M.
2014-01-01
The “Flynn effect” refers to the observed rise in IQ scores over time, resulting in norms obsolescence. Although the Flynn effect is widely accepted, most approaches to estimating it have relied upon “scorecard” approaches that make estimates of its magnitude and error of measurement controversial and prevent determination of factors that moderate the Flynn effect across different IQ tests. We conducted a meta-analysis to determine the magnitude of the Flynn effect with a higher degree of precision, to determine the error of measurement, and to assess the impact of several moderator variables on the mean effect size. Across 285 studies (N = 14,031) since 1951 with administrations of two intelligence tests with different normative bases, the meta-analytic mean was 2.31, 95% CI [1.99, 2.64], standard score points per decade. The mean effect size for 53 comparisons (N = 3,951) (excluding three atypical studies that inflate the estimates) involving modern (since 1972) Stanford-Binet and Wechsler IQ tests (2.93, 95% CI [2.3, 3.5], IQ points per decade) was comparable to previous estimates of about 3 points per decade, but not consistent with the hypothesis that the Flynn effect is diminishing. For modern tests, study sample (larger increases for validation research samples vs. test standardization samples) and order of administration explained unique variance in the Flynn effect, but age and ability level were not significant moderators. These results supported previous estimates of the Flynn effect and its robustness across different age groups, measures, samples, and levels of performance. PMID:24979188
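For concreteness, a minimal sketch of the inverse-variance fixed-effect pooling underlying such a meta-analytic mean, with purely illustrative study values:

```python
import numpy as np

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooling of per-study effect sizes,
    returning the pooled estimate and its 95% confidence interval."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(ses, float)**2
    pooled = np.sum(w * effects) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Illustrative per-study gains (IQ points per decade) and standard errors
print(fixed_effect_pool([2.1, 3.0, 2.6, 1.8], [0.4, 0.6, 0.5, 0.7]))
```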
NASA Astrophysics Data System (ADS)
Vosoughi, Ehsan; Javaherian, Abdolrahim
2018-01-01
Seismic inversion is a process performed to remove the effects of propagated wavelets in order to recover the acoustic impedance. To obtain valid velocity and density values for subsurface layers through the inversion process, it is essential to perform reliable wavelet estimation, for example with a cumulant matching approach. For this purpose, the seismic data were windowed in this work in such a way that two consecutive windows were only one sample apart. Also, we did not consider any fixed wavelet for any window and let the phase of each wavelet rotate at each sample in the window. Comparing the fourth-order cumulant of the whitened trace and the fourth-order moment of the all-pass operator in each window generated a cost function to be minimized with a non-linear optimization method. In this regard, parameters affecting the estimation of the nonstationary mixed-phase wavelets were tested on a synthetic nonstationary seismic trace at 0.82 s and 1.6 s, and the influence of each parameter on the wavelets estimated at these two times was compared. The parameters studied in this work are window length, taper type, number of iterations, signal-to-noise ratio, bandwidth to central frequency ratio, and Q factor. The results show that, applying the optimum values of the effective parameters, the average correlation of the estimated mixed-phase wavelets with the original ones is about 87%. Moreover, the effectiveness of the proposed approach was examined on a synthetic nonstationary seismic section with variable Q factor values along the time and offset axes. The cumulant matching method was then applied to a crossline of the migrated data from a 3D data set of an oilfield in the Persian Gulf, and the effect of an erroneous Q estimate on the estimated mixed-phase wavelet was examined on this real data set. It is concluded that the accuracy of the estimated wavelet depends on the estimated Q, and that an error of more than 10% in the estimated value of Q is still acceptable. An 88% correlation was found between the estimated mixed-phase wavelets and the original ones for three horizons. The estimated wavelets were applied to the data and the results of the deconvolution process are presented.
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
A Disaggregate Study of the Effect of Unemployment Rates on Labor Supply.
ERIC Educational Resources Information Center
Fleisher, Belton M.; Parsons, Donald O.
The project focuses on two loosely-related hypotheses regarding a contradiction existing in the results of previous research on the relationship between labor force participation and unemployment. This contradiction is the persistent tendency of the estimated effect of unemployment on labor force participation--and hence estimates of "hidden…
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
A novel measure of effect size for mediation analysis.
Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken
2018-06-01
Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment
NASA Technical Reports Server (NTRS)
Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.
1990-01-01
The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was estimated using various subsets of the global tracking network initiated by the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.
Propulsion of a fin whale (Balaenoptera physalus): why the fin whale is a fast swimmer.
Bose, N; Lien, J
1989-07-22
Measurements of an immature fin whale (Balaenoptera physalus), which died as a result of entrapment in fishing gear near Frenchmans Cove, Newfoundland (47 degrees 9' N, 55 degrees 25' W), were made to obtain estimates of volume and surface area of the animal. Detailed measurements of the flukes, both planform and sections, were also obtained. A strip theory was developed to calculate the hydrodynamic performance of the whale's flukes as an oscillating propeller. This method is based on linear, two-dimensional, small-amplitude, unsteady hydrofoil theory with correction factors used to account for the effects of finite span and finite amplitude motion. These correction factors were developed from theoretical results of large-amplitude heaving motion and unsteady lifting-surface theory. A model that makes an estimate of the effects of viscous flow on propeller performance was superimposed on the potential-flow results. This model estimates the drag of the hydrofoil sections by assuming that the drag is similar to that of a hydrofoil section in steady flow. The performance characteristics of the flukes of the fin whale were estimated by using this method. The effects of the different correction factors, and of the frictional drag of the fluke sections, are emphasized. Frictional effects in particular were found to reduce the hydrodynamic efficiency of the flukes significantly. The results are discussed and compared with the known characteristics of fin-whale swimming.
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
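One commonly used estimator of this kind is the successive-difference variance estimator for a nonreplicated systematic sample; the sketch below is a generic textbook form (an assumption here, not necessarily the exact variant the authors ranked least biased), with hypothetical counts:

```python
import numpy as np

def successive_difference_variance(y, N):
    """Variance estimate for the estimated total N * ybar from a nonreplicated
    systematic sample of n of N units, based on successive differences.
    Like all such estimators, it is biased when counts trend or autocorrelate."""
    y = np.asarray(y, float)
    n = len(y)
    sd2 = np.sum(np.diff(y)**2) / (2.0 * (n - 1))  # successive-difference spread
    return N**2 * (1.0 - n / N) * sd2 / n

# Hourly counts sampled every 12th hour over a 10-day run (N = 240 hours)
counts = np.array([120, 180, 240, 300, 260, 220, 190, 150, 130, 110,
                   100, 90, 80, 70, 60, 50, 40, 30, 20, 10])
print(successive_difference_variance(counts, N=240))
```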
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
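A minimal sketch of the exponential tilting special case, assuming the goal is to reweight control units so their covariate means match the treated group's (the paper's three-way balance is more general); all data are simulated:

```python
import numpy as np
from scipy.optimize import minimize

def exponential_tilting_weights(x_control, x_target_mean):
    """Calibration weights for control units via exponential tilting: solve
    the convex dual min_l log(sum_i exp(l'(x_i - xbar_t))); the resulting
    w_i ~ exp(l'x_i) make the weighted control means match x_target_mean."""
    xc = np.asarray(x_control, float)
    d = xc - x_target_mean                      # center at the target moments
    obj = lambda lam: np.log(np.sum(np.exp(d @ lam)))
    lam = minimize(obj, np.zeros(xc.shape[1]), method="BFGS").x
    w = np.exp(d @ lam)
    return w / w.sum()

rng = np.random.default_rng(2)
x_control = rng.normal(0.0, 1.0, size=(500, 2))
x_target = np.array([0.3, -0.2])                # treated-group covariate means
w = exponential_tilting_weights(x_control, x_target)
print(w @ x_control)                            # ~[0.3, -0.2]: exact balance
```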
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
Quantifying Cancer Risk from Radiation.
Keil, Alexander P; Richardson, David B
2017-12-06
Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.
ESTIMATING TREATMENT EFFECTS ON HEALTHCARE COSTS UNDER EXOGENEITY: IS THERE A ‘MAGIC BULLET’?
Polsky, Daniel; Manning, Willard G.
2011-01-01
Methods for estimating average treatment effects, under the assumption of no unmeasured confounders, include regression models; propensity score adjustments using stratification, weighting, or matching; and doubly robust estimators (a combination of both). Researchers continue to debate about the best estimator for outcomes such as health care cost data, as they are usually characterized by an asymmetric distribution and heterogeneous treatment effects. Challenges in finding the right specifications for regression models are well documented in the literature. Propensity score estimators are proposed as alternatives for overcoming these challenges. Using simulations, we find that in moderate size samples (n = 5000), balancing on propensity scores that are estimated from saturated specifications can balance the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates. Therefore, unlike regression models, even if a formal model for outcomes is not required, propensity score estimators can be inefficient at best and biased at worst for health care cost data. Our simulation study, designed to take a ‘proof by contradiction’ approach, proves that no one estimator can be considered the best under all data generating processes for outcomes such as costs. The inverse-propensity weighted estimator is most likely to be unbiased under alternate data generating processes but is prone to bias under misspecification of the propensity score model and is inefficient compared to an unbiased regression estimator. Our results show that there are no ‘magic bullets’ when it comes to estimating treatment effects in health care costs. Care should be taken before naively applying any one estimator to estimate average treatment effects in these data. We illustrate the performance of alternative methods in a cost dataset on breast cancer treatment. PMID:22199462
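For reference, the inverse-propensity weighted estimator discussed above can be sketched as follows, with a logistic propensity model and simulated skewed costs; this is a generic illustration, not the simulation design of the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(y, t, X):
    """Inverse-propensity-weighted average treatment effect,
    E[Y(1)] - E[Y(0)], using Hajek-normalized weighted means per arm."""
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    mu1 = np.sum(t * y / e) / np.sum(t / e)
    mu0 = np.sum((1 - t) * y / (1 - e)) / np.sum((1 - t) / (1 - e))
    return mu1 - mu0

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = np.exp(1.0 + 0.5*t + 0.8*X[:, 0] + rng.normal(size=n))  # skewed "costs"
print(ipw_ate(y, t, X))
```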
Sarvet, Aaron L.; Wall, Melanie M.; Fink, David S.; Greene, Emily; Le, Aline; Boustead, Anne E.; Pacula, Rosalie Liccardo; Keyes, Katherine M.; Cerdá, Magdalena; Galea, Sandro
2018-01-01
Aims To conduct a systematic review and meta-analysis of studies in order to estimate the effect of US medical marijuana laws (MMLs) on past-month marijuana use prevalence among adolescents. Methods A total of 2999 papers from 17 literature sources were screened systematically. Eleven studies, developed from four ongoing large national surveys, were meta-analyzed. Estimates of MML effects on any past-month marijuana use prevalence from included studies were obtained from comparisons of pre-post MML changes in MML states to changes in non-MML states over comparable time-periods. These estimates were standardized and entered into a meta-analysis model with fixed effects for each study. Heterogeneity among the study estimates by national data survey was tested with an omnibus F-test. Estimates of effects on additional marijuana outcomes, of MML provisions (e.g. dispensaries) and among demographic subgroups were abstracted and summarized. Key methodological and modeling characteristics were also described. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Results None of the 11 studies found significant estimates of pre-post MML changes compared with contemporaneous changes in non-MML states for marijuana use prevalence among adolescents. The meta-analysis yielded a non-significant pooled estimate (standardized mean difference) of −0.003 (95% confidence interval = −0.012, +0.007). Four studies compared MML with non-MML states on pre-MML differences and all found higher rates of past-month marijuana use in MML states pre-MML passage. Additional tests of specific MML provisions, of MML effects on additional marijuana outcomes and among subgroups generally yielded non-significant results, although limited heterogeneity may warrant further study. Conclusions Synthesis of the current evidence does not support the hypothesis that US medical marijuana laws (MMLs) until 2014 have led to increases in adolescent marijuana use prevalence. Limited heterogeneity exists among estimates of effects of MMLs on other patterns of marijuana use, of effects within particular population subgroups and of effects of specific MML provisions. PMID:29468763
Improved method of step length estimation based on inverted pendulum model.
Zhao, Qi; Zhang, Boxue; Wang, Jingjing; Feng, Wenquan; Jia, Wenyan; Sun, Mingui
2017-04-01
Step length estimation is an important issue in areas such as gait analysis, sport training, and pedestrian localization. In this article, we estimate the step length of walking using a waist-worn wearable computer named eButton. Motion sensors within this device are used to record body movement from the trunk instead of the extremities. Two signal-processing techniques are applied in our algorithm design. The direction cosine matrix transforms vertical acceleration from the device coordinates to topocentric coordinates. The empirical mode decomposition is used to remove the zero- and first-order skew effects resulting from an integration process. Our experimental results show that our algorithm performs well in step length estimation, with estimation errors ranging from 1.69% to 3.56% as the walking speed increased.
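A minimal sketch of the direction cosine matrix step, assuming Euler angles are available from the motion sensors; the rotation convention and all values are illustrative, not eButton's actual processing:

```python
import numpy as np

def body_to_topocentric(acc_body, roll, pitch, yaw):
    """Rotate a body-frame accelerometer sample into topocentric coordinates
    using a direction cosine matrix built from Euler angles (radians, Z-Y-X
    convention assumed). The vertical component can then be used for
    step-length features."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx @ acc_body

# A sensor tilted by 0.1 rad roll measuring pure gravity should map the
# measurement back onto the vertical axis
g_body = np.array([0.0, np.sin(0.1) * 9.81, np.cos(0.1) * 9.81])
print(body_to_topocentric(g_body, roll=0.1, pitch=0.0, yaw=0.0))  # ~[0, 0, 9.81]
```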
Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L
2018-01-01
We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
Manca, Andrea; Hawkins, Neil; Sculpher, Mark J
2005-05-01
In trial-based cost-effectiveness analysis baseline mean utility values are invariably imbalanced between treatment arms. A patient's baseline utility is likely to be highly correlated with their quality-adjusted life-years (QALYs) over the follow-up period, not least because it typically contributes to the QALY calculation. Therefore, imbalance in baseline utility needs to be accounted for in the estimation of mean differential QALYs, and failure to control for this imbalance can result in a misleading incremental cost-effectiveness ratio. This paper discusses the approaches that have been used in the cost-effectiveness literature to estimate absolute and differential mean QALYs alongside randomised trials, and illustrates the implications of baseline mean utility imbalance for QALY calculation. Using data from a recently conducted trial-based cost-effectiveness study and a micro-simulation exercise, the relative performance of alternative estimators is compared, showing that widely used methods to calculate differential QALYs provide incorrect results in the presence of baseline mean utility imbalance regardless of whether these differences are formally statistically significant. It is demonstrated that multiple regression methods can be usefully applied to generate appropriate estimates of differential mean QALYs and an associated measure of sampling variability, while controlling for differences in baseline mean utility between treatment arms in the trial. Copyright 2004 John Wiley & Sons, Ltd
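A minimal sketch of the regression approach, assuming simulated trial data with deliberately imbalanced baseline utility; variable names and coefficients are illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
treat = rng.integers(0, 2, size=n)
u0 = np.clip(rng.normal(0.65 + 0.03 * treat, 0.2, size=n), 0, 1)  # imbalanced baseline
qaly = 0.5 * u0 + 0.05 * treat + rng.normal(0, 0.1, size=n)       # true effect = 0.05

# The naive contrast inherits the baseline imbalance; regressing QALYs on
# treatment while controlling for baseline utility removes it
X = sm.add_constant(np.column_stack([treat, u0]))
fit = sm.OLS(qaly, X).fit()
print("naive difference :", qaly[treat == 1].mean() - qaly[treat == 0].mean())
print("adjusted estimate:", fit.params[1], "+/-", 1.96 * fit.bse[1])
```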
Pieroni, Luca; Muzi, Giacomo; Quercia, Augusto; Lanari, Donatella; Rundo, Carmen; Minelli, Liliana; Salmasi, Luca; dell’Omo, Marco
2015-01-01
Objectives: We evaluated the effects of the Italian 2005 smoking ban in public places on the prevalence of smoking, quitting and cigarette consumption of young workers. Data and Methods: The dataset was obtained from non-computerized registers of medical examinations for a population of workers with apprenticeship contracts residing in the province of Viterbo, Italy, in the period 1996–2007. To estimate the effects of the ban, a segmented regression approach was used, exploiting the discontinuity introduced by the application of the law on apprentices’ smoking behavior. Results: It is estimated that the Italian smoking ban generally had no effect on smoking prevalence, quitting ratio, or cigarette consumption of apprentices. However, when the estimates were applied to subpopulations, significant effects were found: −1% in smoking prevalence, +2% in quitting, and −3% in smoking intensity of apprentices with at least a diploma. PMID:26287220
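A toy version of the segmented regression design used above: a level shift and a slope change at the 2005 ban, fitted by ordinary least squares. Years, coefficients, and noise are invented for illustration only.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    years = np.arange(1996, 2008)
    ban = (years >= 2005).astype(float)          # Italian ban in force from 2005
    prevalence = (0.30 - 0.002 * (years - 1996) - 0.01 * ban
                  + rng.normal(0, 0.005, len(years)))

    # Segmented regression: pre-trend, level shift at the ban, post-ban slope change.
    t = years - 1996
    X = sm.add_constant(np.column_stack([t, ban, ban * (years - 2005)]))
    fit = sm.OLS(prevalence, X).fit()
    print(fit.params)  # [intercept, pre-trend, level change at ban, slope change after ban]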
Colored noise effects on batch attitude accuracy estimates
NASA Technical Reports Server (NTRS)
Bilanow, Stephen
1991-01-01
The effects of colored noise on the accuracy of batch least squares parameter estimates, with applications to attitude determination cases, are investigated. The standard approaches used for estimating the accuracy of a computed attitude commonly assume uncorrelated (white) measurement noise, while in actual flight experience measurement noise often contains significant time correlations and thus is colored. For example, horizon scanner measurements from low Earth orbit were observed to show correlations over many minutes in response to large-scale atmospheric phenomena. A general approach to the analysis of the effects of colored noise is investigated, and interpretation of the resulting equations provides insight into the effects of any particular noise color and the worst-case noise coloring for any particular parameter estimate. It is shown that for certain cases, the effects of relatively short-term correlations can be accommodated by a simple correction factor. The errors in the predicted accuracy assuming white noise, and the reduced accuracy due to the suboptimal nature of estimators that do not take the noise color characteristics into account, are discussed. The appearance of a variety of sample noise color characteristics is demonstrated through simulation, and their effects are discussed for sample estimation cases. Based on the analysis, options for dealing with the effects of colored noise are discussed.
Chaix, Basile; Duncan, Dustin; Vallée, Julie; Vernez-Moudon, Anne; Benmarhnia, Tarik; Kestens, Yan
2017-11-01
Because of confounding from the urban/rural and socioeconomic organization of territories, and the resulting correlation between residential and nonresidential exposures, classically estimated residential neighborhood-outcome associations capture nonresidential environment effects and thus overestimate residential intervention effects. Our study diagnosed and corrected this "residential" effect fallacy bias, which applies to a large fraction of neighborhood and health studies. Our empirical application investigated the effect that hypothetical interventions raising the residential number of services would have on the probability that a trip is walked. Using global positioning systems tracking and mobility surveys over 7 days (227 participants and 7,440 trips), we employed a multilevel linear probability model to estimate the trip-level association between the residential number of services and walking, yielding a naïve intervention effect estimate, and a corrected model accounting for the numbers of services at the residence, trip origin, and trip destination, yielding a corrected intervention effect estimate (the true effect, conditional on assumptions). There was a strong correlation in service densities between the residential neighborhood and nonresidential places. From the naïve model, hypothetical interventions raising the residential number of services to 200, 500, and 1000 were associated with increases of 0.020, 0.055, and 0.109 in the probability of walking in the intervention groups. Corrected estimates were 0.007, 0.019, and 0.039. Thus, naïve estimates were overestimated by multiplicative factors of 3.0, 2.9, and 2.8. Commonly estimated residential intervention-outcome associations substantially overestimate true effects. Our somewhat paradoxical conclusion is that to estimate residential effects, investigators critically need information on nonresidential places visited.
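The contrast between the naïve and corrected models can be sketched as follows, with ordinary least squares standing in for the paper's multilevel linear probability model and simulated data standing in for the GPS/mobility survey. The naïve slope absorbs the correlated nonresidential exposures.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 5000
    res = rng.poisson(300, n).astype(float)      # services near residence
    orig = 0.8 * res + rng.poisson(60, n)        # correlated non-residential exposures
    dest = 0.8 * res + rng.poisson(60, n)
    p_walk = np.clip(0.1 + 0.0001 * res + 0.0002 * orig + 0.0002 * dest, 0, 1)
    walked = rng.binomial(1, p_walk)

    naive = sm.OLS(walked, sm.add_constant(res)).fit()
    corrected = sm.OLS(walked, sm.add_constant(np.column_stack([res, orig, dest]))).fit()
    print(naive.params[1], corrected.params[1])  # naive slope is inflated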
Adjustment for reporting bias in network meta-analysis of antidepressant trials
2012-01-01
Background Network meta-analysis (NMA), a generalization of conventional MA, allows for assessing the relative effectiveness of multiple interventions. Reporting bias is a major threat to the validity of MA and NMA. Numerous methods are available to assess the robustness of MA results to reporting bias. We aimed to extend such methods to NMA. Methods We introduced 2 adjustment models for Bayesian NMA. First, we extended a meta-regression model that allows the effect size to depend on its standard error. Second, we used a selection model that estimates the propensity of trial results being published and in which trials with lower propensity are weighted up in the NMA model. Both models rely on the assumption that biases are exchangeable across the network. We applied the models to 2 networks of placebo-controlled trials of 12 antidepressants, with 74 trials in the US Food and Drug Administration (FDA) database but only 51 with published results. NMA and adjustment models were used to estimate the effects of the 12 drugs relative to placebo, the 66 effect sizes for all possible pair-wise comparisons between drugs, the probabilities of being the best drug, and the ranking of drugs. We compared the results from the 2 adjustment models applied to published data with those from NMAs of published data and NMAs of FDA data, the latter considered as representing the totality of the data. Results Both adjustment models showed reduced estimated effects for the 12 drugs relative to placebo as compared with the NMA of published data. Pair-wise effect sizes between drugs, probabilities of being the best drug, and ranking of drugs were modified. Estimated drug effects relative to placebo from both adjustment models were corrected (i.e., similar to those from the NMA of FDA data) for some drugs but not others, which resulted in differences in pair-wise effect sizes between drugs and in ranking. Conclusions In this case study, adjustment models showed that the NMA of published data was not robust to reporting bias and provided estimates closer to those of the NMA of FDA data, although not optimal. The validity of such methods depends on the number of trials in the network and the assumption that conventional MAs in the network share a common mean bias mechanism. PMID:23016799
Cypriano-Souza, Ana Lúcia; da Silva, Tiago Ferraz; Engel, Márcia H.; Bonatto, Sandro L.
2018-01-01
Genotypes of 10 microsatellite loci of 420 humpback whales from the Southwestern Atlantic Ocean population were used to estimate for the first time its contemporary effective (Ne) and census (Nc) population sizes and to test the genetic effect of commercial whaling. The results are in agreement with our previous studies that found high genetic diversity for this breeding population. Using an approximate Bayesian computation approach, the scenario of constant Ne was significantly supported over scenarios with moderate to strong size changes during the commercial whaling period. The previous-generation Nc (Ne multiplied by 3.6), which should correspond to the years between around 1980 and 1990, was estimated at between ~2,600 and 6,800 whales (point estimate ~4,000), and is broadly compatible with the recent abundance surveys extrapolated to the past using a growth rate of 7.4% per annum. The long-term Nc in the constant scenario (point estimate ~15,000) was broadly compatible (considering the confidence interval) with pre-whaling catch-record estimates (point estimate ~25,000). Overall, our results show that the Southwestern Atlantic Ocean humpback whale population is genetically very diverse and withstood the strong population reduction during commercial whaling. PMID:29668011
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events (with and without tissue sample collection) and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 individual of unknown sex). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
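A highly simplified illustration of the spatial ingredient in such models: a half-normal detection function evaluated over a discretized state space, yielding an effective sampling area and a naïve density estimate. Trap locations, parameters, and the grid are hypothetical, and the full spatial capture–recapture likelihood is not shown.

    import numpy as np

    def p_detect(d, g0=0.1, sigma=2.0):
        """Half-normal detection probability at distance d (km) from an
        activity centre, as used in spatial capture-recapture models."""
        return g0 * np.exp(-d**2 / (2 * sigma**2))

    # Discretized state space and trap (sampling) locations, all hypothetical.
    gx, gy = np.meshgrid(np.linspace(0, 50, 100), np.linspace(0, 50, 100))
    traps = np.array([[10.0, 10.0], [25.0, 25.0], [40.0, 40.0]])
    K = 10                                           # sampling occasions

    # Probability an individual centred at each grid cell is detected at least once.
    p_miss = np.ones_like(gx)
    for tx, ty in traps:
        d = np.hypot(gx - tx, gy - ty)
        p_miss *= (1 - p_detect(d)) ** K
    p_any = 1 - p_miss

    cell_area = (50 / 99) ** 2                       # km^2 per grid cell
    esa = (p_any * cell_area).sum()                  # effective sampling area
    n_detected = 50
    print(n_detected / esa, "individuals per km^2 (naive ESA-based estimate)")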
NASA Astrophysics Data System (ADS)
Shoko, Cletah; Clark, David; Mengistu, Michael; Dube, Timothy; Bulcock, Hartley
2015-01-01
This study evaluated the effect of two readily available multispectral sensors: the newly launched 30 m spatial resolution Landsat 8 and the long-serving 1000 m moderate resolution imaging spectroradiometer (MODIS) datasets in the spatial representation of total evaporation in the heterogeneous uMngeni catchment, South Africa, using the surface energy balance system model. The results showed that sensor spatial resolution plays a critical role in the accurate estimation of energy fluxes and total evaporation across a heterogeneous catchment. Landsat 8 estimates showed better spatial representation of the biophysical parameters and total evaporation for different land cover types, due to the relatively higher spatial resolution compared to the coarse spatial resolution MODIS sensor. Moreover, MODIS failed to capture the spatial variations of total evaporation estimates across the catchment. Analysis of variance (ANOVA) results showed that MODIS-based total evaporation estimates did not show any significant differences across different land cover types (one-way ANOVA; F1,924 = 1.412, p = 0.186). However, Landsat 8 images yielded significantly different estimates between different land cover types (one-way ANOVA; F1,993 = 5.185, p < 0.001). The validation results showed that Landsat 8 estimates were more comparable to eddy covariance (EC) measurements than the MODIS-based total evaporation estimates. EC measurement on May 23, 2013, was 3.8 mm/day, whereas the Landsat 8 estimate on the same day was 3.6 mm/day, with MODIS showing significantly lower estimates of 2.3 mm/day. The findings of this study underscore the importance of spatial resolution in estimating spatial variations of total evaporation at the catchment scale; thus, they provide critical information on the relevance of the readily available remote sensing products in water resources management in data-scarce environments.
Measuring attention using flash-lag effect.
Shioiri, Satoshi; Yamamoto, Ken; Oshida, Hiroki; Matsubara, Kazuya; Yaguchi, Hirohisa
2010-08-13
We investigated the effect of attention on the flash-lag effect (FLE) in order to determine whether the FLE can be used to estimate the effect of visual attention. The FLE is the phenomenon in which a flash aligned with a moving object is perceived to lag behind the moving object; several studies have shown that attention reduces its magnitude. We measured the FLE as a function of the number or speed of moving objects. The results showed that the effect of cueing, which we attributed to attention, on the FLE increased monotonically with the number or the speed of the objects. This suggests that the amount of attention can be estimated by measuring the FLE, assuming that attending to a larger number of objects, or to faster objects, requires a greater amount of attention. On the basis of this presumption, we attempted to measure the spatial spread of visual attention by FLE measurements. The estimated spatial spreads were similar to those estimated by other experimental methods.
An evaluation of sex-age-kill (SAK) model performance
Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent
2009-01-01
The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale, the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.
Assessing non-additive effects in GBLUP model.
Vieira, I C; Dos Santos, J P R; Pires, L P M; Lima, B M; Gonçalves, F M A; Balestre, M
2017-05-10
Understanding non-additive effects in the expression of quantitative traits is very important in genotype selection, especially in species where the commercial products are clones or hybrids. The use of molecular markers has allowed the study of non-additive genetic effects at the genomic level, in addition to a better understanding of their importance for quantitative traits. Thus, the purpose of this study was to evaluate the behavior of the GBLUP model under different genetic models and relationship matrices and their influence on the estimates of genetic parameters. We used real data on circumference at breast height in Eucalyptus spp. and simulated data from an F2 population. Three kinship structures commonly reported in the literature were adopted. The simulation results showed that the inclusion of epistatic kinship improved prediction estimates of genomic breeding values. However, the non-additive effects were not accurately recovered. The Fisher information matrix for the real dataset showed high collinearity among estimates of additive, dominance, and epistatic variance, causing convergence problems and no gain in the prediction of unobserved data. Estimates of genetic parameters and correlations differed across the different kinship structures. Our results show that the inclusion of non-additive effects can improve the predictive ability or even the prediction of additive effects. However, the high distortions observed in the variance estimates when the Hardy-Weinberg equilibrium assumption is violated, due to the presence of selection or inbreeding, can result in zero gain in models that consider epistasis in genomic kinship.
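A minimal GBLUP sketch, assuming a VanRaden-style additive relationship matrix built from simulated SNP genotypes; the dominance and epistatic kinships discussed above would enter analogously as additional variance components.

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 100, 500
    M = rng.binomial(2, 0.5, size=(n, m)).astype(float)   # SNP genotypes coded 0/1/2
    p = M.mean(axis=0) / 2                                # allele frequencies
    Z = M - 2 * p                                         # centred genotypes
    G = Z @ Z.T / (2 * (p * (1 - p)).sum())               # VanRaden additive G matrix

    y = rng.normal(0, 1, n)                               # phenotype (stand-in)
    h2 = 0.4                                              # assumed heritability
    lam = (1 - h2) / h2                                   # variance ratio sigma_e^2/sigma_u^2
    # GBLUP of genetic values: u_hat = G (G + lambda I)^{-1} (y - mean)
    u_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())
    print(u_hat[:5])                                      # predicted genomic breeding values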
Singh, M Suheshkumar; Yalavarthy, Phaneendra K; Vasu, R M; Rajan, K
2010-07-01
The aim of this study was to assess the effect of ultrasound modulation of near infrared (NIR) light on the quantification of the scattering coefficient in tissue-mimicking biological phantoms. A unique method to estimate the phase of the modulated NIR light, making use of only time-averaged intensity measurements with a charge coupled device camera, is used in this investigation. These experimental measurements from tissue-mimicking biological phantoms are used to estimate the differential pathlength, in turn leading to estimation of the optical scattering coefficient. A Monte Carlo model-based numerical estimation of phase in lieu of ultrasound modulation is performed to verify the experimental results. The results indicate that the ultrasound modulation of NIR light enhances the effective scattering coefficient. The observed effective scattering coefficient enhancement in tissue-mimicking viscoelastic phantoms increases with increasing ultrasound drive voltage. The same trend is noticed as the ultrasound modulation frequency approaches the natural vibration frequency of the phantom material. The contrast enhancement is less for the stiffer (larger storage modulus) tissue, mimicking a tumor necrotic core, than for normal tissue. The ultrasound modulation of the insonified region leads to an increase in the effective number of scattering events experienced by NIR light, increasing the measured phase and causing the enhancement in the effective scattering coefficient. The ultrasound modulation of NIR light could thus provide better estimation of the scattering coefficient. The observed local enhancement of the effective scattering coefficient in the ultrasound focal region is validated using both experimental measurements and Monte Carlo simulations.
Pollution, Health, and Avoidance Behavior: Evidence from the Ports of Los Angeles
ERIC Educational Resources Information Center
Moretti, Enrico; Neidell, Matthew
2011-01-01
A pervasive problem in estimating the costs of pollution is that optimizing individuals may compensate for increases in pollution by reducing their exposure, resulting in estimates that understate the full welfare costs. To account for this issue, measurement error, and environmental confounding, we estimate the health effects of ozone using daily…
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of estimating the source height. The further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
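For readers unfamiliar with MUSIC, here is a minimal narrowband pseudospectrum sketch for a uniform linear microphone array; it illustrates the subspace idea only, not the paper's three-dimensional localization over an impedance ground, and all parameters are hypothetical.

    import numpy as np

    def music_spectrum(R, n_sources, mics, angles, k):
        """Narrowband MUSIC pseudospectrum for a linear microphone array.
        R: spatial covariance, mics: positions (m), k: wavenumber (rad/m)."""
        eigvals, eigvecs = np.linalg.eigh(R)
        En = eigvecs[:, :-n_sources]                 # noise subspace (smallest eigenvalues)
        spec = []
        for th in angles:
            a = np.exp(1j * k * mics * np.sin(th))   # steering vector
            spec.append(1 / np.real(a.conj() @ En @ En.conj().T @ a))
        return np.array(spec)

    # Hypothetical 8-microphone array, single source at 20 degrees.
    c, f = 343.0, 1000.0
    k = 2 * np.pi * f / c
    mics = np.arange(8) * 0.1
    rng = np.random.default_rng(4)
    a_true = np.exp(1j * k * mics * np.sin(np.deg2rad(20)))
    snap = np.outer(a_true, rng.normal(size=200) + 1j * rng.normal(size=200))
    X = snap + 0.1 * (rng.normal(size=snap.shape) + 1j * rng.normal(size=snap.shape))
    R = X @ X.conj().T / X.shape[1]
    angles = np.deg2rad(np.linspace(-90, 90, 361))
    spec = music_spectrum(R, 1, mics, angles, k)
    print(np.rad2deg(angles[spec.argmax()]))         # pseudospectrum peak near 20 degrees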
[Effect of speech estimation on social anxiety].
Shirotsuki, Kentaro; Sasagawa, Satoko; Nomura, Shinobu
2009-02-01
This study investigates the effect of speech estimation on social anxiety to further our understanding of this characteristic of Social Anxiety Disorder (SAD). In the first study, we developed the Speech Estimation Scale (SES) to assess negative estimation before giving a speech, which has been reported to be the most feared social situation in SAD. Undergraduate students (n = 306) completed a set of questionnaires, which consisted of the Short Fear of Negative Evaluation Scale (SFNE), the Social Interaction Anxiety Scale (SIAS), the Social Phobia Scale (SPS), and the SES. Exploratory factor analysis showed an adequate one-factor structure with eight items. Further analysis indicated that the SES had good reliability and validity. In the second study, undergraduate students (n = 315) completed the SFNE, SIAS, SPS, SES, and the Self-reported Depression Scale (SDS). The results of path analysis showed that fear of negative evaluation from others (FNE) predicted social anxiety, and speech estimation mediated the relationship between FNE and social anxiety. These results suggest that speech estimation might maintain SAD symptoms and could be used as a specific target for cognitive intervention in SAD.
Choosing Models for Health Care Cost Analyses: Issues of Nonlinearity and Endogeneity
Garrido, Melissa M; Deb, Partha; Burgess, James F; Penrod, Joan D
2012-01-01
Objective To compare methods of analyzing endogenous treatment effect models for nonlinear outcomes and illustrate the impact of model specification on estimates of treatment effects such as health care costs. Data Sources Secondary data on cost and utilization for inpatients hospitalized in five Veterans Affairs acute care facilities in 2005–2006. Study Design We compare results from analyses with full information maximum simulated likelihood (FIMSL); control function (CF) approaches employing different types and functional forms for the residuals, including the special case of two-stage residual inclusion; and two-stage least squares (2SLS). As an example, we examine the effect of an inpatient palliative care (PC) consultation on direct costs of care per day. Data Collection/Extraction Methods We analyzed data for 3,389 inpatients with one or more life-limiting diseases. Principal Findings The distribution of average treatment effects on the treated and local average treatment effects of a PC consultation depended on model specification. CF and FIMSL estimates were more similar to each other than to 2SLS estimates. CF estimates were sensitive to choice and functional form of residual. Conclusions When modeling cost or other nonlinear data with endogeneity, one should be aware of the impact of model specification and treatment effect choice on results. PMID:22524165
The Distributional Impact of Social Security Policy Options.
Couch, Kenneth A; Reznik, Gayle L; Tamborini, Christopher R; Iams, Howard M
2017-01-01
Using microsimulation, we estimate the effects of three policy proposals that would alter Social Security's eligibility rules or benefit structure to reflect changes in women's labor force activity, marital patterns, and differential mortality among the aged. First, we estimate a set of options related to the duration of marriage required to receive divorced spouse and survivor benefits. Second, we estimate the effects of an earnings sharing proposal with survivor benefits, in which benefits are based entirely on earned benefits with spouses sharing their earnings during years of marriage. Third, we estimate the effects of adjusting benefits to reflect the increasing differential life expectancy by lifetime earnings. The results advance our understanding of the distributional effects of these alternative policy options on projected benefits and retirement income, including poverty and supplemental poverty status, of divorced and widowed women aged 60 or older in 2030.
Huang, Guowen; Lee, Duncan; Scott, E Marian
2018-03-30
The long-term health effects of air pollution are often estimated using a spatio-temporal ecological areal unit study, but this design leads to the following statistical challenges: (1) how to estimate spatially representative pollution concentrations for each areal unit; (2) how to allow for the uncertainty in these estimated concentrations when estimating their health effects; and (3) how to simultaneously estimate the joint effects of multiple correlated pollutants. This article proposes a novel 2-stage Bayesian hierarchical model for addressing these 3 challenges, with inference based on Markov chain Monte Carlo simulation. The first stage is a multivariate spatio-temporal fusion model for predicting areal level average concentrations of multiple pollutants from both monitored and modelled pollution data. The second stage is a spatio-temporal model for estimating the health impact of multiple correlated pollutants simultaneously, which accounts for the uncertainty in the estimated pollution concentrations. The novel methodology is motivated by a new study of the impact of both particulate matter and nitrogen dioxide concentrations on respiratory hospital admissions in Scotland between 2007 and 2011, and the results suggest that both pollutants exhibit substantial and independent health effects. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
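The spirit of propagating first-stage exposure uncertainty into the health model can be sketched with a simple Monte Carlo scheme: refit a Poisson regression on repeated draws of the predicted concentrations and pool the results. This is a crude stand-in (closer to multiple imputation than to the paper's joint Bayesian model), and all numbers are hypothetical.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 300                                      # areal units x time points
    pm_hat = rng.normal(10, 2, n)                # stage-1 predicted concentrations
    pm_sd = np.full(n, 0.5)                      # stage-1 predictive SDs
    admissions = rng.poisson(np.exp(2 + 0.02 * pm_hat))

    # Refit the health model on repeated draws from stage 1, then pool.
    betas = []
    for _ in range(200):
        pm_draw = rng.normal(pm_hat, pm_sd)
        fit = sm.GLM(admissions, sm.add_constant(pm_draw),
                     family=sm.families.Poisson()).fit()
        betas.append(fit.params[1])
    # A full analysis would also combine within-fit variance (Rubin-style pooling).
    print(np.mean(betas), np.std(betas))         # effect estimate with propagated uncertainty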
Targeted On-Demand Team Performance App Development
2016-10-01
from three sites; 6) preliminary analysis indicates a larger-than-estimated effect size, and the study is sufficiently powered for generalizable outcomes... statistical analyses, and examine any resulting qualitative data for trends or connections to statistical outcomes. On schedule... What opportunities for
Bayesian parameter estimation for chiral effective field theory
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data through to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare fitting all partial waves of the interaction simultaneously to cross-section data with fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
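A toy illustration of naturalness priors in a linearised setting, assuming the observables respond linearly to the LECs so that a Gaussian prior is conjugate and the posterior is available in closed form; the design matrix and scales are invented, not taken from chiral EFT.

    import numpy as np

    rng = np.random.default_rng(6)
    n_obs, n_lec = 50, 3
    X = rng.normal(size=(n_obs, n_lec))          # linearised design: d(obs)/d(LEC)
    a_true = np.array([0.9, -1.2, 0.4])          # "natural" LECs, O(1)
    sigma = 0.1                                  # observation error
    y = X @ a_true + rng.normal(0, sigma, n_obs)

    # Naturalness prior: a ~ N(0, abar^2 I). Posterior is Gaussian (conjugate).
    abar = 1.0
    A = X.T @ X / sigma**2 + np.eye(n_lec) / abar**2
    cov_post = np.linalg.inv(A)
    mean_post = cov_post @ (X.T @ y) / sigma**2
    print(mean_post, np.sqrt(np.diag(cov_post)))  # posterior summaries for the LECs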
Biemans, Floor; de Jong, Mart C M; Bijma, Piter
2017-06-30
Infectious diseases in farm animals affect animal health, decrease animal welfare and can affect human health. Selection and breeding of host individuals with desirable traits regarding infectious diseases can help to fight disease transmission, which is affected by two types of (genetic) traits: host susceptibility and host infectivity. Quantitative genetic studies on infectious diseases generally connect an individual's disease status to its own genotype, and therefore capture genetic effects on susceptibility only. However, they usually ignore variation in exposure to infectious herd mates, which may limit the accuracy of estimates of genetic effects on susceptibility. Moreover, genetic effects on infectivity will exist as well. Thus, to design optimal breeding strategies, it is essential that genetic effects on infectivity are quantified. Given the potential importance of genetic effects on infectivity, we set out to develop a model to estimate the effect of single nucleotide polymorphisms (SNPs) on both host susceptibility and host infectivity. To evaluate the quality of the resulting SNP effect estimates, we simulated an endemic disease in 10 groups of 100 individuals, and recorded time-series data on individual disease status. We quantified bias and precision of the estimates for different sizes of SNP effects, and identified the optimum recording interval when the number of records is limited. We present a generalized linear mixed model to estimate the effect of SNPs on both host susceptibility and host infectivity. SNP effects were on average slightly underestimated, i.e. estimates were conservative. Estimates were less precise for infectivity than for susceptibility. Given our sample size, the power to estimate SNP effects for susceptibility was 100% for differences between genotypes of a factor of 1.56 or more, and was higher than 60% for infectivity for differences between genotypes of a factor of 4 or more. When disease status was recorded 11 times on each animal, the optimal recording interval was 25 to 50% of the average infectious period. Our model was able to estimate genetic effects on susceptibility and infectivity. In future genome-wide association studies, it may serve as a starting point to identify genes that affect disease transmission and disease prevalence.
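A minimal sketch of the susceptibility side of such a model, assuming the common formulation in which the probability of escaping infection over an interval follows a complementary log-log regression with log exposure as an offset; random effects and the infectivity side are omitted, and all data are simulated.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    n = 400                                      # susceptible-animal intervals
    snp = rng.integers(0, 2, n).astype(float)    # susceptibility SNP genotype
    exposure = np.log(rng.uniform(0.05, 0.5, n)) # offset: log(I/N * dt), exposure to infectives
    eta = 0.5 + 0.8 * snp + exposure             # linear predictor
    p_inf = 1 - np.exp(-np.exp(eta))             # inverse complementary log-log link
    case = rng.binomial(1, p_inf)                # infected during interval?

    def negloglik(beta):
        eta = beta[0] + beta[1] * snp + exposure # cloglog link with exposure offset
        p = np.clip(1 - np.exp(-np.exp(eta)), 1e-12, 1 - 1e-12)
        return -np.sum(case * np.log(p) + (1 - case) * np.log(1 - p))

    fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
    print(fit.x)                                 # intercept and SNP effect on susceptibility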
2011-01-01
Background The identification of genes or quantitative trait loci that are expressed in response to different environmental factors such as temperature and light, through functional mapping, critically relies on precise modeling of the covariance structure. Previous work used separable parametric covariance structures, such as a Kronecker product of autoregressive one [AR(1)] matrices, that do not account for interaction effects of different environmental factors. Results We implement a more robust nonparametric covariance estimator to model these interactions within the framework of functional mapping of reaction norms to two signals. Our results from Monte Carlo simulations show that this estimator can be useful in modeling interactions that exist between two environmental signals. The interactions are simulated using nonseparable covariance models with spatio-temporal structural forms that mimic interaction effects. Conclusions The nonparametric covariance estimator has an advantage over separable parametric covariance estimators in the detection of QTL location, thus extending the breadth of use of functional mapping in practical settings. PMID:21269481
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.
2013-08-01
A new method for estimating the wall diffusion time of non-axisymmetric fields is developed. The method, based on rotating external fields and on measurement of the wall frequency response, is developed and tested in EXTRAP T2R. The method allows the experimental estimation of the wall diffusion time for each Fourier harmonic and the estimation of toroidal asymmetries in the wall diffusion. The method intrinsically considers the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.
Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A
2015-05-01
As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing the bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that, in general, estimates from the test-negative design have smaller bias than estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
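The key unbiasedness property can be checked with a toy simulation of the test-negative design, in which care-seeking depends on vaccination but not on the cause of illness; all parameters are arbitrary.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 200_000
    vacc = rng.binomial(1, 0.5, n)
    true_ve = 0.6
    flu = rng.binomial(1, 0.05 * np.where(vacc == 1, 1 - true_ve, 1.0))
    other_ari = rng.binomial(1, 0.10, n)         # unaffected by vaccination
    seek_care = rng.binomial(1, np.where(vacc == 1, 0.6, 0.4))  # differs by vaccination

    # Test-negative sample: care-seeking ARI patients, classified by influenza test.
    attended = (seek_care == 1) & ((flu == 1) | (other_ari == 1))
    case = flu[attended] == 1
    v = vacc[attended]
    odds_ratio = (np.sum(case & (v == 1)) * np.sum(~case & (v == 0))) / \
                 (np.sum(case & (v == 0)) * np.sum(~case & (v == 1)))
    print(1 - odds_ratio)    # approximately 0.6 despite differential care-seeking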
Influence of Network Model Detail on Estimated Health Effects of Drinking Water Contamination Events
Davis, Michael J.; Janke, Robert
2015-01-01
Network model detail can influence the accuracy of results from analyses of water distribution systems. Previous work has shown the limitations of skeletonized network models when considering water quality and hydraulic effects. Loss of model detail is potentially less important for aggregated effects, such as the systemwide health effects associated with a contamination event, but has received limited attention. The influence of model detail on such effects is examined here by comparing results obtained for contamination events using three large network models and several skeletonized versions of the models. Loss of model detail decreases the accuracy of estimated aggregated adverse effects related to contamination events. It has the potential to have a large negative influence on the results of consequence assessments and the design of contamination warning systems. However, the adverse influence on analysis results can be minimized by restricting attention to high percentile effects (i.e., 95th percentile or higher).
Statistical methods for incomplete data: Some results on model misspecification.
McIsaac, Michael; Cook, R J
2017-02-01
Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
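A small simulation in the same spirit: the outcome is observed only for some subjects, the true missingness model is nonlinear in the covariate, and the inverse probability weights come from a misspecified (linear) logistic model; the complete-case and IPW estimates of the mean can then be compared with the full-data benchmark. All data are simulated.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 20_000
    x = rng.normal(size=n)
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x))))
    # Response indicator depends on x nonlinearly (the true missingness model).
    r = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x**2 - 1))))

    # Complete-case mean of y.
    cc = y[r == 1].mean()

    # IPW mean with a slightly misspecified weight model (linear in x only).
    pfit = sm.Logit(r, sm.add_constant(x)).fit(disp=0)
    w = 1 / pfit.predict(sm.add_constant(x))
    ipw = np.sum(r * w * y) / np.sum(r * w)
    print(y.mean(), cc, ipw)   # compare both estimators with the full-data mean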
Roso, V M; Schenkel, F S; Miller, S P; Schaeffer, L R
2005-08-01
Breed additive, dominance, and epistatic loss effects are of concern in the genetic evaluation of a multibreed population. Multiple regression equations used for fitting these effects may show a high degree of multicollinearity among predictor variables. Typically, when strong linear relationships exist, the regression coefficients have large SE and are sensitive to changes in the data file and to the addition or deletion of variables in the model. Generalized ridge regression methods were applied to obtain stable estimates of direct and maternal breed additive, dominance, and epistatic loss effects in the presence of multicollinearity among predictor variables. Preweaning weight gains of beef calves in Ontario, Canada, from 1986 to 1999 were analyzed. The genetic model included fixed direct and maternal breed additive, dominance, and epistatic loss effects, fixed environmental effects of age of the calf, contemporary group, and age of the dam x sex of the calf, random additive direct and maternal genetic effects, and random maternal permanent environment effect. The degree and the nature of the multicollinearity were identified and ridge regression methods were used as an alternative to ordinary least squares (LS). Ridge parameters were obtained using two different objective methods: 1) generalized ridge estimator of Hoerl and Kennard (R1); and 2) bootstrap in combination with cross-validation (R2). Both ridge regression methods outperformed the LS estimator with respect to mean squared error of predictions (MSEP) and variance inflation factors (VIF) computed over 100 bootstrap samples. The MSEP of R1 and R2 were similar, and they were 3% less than the MSEP of LS. The average VIF of LS, R1, and R2 were equal to 26.81, 6.10, and 4.18, respectively. Ridge regression methods were particularly effective in decreasing the multicollinearity involving predictor variables of breed additive effects. Because of a high degree of confounding between estimates of maternal dominance and direct epistatic loss effects, it was not possible to compare the relative importance of these effects with a high level of confidence. The inclusion of epistatic loss effects in the additive-dominance model did not cause noticeable reranking of sires, dams, and calves based on across-breed EBV. More precise estimates of breed effects as a result of this study may result in more stable across-breed estimated breeding values over the years.
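A minimal sketch of the generalized ridge idea on simulated collinear predictors, with the ridge parameter set by the Hoerl-Kennard rule; this illustrates the estimator only, not the multibreed genetic evaluation model, and all data are invented.

    import numpy as np

    rng = np.random.default_rng(10)
    n, p = 500, 4
    z = rng.normal(size=n)
    X = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(p)])  # collinear
    beta_true = np.array([1.0, 0.5, -0.5, 0.2])
    y = X @ beta_true + rng.normal(0, 1, n)

    beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]        # unstable under collinearity
    sigma2 = np.sum((y - X @ beta_ls) ** 2) / (n - p)
    k = p * sigma2 / np.sum(beta_ls ** 2)                 # Hoerl-Kennard ridge parameter
    beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
    print(beta_ls, beta_ridge)                            # ridge shrinks the wild LS estimates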
A Bayesian Approach to More Stable Estimates of Group-Level Effects in Contextual Studies.
Zitzmann, Steffen; Lüdtke, Oliver; Robitzsch, Alexander
2015-01-01
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G
2017-07-01
Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement, demonstrating their superiority to conventional disc electrodes, in particular in the accuracy of Laplacian estimation. Recently, we proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested that they may decrease the truncation error, resulting in more accurate Laplacian estimates compared with the currently used constant inter-ring distances configurations. This study assesses the statistical significance of the Laplacian estimation accuracy improvement due to the novel variable inter-ring distances concentric ring electrodes. A full factorial analysis of variance design was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation, computed using a finite element method model for each combination of factor levels. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed, and the obtained results suggest that all three factors have statistically significant effects in the model, confirming the potential of using inter-ring distances as a means of improving the accuracy of Laplacian estimation.
Interpretable inference on the mixed effect model with the Box-Cox transformation.
Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M
2017-07-10
We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator for the maximum likelihood estimator of the parameters of this model in consideration of model misspecifications. Using these results, we developed an inference procedure for the difference in the model median between treatment groups at a specified occasion in the context of mixed effects models for repeated measures analysis in randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
Post, Ellen S.; Grambsch, Anne; Weaver, Chris; Morefield, Philip; Leung, Lai-Yung; Nolte, Christopher G.; Adams, Peter; Liang, Xin-Zhong; Zhu, Jin-Hong; Mahoney, Hardee
2012-01-01
Background: Future climate change may cause air quality degradation via climate-induced changes in meteorology, atmospheric chemistry, and emissions into the air. Few studies have explicitly modeled the potential relationships between climate change, air quality, and human health, and fewer still have investigated the sensitivity of estimates to the underlying modeling choices. Objectives: Our goal was to assess the sensitivity of estimated ozone-related human health impacts of climate change to key modeling choices. Methods: Our analysis included seven modeling systems in which a climate change model is linked to an air quality model, five population projections, and multiple concentration–response functions. Using the U.S. Environmental Protection Agency’s (EPA’s) Environmental Benefits Mapping and Analysis Program (BenMAP), we estimated future ozone (O3)-related health effects in the United States attributable to simulated climate change between the years 2000 and approximately 2050, given each combination of modeling choices. Health effects and concentration–response functions were chosen to match those used in the U.S. EPA’s 2008 Regulatory Impact Analysis of the National Ambient Air Quality Standards for O3. Results: Different combinations of methodological choices produced a range of estimates of national O3-related mortality from roughly 600 deaths avoided as a result of climate change to 2,500 deaths attributable to climate change (although the large majority produced increases in mortality). The choice of the climate change and the air quality model reflected the greatest source of uncertainty, with the other modeling choices having lesser but still substantial effects. Conclusions: Our results highlight the need to use an ensemble approach, instead of relying on any one set of modeling choices, to assess the potential risks associated with O3-related human health effects resulting from climate change. PMID:22796531
Da, Yang
2015-12-18
The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements, such as all genes of the genome, can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the maximum number of effects is 2q - 1 for additive effects and is the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. The genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and the genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly uses haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. The multi-allelic genetic partition fills a theoretical gap by providing general formulations for partitioning multi-allelic genotypic values, and the resulting haplotype method, grounded in the quantitative genetics model, is a step towards the utilization of functional and structural genomic information for genomic prediction and estimation.
Cross-seasonal effects on waterfowl productivity: Implications under climate change
Osnas, Erik; Zhao, Qing; Runge, Michael C.; Boomer, G Scott
2016-01-01
Previous efforts to relate winter-ground precipitation to subsequent reproductive success as measured by the ratio of juveniles to adults in the autumn failed to account for increased vulnerability of juvenile ducks to hunting and uncertainty in the estimated age ratio. Neglecting increased juvenile vulnerability will positively bias the mean productivity estimate, and neglecting increased vulnerability and estimation uncertainty will positively bias the year-to-year variance in productivity because raw age ratios are the product of sampling variation, the year-specific vulnerability, and year-specific reproductive success. Therefore, we estimated the effects of cumulative winter precipitation in the California Central Valley and the Mississippi Alluvial Valley on pintail (Anas acuta) and mallard (Anas platyrhynchos) reproduction, respectively, using hierarchical Bayesian methods to correct for sampling bias in productivity estimates and observation error in covariates. We applied the model to a hunter-collected parts survey implemented by the United States Fish and Wildlife Service and band recoveries reported to the United States Geological Survey Bird Banding Laboratory using data from 1961 to 2013. We compared our results to previous estimates that used simple linear regression on uncorrected age ratios from a smaller subset of years in pintail (1961–1985). Like previous analyses, we found large and consistent effects of population size and wetland conditions in prairie Canada on mallard productivity, and large effects of population size and mean latitude of the observed breeding population on pintail productivity. Unlike previous analyses, we report a large amount of uncertainty in the estimated effects of wintering-ground precipitation on pintail and mallard productivity, with considerable uncertainty in the sign of the estimated main effect, although the posterior medians of precipitation effects were consistent with past studies. We found more consistent estimates in the sign of an interaction effect between population size and precipitation, suggesting that wintering-ground precipitation has a larger effect in years of high population size, especially for pintail. When we used the estimated effects in a population model to derive a sustainable harvest and population size projection (i.e., a yield curve), there was considerable uncertainty in the effect of increased or decreased wintering-ground precipitation on sustainable harvest potential and population size. These results suggest that the mechanism of cross-seasonal effects between winter habitat and reproduction in ducks occurs through a reduction in the strength of density dependence in years of above-average wintering-ground precipitation. We suggest additional investigation of the underlying mechanisms and that habitat managers and decision-makers consider the level of uncertainty in these estimates when attempting to integrate habitat management and harvest management decisions. Collection of annual data on the status of wintering-ground habitat in a rigorous sampling framework would likely be the most direct way to improve understanding of mechanisms and inform management.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
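One plausible reading of the proposal, sketched under strong simplifying assumptions: form the usual weighted least squares estimate, then propagate the actual residuals (rather than the assumed noise covariance) through the estimator to obtain an empirical covariance. The observation model, weights, and noise levels below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(11)
    m, p = 200, 2
    H = np.column_stack([np.ones(m), np.linspace(0, 1, m)])   # observation model
    W = np.eye(m)                                             # assumed weights
    x_true = np.array([1.0, 2.0])
    y = H @ x_true + rng.normal(0, 0.5, m)                    # actual noise != assumed

    A = np.linalg.inv(H.T @ W @ H)
    x_hat = A @ H.T @ W @ y
    cov_theory = A                                            # classic WLS covariance
    resid = y - H @ x_hat
    # Empirical covariance: push the actual residuals through the estimator map.
    S = A @ H.T @ W
    cov_empirical = S @ np.outer(resid, resid) @ S.T
    print(np.diag(cov_theory), np.diag(cov_empirical))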
Fatigue Level Estimation of Bill Based on Acoustic Signal Feature by Supervised SOM
NASA Astrophysics Data System (ADS)
Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa
Fatigued bills have a harmful influence on the daily operation of Automated Teller Machines (ATMs). To make fatigued bill classification more efficient, the development of an automatic fatigued bill classification method is desired. We propose a new method to estimate the bending rigidity of a bill from the acoustic signal features of banking machines. The estimated bending rigidities are used as a continuous fatigue level for the classification of fatigued bills. By using the supervised Self-Organizing Map (supervised SOM), we estimate the bending rigidity effectively from the acoustic energy pattern alone. Experimental results with real bill samples show the effectiveness of the proposed method.
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
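For reference, a minimal sketch of the two standard estimators discussed in the abstract: the inverse-variance fixed-effect estimate and the DerSimonian-Laird random-effects estimate whose error estimation the authors critique (illustrative data; the improved fixed-effect-framework estimators the authors advocate are not reproduced here).

    import numpy as np

    def fixed_effect(y, v):
        # Inverse-variance fixed-effect pooled estimate and its variance
        w = 1.0 / v
        return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)

    def dersimonian_laird(y, v):
        # Classic random-effects estimate with method-of-moments tau^2
        w = 1.0 / v
        theta_fe = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - theta_fe) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)
        w_re = 1.0 / (v + tau2)
        return np.sum(w_re * y) / np.sum(w_re), 1.0 / np.sum(w_re)

    y = np.array([0.10, 0.30, -0.05, 0.25])   # illustrative study effects
    v = np.array([0.02, 0.05, 0.01, 0.04])    # within-study variances
    print(fixed_effect(y, v), dersimonian_laird(y, v))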
Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz
2011-01-01
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732
NASA Technical Reports Server (NTRS)
Varnai, Tamas; Marshak, Alexander
2000-01-01
This paper presents a simple approach to estimate the uncertainties that arise in satellite retrievals of cloud optical depth when the retrievals use one-dimensional radiative transfer theory for heterogeneous clouds that have variations in all three dimensions. For the first time, preliminary error bounds are set to estimate the uncertainty of cloud optical depth retrievals. These estimates can help us better understand the nature of uncertainties that three-dimensional effects can introduce into retrievals of this important product of the MODIS instrument. The probability distribution of resulting retrieval errors is examined through theoretical simulations of shortwave cloud reflection for a wide variety of cloud fields. The results are used to illustrate how retrieval uncertainties change with observable and known parameters, such as solar elevation or cloud brightness. Furthermore, the results indicate that a tendency observed in an earlier study, clouds appearing thicker for oblique sun, is indeed caused by three-dimensional radiative effects.
The effects of survey question wording on rape estimates: evidence from a quasi-experimental design.
Fisher, Bonnie S
2009-02-01
The measurement of rape is among the leading methodological issues in the violence against women field. Methodological discussion continues to focus on decreasing measurement errors and improving the accuracy of rape estimates. The current study used a quasi-experimental design to examine the effect of survey question wording on estimates of completed and attempted rape and verbal threats of rape. Specifically, the study statistically compares self-reported rape estimates from two nationally representative studies of college women's sexual victimization experiences, the National College Women Sexual Victimization study and the National Violence Against College Women study. Results show significant differences between the two sets of rape estimates, with National Violence Against College Women study rape estimates ranging from 4.4% to 10.4% lower than the National College Women Sexual Victimization study rape estimates. Implications for future methodological research are discussed.
[Dual process in large number estimation under uncertainty].
Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento
2016-08-01
According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.
NASA Astrophysics Data System (ADS)
Jin, Minquan; Delshad, Mojdeh; Dwarakanath, Varadarajan; McKinney, Daene C.; Pope, Gary A.; Sepehrnoori, Kamy; Tilburg, Charles E.; Jackson, Richard E.
1995-05-01
In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer test results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations.
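A sketch of the method-of-moments step, using the standard relation between average NAPL saturation, the tracer partition coefficient K, and the retardation of the partitioning tracer's mean arrival time (uniform time grid assumed; corrections for truncated breakthrough tails are omitted):

    import numpy as np

    def first_moment(t, c):
        # Mean arrival time of a tracer from its breakthrough curve c(t);
        # assumes a uniform time grid.
        return np.sum(t * c) / np.sum(c)

    def napl_saturation(t, c_nonpart, c_part, K):
        # Retardation factor R of the partitioning tracer relative to the
        # nonpartitioning tracer gives S_N = (R - 1) / (R - 1 + K).
        R = first_moment(t, c_part) / first_moment(t, c_nonpart)
        return (R - 1.0) / (R - 1.0 + K)

    # Illustrative synthetic breakthrough curves (Gaussian pulses)
    t = np.linspace(0.0, 10.0, 1001)
    c_n = np.exp(-0.5 * ((t - 2.0) / 0.3) ** 2)   # nonpartitioning tracer
    c_p = np.exp(-0.5 * ((t - 3.0) / 0.4) ** 2)   # partitioning tracer
    print(napl_saturation(t, c_n, c_p, K=5.0))    # R = 1.5 -> S_N ~ 0.091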
Paek, Insu
2015-01-01
The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test characteristics on four confidence interval (CI) procedures for coefficient alpha in terms of coverage rate (CR), length, and the degree of asymmetry of CI estimates. In addition, interval estimates of coefficient alpha when data follow the essentially tau-equivalent condition were investigated as a supplement to the case of dichotomous data with examinee guessing. For dichotomous data with guessing, the results did not reveal salient negative effects of guessing and its interactions with other test characteristics (sample size, test length, coefficient alpha levels) on CR and the degree of asymmetry, but the effect of guessing was salient as a main effect and an interaction effect with sample size on the length of the CI estimates, making longer CI estimates as guessing increases, especially when combined with a small sample size. Other important effects (e.g., CI procedures on CR) are also discussed. PMID:29795863
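Studies of this kind need a point and interval estimator for alpha to begin with; below is a minimal sketch of coefficient alpha with a nonparametric bootstrap percentile interval (a generic choice for illustration; the four CI procedures compared in the study are not named in this abstract).

    import numpy as np

    def cronbach_alpha(X):
        # X: n_examinees x k_items score matrix
        k = X.shape[1]
        return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                              / X.sum(axis=1).var(ddof=1))

    def bootstrap_ci(X, n_boot=2000, level=0.95, seed=0):
        # Percentile interval from resampling examinees with replacement
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        stats = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(n_boot)]
        a = (1 - level) / 2 * 100
        return tuple(np.percentile(stats, [a, 100 - a]))

    # Usage: X = np.loadtxt(...)  # item-score matrix
    #        print(cronbach_alpha(X), bootstrap_ci(X))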
Remote sensing of soil moisture content over bare fields at 1.4 GHz frequency
NASA Technical Reports Server (NTRS)
Wang, J. R.; Choudhury, B. J.
1980-01-01
A simple method of estimating the moisture content (W) of a bare soil from the observed brightness temperature (T_B) at 1.4 GHz is discussed. The method is based on a radiative transfer model calculation, which has been successfully used in the past to account for many observational results, with some modifications to take into account the effect of surface roughness. Besides the measured T_B's, the three additional inputs required by the method are the effective soil thermodynamic temperature, the precise relation between W and the smooth-field brightness temperature T_B, and a parameter specifying the surface roughness characteristics. The soil effective temperature can be readily measured, and the procedures for estimating the surface roughness parameter and obtaining the relation between W and the smooth-field brightness temperature are discussed in detail. Dual-polarized radiometric measurements at an off-nadir incident angle are sufficient to estimate both the surface roughness parameter and W, provided that the relation between W and the smooth-field brightness temperature at the same angle is known. The method of W estimation is demonstrated with two sets of experimental data, one from a controlled field experiment using a mobile tower and the other from aircraft overflights. The results from both data sets are encouraging when the estimated W's are compared with the acquired ground truth of W's in the top 2 cm layer. An offset between the estimated and measured W's exists in the results of the analyses, but it can be accounted for by the presently poor knowledge of the relationship between W and the smooth-field brightness temperature for various types of soils. An approach to quantify this relationship for different soils, and thus improve the method of W estimation, is suggested.
Performance of the split-symbol moments SNR estimator in the presence of inter-symbol interference
NASA Technical Reports Server (NTRS)
Shah, B.; Hinedi, S.
1989-01-01
The Split-Symbol Moments Estimator (SSME) is an algorithm that is designed to estimate symbol signal-to-noise ratio (SNR) in the presence of additive white Gaussian noise (AWGN). The performance of the SSME algorithm in band-limited channels is examined. The effects of the resulting inter-symbol interference (ISI) are quantified. All results obtained are in closed form and can be easily evaluated numerically for performance prediction purposes. Furthermore, they are validated through digital simulations.
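A toy moment-based variant of the split-symbol idea under AWGN only (no ISI; the paper's exact SSME moment combination and its band-limited analysis are not reproduced): the product of the two half-symbol sums estimates signal power, while their difference isolates the noise.

    import numpy as np

    def split_symbol_snr(halves):
        # halves: (n_symbols, 2) half-symbol integrate-and-dump outputs
        y1, y2 = halves[:, 0], halves[:, 1]
        sig2 = np.mean(y1 * y2)                  # ~ m^2, half-symbol mean squared
        noise2 = 0.5 * np.mean((y1 - y2) ** 2)   # ~ s^2, half-symbol noise var
        # Full-symbol SNR = (2m)^2 / (2 s^2) = 2 m^2 / s^2
        return 2.0 * sig2 / noise2

    rng = np.random.default_rng(1)
    m, s = 1.0, 0.5                              # half-symbol mean and noise sd
    halves = m + rng.normal(scale=s, size=(100000, 2))
    print(split_symbol_snr(halves))              # ~ 8.0 (true value 2*1/0.25)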
A Fourier approach to cloud motion estimation
NASA Technical Reports Server (NTRS)
Arking, A.; Lo, R. C.; Rosenfield, A.
1977-01-01
A Fourier technique is described for estimating cloud motion from pairs of pictures using the phase of the cross spectral density. The method allows motion estimates to be made for individual spatial frequencies, which are related to cloud pattern dimensions. Results obtained are presented and compared with the results of a Fourier-domain cross-correlation scheme. Tests using both artificial and real cloud data show that the technique is relatively sensitive to the presence of mixtures of motions, changes in cloud shape, and edge effects.
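A compact modern relative of this idea is phase correlation, sketched below: whiten the cross spectral density so that only its phase remains, then read an integer-pixel displacement off the correlation peak (the paper instead works per spatial frequency; this aggregate version is only an illustration).

    import numpy as np

    def phase_correlation_shift(img1, img2):
        # Displacement of img2 relative to img1 from the phase of the
        # cross spectral density (whitened cross-power spectrum).
        F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
        cross = np.conj(F1) * F2
        cross /= np.abs(cross) + 1e-12        # keep phase only
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > img1.shape[0] // 2: dy -= img1.shape[0]   # wrap to signed shift
        if dx > img1.shape[1] // 2: dx -= img1.shape[1]
        return dy, dx

    rng = np.random.default_rng(2)
    a = rng.normal(size=(64, 64))
    b = np.roll(a, (5, -3), axis=(0, 1))       # known displacement
    print(phase_correlation_shift(a, b))       # -> (5, -3)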
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2016-05-01
Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. The performance of the developed framework is first validated by simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. Next, three years of operating data from an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model of the GTE is therefore developed to formulate the relation between the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of the mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
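A minimal bootstrap particle filter in the spirit of the framework described (generic scalar model with stochastic input; the paper's gas-turbine thermodynamic model and its degradation states are not reproduced):

    import numpy as np

    def particle_filter(y, f, h, q_sample, lik, n_particles=1000, seed=0):
        # f: x_next = f(x, w) with stochastic input w ~ q_sample
        # h: measurement map; lik(y_k, h(x)) returns likelihood weights
        rng = np.random.default_rng(seed)
        x = rng.normal(size=n_particles)              # initial particle cloud
        estimates = []
        for yk in y:
            x = f(x, q_sample(rng, n_particles))      # propagate
            w = lik(yk, h(x))                         # weight by measurement fit
            w /= w.sum()
            estimates.append(np.sum(w * x))           # posterior-mean estimate
            x = x[rng.choice(n_particles, n_particles, p=w)]   # resample
        return np.array(estimates)

    # Example: random-walk state observed in Gaussian noise
    rng = np.random.default_rng(3)
    truth = np.cumsum(rng.normal(size=50))
    y_obs = truth + rng.normal(scale=0.5, size=50)
    est = particle_filter(y_obs, f=lambda x, w: x + w, h=lambda x: x,
                          q_sample=lambda r, n: r.normal(size=n),
                          lik=lambda yk, yp: np.exp(-0.5 * ((yk - yp) / 0.5) ** 2))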
Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters
NASA Astrophysics Data System (ADS)
Vasumathi, B.; Moorthi, S.
2011-11-01
In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective like fast response of a system is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method is effective in that it converges to a minimum error and yields a finer estimate. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method for estimating and eliminating voltage harmonics is demonstrated with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
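For reference, a sketch of the baseline (unmodified) ADALINE harmonic estimator: a sine/cosine basis at the harmonic frequencies with Widrow-Hoff (LMS) weight updates. The paper's modification to this update is not reproduced here.

    import numpy as np

    def adaline_harmonics(signal, t, f0, n_harmonics, mu=0.05):
        # Track harmonic content online; the amplitude of harmonic k at the
        # end is np.hypot(w[k-1], w[n_harmonics + k - 1]).
        w = np.zeros(2 * n_harmonics)
        history = np.empty((len(t), w.size))
        k = np.arange(1, n_harmonics + 1)
        for i, (ti, yi) in enumerate(zip(t, signal)):
            x = np.concatenate([np.sin(2 * np.pi * f0 * k * ti),
                                np.cos(2 * np.pi * f0 * k * ti)])
            e = yi - w @ x                # instantaneous estimation error
            w = w + 2 * mu * e * x        # Widrow-Hoff (LMS) update
            history[i] = w
        return history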
Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje
2016-10-01
To present a case study on how to compare various matching methods applying different measures of balance and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different results on the performance of the applied matching methods. Exact matching methods performed well across all measures of balance, but resulted in the exclusion of many observations, leading to a change in the baseline characteristics of the study sample and also in the effect estimate of the DMPDM2. All propensity score (PS)-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted model instead of a logistic regression model showed slightly better performance for balance diagnostics that take into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of different effect estimates found with different matching methods. © Health Research and Educational Trust.
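One widely used diagnostic of the kind compared in such studies is the absolute standardized mean difference (a generic example; the three diagnostics actually used in the paper are not named in this abstract):

    import numpy as np

    def standardized_difference(x_treat, x_ctrl):
        # Balance diagnostic for one covariate; values below ~0.1 are often
        # read as adequate balance after matching or weighting.
        pooled_sd = np.sqrt((x_treat.var(ddof=1) + x_ctrl.var(ddof=1)) / 2.0)
        return abs(x_treat.mean() - x_ctrl.mean()) / pooled_sd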
Problems and solutions in the estimation of genetic risks from radiation and chemicals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, W. L.
1980-01-01
Extensive investigations with mice on the effects of various physical and biological factors, such as dose rate, sex, and cell stage, on radiation-induced mutation have provided an evaluation of the genetic hazards of radiation in man. The mutational results obtained in both sexes with progressive lowering of the radiation dose rate have permitted estimation of the mutation frequency expected under the low-level radiation conditions of most human exposure. Supplementing the studies on mutation frequency are investigations on the phenotypic effects of mutations in mice, particularly anatomical disorders of the skeleton, which allow an estimation of the degree of human handicap associated with the occurrence of parallel defects in man. Estimation of the genetic risk from chemical mutagens is much more difficult, and the research is much less advanced. Results on transmitted mutations in mice indicate a poor correlation with mutation induction in non-mammalian organisms.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts often involves uncertain input variables and assumptions that have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life expectancy of the Helsinki metropolitan area population and the change in life expectancy due to fine particle exposure were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years of life lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental, and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. When estimating life expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without any major effect on the results.
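The rank-order sensitivity analysis described can be sketched in a few lines (toy output function and hypothetical input distributions; the paper's life-table model is far richer):

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)
    n = 10_000
    # Hypothetical Monte Carlo draws for two uncertain inputs
    er_coef = rng.lognormal(np.log(1.06), 0.02, size=n)   # exposure-response RR
    discount = rng.uniform(0.00, 0.05, size=n)            # discount rate

    # Toy output: discounted years of life lost driven by both inputs
    yoll = (er_coef - 1.0) * 1e5 * np.exp(-10 * discount)

    for name, x in [("exposure-response", er_coef), ("discount rate", discount)]:
        rho, _ = spearmanr(x, yoll)
        print(f"{name}: rank correlation with output = {rho:+.2f}")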
What is the effect of area size when using local area practice style as an instrument?
Brooks, John M; Tang, Yuexin; Chapman, Cole G; Cook, Elizabeth A; Chrischilles, Elizabeth A
2013-08-01
Discuss the tradeoffs inherent in choosing a local area size when using a measure of local area practice style as an instrument in instrumental variable estimation when assessing treatment effectiveness. Assess the effectiveness of angiotensin converting-enzyme inhibitors and angiotensin receptor blockers on survival after acute myocardial infarction for Medicare beneficiaries using practice style instruments based on different-sized local areas around patients. We contrasted treatment effect estimates using different local area sizes in terms of the strength of the relationship between local area practice styles and individual patient treatment choices; and indirect assessments of the assumption violations. Using smaller local areas to measure practice styles exploits more treatment variation and results in smaller standard errors. However, if treatment effects are heterogeneous, the use of smaller local areas may increase the risk that local practice style measures are dominated by differences in average treatment effectiveness across areas and bias results toward greater effectiveness. Local area practice style measures can be useful instruments in instrumental variable analysis, but the use of smaller local area sizes to generate greater treatment variation may result in treatment effect estimates that are biased toward higher effectiveness. Assessment of whether ecological bias can be mitigated by changing local area size requires the use of outside data sources. Copyright © 2013 Elsevier Inc. All rights reserved.
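A bare-bones two-stage least squares sketch of the estimation strategy discussed, with a local-area practice-style measure z as the instrument (point estimate only; valid standard errors require the usual 2SLS variance correction rather than naive stage-2 OLS errors):

    import numpy as np

    def two_stage_least_squares(y, treat, z, X):
        # Stage 1: predict treatment from instrument z and covariates X (n x p)
        Z1 = np.column_stack([np.ones_like(z), z, X])
        t_hat = Z1 @ np.linalg.lstsq(Z1, treat, rcond=None)[0]
        # Stage 2: regress outcome on predicted treatment and covariates
        X2 = np.column_stack([np.ones_like(t_hat), t_hat, X])
        beta = np.linalg.lstsq(X2, y, rcond=None)[0]
        return beta[1]                     # treatment-effect coefficient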
Reduced birthweight in short or primiparous mothers: physiological or pathological?
Zhang, X; Mumford, SL; Cnattingius, S; Schisterman, EF; Kramer, MS
2011-01-01
Objective: Customisation of birthweight-for-gestational-age standards for maternal characteristics assumes that variation in birthweight as a result of those characteristics is physiological, rather than pathological. Maternal height and parity are among the characteristics widely assumed to be physiological. Our objective was to test that assumption by using an association with perinatal mortality as evidence of a pathological effect. Design: Population-based cohort study. Setting: Sweden. Population: A total of 952 630 singletons born at ≥28 weeks of gestation in the period 1992–2001. Methods: We compared perinatal mortality among mothers of short stature (<160 cm) versus those of normal height (≥160 cm), and primiparous versus multiparous mothers, using an internal reference of estimated fetal weight for gestational age. The total effects of maternal height and parity were estimated, as well as the effects of height and parity independent of birthweight (controlled direct effects). All analyses were based on fetuses at risk, using marginal structural Cox models for the estimation of total and controlled direct effects. Main outcome measures: Perinatal mortality, stillbirth, and early neonatal mortality. Results: The estimated total effect (hazard ratio, HR) of short stature on perinatal death was 1.2 (95% CI 1.1–1.3) compared with women of normal height; the effect of short stature independent of birthweight (controlled direct effect) was 0.8 (95% CI 0.6–1.0) among small-for-gestational-age (SGA) births, but 1.1 (95% CI 1.0–1.3) among non-SGA births. Similar results were observed for primiparous mothers. Conclusions: The effect of maternal short stature or primiparity on perinatal mortality is partly mediated through SGA birth. Thus, birthweight differences resulting from these maternal characteristics appear not only to be physiological, but also to have an important pathological component. PMID:20618317
Smoking, health knowledge, and anti-smoking campaigns: an empirical study in Taiwan.
Hsieh, C R; Yen, L L; Liu, J T; Lin, C J
1996-02-01
This paper uses a measure of health knowledge of smoking hazards to investigate the determinants of health knowledge and its effect on smoking behavior. In our analysis, two equations are estimated: smoking participation and health knowledge. The simultaneity problem in estimating smoking behavior and health knowledge is also considered. Overall, the estimated results suggest that anti-smoking campaigns have a significantly positive effect on the public's health knowledge, and this health knowledge in turn, has a significantly negative effect on smoking participation. The health knowledge elasticities of smoking participation are -0.48 and -0.56 for all adults and adult males, respectively.
2013-01-01
Background: Depression is common and treatable with cognitive behavior therapy (CBT), for example. However, access to this therapy is limited. Internet-based interventions have been found to be effective in reducing symptoms of depression. The International Society for Research on Internet Interventions has highlighted the importance of translating effective Internet programs into multiple languages to enable worldwide dissemination. Objective: The aim of the current study was to determine whether it would be cost effective to translate an existing English-language Internet-based intervention for use in a non-English-speaking country. Methods: This paper reports an evaluation of a trial in which a research group in Norway translated into Norwegian two English-language Internet-based interventions (MoodGYM and BluePages) that had previously been shown to reduce symptoms of depression. The translation process is described, together with estimates of its cost-effectiveness. Health effects were estimated in quality-adjusted life years (QALYs). Results: Conservative estimates indicate that for every 1000 persons treated, 16 QALYs are gained. The investment is returned 9 times and the cost-effectiveness ratio (CER) is 3432. The costs of the translation project totaled approximately 27% of the estimated development costs of the original English-language versions. Conclusions: The economic analysis shows that the cost-effectiveness of the translation project was substantial. Hopefully, these results will encourage others to do similar analyses and report cost-effectiveness data in their research reports. PMID:23343481
Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.
Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas
2014-01-01
In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of biasing parameter, we use fraction of design space plots for evaluating the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on Hald cement data set.
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2016-01-01
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
Influence of various water quality sampling strategies on load estimates for small streams
Robertson, Dale M.; Roerish, Eric D.
1999-01-01
Extensive streamflow and water quality data from eight small streams were systematically subsampled to represent various water‐quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression was imprecise regardless of the sampling strategy used; for the most effective strategy, median absolute errors were ∼30% based on the load estimated with an integration method and all available data, if a regression approach is used with daily average streamflow. The most effective sampling strategy depends on the length of the study. For 1‐year studies, fixed‐period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed‐period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high‐flow samples, typically collected to help define the relation between high streamflow and high loads, result in imprecise, overestimated annual loads if these samples are consistently collected early in high‐flow events.
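A minimal version of the regression (rating-curve) load estimator being evaluated (hypothetical helper names; a log-log concentration-flow fit with Duan's smearing correction for log-retransformation bias; unit conversions omitted):

    import numpy as np

    def regression_load(q_sampled, c_sampled, q_daily):
        # Fit ln(concentration) on ln(flow) from the sampled days
        b, a = np.polyfit(np.log(q_sampled), np.log(c_sampled), 1)
        resid = np.log(c_sampled) - (a + b * np.log(q_sampled))
        smear = np.mean(np.exp(resid))     # Duan (1983) smearing estimator
        c_pred = smear * np.exp(a + b * np.log(q_daily))
        return np.sum(c_pred * q_daily)    # annual load: daily conc x flow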
Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.
Wang, Yibin; Nedelman, Jerry
2002-04-01
To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CVc), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CVc = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-absorbed-versus-time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the AUC can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-versus-time profile. However, only estimation error of k can lead to the Wagner-Nelson estimate of the fraction of drug absorbed being greater than unity.
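For reference, the estimator under study in a minimal sketch (one-compartment model, trapezoidal AUC, and a user-supplied independent estimate of k):

    import numpy as np

    def wagner_nelson(t, conc, k):
        # Fraction absorbed: F(t) = (C(t) + k*AUC_0..t) / (k*AUC_0..inf),
        # with the tail beyond the last sample extrapolated as C_last / k.
        # A noisy or biased k estimate can push F above 1, the bias
        # discussed in the abstract above.
        auc_t = np.concatenate([[0.0],
                                np.cumsum(np.diff(t) * (conc[1:] + conc[:-1]) / 2)])
        auc_inf = auc_t[-1] + conc[-1] / k
        return (conc + k * auc_t) / (k * auc_inf)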
A study of lens opacification for a Mars mission
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Wilson, J. W.; Cox, A. B.; Lett, J. T.
1991-01-01
A method based on risk-related cross sections is used to estimate risks of 'stationary' cataracts caused by radiation exposures during extended missions in deep space. Estimates of the even more important risk of late degenerative cataractogenesis are made on the basis of the limited data available. Data on lenticular opacification in the New Zealand white rabbit, an animal model from which such results can be extrapolated to humans, are analyzed by the Langley cosmic ray shielding code (HZETRN) to generate estimates of stationary cataract formation resulting from a Mars mission. The effects of the composition of shielding material and the relationship between risk and LET are given, and the effects of target fragmentation on the risk coefficients are evaluated explicitly.
Stevenson, James R; Villoria, Nelson; Byerlee, Derek; Kelley, Timothy; Maredia, Mywish
2013-05-21
New estimates of the impacts of germplasm improvement in the major staple crops between 1965 and 2004 on global land-cover change are presented, based on simulations carried out using a global economic model (Global Trade Analysis Project Agro-Ecological Zone), a multicommodity, multiregional computable general equilibrium model linked to a global spatially explicit database on land use. We estimate the impact of removing the gains in cereal productivity attributed to the widespread adoption of improved varieties in developing countries. Here, several different effects--higher yields, lower prices, higher land rents, and trade effects--have been incorporated in a single model of the impact of Green Revolution research (and subsequent advances in yields from crop germplasm improvement) on land-cover change. Our results generally support the Borlaug hypothesis that increases in cereal yields as a result of widespread adoption of improved crop germplasm have saved natural ecosystems from being converted to agriculture. However, this relationship is complex, and the net effect is of a much smaller magnitude than Borlaug proposed. We estimate that the total crop area in 2004 would have been between 17.9 and 26.7 million hectares larger in a world that had not benefited from crop germplasm improvement since 1965. Of these hectares, 12.0-17.7 million would have been in developing countries, displacing pastures and resulting in an estimated 2 million hectares of additional deforestation. However, the negative impacts of higher food prices on poverty and hunger under this scenario would likely have dwarfed the welfare effects of agricultural expansion.
Estimation of effective connectivity using multi-layer perceptron artificial neural network.
Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman
2018-02-01
Studies on interactions between brain regions estimate effective connectivity, (usually) based on causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mappings and to learn from training examples without detailed knowledge of the underlying system. At any time instant, the past samples of the data are placed at the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, a measure called the "causality coefficient" is defined based on the network structure, the connecting weights, and the parameters of the hidden-layer activation function. Simulation analysis demonstrates that the method, called CREANN (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method shows robustness with respect to the noise level of the data. Furthermore, the estimates are not significantly influenced by the model order (considered time-lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can show changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate the causal relationship among brain signals.
Prevalence of Individuals Experiencing the Effects of Stroke in Canada: Trends and Projections.
Krueger, Hans; Koot, Jacqueline; Hall, Ruth E; O'Callaghan, Christina; Bayley, Mark; Corbett, Dale
2015-08-01
Previous estimates of the number and prevalence of individuals experiencing the effects of stroke in Canada are out of date and exclude critical population groups. It is essential to have complete data that report on stroke disability for monitoring and planning purposes. The objective was to provide an updated estimate of the number of individuals experiencing the effects of stroke in Canada (and its regions), trending since 2000 and forecasted prevalence to 2038. The prevalence, trends, and projected number of individuals experiencing the effects of stroke were estimated using region-specific survey data and adjusted to account for children aged <12 years and individuals living in homes for the aged. In 2013, we estimate that there were 405 000 individuals experiencing the effects of stroke in Canada, yielding a prevalence of 1.15%. This value is expected to increase to between 654 000 and 726 000 by 2038. Trends in stroke data between 2000 and 2012 suggest a nonsignificant decrease in stroke prevalence, but a substantial and rising increase in the number of individuals experiencing the effects of stroke. Stroke prevalence varied considerably between regions. Previous estimates of stroke prevalence have underestimated the true number of individuals experiencing the effects of stroke in Canada. Furthermore, the projected increases that will result from population growth and demographic changes highlight the importance of maintaining up-to-date estimates. © 2015 American Heart Association, Inc.
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-09-01
Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, first a brain activity localization method (standardized low-resolution brain electromagnetic tomography) is applied to the EEG signal to extract active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and time dependence between sources. Then, the dual Kalman filter is used to estimate the model parameters, that is, the effective connectivity between active regions. The advantage of this method is the estimation of the activity of different brain parts simultaneously with the calculation of effective connectivity between active regions. By combining the dual Kalman filter with brain source localization methods, in addition to the connectivity estimation between parts, the source activity is updated over time. The proposed method's performance was evaluated first by applying it to simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. Comparing the results across these settings (simulated and real signals), the proposed method gives acceptable results, with the least mean square error under noisy and real conditions.
A cost-effectiveness comparison of existing and Landsat-aided snow water content estimation systems
NASA Technical Reports Server (NTRS)
Sharp, J. M.; Thomas, R. W.
1975-01-01
This study describes how Landsat imagery can be cost-effectively employed to augment an operational hydrologic model. Attention is directed toward the estimation of snow water content, a major predictor variable in the volumetric runoff forecasting model presently used by the California Department of Water Resources. A stratified double sampling scheme is supplemented with qualitative and quantitative analyses of existing operations to develop a comparison between the existing and satellite-aided approaches to snow water content estimation. Results show a decided advantage for the Landsat-aided approach.
Evaluation of DOTD's Existing Queue Estimation Procedures : Research Project Capsule
DOT National Transportation Integrated Search
2017-10-01
The primary objective of this study is to evaluate the effectiveness of DOTD's queue estimation procedures by comparing results with those obtained directly from site observations through video camera footage or other means. Actual queue start time...
Abrams, Keith R.; Amonkar, Mayur M.; Stapelkamp, Ceilidh; Swann, R. Suzanne
2015-01-01
Background. Patients with previously untreated BRAF V600E mutation-positive melanoma in BREAK-3 showed a median overall survival (OS) of 18.2 months for dabrafenib versus 15.6 months for dacarbazine (hazard ratio [HR], 0.76; 95% confidence interval, 0.48–1.21). Because patients receiving dacarbazine were allowed to switch to dabrafenib at disease progression, we attempted to adjust for the confounding effects on OS. Materials and Methods. Rank preserving structural failure time models (RPSFTMs) and the iterative parameter estimation (IPE) algorithm were used. Two analyses, “treatment group” (assumes treatment effect could continue until death) and “on-treatment observed” (assumes treatment effect disappears with discontinuation), were used to test the assumptions around the durability of the treatment effect. Results. A total of 36 of 63 patients (57%) receiving dacarbazine switched to dabrafenib. The adjusted OS HRs ranged from 0.50 to 0.55, depending on the analysis. The RPSFTM and IPE “treatment group” and “on-treatment observed” analyses performed similarly well. Conclusion. RPSFTM and IPE analyses resulted in point estimates for the OS HR that indicate a substantial increase in the treatment effect compared with the unadjusted OS HR of 0.76. The results are uncertain because of the assumptions associated with the adjustment methods. The confidence intervals continued to cross 1.00; thus, the adjusted estimates did not provide statistically significant evidence of a treatment benefit on survival. However, it is clear that a standard intention-to-treat analysis will be confounded in the presence of treatment switching—a reliance on unadjusted analyses could lead to inappropriate practice. Adjustment analyses provide useful additional information on the estimated treatment effects to inform decision making. Implications for Practice: Treatment switching is common in oncology trials, and the implications of this for the interpretation of the clinical effectiveness and cost-effectiveness of the novel treatment are important to consider. If patients who switch treatments benefit from the experimental treatment and a standard intention-to-treat analysis is conducted, the overall survival advantage associated with the new treatment could be underestimated. The present study applied established statistical methods to adjust for treatment switching in a trial that compared dabrafenib and dacarbazine for metastatic melanoma. The results showed that this led to a substantially increased estimate of the overall survival treatment effect associated with dabrafenib. PMID:26040620
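The core RPSFTM device can be sketched under assumed variable names (the g-estimation search for psi and the re-censoring needed for a valid analysis are only noted in comments):

    import numpy as np

    def counterfactual_time(t_total, t_on_treatment, psi):
        # RPSFTM counterfactual untreated time: U = T_off + exp(psi) * T_on.
        # psi < 0 for a beneficial treatment, so time on treatment is
        # shrunk to its untreated equivalent.
        t_off = t_total - t_on_treatment
        return t_off + np.exp(psi) * t_on_treatment

    # In a full analysis, psi is found by g-estimation: search a grid of psi
    # values until a test statistic (e.g., log-rank) comparing U between the
    # randomized arms equals zero, re-censoring U to avoid informative
    # censoring.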
Flannagan, Carol A. C.; Bingham, C. Raymond; Cunningham, Rebecca M.; Rupp, Jonathan D.
2015-01-01
Objectives. We estimated the injury prevention impact and cost savings associated with alcohol interlock installation in all new US vehicles. Methods. We identified fatal and nonfatal injuries associated with drinking driver vehicle crashes from the Fatality Analysis Reporting System and National Automotive Sampling System’s General Estimates System data sets (2006–2010). We derived the estimated impact of universal interlock installation using an estimate of the proportion of alcohol-related crashes that were preventable in vehicles < 1 year-old. We repeated this analysis for each subsequent year, assuming a 15-year implementation. We applied existing crash-induced injury cost metrics to approximate economic savings, and we used a sensitivity analysis to examine results with varying device effectiveness. Results. Over 15 years, 85% of crash fatalities (> 59 000) and 84% to 88% of nonfatal injuries (> 1.25 million) attributed to drinking drivers would be prevented, saving an estimated $342 billion in injury-related costs, with the greatest injury and cost benefit realized among recently legal drinking drivers. Cost savings outweighed installation costs after 3 years, with the policy remaining cost effective provided device effectiveness remained above approximately 25%. Conclusions. Alcohol interlock installation in all new vehicles is likely a cost-effective primary prevention policy that will substantially reduce alcohol-involved crash fatalities and injuries, especially among young vulnerable drivers. PMID:25790385
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-08-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
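A common BIC-weighted approximation to the model-averaging step (a generic sketch, not necessarily the authors' implementation): weight each candidate model's effect estimate by its approximate posterior model probability and add a between-model variance term.

    import numpy as np

    def bma_combine(betas, ses, bics):
        # betas/ses: per-model effect estimates and standard errors
        # bics: per-model BIC values; exp(-BIC/2) approximates the
        # posterior model probability up to a constant
        betas, ses, bics = map(np.asarray, (betas, ses, bics))
        w = np.exp(-0.5 * (bics - bics.min()))
        w /= w.sum()
        beta_bar = np.sum(w * betas)
        var_bar = np.sum(w * (ses ** 2 + (betas - beta_bar) ** 2))
        return beta_bar, np.sqrt(var_bar)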
Gene–Environment Correlation: Difficulties and a Natural Experiment–Based Strategy
Li, Jiang; Liu, Hexuan; Guo, Guang
2013-01-01
Objectives. We explored how gene–environment correlations can result in endogenous models, how natural experiments can protect against this threat, and if unbiased estimates from natural experiments are generalizable to other contexts. Methods. We compared a natural experiment, the College Roommate Study, which measured genes and behaviors of college students and their randomly assigned roommates in a southern public university, with observational data from the National Longitudinal Study of Adolescent Health in 2008. We predicted exposure to exercising peers using genetic markers and estimated environmental effects on alcohol consumption. A mixed-linear model estimated an alcohol consumption variance that was attributable to genetic markers and across peer environments. Results. Peer exercise environment was associated with respondent genotype in observational data, but not in the natural experiment. The effects of peer drinking and presence of a general gene–environment interaction were similar between data sets. Conclusions. Natural experiments, like random roommate assignment, could protect against potential bias introduced by gene–environment correlations. When combined with representative observational data, unbiased and generalizable causal effects could be estimated. PMID:23927502
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed poorly when models contained <5 levels of the random intercept term, especially for estimating variance components, and this effect appeared independent of total sample size. These results suggest that OLRE are a useful tool for modelling overdispersion in Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.
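The two overdispersion-generating processes contrasted in these simulations can be reproduced in a few lines (illustrative parameters; fitting the OLRE and Beta-Binomial models themselves requires a mixed-model package and is omitted):

    import numpy as np

    rng = np.random.default_rng(6)
    n, trials, eta, sigma = 500, 20, 0.0, 1.0

    # Process 1: OLRE-type, Gaussian noise added to the linear predictor
    p1 = 1.0 / (1.0 + np.exp(-(eta + rng.normal(scale=sigma, size=n))))
    y1 = rng.binomial(trials, p1)

    # Process 2: Beta-Binomial mixture, Beta-distributed success probabilities
    y2 = rng.binomial(trials, rng.beta(2.0, 2.0, size=n))

    # Both inflate the variance well past the nominal Binomial value
    for y in (y1, y2):
        print(y.var(), trials * 0.5 * 0.5)    # observed vs nominal variance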
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, but those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
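The following minimal Python sketch shows one common way such a PSA can be implemented: a logistic propensity model on the pretest and demographics, followed by ATT-style weighting of the comparison group. The column names and the weighting choice are illustrative assumptions, not the study's exact procedure:

```python
# Sketch: estimate propensity scores from a pretest and demographics, then
# reweight an untreated comparison group toward the treated group.
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_weights(df, covs, treat_col="treated"):
    ps_model = LogisticRegression(max_iter=1000)
    ps = ps_model.fit(df[covs], df[treat_col]).predict_proba(df[covs])[:, 1]
    # Treated subjects get weight 1; comparisons get the odds ps / (1 - ps)
    return np.where(df[treat_col] == 1, 1.0, ps / (1 - ps))

# usage (hypothetical columns):
# w = att_weights(df, ["pretest", "age", "female", "home_language", "mom_educ"])
# effect = df.loc[df.treated == 1, "posttest"].mean() - \
#          np.average(df.loc[df.treated == 0, "posttest"],
#                     weights=w[df.treated == 0])
```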
Artificial neural networks for stiffness estimation in magnetic resonance elastography.
Murphy, Matthew C; Manduca, Armando; Trzasko, Joshua D; Glaser, Kevin J; Huston, John; Ehman, Richard L
2018-07-01
To investigate the feasibility of using artificial neural networks to estimate stiffness from MR elastography (MRE) data. Artificial neural networks were fit using model-based training patterns to estimate stiffness from images of displacement using a patch size of ∼1 cm in each dimension. These neural network inversions (NNIs) were then evaluated in a set of simulation experiments designed to investigate the effects of wave interference and noise on NNI accuracy. NNI was also tested in vivo, comparing NNI results against currently used methods. In 4 simulation experiments, NNI performed as well as or better than direct inversion (DI) for predicting the known stiffness of the data. Summary NNI results were also shown to be significantly correlated with DI results in the liver (R² = 0.974) and in the brain (R² = 0.915), and also correlated with established biological effects including fibrosis stage in the liver and age in the brain. Finally, repeatability error was lower in the brain using NNI compared to DI, and voxel-wise modeling using NNI stiffness maps detected larger effects than using DI maps with similar levels of smoothing. Artificial neural networks represent a new approach to inversion of MRE data. Summary results from NNI and DI are highly correlated and both are capable of detecting biologically relevant signals. Preliminary evidence suggests that NNI stiffness estimates may be more resistant to noise than an algebraic DI approach. Taken together, these results merit future investigation into NNIs to improve the estimation of stiffness in small regions. Magn Reson Med 80:351-360, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Price and welfare effects of a pharmaceutical substitution reform.
Granlund, David
2010-12-01
The price effects of the Swedish pharmaceutical substitution reform are analyzed using data for a panel of all pharmaceutical products sold in Sweden in 1997-2007. The price reduction due to the reform was estimated to average 10% and was found to be significantly larger for brand-name pharmaceuticals than for generics. The results also imply that the reform amplified the effect that generic entry has on brand-name prices by a factor of 10. Results of a demand estimation imply that the price reductions increased total pharmaceutical consumption by 8% and consumer welfare by SEK 2.7 billion annually. Copyright © 2010 Elsevier B.V. All rights reserved.
HOMAIE RAD, Enayatollah; HADIAN, Mohamad; GHOLAMPOOR, Hanie
2014-01-01
Background: A skilled labor force is very important for economic growth. Workers become skilled when they are healthy and able to be educated and work. In this study, we estimated the effects of health indicators on labor supply, using the labor force participation rate as the indicator of labor supply. We divided this indicator into female and male labor force participation rates and compared the results of the two estimates. Methods: This study covered eastern Mediterranean countries between 1995 and 2011. We used a panel cointegration approach to estimate the models, applying the Pesaran cross-sectional dependency test, the Pesaran unit root test, and the Westerlund panel cointegration test. Finally, after confirming that random effects models were appropriate, we estimated the models with random effects. Results: Increasing the fertility rate decreased the female labor supply but increased the male labor supply. Public health expenditures increased the female labor supply but decreased the male labor supply because of substitution effects. Similar results were found for urbanization. Gross domestic product had a positive relationship with female labor supply, but not with male labor supply. Out-of-pocket health expenditures had a negative relationship with male labor supply, but no significant relationship with female labor supply. Conclusion: The effects of the health variables were stronger in the female labor supply model than in the male model. Countries should pay greater attention to women’s health in order to change the labor supply. PMID:26060746
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R) that the tested effect is real. The present document discusses challenges of estimating R.
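For concreteness, PPV and NPV follow from Bayes' rule once an a priori probability R is supplied; a small hedged Python example, with conventional rather than paper-specific alpha and power values:

```python
# Sketch: PPV and NPV of a significance test given an a priori probability R
# that the tested effect is real, significance level alpha, and power 1 - beta.
def ppv_npv(R, alpha=0.05, power=0.8):
    beta = 1 - power
    ppv = (power * R) / (power * R + alpha * (1 - R))
    npv = ((1 - alpha) * (1 - R)) / ((1 - alpha) * (1 - R) + beta * R)
    return ppv, npv

print(ppv_npv(R=0.1))   # low prior: PPV is only ~0.64 despite p < 0.05
```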
Rheingans, Richard; Anderson, John D; Anderson, Benjamin; Chakraborty, Poulomy; Atherly, Deborah; Pindolia, Deepa
2014-08-11
India accounts for 23% of global rotavirus mortality in under-five children, with more than 100,000 deaths from rotavirus annually. Introduction of a vaccine in India is considered to be the most effective intervention for preventing rotavirus mortality. Recent research suggests that there is considerable variation in rotavirus mortality burden across regional, gender and socio-economic subpopulations within India. In addition, there is potential variability in who would likely receive rotavirus vaccine if introduced. We use available household data to estimate heterogeneity in rotavirus mortality risk, vaccination benefits, and cost-effectiveness across geographic and socio-economic groups within India. We account for heterogeneity by modeling estimated three-dose routine vaccinations as a proxy for a generalized rotavirus vaccine, and mortality for subpopulations of children aggregated by region and state, socio-economic status and sex, separately. Results are presented for six geographic regions and for Bihar, Uttar Pradesh, and Madhya Pradesh, three high mortality states accounting for 56% of national mortality estimates. Impact estimates accounting for disparities predict rotavirus vaccine introduction will prevent 35,000 deaths at an average cost of $118/DALY averted (7292 INR/DALY averted). Rotavirus vaccines are most cost-effective for the poor living in high mortality regions and states. Reductions in geographic and socio-economic disparities based on regional estimates could prevent an additional 9400 deaths annually, while reductions in socio-economic disparities in the three highest mortality states alone could prevent an additional 10,600 deaths annually. Understanding the impact of heterogeneity can help improve strategies to maximize the benefits of rotavirus vaccination introduction, leading to fewer lives lost as a result of rotavirus disease. Copyright © 2014. Published by Elsevier Ltd.
Size, skills, and suffrage: Motivated distortions in perceived formidability of political leaders
Blaker, Nancy M.; Pollet, Thomas V.
2017-01-01
Research shows that perception of physical size and status are positively associated. The current study was developed to replicate and extend earlier research on height perceptions of political leaders, indicating that supporters perceive their leaders as taller than non-supporters do, and that winners are perceived as taller and losers as shorter after the elections (winner/loser effects). Individuals use greater height and strength as indications of greater physical formidability. We hypothesized that in-group leaders’ height and strength, but not weight, would be overestimated more compared to out-group leaders’, and that this status-size association is not only driven by dominance, but also by prestige. We also tested whether previously found gender effects in estimates were due to using one’s own height as an anchor, and we used an improved methodological approach by relying on multiple measurements of physical formidability and a within-subject design for testing winner/loser effects. The results of a two-part longitudinal study (self-selected sample via a voting advice website; N(Wave 1) = 2,011; N(Wave 2) = 322) suggest that estimated physical formidability of political leaders is affected by motivated perception, as prestige was positively associated with estimated formidability, and in-group leaders were estimated more formidable than out-group leaders. We conclude that distortions in judged formidability related to social status are the result of motivated social perception in order to promote group functioning and leadership. Although we did not replicate a winner effect (greater estimations of formidability after winning the elections), we did find some evidence for a loser effect. Earlier suggestions that men make larger estimations than women because of their own larger body size are not supported. Implications for theory and future research are discussed. PMID:29267275
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be used when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land-use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
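The sketch below illustrates the kind of correction being described, in the spirit of Hausman-type misclassification-corrected likelihoods; the simulation settings and parameterization are illustrative assumptions, not the dissertation's own code:

```python
# Sketch of a misclassification-corrected binary logit:
# alpha0 = P(recorded 1 | true 0), alpha1 = P(recorded 0 | true 1).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.3, 1.0])
y_true = rng.random(n) < expit(X @ beta_true)
a0, a1 = 0.05, 0.05                              # misclassification rates
flip = rng.random(n)
y_obs = np.where(y_true, flip > a1, flip < a0)   # contaminated labels

def negll(theta):
    beta, alpha0, alpha1 = theta[:2], expit(theta[2]), expit(theta[3])
    p = alpha0 + (1 - alpha0 - alpha1) * expit(X @ beta)
    return -(y_obs * np.log(p) + (1 - y_obs) * np.log(1 - p)).sum()

fit = minimize(negll, x0=[0.0, 0.0, -3.0, -3.0], method="BFGS")
print(fit.x[:2])   # slope near 1.0; a naive logit on y_obs attenuates it
```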
Spectral Estimation Model Construction of Heavy Metals in Mining Reclamation Areas
Dong, Jihong; Dai, Wenting; Xu, Jiren; Li, Songnian
2016-01-01
This study examined surface soils in the Liuxin mining area of Xuzhou and explored the relationship between heavy metal content and spectral data by establishing quantitative models with Multivariable Linear Regression (MLR), Generalized Regression Neural Network (GRNN) and Sequential Minimal Optimization for Support Vector Machine (SMO-SVM) methods. The study results are as follows: (1) the estimations of the spectral inversion models established based on MLR, GRNN and SMO-SVM are satisfactory, and the MLR model provides the worst estimation, with R² of more than 0.46. This result suggests that the stress-sensitive bands of heavy metal pollution contain enough effective spectral information; (2) the GRNN model can simulate the data from small samples more effectively than the MLR model, and the R² between the contents of the five heavy metals estimated by the GRNN model and the measured values are approximately 0.7; (3) the stability and accuracy of the spectral estimation using the SMO-SVM model are obviously better than those of the GRNN and MLR models. Among all five types of heavy metals, the estimation for cadmium (Cd) is the best when using the SMO-SVM model, and its R² value reaches 0.8628; (4) using the optimal model to invert the Cd content in wheat planted on mine reclamation soil, the R² and RMSE between the measured and the estimated values are 0.6683 and 0.0489, respectively. This result suggests that the method using the SMO-SVM model to estimate the contents of heavy metals in wheat samples is feasible. PMID:27367708
Pixel-by-Pixel Estimation of Scene Motion in Video
NASA Astrophysics Data System (ADS)
Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.
2017-05-01
The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find inter-frame shifts of all pixels of a frame; these shifts form a shift vector field. As estimated parameters of the vectors, the paper studies their projections and polar parameters. It considers two methods for estimating the shift vector field. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left. Subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. The paper studies two criteria for forming it: minimum of the gradient estimate and maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object and estimation of a moving object's trajectory using the shift vector field.
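A toy Python sketch of the first method's core idea follows: a recurrent stochastic-gradient update of a shift estimate while scanning rows in alternating directions. It assumes a single global translation and a linearized residual, a simplification of the per-pixel algorithm described above:

```python
# Toy sketch of recurrent, pixel-by-pixel shift estimation by stochastic
# gradient descent with bidirectional row scanning.
import numpy as np

def recurrent_shift_sgd(f0, f1, mu=1e-3):
    gy, gx = np.gradient(f1.astype(float))
    d = np.zeros(2)                          # running shift estimate (dy, dx)
    field = np.zeros(f0.shape + (2,))
    for i in range(f0.shape[0]):
        cols = range(f0.shape[1])
        if i % 2:                            # alternate scan direction each row
            cols = reversed(cols)
        for j in cols:
            # linearized inter-frame residual at this pixel
            e = f1[i, j] + gy[i, j] * d[0] + gx[i, j] * d[1] - f0[i, j]
            d -= mu * e * np.array([gy[i, j], gx[i, j]])   # SGD update
            field[i, j] = d
    return field
```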
NASA Astrophysics Data System (ADS)
Li, Yuankai; Ding, Liang; Zheng, Zhizhong; Yang, Qizhi; Zhao, Xingang; Liu, Guangjun
2018-05-01
For motion control of wheeled planetary rovers traversing on deformable terrain, real-time terrain parameter estimation is critical in modeling the wheel-terrain interaction and compensating the effect of wheel slipping. A multi-mode real-time estimation method is proposed in this paper to achieve accurate terrain parameter estimation. The proposed method is composed of an inner layer for real-time filtering and an outer layer for online update. In the inner layer, sinkage exponent and internal frictional angle, which have higher sensitivity than that of the other terrain parameters to wheel-terrain interaction forces, are estimated in real time by using an adaptive robust extended Kalman filter (AREKF), whereas the other parameters are fixed with nominal values. The inner layer result can help synthesize the current wheel-terrain contact forces with adequate precision, but has limited prediction capability for time-variable wheel slipping. To improve estimation accuracy of the result from the inner layer, an outer layer based on recursive Gauss-Newton (RGN) algorithm is introduced to refine the result of real-time filtering according to the innovation contained in the history data. With the two-layer structure, the proposed method can work in three fundamental estimation modes: EKF, REKF and RGN, making the method applicable for flat, rough and non-uniform terrains. Simulations have demonstrated the effectiveness of the proposed method under three terrain types, showing the advantages of introducing the two-layer structure.
Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George
2018-01-01
Background: Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. Methods: We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Results: Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Conclusions: Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present. PMID:29088358
Regression estimators for generic health-related quality of life and quality-adjusted life years.
Basu, Anirban; Manca, Andrea
2012-01-01
To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One and 2-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
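As a hedged sketch of the single-equation ingredient (ignoring the spikes at 0 and 1 and the two-part extension), the following Python code fits a Beta regression with a logit mean link by maximum likelihood on simulated data:

```python
# Minimal Beta regression: logit mean link, log-precision, fitted by MLE.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = expit(X @ np.array([0.5, -0.7]))
phi_true = 8.0
y = rng.beta(mu * phi_true, (1 - mu) * phi_true)   # HRQoL-like outcome in (0,1)

def negll(theta):
    m = expit(X @ theta[:-1])
    phi = np.exp(theta[-1])                 # log-precision keeps phi > 0
    return -beta_dist.logpdf(y, m * phi, (1 - m) * phi).sum()

fit = minimize(negll, x0=np.zeros(X.shape[1] + 1), method="BFGS")
print(fit.x[:-1], np.exp(fit.x[-1]))        # coefficients and precision
```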
Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn
2018-05-01
Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, that are themselves affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
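A minimal sketch of fitting an SCMM by GEE in Python is shown below; the data are simulated and the column names, including the stand-in propensity score, are hypothetical:

```python
# Sketch: outcome at time t regressed on current exposure while conditioning
# on prior exposure, the prior outcome, a covariate, and a propensity score.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, t = 200, 4                               # subjects, time points
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n), t),
    "exposure": rng.integers(0, 2, n * t),
    "exposure_lag1": rng.integers(0, 2, n * t),
    "outcome_lag1": rng.normal(size=n * t),
    "covariate": rng.normal(size=n * t),
    "ps": rng.uniform(0.2, 0.8, n * t),     # stand-in propensity score
})
df["outcome"] = 0.5 * df.exposure + 0.3 * df.outcome_lag1 + rng.normal(size=n * t)

scmm = smf.gee("outcome ~ exposure + exposure_lag1 + outcome_lag1 + covariate + ps",
               groups="subject_id", data=df,
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Independence())
print(scmm.fit().params)
```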
A seismic coherency method using spectral amplitudes
NASA Astrophysics Data System (ADS)
Sui, Jing-Kun; Zheng, Xiao-Dong; Li, Yan-Dong
2015-09-01
Seismic coherence is used to detect discontinuities in underground media. However, strata with steeply dipping structures often produce falsely low coherence estimates and thus incorrect discontinuity characterization results. It is important to eliminate or reduce the effect of dip on coherence estimates. To solve this problem, time-domain dip scanning is typically used to improve the estimation of coherence in areas with steeply dipping structures. However, the accuracy of the time-domain estimation of dip is limited by the sampling interval. In contrast, the spectral amplitude is not affected by the time delays in adjacent seismic traces caused by dipping structures. We propose a coherency algorithm that uses the spectral amplitudes of seismic traces within a predefined analysis window to construct the covariance matrix. The coherency estimate of the proposed algorithm is defined as the ratio between the dominant eigenvalue and the sum of all eigenvalues of the constructed covariance matrix. Thus, we eliminate the effect of dipping structures on coherency estimates. In addition, because different frequency bands of spectral amplitudes are used to estimate coherency, the proposed algorithm has multiscale features: low frequencies are effective for characterizing large-scale faults, whereas high frequencies are better at characterizing small-scale faults. Application to synthetic and real seismic data shows that the proposed algorithm can eliminate the effect of dip and produce better coherence estimates than conventional coherency algorithms in areas with steeply dipping structures.
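The following numpy sketch captures the core computation under stated assumptions; window extraction and the frequency-band choice are left to the caller:

```python
# Sketch: eigenstructure coherency on spectral amplitudes. Because only
# amplitude spectra enter, trace-to-trace time shifts from dip drop out.
import numpy as np

def spectral_coherency(traces, band=slice(None)):
    """traces: (n_traces, n_samples) array from one analysis window."""
    amp = np.abs(np.fft.rfft(traces, axis=1))[:, band]   # amplitude spectra
    amp = amp - amp.mean(axis=1, keepdims=True)
    cov = amp @ amp.T                                    # trace covariance
    eigvals = np.linalg.eigvalsh(cov)                    # ascending order
    return eigvals[-1] / eigvals.sum()                   # dominant / total

# e.g. restrict to a low-frequency band to target large-scale faults:
# c = spectral_coherency(window_traces, band=slice(1, 20))
```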
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States
ERIC Educational Resources Information Center
Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.
2007-01-01
Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…
Presenting simulation results in a nested loop plot.
Rücker, Gerta; Schwarzer, Guido
2014-12-12
Statisticians investigate new methods in simulations to evaluate their properties for future real data applications. Results are often presented in a number of figures, e.g., Trellis plots. We conducted a simulation study on six statistical methods for estimating the treatment effect in binary outcome meta-analyses, where selection bias (e.g., publication bias) was suspected because of apparent funnel plot asymmetry. We varied five simulation parameters: true treatment effect, extent of selection, event proportion in the control group, heterogeneity parameter, and number of studies in the meta-analysis. In combination, this yielded 768 scenarios. To present all results using Trellis plots, 12 figures were needed. Choosing bias as the criterion of interest, we present a 'nested loop plot', a diagram type that aims to have all simulation results in one plot. The idea is to bring all scenarios into a lexicographical order and arrange them consecutively on the horizontal axis of a plot, with the treatment effect estimate presented on the vertical axis. The plot illustrates how parameters simultaneously influenced the estimate. It can be combined with a Trellis plot in a so-called hybrid plot. Nested loop plots may also be applied to other criteria such as the variance of estimation. The nested loop plot, similar to a time series graph, summarizes all information about the results of a simulation study with respect to a chosen criterion in one picture and provides a suitable alternative or an addition to Trellis plots.
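A minimal matplotlib sketch of the idea, using synthetic results rather than the paper's, is:

```python
# Sketch of a nested loop plot: scenarios in lexicographical order on the
# x-axis, criterion (bias) on the y-axis, stepped lines tracing parameters.
import itertools
import matplotlib.pyplot as plt
import numpy as np

params = {"true_effect": [0.0, 0.5, 1.0],
          "selection": [0.0, 0.3],
          "n_studies": [5, 10, 20]}
scenarios = list(itertools.product(*params.values()))
rng = np.random.default_rng(4)
bias = rng.normal(0, 0.1, len(scenarios))          # placeholder results

x = np.arange(len(scenarios))
fig, ax = plt.subplots(figsize=(9, 4))
ax.plot(x, bias, marker="o", lw=1)
ax.axhline(0, color="grey", ls="--")
# stepped lines below the data show which parameter values form each scenario
for k, (name, vals) in enumerate(params.items()):
    levels = [vals.index(s[k]) for s in scenarios]
    ax.step(x, -0.3 - 0.15 * k + 0.05 * np.asarray(levels),
            where="mid", label=name)
ax.set_xlabel("scenario (lexicographical order)")
ax.set_ylabel("bias")
ax.legend(loc="lower right", fontsize=8)
plt.show()
```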
Effective diffusion coefficient including the Marangoni effect
NASA Astrophysics Data System (ADS)
Kitahata, Hiroyuki; Yoshinaga, Natsuhiko
2018-04-01
Surface-active molecules supplied from a particle fixed at the water surface create a spatial gradient of the molecule concentration, resulting in Marangoni convection. Convective flow transports the molecules far from the particle, enhancing diffusion. We analytically derive the effective diffusion coefficient associated with the Marangoni convection rolls. The resulting estimated effective diffusion coefficient is consistent with our numerical results and the apparent diffusion coefficient measured in experiments.
Serum uric acid and cancer mortality and incidence: a systematic review and meta-analysis.
Dovell, Frances; Boffetta, Paolo
2018-07-01
Elevated serum uric acid (SUA) is a marker of chronic inflammation and has been suggested to be associated with increased risk of cancer, although its antioxidant capacity could also justify an anticancer effect. Previous meta-analyses did not include all available results. We conducted a systematic review of prospective studies on SUA level and risk of all cancers and specific cancers, and a meta-analysis based on random-effects models for high versus low SUA level as well as for an increase of 1 mg/dl in SUA. The relative risk of all cancers for high versus low SUA level was 1.11 (95% confidence interval: 0.94-1.27; 11 risk estimates); that for a 1 mg/dl increase in SUA level was 1.03 (95% confidence interval: 0.99-1.07). Similar results were obtained for lung cancer (six risk estimates) and colon cancer (four risk estimates). Results for other cancers were sparse. Elevated SUA levels appear to be associated with a modest increase in overall cancer risk, although the combined risk estimate did not reach the formal level of statistical significance. Results for specific cancers were limited and mainly negative.
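For concreteness, here is a hedged numpy sketch of DerSimonian-Laird random-effects pooling, the kind of model underlying such combined risk estimates; the inputs are invented:

```python
# Sketch: DerSimonian-Laird random-effects pooling of log relative risks.
# yi are study log-RRs, vi their within-study variances.
import numpy as np

def dersimonian_laird(yi, vi):
    w = 1 / vi
    y_fixed = (w * yi).sum() / w.sum()
    q = (w * (yi - y_fixed) ** 2).sum()                  # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)             # between-study var
    w_re = 1 / (vi + tau2)
    est = (w_re * yi).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    return np.exp(est), np.exp(est - 1.96 * se), np.exp(est + 1.96 * se)

yi = np.log(np.array([1.05, 1.20, 0.95, 1.15]))          # illustrative RRs
vi = np.array([0.02, 0.05, 0.03, 0.04])
print(dersimonian_laird(yi, vi))                         # pooled RR and CI
```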
Estimation of river and stream temperature trends under haphazard sampling
Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao
2015-01-01
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data, and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators, while results from a case study of temperature data from the Illinois River, USA, conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference for multilevel models that allows a relatively flexible and distribution-free quantification of uncertainty. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
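As an illustrative sketch, a simplified random-intercept-by-year version of the day/year structure described above, fitted on synthetic data:

```python
# Sketch: linear long-term trend with intercepts varying randomly by year,
# via statsmodels MixedLM. The full day-within-year structure is omitted;
# data and column names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
years = np.repeat(np.arange(1990, 2010), 50)
df = pd.DataFrame({
    "year": years,
    "doy": rng.integers(100, 260, years.size),           # sampling date
})
year_eff = dict(zip(np.unique(years), rng.normal(0, 0.5, 20)))
df["temp"] = (15 + 0.03 * (df.year - 2000)               # trend: 0.03 deg/yr
              + df.year.map(year_eff) + rng.normal(0, 1, len(df)))

m = smf.mixedlm("temp ~ I(year - 2000) + doy", data=df, groups="year").fit()
print(m.params["I(year - 2000)"])                        # trend estimate
```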
Effect of education and clinical assessment on the accuracy of post partum blood loss estimation
2014-01-01
Background: This research aimed to assess the effect of health care provider education on the accuracy of postpartum blood loss estimation. Methods: A non-randomized observational study conducted at King Abdulaziz Medical City, Riyadh, Saudi Arabia between January 1, 2011 and June 30, 2011. One hundred and twenty-three health care providers who are involved in the estimation of postpartum blood loss were eligible to participate. The participants went through three research phases and an educational intervention. They assessed a total of 30 different simulated blood loss stations, with 10 stations in each of the research phases. These phases took place before and after educational sessions on how to visually estimate blood loss and how to best utilize patient data in clinical scenarios. We assessed the differences between the estimated blood loss and the actual measure; p-values were calculated to assess the differences among the three research phases' estimations. Results: The participants significantly under-estimated postpartum blood loss. Accuracy improved after training (p < 0.0001) and after analysing each patient's clinical information (p = 0.042). The overall results were not affected by the participants' clinical backgrounds or their years of experience. Under-estimation was more prominent in cases simulating above-average to excessive blood loss, while over-estimation or accurate estimation was more prominent in below-average blood loss scenarios. Conclusion: Simple education programmes can improve traditional findings related to under-estimation of blood loss. More sophisticated clinical education programmes may provide additional improvements. PMID:24646156
Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L
2013-01-01
Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity relying on the concept of Granger causality. Although its original definition derived directly from information theory, two modifications were introduced to provide better physiological interpretations of the estimated networks: i) normalization of the estimator by rows, and ii) a squared transformation. In the present paper we investigated the effect of PDC normalization on the performance achieved by applying the statistical validation process to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors incurred when using the shuffling procedure for the assessment of connectivity patterns. The PDC formulation had no effect on the performance achieved during the validation process executed by means of the asymptotic statistic approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistic approach are always lower than those achieved by the shuffling procedure for each type of normalization.
Anchoring in Numeric Judgments of Visual Stimuli
Langeborg, Linda; Eriksson, Mårten
2016-01-01
This article investigates effects of anchoring in age estimation and estimation of quantities, two tasks which to different extents are based on visual stimuli. The results are compared to anchoring in answers to classic general knowledge questions that rely on semantic knowledge. Cognitive load was manipulated to explore possible differences between domains. Effects of source credibility, manipulated by differing instructions regarding the selection of anchor values (no information regarding anchor selection, information that the anchors are randomly generated or information that the anchors are answers from an expert) on anchoring were also investigated. Effects of anchoring were large for all types of judgments but were not affected by cognitive load or by source credibility in either one of the researched domains. A main effect of cognitive load on quantity estimations and main effects of source credibility in the two visually based domains indicate that the manipulations were efficient. Implications for theoretical explanations of anchoring are discussed. In particular, because anchoring did not interact with cognitive load, the results imply that the process behind anchoring in visual tasks is predominantly automatic and unconscious. PMID:26941684
Use of allele scores as instrumental variables for Mendelian randomization
Burgess, Stephen; Thompson, Simon G
2013-01-01
Background: An allele score is a single variable summarizing multiple genetic variants associated with a risk factor. It is calculated as the total number of risk factor-increasing alleles for an individual (unweighted score), or the sum of weights for each allele corresponding to estimated genetic effect sizes (weighted score). An allele score can be used in a Mendelian randomization analysis to estimate the causal effect of the risk factor on an outcome. Methods: Data were simulated to investigate the use of allele scores in Mendelian randomization where conventional instrumental variable techniques using multiple genetic variants demonstrate ‘weak instrument’ bias. The robustness of estimates using the allele score to misspecification (for example non-linearity, effect modification) and to violations of the instrumental variable assumptions was assessed. Results: Causal estimates using a correctly specified allele score were unbiased with appropriate coverage levels. The estimates were generally robust to misspecification of the allele score, but not to instrumental variable violations, even if the majority of variants in the allele score were valid instruments. Using a weighted rather than an unweighted allele score increased power, but the increase was small when genetic variants had similar effect sizes. Naive use of the data under analysis to choose which variants to include in an allele score, or for deriving weights, resulted in substantial biases. Conclusions: Allele scores enable valid causal estimates with large numbers of genetic variants. The stringency of criteria for genetic variants in Mendelian randomization should be maintained for all variants in an allele score. PMID:24062299
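A hedged simulation sketch of a weighted allele score used as a single instrument, with the causal effect recovered by the ratio (Wald-type) estimator; genotypes, weights, and effects are all invented:

```python
# Sketch: weighted allele score as an instrument for risk factor x on outcome y.
import numpy as np

rng = np.random.default_rng(6)
n, k = 5000, 30
G = rng.binomial(2, 0.3, size=(n, k))          # risk-allele counts
w = np.abs(rng.normal(0.3, 0.1, k))            # per-allele effect sizes
u = rng.normal(size=n)                         # unmeasured confounder
x = G @ w + u + rng.normal(size=n)             # risk factor
y = 0.4 * x - u + rng.normal(size=n)           # outcome; true effect 0.4

score = G @ w                                  # weighted allele score
beta_iv = np.cov(score, y)[0, 1] / np.cov(score, x)[0, 1]
beta_ols = np.cov(x, y)[0, 1] / np.var(x)
print(beta_iv, beta_ols)                       # IV recovers ~0.4; OLS biased by u
```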
Freeman, Matthew C; Stocks, Meredith E; Cumming, Oliver; Jeandron, Aurelie; Higgins, Julian P T; Wolf, Jennyfer; Prüss-Ustün, Annette; Bonjour, Sophie; Hunter, Paul R; Fewtrell, Lorna; Curtis, Valerie
2014-08-01
To estimate the global prevalence of handwashing with soap and derive a pooled estimate of the effect of hygiene on diarrhoeal diseases, based on a systematic search of the literature. Studies with data on observed rates of handwashing with soap published between 1990 and August 2013 were identified from a systematic search of PubMed, Embase and ISI Web of Knowledge. A separate search was conducted for studies on the effect of hygiene on diarrhoeal disease that included randomised controlled trials, quasi-randomised trials with control group, observational studies using matching techniques and observational studies with a control group where the intervention was well defined. The search used Cochrane Library, Global Health, BIOSIS, PubMed, and Embase databases supplemented with reference lists from previously published systematic reviews to identify studies published between 1970 and August 2013. Results were combined using multilevel modelling for handwashing prevalence and meta-regression for risk estimates. From the 42 studies reporting handwashing prevalence we estimate that approximately 19% of the world population washes hands with soap after contact with excreta (i.e. use of a sanitation facility or contact with children's excreta). Meta-regression of risk estimates suggests that handwashing reduces the risk of diarrhoeal disease by 40% (risk ratio 0.60, 95% CI 0.53-0.68); however, when we included an adjustment for unblinded studies, the effect estimate was reduced to 23% (risk ratio 0.77, 95% CI 0.32-1.86). Our results show that handwashing after contact with excreta is poorly practiced globally, despite the likely positive health benefits. © 2014 John Wiley & Sons Ltd.
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
NASA Technical Reports Server (NTRS)
Rosch, E.
1975-01-01
The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates is associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
NASA Astrophysics Data System (ADS)
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array; better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, few snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
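A generic sketch of such a GA loop is shown below; the cumulant-matrix fitness is replaced by a placeholder distance to two hypothetical DOAs, so only the optimisation skeleton, not the estimator itself, is represented:

```python
# Generic GA loop minimising a DOA fitness function (placeholder fitness).
import numpy as np

def fitness(thetas):
    # stand-in for the multiple-invariance cumulant fitness: distance to
    # two hypothetical true DOAs at 20 and 40 degrees
    return np.sum((np.sort(thetas) - np.array([20.0, 40.0])) ** 2)

rng = np.random.default_rng(10)
pop = rng.uniform(-90, 90, size=(50, 2))                 # candidate DOA pairs
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[:25]]                    # truncation selection
    pairs = parents[rng.integers(0, 25, (25, 2))]        # pair up parents
    children = pairs.mean(axis=1)                        # arithmetic crossover
    children += rng.normal(0, 1.0, children.shape)       # Gaussian mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([fitness(p) for p in pop])]
print(np.sort(best))                                     # ~ [20, 40]
```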
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of the Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression of the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
The effect of prenatal care on birthweight: a full-information maximum likelihood approach.
Rous, Jeffrey J; Jewell, R Todd; Brown, Robert W
2004-03-01
This paper uses a full-information maximum likelihood estimation procedure, the Discrete Factor Method, to estimate the relationship between birthweight and prenatal care. This technique controls for the potential biases surrounding both the sample selection of the pregnancy-resolution decision and the endogeneity of prenatal care. In addition, we use the actual number of prenatal care visits; other studies have normally measured prenatal care as the month care is initiated. We estimate a birthweight production function using 1993 data from the US state of Texas. The results underscore the importance of correcting for estimation problems. Specifically, a model that does not control for sample selection and endogeneity overestimates the benefit of an additional visit for women who have relatively few visits. This overestimation may indicate 'positive fetal selection,' i.e., women who did not abort may have healthier babies. Also, a model that does not control for self-selection and endogeneity predicts that past 17 visits, an additional visit leads to lower birthweight, while a model that corrects for these estimation problems predicts a positive effect for additional visits. This result shows the effect of mothers with less healthy fetuses making more prenatal care visits, known as 'adverse selection' in prenatal care. Copyright 2003 John Wiley & Sons, Ltd.
Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery
NASA Astrophysics Data System (ADS)
Zhang, Wen-Yan; Lin, Chao-Yuan
2016-04-01
The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group, and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is the appropriate approach to acquire land use change information. However, topographic effects are usually present in remotely sensed imagery and affect land use classification. This research selected summer and winter scenes of Landsat-5 TM during 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized using K-means classification into four groups, i.e., forest, grassland, agriculture, and river. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall classification accuracy increased from 68.0% to 74.5%. The average CN estimated from the remotely sensed imagery decreased from 48.69 to 45.35, where the average CN estimated from the national LULC map was 44.11. Therefore, the topographic correction method is recommended to normalize the topographic effect in satellite remote sensing data before estimating the CN.
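For reference, the estimated CN feeds into the standard SCS-CN runoff relation; a short Python sketch in SI units, with the conventional initial abstraction Ia = 0.2S:

```python
# Sketch of the SCS-CN direct-runoff computation (depths in mm).
def scs_runoff_mm(precip_mm, cn):
    s = 25400.0 / cn - 254.0               # potential maximum retention (mm)
    ia = 0.2 * s                           # initial abstraction
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# e.g. a 100 mm storm under the corrected (CN = 45.35) versus uncorrected
# (CN = 48.69) estimates from the study:
print(scs_runoff_mm(100, 45.35), scs_runoff_mm(100, 48.69))
```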
Estimating the effects of wages on obesity.
Kim, DaeHwan; Leigh, John Paul
2010-05-01
To estimate the effects of wages on obesity and body mass. Data on household heads, aged 20 to 65 years, with full-time jobs, were drawn from the Panel Study of Income Dynamics for 2003 to 2007. The Panel Study of Income Dynamics is a nationally representative sample. Instrumental variables (IV) for wages were created using knowledge of computer software and state legal minimum wages. Least squares (linear regression) with corrected standard errors were used to estimate the equations. Statistical tests revealed both instruments were strong and tests for over-identifying restrictions were favorable. Wages were found to be predictive (P < 0.05) of obesity and body mass in regressions both before and after applying IVs. Coefficient estimates suggested stronger effects in the IV models. Results are consistent with the hypothesis that low wages increase obesity prevalence and body mass.
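A hedged toy sketch of the two-stage least squares logic follows; the instruments are placeholders standing in for the ones named above, and naive second-stage standard errors would still need the correction the study applies:

```python
# Sketch: manual 2SLS for the effect of wages on body mass (toy data).
import numpy as np

rng = np.random.default_rng(7)
n = 10000
z = np.column_stack([np.ones(n),
                     rng.integers(0, 2, n),      # e.g. software knowledge
                     rng.normal(size=n)])        # e.g. state minimum wage (std.)
u = rng.normal(size=n)                           # unobserved confounder
wage = z @ np.array([1.0, 0.5, 0.3]) + u + rng.normal(size=n)
bmi = 30 - 0.8 * wage + 2 * u + rng.normal(size=n)   # true wage effect: -0.8

ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
wage_hat = z @ ols(z, wage)                      # first stage
X2 = np.column_stack([np.ones(n), wage_hat])     # second stage
print(ols(X2, bmi)[1])                           # roughly -0.8
print(ols(np.column_stack([np.ones(n), wage]), bmi)[1])  # OLS badly biased
```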
Exponentially convergent state estimation for delayed switched recurrent neural networks.
Ahn, Choon Ki
2011-11-01
This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, L.; Hill, W.J.
A method is proposed to estimate the effect of long-term variations in total ozone on the error incurred in determining a trend in total ozone due to man-made effects. When this method is applied to data from Arosa, Switzerland over the years 1932-1980, a component of the standard error of the trend estimate equal to 0.6 percent per decade is obtained. If this estimate of long-term trend variability at Arosa is not too different from global long-term trend variability, then the threshold (±2 standard errors) for detecting an ozone trend in the 1970's that is outside of what could be expected from natural variation alone, and hence be man-made, would range from 1.35% (Reinsel et al, 1981) to 1.8%. The latter value is obtained by combining the Reinsel et al result with the result here, assuming that the error variations that both studies measure are independent and additive. Estimates for long-term trend variation over other time periods are also derived. Simulations that measure the precision of the estimate of long-term variability are reported.
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
NASA Astrophysics Data System (ADS)
Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei
2017-10-01
With serious environmental pollution in our cities and the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero emissions. The power battery serves as the energy source of electric vehicles. However, it still has shortcomings: low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environments and driving conditions, and the estimation error of current driving-range methods is relatively large because the effects of environmental temperature and driving conditions are not considered. The development of an accurate driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively eliminate mileage errors and shows good convergence with added robustness. First, driving cycles are identified using kernel principal component feature parameters and a fuzzy C-means clustering algorithm. Second, a fuzzy rule relating the characteristic parameters to energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network are used to predict future driving conditions to improve the accuracy of the remaining range estimation. Finally, the driving range estimation method is evaluated under the ECE 15 cycle on a rotary drum test bench, and the experimental results are compared with the estimation results. The results show that the proposed method can not only estimate the remaining mileage but also eliminate fluctuation of the residual range under different driving conditions.
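To illustrate the Markov step of the range estimator described above, a first-order transition matrix over identified driving-cycle classes can be estimated from the identification history and used to forecast upcoming energy demand. The sketch below is illustrative only; the class labels, history, and consumption rates are assumptions, not values from the study.

```python
import numpy as np

# Driving-cycle classes produced by the clustering step; the labels
# (0 = congested urban, 1 = urban, 2 = suburban, 3 = highway) are assumed.
history = [0, 0, 1, 1, 2, 2, 3, 3, 3, 2, 1, 0, 0, 1, 2, 3, 3, 2]

# Estimate a first-order Markov transition matrix from the class history.
n_states = 4
counts = np.zeros((n_states, n_states))
for a, b in zip(history[:-1], history[1:]):
    counts[a, b] += 1
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums,
              out=np.full_like(counts, 1.0 / n_states),  # uniform if state unseen
              where=row_sums > 0)

# Predict the class distribution a few segments ahead and combine it with
# per-class energy-consumption rates (kWh/km, illustrative) for the forecast.
state = history[-1]
dist = np.linalg.matrix_power(P, 3)[state]
consumption = np.array([0.22, 0.18, 0.15, 0.17])
print(f"expected consumption rate: {dist @ consumption:.3f} kWh/km")
```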
Using counterfactuals to evaluate the cost-effectiveness of controlling biological invasions.
McConnachie, Matthew M; van Wilgen, Brian W; Ferraro, Paul J; Forsyth, Aurelia T; Richardson, David M; Gaertner, Mirijam; Cowling, Richard M
2016-03-01
Prioritizing limited conservation funds for controlling biological invasions requires accurate estimates of the effectiveness of interventions to remove invasive species and of their cost-effectiveness (cost per unit area or individual). Despite billions of dollars spent controlling biological invasions worldwide, it is unclear whether those efforts are effective and cost-effective. The paucity of evidence results from the difficulty of measuring the effect of invasive species removal: a researcher must estimate the difference in outcomes (e.g. invasive species cover) between where the removal program intervened and what might have been observed if the program had not intervened. In the program evaluation literature, this is called a counterfactual analysis, which formally compares what actually happened and what would have happened in the absence of an intervention. When program implementation is not randomized, estimating counterfactual outcomes is especially difficult. We show how a thorough understanding of program implementation, combined with a matching empirical design, can improve the way counterfactual outcomes are estimated in nonexperimental contexts. As a practical demonstration, we estimated the cost-effectiveness of South Africa's Working for Water program, arguably the world's most ambitious invasive species control program, in removing invasive alien trees from different land use types across a large area in the Cape Floristic Region. We estimated that the proportion of the treatment area covered by invasive trees would have been 49% higher (5.5% instead of 2.7% of the grid cells occupied) had the program not intervened. Our estimates of the cost per hectare to remove invasive species, however, are three to five times higher than the predictions made when the program was initiated. Had there been no control (counterfactual), invasive trees would have spread on untransformed land, but not on land parcels containing plantations or land transformed by agriculture or human settlements. This implies that the program might have prevented a larger area from being invaded if it had focused all of its clearing effort on untransformed land. Our results show that, with appropriate empirical designs, it is possible to better evaluate the impacts of invasive species removal and therefore to learn from past experiences.
Analysis of the effects of wing interference on the tail contributions to the rolling derivatives
NASA Technical Reports Server (NTRS)
Michael, William H., Jr.
1952-01-01
An analysis of the effects of wing interference on the tail contributions to the rolling stability derivatives of complete airplane configurations is made by calculating the angularity of the air stream at the vertical tail due to rolling and determining the resulting forces and moments. Some of the important factors which affect the resultant angularity on the vertical tail are wing aspect ratio and sweepback, vertical-tail span, and considerations associated with angle of attack and airplane geometry. Some calculated sidewash results for a limited range of plan forms and vertical-tail sizes are presented. Equations taking into account the sidewash results are given for determining the tail contributions to the rolling derivatives. Comparisons of estimated and experimental results indicate that a consideration of wing interference effects improves the estimated values of the tail contributions to the rolling derivatives and that fair agreement with available experimental data is obtained.
Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.
2012-01-01
A Monte Carlo simulation was conducted to investigate the robustness of four latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of non-normality of the observed exogenous variables. Results showed that the CPI and LMS approaches yielded biased estimates of the interaction effect when the exogenous variables were highly non-normal. When the violation of normality was not severe (normal; symmetric with excess kurtosis < 1), the LMS approach yielded the most efficient estimates of the latent interaction effect with the highest statistical power. In highly non-normal conditions, the GAPI and UPI approaches with ML estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type-I error rates for both the Wald and likelihood ratio tests of interaction effect at N ≥ 500. An empirical example illustrated the use of the four approaches in testing a latent variable interaction between academic self-efficacy and positive family role models in the prediction of academic performance. PMID:23457417
Estimates of advection and diffusion in the Potomac estuary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, A.J.
1976-01-01
A two-layered dispersion model, suitable for application to partially-mixed estuaries, has been developed to provide hydrological interpretation of the results of biological sampling. The model includes horizontal and vertical advection plus both horizontal and vertical diffusion. A pseudo-geostrophic method, which includes a damping factor to account for internal eddy friction, is used to estimate the horizontal advective fluxes and the results are compared with field observations. A salt balance model is then used to estimate the effective diffusivities in the Potomac estuary during the Spring of 1974.
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.
Wang, Wei; Albert, Jeffrey M
2017-08-01
An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.
Global Aerosol Direct Radiative Effect From CALIOP and C3M
NASA Technical Reports Server (NTRS)
Winker, Dave; Kato, Seiji; Tackett, Jason
2015-01-01
Aerosols are responsible for the largest uncertainties in current estimates of climate forcing. These uncertainties are due in part to the limited abilities of passive sensors to retrieve aerosols in cloudy skies. We use a dataset which merges CALIOP observations together with other A-train observations to estimate aerosol radiative effects in cloudy skies as well as in cloud-free skies. The results can be used to quantify the reduction of aerosol radiative effects in cloudy skies relative to clear skies and to reduce current uncertainties in aerosol radiative effects.
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout ( Salvelinus confluentus ) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
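The combined approach in the preceding abstract can be illustrated with the standard Bayesian update for site presence: a predictive model supplies the prior probability of occupancy, and repeated non-detections during sampling update it. A minimal sketch, using an illustrative prior and detection probability rather than the study's values:

```python
def posterior_presence(prior, p_detect, n_surveys, detected):
    """Posterior probability that a site is occupied after n_surveys visits.

    prior    : model-based probability of presence (e.g. habitat model output)
    p_detect : per-survey detection probability given presence
    detected : True if the species was found in any survey
    """
    if detected:
        return 1.0  # assumes no false-positive detections
    p_missed = (1.0 - p_detect) ** n_surveys  # P(miss every survey | present)
    return prior * p_missed / (prior * p_missed + (1.0 - prior))

# Example: the habitat model gives a 0.4 prior; three surveys with a
# per-survey detection probability of 0.5 find no bull trout.
print(f"posterior presence: {posterior_presence(0.4, 0.5, 3, False):.3f}")  # ~0.077
```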
Seo, Joohyun; Pietrangelo, Sabino J; Sodini, Charles G; Lee, Hae-Seung
2018-05-01
This paper details unfocused imaging using single-element ultrasound transducers for motion tolerant arterial blood pressure (ABP) waveform estimation. The ABP waveform is estimated based on pulse wave velocity and arterial pulsation through Doppler and M-mode ultrasound. This paper discusses approaches to mitigate the effect of increased clutter due to unfocused imaging on blood flow and diameter waveform estimation. An intensity reduction model (IRM) estimator is described to track the change of diameter, which outperforms a complex cross-correlation model (C3M) estimator in low contrast environments. An adaptive clutter filtering approach is also presented, which reduces the increased Doppler angle estimation error due to unfocused imaging. Experimental results in a flow phantom demonstrate that flow velocity and diameter waveforms can be reliably measured with wide lateral offsets of the transducer position. The distension waveform estimated from human carotid M-mode imaging using the IRM estimator shows physiological baseline fluctuations and 0.6-mm pulsatile diameter change on average, which is within the expected physiological range. These results show the feasibility of this low cost and portable ABP waveform estimation device.
Use of logistic regression for modelling risk factors: with application to non-melanoma skin cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitaliano, P.P.
Logistic regression was used to estimate the relative risk of basal and squamous skin cancer for such factors as cumulative lifetime solar exposure, age, complexion, and tannability. In previous reports, a subject's exposure was estimated indirectly, by latitude, or by the number of sun days in a subject's habitat. In contrast, these results are based on interview data gathered for each subject. A relatively new technique was used to estimate relative risk by controlling for confounding and testing for effect modification. A linear effect for the relative risk of cancer versus exposure was found. Tannability was shown to be a more important risk factor than complexion. This result is consistent with the work of Silverstone and Searle.
Estimating the volume of glaciers in the Himalayan-Karakoram region using different methods
NASA Astrophysics Data System (ADS)
Frey, H.; Machguth, H.; Huss, M.; Huggel, C.; Bajracharya, S.; Bolch, T.; Kulkarni, A.; Linsbauer, A.; Salzmann, N.; Stoffel, M.
2014-12-01
Ice volume estimates are crucial for assessing the water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, different existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40 775 km2. An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated with local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km3, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimations agree well with measured local ice thicknesses. However, total volume estimates from area-related relations are larger than those from the other approaches. The study provides evidence of the significant effect of the selected method on results and underlines the importance of a careful and critical evaluation.
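The simplest of the approaches compared above, the area-volume relation, is a power-law scaling of the form V = c·A^γ. The sketch below uses commonly cited literature coefficients (illustrative only; the study's exact relations are not reproduced here) and also shows why such relations must be applied per glacier rather than to a whole inventory at once:

```python
# Area-volume scaling: V = c * A**gamma, with A in km^2 and V in km^3.
# c = 0.034 and gamma = 1.375 are commonly cited literature values,
# not necessarily those used in the study.
def glacier_volume(area_km2, c=0.034, gamma=1.375):
    return c * area_km2 ** gamma

total_area = 40775.0  # km^2, HK inventory area from the abstract

# Treating the whole inventory as one ice mass grossly overestimates volume
# because the relation is nonlinear (gamma > 1):
print(f"as one ice cap: {glacier_volume(total_area):.0f} km^3")

# A realistic total instead sums the relation over individual glaciers,
# e.g. 10,000 hypothetical glaciers of ~4 km^2 each:
print(f"summed per glacier: {10000 * glacier_volume(total_area / 10000):.0f} km^3")
```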
ACUTE TO CHRONIC ESTIMATION SOFTWARE FOR WINDOWS
Chronic No-Observed Effect Concentrations (NOEC) are commonly determined by either using acute-to-chronic ratios or by performing an ANOVA on chronic test data; both require lengthy and expensive chronic test results. Acute-to-Chronic Estimation (ACE) software was developed to p...
Anglemyer, Andrew; Horvath, Hacsi T; Bero, Lisa
2014-04-29
Researchers and organizations often use evidence from randomized controlled trials (RCTs) to determine the efficacy of a treatment or intervention under ideal conditions. Studies of observational designs are often used to measure the effectiveness of an intervention in 'real world' scenarios. Numerous study designs and modifications of existing designs, including both randomized and observational, are used for comparative effectiveness research in an attempt to give an unbiased estimate of whether one treatment is more effective or safer than another for a particular population. A systematic analysis of study design features, risk of bias, parameter interpretation, and effect size for all types of randomized and non-experimental observational studies is needed to identify specific differences in design types and potential biases. This review summarizes the results of methodological reviews that compare the outcomes of observational studies with randomized trials addressing the same question, as well as methodological reviews that compare the outcomes of different types of observational studies. To assess the impact of study design (including RCTs versus observational study designs) on the effect measures estimated. To explore methodological variables that might explain any differences identified. To identify gaps in the existing research comparing study designs. We searched seven electronic databases, from January 1990 to December 2013. Along with MeSH terms and relevant keywords, we used the sensitivity-specificity balanced version of a validated strategy to identify reviews in PubMed, augmented with one term ("review" in article titles) so that it better targeted narrative reviews. No language restrictions were applied. We examined systematic reviews that were designed as methodological reviews to compare quantitative effect size estimates measuring efficacy or effectiveness of interventions tested in trials with those tested in observational studies. Comparisons included RCTs versus observational studies (including retrospective cohorts, prospective cohorts, case-control designs, and cross-sectional designs). Reviews were not eligible if they compared randomized trials with other studies that had used some form of concurrent allocation. In general, outcome measures included relative risks or rate ratios (RR), odds ratios (OR), and hazard ratios (HR). Using results from observational studies as the reference group, we examined the published estimates to see whether there was a relatively larger or smaller effect in the ratio of odds ratios (ROR). Within each identified review, if an estimate comparing results from observational studies with RCTs was not provided, we pooled the estimates for observational studies and RCTs. Then, we estimated the ratio of ratios (risk ratio or odds ratio) for each identified review using observational studies as the reference category. Across all reviews, we synthesized these ratios to get a pooled ROR comparing results from RCTs with results from observational studies. Our initial search yielded 4406 unique references. Fifteen reviews met our inclusion criteria, 14 of which were included in the quantitative analysis. The included reviews analyzed data from 1583 meta-analyses that covered 228 different medical conditions.
The mean number of included studies per paper was 178 (range 19 to 530). Eleven (73%) reviews had a low risk of bias for explicit criteria for study selection, nine (60%) had a low risk of bias for investigators' agreement for study selection, five (33%) included a complete sample of studies, seven (47%) assessed the risk of bias of their included studies, seven (47%) controlled for methodological differences between studies, eight (53%) controlled for heterogeneity among studies, nine (60%) analyzed similar outcome measures, and four (27%) were judged to be at low risk of reporting bias. Our primary quantitative analysis, including 14 reviews, showed that the pooled ROR comparing effects from RCTs with effects from observational studies was 1.08 (95% confidence interval (CI) 0.96 to 1.22). Of the 14 reviews included in this analysis, 11 (79%) found no significant difference between observational studies and RCTs. One review suggested observational studies had larger effects of interest, and two reviews suggested observational studies had smaller effects of interest. Similar to the effect across all included reviews, effects from reviews comparing RCTs with cohort studies had a pooled ROR of 1.04 (95% CI 0.89 to 1.21), with substantial heterogeneity (I(2) = 68%). Three reviews compared effects of RCTs and case-control designs (pooled ROR: 1.11 (95% CI 0.91 to 1.35)). No significant differences in point estimates across heterogeneity, pharmacological intervention, or propensity score adjustment subgroups were noted. No reviews had compared RCTs with observational studies that used two of the most common causal inference methods, instrumental variables and marginal structural models. Our results across all reviews (pooled ROR 1.08) are very similar to results reported by similarly conducted reviews. As such, we have reached similar conclusions: on average, there is little evidence for significant effect estimate differences between observational studies and RCTs, regardless of specific observational study design, heterogeneity, or inclusion of studies of pharmacological interventions. Factors other than study design per se need to be considered when exploring reasons for a lack of agreement between results of RCTs and observational studies. Our results underscore that it is important for review authors to consider not only study design, but also the level of heterogeneity in meta-analyses of RCTs or observational studies. A better understanding of how these factors influence study effects might yield estimates reflective of true effectiveness.
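The pooling step described above can be sketched as a DerSimonian-Laird random-effects combination of review-level RORs on the log scale. The function below is an illustration, not the review's code, and the input numbers are toy values:

```python
import numpy as np

def pooled_ror(rors, ci_lows, ci_highs):
    """DerSimonian-Laird random-effects pooling of ratios of odds ratios,
    working on the log scale; 95% CIs supply the standard errors."""
    y = np.log(rors)
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1.0 / se**2                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed)**2)                  # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    est = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return np.exp(est), np.exp(est - 1.96 * se_re), np.exp(est + 1.96 * se_re), i2

# Three hypothetical reviews (point estimate, lower CI bound, upper CI bound):
ror, lo, hi, i2 = pooled_ror(np.array([1.10, 0.95, 1.20]),
                             np.array([0.90, 0.80, 1.00]),
                             np.array([1.34, 1.13, 1.44]))
print(f"pooled ROR {ror:.2f} (95% CI {lo:.2f} to {hi:.2f}), I^2 = {i2:.0f}%")
```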
NASA Astrophysics Data System (ADS)
Kuroda, S.; Ishii, N.; Morii, T.
2017-12-01
Capillary barriers are known as a method to protect subsurface regions against infiltration from the soil surface. The effect arises from an essentially heterogeneous structure in permeability or soil physical properties and produces a non-uniform infiltration process; therefore, to assess the actual capillary barrier effect in situ, site characterization with an imaging technique such as geophysical prospecting is effective. In this study, we examine the applicability of GPR to the characterization of capillary barriers. We built a sand box of 90x340x90 cm in which a thin, highly permeable gravel layer was embedded as a capillary barrier. We conducted an infiltration test in the sand box using a porous tube array for irrigation, which is expected to produce non-uniform soil-water flow induced by capillary barrier effects. We monitored this process by various types of GPR measurements, including time-lapsed common offset profiling (COP) with multi-frequency antennas and transmission measurements such as cross-borehole radar. First, we conducted a GPR common-offset survey, which could show the depth of the capillary barrier in the sand box. We then conducted the infiltration test and GPR monitoring of the infiltration process. The GPR profiles can detect the wetting front and estimate the water content change in the soil layer above the capillary barrier. From the spatial change in these results we can estimate the effect of the capillary barrier and the zones where breakthrough does or does not occur. Based on these results, we discuss the applicability of GPR for monitoring the phenomena around capillary barriers in soil.
Jaciw, Andrew P
2016-06-01
Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. The samples comprised students in Grades K-3 in 79 schools in Tennessee and students in Grades 4-8 in 82 schools in Alabama; outcome measures were Grades K-3 Stanford Achievement Test (SAT) reading and math scores and Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate the accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.
Leadership training design, delivery, and implementation: A meta-analysis.
Lacerenza, Christina N; Reyes, Denise L; Marlow, Shannon L; Joseph, Dana L; Salas, Eduardo
2017-12-01
Recent estimates suggest that although a majority of funds in organizational training budgets tend to be allocated to leadership training (Ho, 2016; O'Leonard, 2014), only a small minority of organizations believe their leadership training programs are highly effective (Schwartz, Bersin, & Pelster, 2014), calling into question the effectiveness of current leadership development initiatives. To help address this issue, this meta-analysis estimates the extent to which leadership training is effective and identifies the conditions under which these programs are most effective. In doing so, we estimate the effectiveness of leadership training across four criteria (reactions, learning, transfer, and results; Kirkpatrick, 1959) using only employee data, and we examine 15 moderators of training design and delivery to determine which elements are associated with the most effective leadership training interventions. Data from 335 independent samples suggest that leadership training is substantially more effective than previously thought, leading to improvements in reactions (δ = .63), learning (δ = .73), transfer (δ = .82), and results (δ = .72); however, the strength of these effects differs based on various design, delivery, and implementation characteristics. Moderator analyses support the use of needs analysis, feedback, multiple delivery methods (especially practice), spaced training sessions, a location that is on-site, and face-to-face delivery that is not self-administered. Results also suggest that the content of training, attendance policy, and duration influence the effectiveness of the training program. Practical implications for training development and theoretical implications for leadership and training literatures are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Xiao, Q.; Liu, Y.; Strickland, M. J.; Chang, H. H.; Kan, H.
2017-12-01
Background: Satellite remote sensing data have been employed for air pollution exposure assessment, with the intent of better characterizing exposure spatio-temporal variations. However, non-random missingness in satellite data may lead to exposure error. Objectives: We explored the differences in health effect estimates due to different exposure metrics, with and without satellite data, when analyzing the associations between maternal PM2.5 exposure and birth outcomes. Methods: We obtained birth registration records of 132,783 singleton live births during 2011-2014 in Shanghai. Trimester-specific and total pregnancy exposures were estimated from satellite PM2.5 predictions with missingness, gap-filled satellite PM2.5 predictions with complete coverage, and regional average PM2.5 measurements from monitoring stations. Linear regressions estimated associations between birth weight and maternal PM2.5 exposure. Logistic regressions estimated associations between preterm birth and first and second trimester exposure. Discrete-time models estimated third trimester and total pregnancy associations with preterm birth. Effect modification by maternal age and parental education levels was investigated. Results: We observed statistically significant associations between maternal PM2.5 exposure during all exposure windows and adverse birth outcomes. A 10 µg/m3 increase in pregnancy PM2.5 exposure was associated with a 12.85 g (95% CI: 7.27, 18.44) decrease in birth weight for term births, and a 27% (95% CI: 20%, 36%) increase in the risk of preterm birth. Greater effects were observed between first and third trimester exposure and birth weight, as well as between first trimester exposure and preterm birth. Mothers older than 35 years and without college education tended to show stronger associations with preterm birth. Conclusions: Gap-filled satellite data derived PM2.5 exposure estimates resulted in reduced exposure error and more precise health effect estimates.
Joint Effects of Ambient Air Pollutants on Pediatric Asthma ...
Background: Because ambient air pollution exposure occurs in the form of mixtures, consideration of joint effects of multiple pollutants may advance our understanding of air pollution health effects. Methods: We assessed the joint effect of selected ambient air pollutant combinations (groups of oxidant, secondary, traffic, power plant, and criteria pollutants constructed using combinations of criteria gases, fine particulate matter (PM2.5) and PM2.5 components) on warm season pediatric asthma emergency department (ED) visits in Atlanta during 1998-2004. Joint effects were assessed using multi-pollutant Poisson generalized linear models controlling for time trends, meteorology and daily non-asthma respiratory ED visit counts. Rate ratios (RR) were calculated for the combined effect of an interquartile-range increment in the concentration of each pollutant. Results: Increases in all of the selected pollutant combinations were associated with increases in pediatric asthma ED visits [e.g., joint effect rate ratio=1.13 (95% confidence interval 1.06-1.21) for criteria pollutants (including ozone, carbon monoxide, nitrogen dioxide, sulfur dioxide, and PM2.5)]. Joint effect estimates were smaller than estimates calculated based on summing results from single-pollutant models, due to control for confounding. Compared with models without interactions, joint effect estimates from models including first-order pollutant interactions were similar for oxidant a
Gore, Christopher J; Sharpe, Ken; Garvican-Lewis, Laura A; Saunders, Philo U; Humberstone, Clare E; Robertson, Eileen Y; Wachsmuth, Nadine B; Clark, Sally A; McLean, Blake D; Friedmann-Bette, Birgit; Neya, Mitsuo; Pottgiesser, Torben; Schumacher, Yorck O; Schmidt, Walter F
2013-01-01
Objective To characterise the time course of changes in haemoglobin mass (Hbmass) in response to altitude exposure. Methods This meta-analysis uses raw data from 17 studies that used carbon monoxide rebreathing to determine Hbmass prealtitude, during altitude and postaltitude. Seven studies were classic altitude training, eight were live high train low (LHTL) and two mixed classic and LHTL. Separate linear-mixed models were fitted to the data from the 17 studies and the resultant estimates of the effects of altitude were used in a random-effects meta-analysis to obtain an overall estimate of the effect of altitude, with separate analyses during altitude and postaltitude. In addition, within-subject differences from the prealtitude phase for altitude participants and all the data on control participants were used to estimate the analytical SD. The ‘true’ between-subject response to altitude was estimated from the within-subject differences on altitude participants, between the prealtitude and during-altitude phases, together with the estimated analytical SD. Results During-altitude Hbmass was estimated to increase by ∼1.1%/100 h for LHTL and classic altitude. Postaltitude Hbmass was estimated to be 3.3% higher than prealtitude values for up to 20 days. The within-subject SD was constant at ∼2% for up to 7 days between observations, indicative of analytical error. A 95% prediction interval for the ‘true’ response of an athlete exposed to 300 h of altitude was estimated to be 1.1–6%. Conclusions Camps as short as 2 weeks of classic and LHTL altitude will quite likely increase Hbmass and most athletes can expect benefit. PMID:24282204
A Gompertzian model with random effects to cervical cancer growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazlan, Mazma Syahidatul Ayuni; Rosli, Norhayati
2015-05-15
In this paper, a Gompertzian model with random effects is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via maximum likelihood estimation. We apply a 4-stage stochastic Runge-Kutta scheme (SRK4) to solve the stochastic model numerically. The adequacy of the mathematical model is measured by comparing the simulated results with clinical data on cervical cancer growth. Low values of the root mean-square error (RMSE) of the Gompertzian model with random effects indicate good fits.
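For concreteness, the deterministic Gompertz law is dX/dt = rX ln(K/X); the stochastic version adds a noise term. Below is a minimal Euler-Maruyama simulation of such a model (the paper uses an SRK4 scheme, which differs in detail; r, K, sigma, and the initial size are illustrative assumptions, not the fitted values):

```python
import numpy as np

def simulate_gompertz_sde(r=0.1, K=100.0, sigma=0.05, x0=5.0,
                          t_end=100.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of a stochastic Gompertz model:
        dX = r * X * ln(K / X) dt + sigma * X dW
    (multiplicative noise; all parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = r * x[i] * np.log(K / x[i])
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = max(x[i] + drift * dt + sigma * x[i] * dw, 1e-9)  # keep positive
    return np.linspace(0.0, t_end, n + 1), x

t, x = simulate_gompertz_sde()
print(f"simulated tumour size at t={t[-1]:.0f}: {x[-1]:.1f} (K = 100)")
```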
NASA Technical Reports Server (NTRS)
Tinling, B. E.
1977-01-01
Estimates of the effectiveness of a model-following control system in reducing the roll excursion due to a wake vortex encounter were obtained from single-degree-of-freedom computations, with inputs derived from the results of wind tunnel, flight, and simulation experiments. The analysis indicates that the control power commanded by the automatic system must be roughly equal to the vortex-induced roll acceleration if effective limiting of the maximum bank angle is to be achieved.
Brodin, Nina; Lohela-Karlsson, Malin; Swärdh, Emma; Opava, Christina H
2015-01-01
To describe the cost-effectiveness of the Physical Activity in Rheumatoid Arthritis (PARA) study intervention. Costs were collected and estimated retrospectively. Cost-effectiveness was calculated based on the intervention cost per patient with respect to change in health status (EuroQol global visual analog scale, EQ-VAS, and EuroQol EQ-5D) and activity limitation (Health Assessment Questionnaire, HAQ) using cost-effectiveness and cost-minimization analyses. The total cost of the one-year intervention program was estimated to be €67 317, or €716 per participant. The estimated difference in total societal cost between the intervention group (IG) and control group (CG) was €580 per participant. The incremental cost-effectiveness ratio (ICER) for one point (1/100) of improvement in EQ-VAS was estimated to be €116. By offering the intervention to more affected participants in the IG compared to less affected participants, 15.5 extra points of improvement in EQ-VAS and 0.13 points of improvement on the HAQ were gained at the same cost. "Ordinary physiotherapy" was most cost-effective with regard to EQ-5D. The intervention resulted in an improved effect on health status for the IG at a cost of €116 per extra point on the VAS. The intervention was cost-effective if targeted towards a subgroup of more affected patients when evaluating the effect using the VAS and HAQ. The physical activity coaching intervention resulted in an improved effect on the VAS for the intervention group, at a higher cost. In order to maximize cost-effectiveness, this type of physical activity coaching intervention should be targeted towards patients largely affected by their RA. The intervention is cost-effective from the patients' point of view, but not from that of the general population.
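The reported ICER follows directly from the incremental cost and the incremental effect. A one-line check (the 5-point EQ-VAS difference is inferred from the abstract's €580 and €116 figures, not reported directly):

```python
# ICER = (cost_intervention - cost_control) / (effect_intervention - effect_control)
incremental_cost = 580.0    # euros per participant, from the abstract
incremental_effect = 5.0    # EQ-VAS points; inferred (580 / 116 = 5), not stated
icer = incremental_cost / incremental_effect
print(f"ICER: {icer:.0f} euros per EQ-VAS point")  # -> 116
```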
Gray, Brian R.; Holland, Mark D.; Yi, Feng; Starcevich, Leigh Ann Harrod
2013-01-01
Site occupancy models are commonly used by ecologists to estimate the probabilities of species site occupancy and of species detection. This study addresses the influence of variation in species availability among surveys within sites on site occupancy and detection estimates. Such variation in availability may result from temporary emigration, nonavailability of the species for detection, and sampling sites spatially when species presence is not uniform within sites. We demonstrate, using Monte Carlo simulations and aquatic vegetation data, that variation in availability and heterogeneity in the probability of availability may yield biases in the expected values of the site occupancy and detection estimates that have traditionally been associated with low detection probabilities and heterogeneity in those probabilities. These findings confirm that the effects of availability may be important for ecologists and managers, and that where such effects are expected, modification of sampling designs and/or analytical methods should be considered. Failure to limit the effects of availability may preclude reliable estimation of the probability of site occupancy.
Nonlinear ionic transport through microstructured solid electrolytes: homogenization estimates
NASA Astrophysics Data System (ADS)
Curto Sillamoni, Ignacio J.; Idiart, Martín I.
2016-10-01
We consider the transport of multiple ionic species by diffusion and migration through microstructured solid electrolytes in the presence of strong electric fields. The assumed constitutive relations for the constituent phases follow from convex energy and dissipation potentials which guarantee thermodynamic consistency. The effective response is heuristically deduced from a multi-scale convergence analysis of the relevant field equations. The resulting homogenized response involves an effective dissipation potential per species. Each potential is mathematically akin to that of a standard nonlinear heterogeneous conductor. A ‘linear-comparison’ homogenization technique is then used to generate estimates for these nonlinear potentials in terms of available estimates for corresponding linear conductors. By way of example, use is made of the Maxwell-Garnett and effective-medium linear approximations to generate estimates for two-phase systems with power-law dissipation. Explicit formulas are given for some limiting cases. In the case of threshold-type behavior, the estimates exhibit non-analytical dilute limits and seem to be consistent with fields localized in low energy paths.
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.
2016-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages are on the order of a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
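The Monte Carlo step can be sketched as repeated joint draws of a damage function and a maximum-damage value applied to the same depth data. Everything below is a placeholder illustration: the three toy functions stand in for the paper's 272-function library, and the depth and cost ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy depth-damage functions: damage fraction as a function of depth (m).
damage_functions = [
    lambda d: np.clip(d / 5.0, 0.0, 1.0),           # linear up to 5 m
    lambda d: np.clip(np.sqrt(d) / 2.5, 0.0, 1.0),  # concave: fast initial rise
    lambda d: np.clip((d / 5.0) ** 2, 0.0, 1.0),    # convex: slow initial rise
]

depths = rng.uniform(0.2, 2.0, size=1000)  # synthetic inundation depths (m)

totals = []
for _ in range(10_000):
    f = damage_functions[rng.integers(len(damage_functions))]  # draw a function
    max_damage = rng.uniform(100_000, 300_000)  # euros per building, drawn range
    totals.append(np.sum(f(depths) * max_damage))

lo, hi = np.percentile(totals, [5, 95])
print(f"total damage, 90% range: {lo/1e6:.1f} to {hi/1e6:.1f} M EUR "
      f"(factor {hi/lo:.1f})")
```

Note how shallow depths (here 0.2-2.0 m) make the choice of function matter most, consistent with the paper's finding that uncertainty is largest for small water depths.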
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of a hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations. With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
Alonso, Jordi; Vilagut, Gemma; Chatterji, Somnath; Heeringa, Steven; Schoenbaum, Michael; Üstün, T. Bedirhan; Rojas-Farreras, Sonia; Angermeyer, Matthias; Bromet, Evelyn; Bruffaerts, Ronny; de Girolamo, Giovanni; Gureje, Oye; Haro, Josep Maria; Karam, Aimee N.; Kovess, Viviane; Levinson, Daphna; Liu, Zhaorui; Mora, Maria Elena Medina; Ormel, J.; Posada-Villa, Jose; Uda, Hidenori; Kessler, Ronald C.
2010-01-01
Background The methodology commonly used to estimate disease burden, featuring ratings of severity of individual conditions, has been criticized for ignoring comorbidity. A methodology that addresses this problem is proposed and illustrated here with data from the WHO World Mental Health Surveys. Although the analysis is based on self-reports about one’s own conditions in a community survey, the logic applies equally well to analysis of hypothetical vignettes describing comorbid condition profiles. Methods Face-to-face interviews in 13 countries (six developing, nine developed; n = 31,067; response rate = 69.6%) assessed 10 classes of chronic physical and 9 of mental conditions. A visual analog scale (VAS) was used to assess overall perceived health. Multiple regression analysis with interactions for comorbidity was used to estimate associations of conditions with VAS. Simulation was used to estimate condition-specific effects. Results The best-fitting model included condition main effects and interactions of types by numbers of conditions. Neurological conditions, insomnia, and major depression were rated most severe. Adjustment for comorbidity reduced condition-specific estimates with substantial between-condition variation (.24–.70 ratios of condition-specific estimates with and without adjustment for comorbidity). The societal-level burden rankings were quite different from the individual-level rankings, with the highest societal-level rankings associated with conditions having high prevalence rather than high individual-level severity. Conclusions Plausible estimates of disorder-specific effects on VAS can be obtained using methods that adjust for comorbidity. These adjustments substantially influence condition-specific ratings. PMID:20553636
Cummings, P
2002-01-01
Objective: Estimates of any protective effect of seat belts could be exaggerated if some crash survivors falsely claimed to police that they were belted in order to avoid a fine. The aim of this study was to determine whether estimates of seat belt effectiveness differed when based on belt use as recorded by the police and belt use determined by trained crash investigators. Design: Matched cohort study. Setting: United States. Subjects: Adult driver-passenger pairs in the same vehicle with at least one death (n=1689) sampled from crashes during 1988–2000; data from the National Accident Sampling System Crashworthiness Data System. Main outcome measure: Risk ratio for death among belted occupants compared with those not belted. Results: Trained investigators determined post-crash seat belt use by vehicle inspections for 92% of the occupants, confidential interviews with survivors for 5%, and medical or autopsy reports for 3%. Using this information, the adjusted risk ratio for belted persons was 0.36 (95% confidence interval 0.29 to 0.46). The risk ratio was also 0.36 using police reported belt use for the same crashes. Conclusions: Estimates of seat belt effects based upon police data were not substantially different from estimates which used data obtained by trained crash investigators who were not police officers. These results were from vehicles in which at least one front seat occupant died; these findings may not apply to estimates which use data from crashes without a death. PMID:12460976
Le, Thao N; Stockdale, Gary
2011-10-01
The purpose of this study was to examine the effects of school demographic factors and youth's perception of discrimination on delinquency in adolescence and into young adulthood for African American, Asian, Hispanic, and white racial/ethnic groups. Using data from the National Longitudinal Study of Adolescent Health (Add Health), models testing the effect of school-related variables on delinquency trajectories were evaluated for the four racial/ethnic groups using Mplus 5.21 statistical software. Results revealed that greater student ethnic diversity and perceived discrimination, but not teacher ethnic diversity, resulted in higher initial delinquency estimates at 13 years of age for all groups. However, except for African Americans, having a greater proportion of female teachers in the school decreased initial delinquency estimates. For African Americans and whites, a larger school size also increased the initial estimates. Additionally, lower socioeconomic status increased the initial estimates for whites, and being born in the United States increased the initial estimates for Asians and Hispanics. Finally, regardless of the initial delinquency estimate at age 13 and the effect of the school variables, all groups eventually converged to extremely low delinquency in young adulthood, at the age of 21 years. Educators and public policy makers seeking to prevent and reduce delinquency can modify individual risks by modifying characteristics of the school environment. Policies that promote respect for diversity and intolerance toward discrimination, as well as training to help teachers recognize the precursors and signs of aggression and/or violence, may also facilitate a positive school environment, resulting in lower delinquency. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Tom P.
This report recalculates the estimated relationship between vehicle mass and societal fatality risk, using alternative groupings by vehicle weight, to test whether the trend of decreasing fatality risk from mass reduction as case vehicle mass increases holds over smaller increments of the range in case vehicle masses. The NHTSA baseline regression model estimates the relationship using two weight groups for cars and light trucks; we re-estimated the mass reduction coefficients using four, six, and eight bins of vehicle mass. The estimated effect of mass reduction on societal fatality risk was not consistent over the range in vehicle masses in these weight bins. These results suggest that the relationship indicated by the NHTSA baseline model is a result of other, unmeasured attributes of the mix of vehicles in the lighter vs. heavier weight bins, and not necessarily the result of a correlation between mass reduction and societal fatality risk. An analysis of the average vehicle, driver, and crash characteristics across the various weight groupings did not reveal any strong trends that might explain the lack of a consistent trend of decreasing fatality risk from mass reduction in heavier vehicles.
Effects of Sample Selection on Estimates of Economic Impacts of Outdoor Recreation
Donald B.K. English
1997-01-01
Estimates of the economic impacts of recreation often come from spending data provided by a self-selected subset of a random sample of site visitors. The subset is frequently less than half the onsite sample. Biased vectors of per-trip spending and impact estimates can result if self-selection is related to spending patterns, and proper corrective procedures are not...
Estimating energy-momentum and angular momentum near null infinity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helfer, Adam D.
2010-04-15
The energy-momentum and angular momentum contained in a spacelike two-surface of spherical topology are estimated by joining the two-surface to null infinity via an approximate no-incoming-radiation condition. The result is a set of gauge-invariant formulas for energy-momentum and angular momentum which should be applicable to much numerical work; it also gives estimates of the finite-size effects.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
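A prospective power analysis of the kind recommended in point (1) takes only a few lines. A sketch with statsmodels, using an assumed biologically significant effect size of 0.5 SD (all numbers illustrative):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: samples per group needed to detect a biologically significant
# effect of 0.5 SD with 80% power at alpha = 0.05 (two-sample t-test).
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required sample size per group: {n_per_group:.0f}")  # ~64

# The design-oriented retrospective question the authors endorse:
# what effect size could a completed study with n = 30 per group detect?
detectable = analysis.solve_power(nobs1=30, alpha=0.05, power=0.80)
print(f"detectable effect size with n = 30 per group: {detectable:.2f}")
```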
Perin, Jamie; Walker, Neff
2015-01-01
Background Recent steep declines in child mortality have been attributed in part to increased use of contraceptives and the resulting change in fertility behaviour, including an increase in the time between births. Previous observational studies have documented strong associations between short birth spacing and an increase in the risk of neonatal, infant, and under-five mortality, compared to births with longer preceding birth intervals. In this analysis, we compare two methods to estimate the association between short birth intervals and mortality risk to better inform modelling efforts linking family planning and mortality in children. Objectives Our goal was to estimate the mortality risk for neonates, infants, and young children by preceding birth space using household survey data, controlling for mother-level factors and to compare the results to those from previous analyses with survey data. Design We assessed the potential for confounding when estimating the relative mortality risk by preceding birth interval and estimated mortality risk by birth interval in four categories: less than 18 months, 18–23 months, 24–35 months, and 36 months or longer. We estimated the relative risks among women who were 35 and older at the time of the survey with two methods: in a Cox proportional hazards regression adjusting for potential confounders and also by stratifying Cox regression by mother, to control for all factors that remain constant over a woman's childbearing years. We estimated the overall effects for birth spacing in a meta-analysis with random survey effects. Results We identified several factors known for their associations with neonatal, infant, and child mortality that are also associated with preceding birth interval. When estimating the effect of birth spacing on mortality, we found that regression adjustment for these factors does not substantially change the risk ratio for short birth intervals compared to an unadjusted mortality ratio. For birth intervals less than 18 months, standard regression adjustment for confounding factors estimated a risk ratio for neonatal mortality of 2.28 (95% confidence interval: 2.18–2.37). This same effect estimated within mother is 1.57 (95% confidence interval: 1.52–1.63), a decline of almost one-third in the effect on neonatal mortality. Conclusions Neonatal, infant, and child mortality are strongly and significantly related to preceding birth interval, where births within a short interval of time after the previous birth have increased mortality. Previous analyses have demonstrated this relationship on average across all births; however, women who have short spaces between births are different from women with long spaces. Among women 35 years and older where a comparison of birth spaces within mother is possible, we find a much reduced although still significant effect of short birth spaces on child mortality. PMID:26562139
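The within-mother comparison described above corresponds to a Cox model stratified on a mother identifier. A minimal sketch with the lifelines library; the data frame and column names are hypothetical:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical child-level records: follow-up time in months (censored at 60),
# death indicator, short preceding birth interval (< 18 months), mother id.
df = pd.DataFrame({
    "time":        [60, 3, 60, 60, 1, 60, 60, 12, 60, 60],
    "death":       [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "short_space": [0, 1, 0, 1, 1, 0, 1, 0, 0, 1],
    "mother_id":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 3],
})

# A conventional adjusted model would add confounder columns to the fit;
# stratifying on mother_id instead controls for everything that stays
# constant over a woman's childbearing years.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death", strata=["mother_id"])
print(cph.summary[["coef", "exp(coef)"]])  # exp(coef) = within-mother hazard ratio
```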
Groschen, George E.
1985-01-01
Two simulations of the projected pumping (a low estimate, as much as 46.2 cubic feet per second during 2011-20, and a high estimate, as much as 60.0 cubic feet per second during the same period) indicate that no further regional water-quality deterioration is likely to occur. Many important properties and conditions are estimated from poor or insufficient field data, and possible ranges of these properties and conditions are tested. In spite of the errors and data deficiencies, the results are based on the best estimates currently available. The reliability of the conclusions rests on the adequacy of the data and the demonstrated sensitivity of the model results to errors in estimates of these properties.
Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik
2015-12-01
Airborne laser scanning (ALS) has recently emerged as a promising tool to acquire auxiliary information for improving aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implication when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R² increased from 0.35 to 0.74, while the relative root mean square error decreased from 63.6% to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB relative to corresponding variance estimates of pure field-based AGB decreased with increasing plot size in the range from 200 to 3,000 m². The variance ratio of field-based estimates relative to model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation when increasing field-plot size was greater for an ALS-assisted inventory compared to that of a pure field-based inventory.
Per capita alcohol consumption and suicide mortality in a panel of US states from 1950 to 2002
Kerr, William C.; Subbaraman, Meenakshi; Ye, Yu
2011-01-01
Introduction and Aims The relationship between per capita alcohol consumption and suicide rates has been found to vary in significance and magnitude across countries. This study utilizes a panel of time-series measures from the US states to estimate the effects of changes in current and lagged alcohol sales on suicide mortality risk. Design and Methods Generalized least squares estimation utilized 53 years of data from 48 US states or state groups to estimate relationships between total and beverage-specific alcohol consumption measures and age-standardized suicide mortality rates in first-differenced semi-logged models. Results An additional liter of ethanol from total alcohol sales was estimated to increase suicide rates by 2.3% in models utilizing a distributed lag specification while no effect was found in models including only current alcohol consumption. A similar result is found for men, while for women both current and distributed lag measures were found to be significantly related to suicide rates with an effect of about 3.2% per liter from current and 5.8% per liter from the lagged measure. Beverage-specific models indicate that spirits is most closely linked with suicide risk for women while beer and wine are for men. Unemployment rates are consistently positively related to suicide rates. Discussion and Conclusions Results suggest that chronic effects, potentially related to alcohol abuse and dependence, are the main source of alcohol’s impact on suicide rates in the US for men and are responsible for about half of the effect for women. PMID:21896069
Cost Effectiveness of Malaria Interventions from Preelimination through Elimination: a Study in Iran
Rezaei-Hemami, Mohsen; Akbari-Sari, Ali; Raiesi, Ahmad; Vatandoost, Hassan; Majdzadeh, Reza
2014-01-01
Background Malaria is still considered a public health problem in Iran. The aim of the National Malaria Control Department is to reach elimination by 2024. As the number of malaria cases falls in the preelimination phase, the cost effectiveness of malaria interventions decreases considerably. This study estimated the cost effectiveness of various strategies to combat malaria in the preelimination and elimination phases in Iran. Methods: Running costs of the interventions at each level were estimated using evidence and expert opinions. The effect of each intervention was estimated using the documentary evidence available and expert opinions. Using a point estimate and a distribution for each variable, sensitivity was evaluated with the Monte Carlo method. Results: The most cost-effective interventions were, in order, insecticide-treated nets (ITN), larviciding, surveillance for diagnosis and treatment of patients within 24 hours, and indoor residual spraying (IRS). No evidence was found for the effectiveness of the border facilities. Conclusion: This study showed that interventions in the elimination phase of malaria have low cost effectiveness in Iran, as in many other countries. However, ITN is the most cost-effective of the available interventions. PMID:25629064
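The Monte Carlo sensitivity step described in the Methods can be sketched generically; the cost and effect distributions below are invented placeholders, not the study's inputs.

```python
# Draw each uncertain variable from its distribution and summarize the
# resulting spread of the cost-effectiveness ratio.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
cost = rng.gamma(shape=4.0, scale=250.0, size=n)    # cost per program (assumed)
effect = rng.beta(a=8.0, b=2.0, size=n)             # cases averted per unit (assumed)
icer = cost / effect
print(f"ICER median {np.median(icer):.0f}, 95% interval "
      f"({np.percentile(icer, 2.5):.0f}, {np.percentile(icer, 97.5):.0f})")
```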
Sucunza, Federico; Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N
2018-01-01
Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16-46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana.
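The mechanism behind the bias warning above is simple: abundance estimators divide counts by the probability that an animal is available to be seen, so an availability value that is too low inflates the estimate. A toy illustration (the count and the land/boat availability value are hypothetical, the latter chosen to fall in the 16-46% range quoted above):

```python
# Horvitz-Thompson-style correction: N = n / (availability * detection).
def abundance(count, availability, detection=1.0):
    return count / (availability * detection)

n_seen = 100                                 # hypothetical survey count
print(abundance(n_seen, availability=0.39))  # aerial-platform estimate: ~256
print(abundance(n_seen, availability=0.27))  # land/boat-based value: ~370 (inflated)
```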
Assimilation of thermospheric measurements for ionosphere-thermosphere state estimation
NASA Astrophysics Data System (ADS)
Miladinovich, Daniel S.; Datta-Barua, Seebany; Bust, Gary S.; Makela, Jonathan J.
2016-12-01
We develop a method that uses data assimilation to estimate ionospheric-thermospheric (IT) states during midlatitude nighttime storm conditions. The algorithm Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) uses time-varying electron densities in the F region, derived primarily from total electron content data, to estimate two drivers of the IT: neutral winds and electric potential. A Kalman filter is used to update background models based on ingested plasma densities and neutral wind measurements. This is the first time a Kalman filtering technique is used with the EMPIRE algorithm and the first time neutral wind measurements from 630.0 nm Fabry-Perot interferometers (FPIs) are ingested to improve estimates of storm time ion drifts and neutral winds. The effects of assimilating remotely sensed neutral winds from FPI observations are studied by comparing results of ingesting: electron densities (N) only, N plus half the measurements from a single FPI, and then N plus all of the FPI data. While estimates of ion drifts and neutral winds based on N give estimates similar to the background models, this study's results show that ingestion of the FPI data can significantly change neutral wind and ion drift estimation away from background models. In particular, once neutral winds are ingested, estimated neutral winds agree more with validation wind data, and estimated ion drifts in the magnetic field-parallel direction are more sensitive to ingestion than the field-perpendicular zonal and meridional directions. Also, data assimilation with FPI measurements helps provide insight into the effects of contamination on 630.0 nm emissions experienced during geomagnetic storms.
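A generic linear Kalman measurement update of the kind such assimilation schemes perform is sketched below; this is not the EMPIRE code, and the background winds, observation operator, and noise values are assumptions.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Update state estimate x (covariance P) with measurement z = Hx + noise."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)         # corrected state (e.g., winds, drifts)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.array([50.0, -20.0])             # background zonal/meridional wind (m/s)
P = np.diag([400.0, 400.0])             # background uncertainty (assumed)
z = np.array([80.0])                    # FPI-style zonal wind measurement (assumed)
H = np.array([[1.0, 0.0]])              # the instrument observes the zonal component
R = np.array([[25.0]])                  # measurement noise variance (assumed)
print(kf_update(x, P, z, H, R))         # estimate moves strongly toward the data
```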
Crown, William; Chang, Jessica; Olson, Melvin; Kahler, Kristijan; Swindle, Jason; Buzinec, Paul; Shah, Nilay; Borah, Bijan
2015-09-01
Missing data, particularly missing variables, can create serious analytic challenges in observational comparative effectiveness research studies. Statistical linkage of datasets is a potential method for incorporating missing variables. Prior studies have focused upon the bias introduced by imperfect linkage. This analysis uses a case study of hepatitis C patients to estimate the net effect of statistical linkage on bias, also accounting for the potential reduction in missing variable bias. The results show that statistical linkage can reduce bias while also enabling parameter estimates to be obtained for the formerly missing variables. The usefulness of statistical linkage will vary depending upon the strength of the correlations of the missing variables with the treatment variable, as well as the outcome variable of interest.
Approach for Estimating Exposures and Incremental Health ...
Approach for Estimating Exposures and Incremental Health Effects from Lead During Renovation, Repair, and Painting Activities in Public and Commercial Buildings” (Technical Approach Document). Also available for public review and comment are two supplementary documents: the detailed appendices for the Technical Approach Document and a supplementary report entitled “Developing a Concentration-Response Function for Pb Exposure and Cardiovascular Disease-Related Mortality.” Together, these documents describe an analysis for estimating exposures and incremental health effects created by renovations of public and commercial buildings (P&CBs). This analysis could be used to identify and evaluate hazards from renovation, repair, and painting activities in P&CBs. A general overview of how this analysis can be used to inform EPA’s hazard finding is described in the Framework document that was previously made available for public comment (79 FR 31072; FRL9910-44). The analysis can be used in any proposed rulemaking to estimate the reduction in deleterious health effects that would result from any proposed regulatory requirements to mitigate exposure from P&CB renovation activities. The Technical Approach Document describes in detail how the analyses under this approach have been performed and presents the results – expected changes in blood lead levels and health effects due to lead exposure from renovation activities.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Steinmetz, G. G.
1972-01-01
A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.
Gartlehner, Gerald; Dobrescu, Andreea; Evans, Tammeka Swinson; Thaler, Kylie; Nussbaumer, Barbara; Sommer, Isolde; Lohr, Kathleen N
2016-01-01
The objective of our study was to use a diverse sample of medical interventions to assess empirically whether first trials rendered substantially different treatment effect estimates than reliable, high-quality bodies of evidence. We used a meta-epidemiologic study design using 100 randomly selected bodies of evidence from Cochrane reports that had been graded as high quality of evidence. To determine the concordance of effect estimates between first and subsequent trials, we applied both quantitative and qualitative approaches. For quantitative assessment, we used Lin's concordance correlation and calculated z-scores; to determine the magnitude of differences of treatment effects, we calculated standardized mean differences (SMDs) and ratios of relative risks. We determined qualitative concordance based on a two-tiered approach incorporating changes in statistical significance and magnitude of effect. First trials both overestimated and underestimated the true treatment effects in no discernible pattern. Nevertheless, depending on the definition of concordance, effect estimates of first trials were concordant with pooled subsequent studies in at least 33% but up to 50% of comparisons. The pooled magnitude of change as bodies of evidence advanced from single trials to high-quality bodies of evidence was 0.16 SMD [95% confidence interval (CI): 0.12, 0.21]. In 80% of comparisons, the difference in effect estimates was smaller than 0.5 SMDs. In first trials with large treatment effects (>0.5 SMD), however, estimates of effect substantially changed as new evidence accrued (mean change 0.68 SMD; 95% CI: 0.50, 0.86). Results of first trials often change, but the magnitude of change, on average, is small. Exceptions are first trials that present large treatment effects, which often dissipate as new evidence accrues. Copyright © 2016 Elsevier Inc. All rights reserved.
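Lin's concordance correlation, the quantitative agreement measure named above, is easy to state: it rescales the covariance of paired estimates by their variances plus the squared mean offset, so it equals 1 only when the pairs lie on the 45-degree line. A small sketch with made-up effect estimates:

```python
import numpy as np

def lins_ccc(x, y):
    """Concordance of paired estimates about the 45-degree line."""
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x), np.var(y)           # population variances
    sxy = np.mean((x - mx) * (y - my))      # covariance
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

first_trial = np.array([0.40, -0.10, 0.25, 0.60, 0.05])  # hypothetical SMDs
pooled_body = np.array([0.30, -0.05, 0.20, 0.35, 0.10])  # hypothetical SMDs
print(f"Lin's CCC = {lins_ccc(first_trial, pooled_body):.3f}")
```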
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. An efficient and computationally effective state estimation algorithm adapting the weighted-least-squares (WLS) method has been developed in this research. Both the developed algorithms are tested on different IEEE test-feeders and the results obtained are justified.
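The WLS state estimation step referenced above reduces, for a linear(ized) measurement model, to solving the weighted normal equations. A minimal sketch with an illustrative toy system (not the dissertation's algorithm, which handles full three-phase unbalanced models):

```python
import numpy as np

def wls_estimate(H, z, weights):
    """Solve x_hat = argmin (z - Hx)' W (z - Hx) via the normal equations."""
    W = np.diag(weights)                  # inverse measurement variances
    G = H.T @ W @ H                       # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy system: 3 redundant measurements of 2 states (values are illustrative).
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.02, 0.49, 1.54])
weights = np.array([100.0, 100.0, 25.0])  # more accurate meters weighted higher
print(wls_estimate(H, z, weights))
```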
Aynekulu, Ermias; Pitkänen, Sari; Packalen, Petteri
2016-01-01
It has been suggested that above-ground biomass (AGB) inventories should include tree height (H), in addition to diameter (D). As H is a difficult variable to measure, H-D models are commonly used to predict H. We tested a number of approaches for H-D modelling, including additive terms which increased the complexity of the model, and observed how differences in tree-level predictions of H propagated to plot-level AGB estimations. We were especially interested in detecting whether the choice of method can lead to bias. The compared approaches listed in the order of increasing complexity were: (B0) AGB estimations from D-only; (B1) involving also H obtained from a fixed-effects H-D model; (B2) involving also species; (B3) including also between-plot variability as random effects; and (B4) involving multilevel nested random effects for grouping plots in clusters. In light of the results, the modelling approach affected the AGB estimation significantly in some cases, although differences were negligible for some of the alternatives. The most important differences were found between including H or not in the AGB estimation. We observed that AGB predictions without H information were very sensitive to the environmental stress parameter (E), which can induce a critical bias. Regarding the H-D modelling, the most relevant effect was found when species was included as an additive term. We presented a two-step methodology, which succeeded in identifying the species for which the general H-D relation was relevant to modify. Based on the results, our final choice was the single-level mixed-effects model (B3), which accounts for the species but also for the plot random effects reflecting site-specific factors such as soil properties and degree of disturbance. PMID:27367857
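A hedged sketch of the single-level mixed-effects formulation (approach B3) using statsmodels: fixed effects for log-diameter and species, and a random intercept per plot. The data file and column names are hypothetical, and the log-log form is one plausible choice rather than the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trees.csv")   # columns: H, D, species, plot (assumed)

# Species enters as an additive fixed effect; plots contribute random
# intercepts capturing site-specific factors (soil, disturbance, ...).
m = smf.mixedlm("np.log(H) ~ np.log(D) + C(species)", data=df,
                groups=df["plot"]).fit()
print(m.summary())

# Predicted heights would then feed an AGB allometry; note that a log-scale
# fit like this needs a back-transformation bias correction (e.g., exp(s2/2)).
```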
Potential distribution of dengue fever under scenarios of climate change and economic development.
Aström, Christofer; Rocklöv, Joacim; Hales, Simon; Béguin, Andreas; Louis, Valerie; Sauerborn, Rainer
2012-12-01
Dengue fever is the most important viral vector-borne disease with ~50 million cases per year globally. Previous estimates of the potential effect of global climate change on the distribution of vector-borne disease have not incorporated the effect of socioeconomic factors, which may have biased the results. We describe an empirical model of the current geographic distribution of dengue, based on the independent effects of climate and gross domestic product per capita (GDPpc, a proxy for socioeconomic development). We use the model, along with scenario-based projections of future climate, economic development, and population, to estimate populations at risk of dengue in the year 2050. We find that both climate and GDPpc influence the distribution of dengue. If the global climate changes as projected but GDPpc remained constant, the population at risk of dengue is estimated to increase by about 0.28 billion in 2050. However, if both climate and GDPpc change as projected, we estimate a decrease of 0.12 billion in the population at risk of dengue in 2050. Empirically, the geographic distribution of dengue is strongly dependent on both climatic and socioeconomic variables. Under a scenario of constant GDPpc, global climate change results in a modest but important increase in the global population at risk of dengue. Under scenarios of high GDPpc, this adverse effect of climate change is counteracted by the beneficial effect of socioeconomic development.
Anderson, Craig L.
2009-01-01
Objectives. We estimated the effectiveness of child restraints in preventing death during motor vehicle collisions among children 3 years or younger. Methods. We conducted a matched cohort study using Fatality Analysis Reporting System data from 1996 to 2005. We estimated death risk ratios using conditional Poisson regression, bootstrapping, multiple imputation, and a sensitivity analysis of misclassification bias. We examined possible effect modification by selected factors. Results. The estimated death risk ratios comparing child safety seats with no restraint were 0.27 (95% confidence interval [CI] = 0.21, 0.34) for infants, 0.24 (95% CI = 0.19, 0.30) for children aged 1 year, 0.40 (95% CI = 0.32, 0.51) for those aged 2 years, and 0.41 (95% CI = 0.33, 0.52) for those aged 3 years. Estimated safety seat effectiveness was greater during rollover collisions, in rural environments, and in light trucks. We estimated seat belts to be as effective as safety seats in preventing death for children aged 2 and 3 years. Conclusions. Child safety seats are highly effective in reducing the risk of death during severe traffic collisions and generally outperform seat belts. Parents should be encouraged to use child safety seats in favor of seat belts. PMID:19059860
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively, and efficiently produce an appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, on account of two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which further enhances the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
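The mean-shift component of RACE can be illustrated in one dimension: starting from an initial guess, repeatedly move to the kernel-weighted mean of nearby samples until the estimate settles on the dominant mode. A bare-bones sketch on synthetic frequency samples (not the paper's data):

```python
import numpy as np

def mean_shift_1d(samples, x0, bandwidth, iters=50):
    """Climb to the nearest density mode with a Gaussian kernel."""
    x = x0
    for _ in range(iters):
        w = np.exp(-0.5 * ((samples - x) / bandwidth) ** 2)
        x_new = np.sum(w * samples) / np.sum(w)   # kernel-weighted mean
        if abs(x_new - x) < 1e-9:
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
freqs = np.concatenate([rng.normal(0.12, 0.01, 500),   # dominant tag frequency
                        rng.normal(0.30, 0.05, 100)])  # spectral clutter
# Initialized near the spectral peak, the iteration converges near 0.12.
print(mean_shift_1d(freqs, x0=0.15, bandwidth=0.02))
```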
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error, bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Child-Centered Play Therapy in the Schools: Review and Meta-Analysis
ERIC Educational Resources Information Center
Ray, Dee C.; Armstrong, Stephen A.; Balkin, Richard S.; Jayne, Kimberly M.
2015-01-01
The authors conducted a meta-analysis and systematic review that examined 23 studies evaluating the effectiveness of child centered play therapy (CCPT) conducted in elementary schools. Meta-analysis results were explored using a random effects model for mean difference and mean gain effect size estimates. Results revealed statistically significant…
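The random-effects pooling step used in meta-analyses like this one is commonly the DerSimonian-Laird estimator: estimate the between-study variance from the heterogeneity statistic Q, then re-weight each study by the inverse of its total variance. A compact sketch with invented effect sizes:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Pool effect sizes y with within-study variances v under random effects."""
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)           # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

y = np.array([0.35, 0.52, 0.20, 0.75, 0.41])  # hypothetical study effect sizes
v = np.array([0.04, 0.09, 0.05, 0.12, 0.06])  # hypothetical sampling variances
print(dersimonian_laird(y, v))
```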
Atsou, Kokuvi; Chouaid, Christos; Hejblum, Gilles
2011-01-01
The medico-economic impact of smoking cessation considering a smoking patient with chronic obstructive pulmonary disease (COPD) is poorly documented. Here, considering a COPD smoking patient, the specific burden of continuous smoking was estimated, as well as the effectiveness and the cost-effectiveness of smoking cessation. A multi-state Markov model adopting society's perspective was developed. Simulated cohorts of English COPD patients who are active smokers (all severity stages combined or patients with the same initial severity stage) were compared to identical cohorts of patients who quit smoking at cohort initialization. Life expectancy, quality adjusted life-years (QALY), disease-related costs, and incremental cost-effectiveness ratio (ICER: £/QALY) were estimated, considering smoking cessation programs with various possible scenarios of success rates and costs. Sensitivity analyses included the variation of model key parameters. At the horizon of a smoking COPD patient's remaining lifetime, smoking cessation at cohort initialization, relapses being allowed as observed in practice, would result in mean gains of 1.27 life-years and 0.68 QALY, and induce savings of -1824 £/patient in the disease-related costs. The corresponding ICER was -2686 £/QALY. Smoking cessation resulted in 0.72, 0.69, 0.64, and 0.42 QALYs gained per mild, moderate, severe, and very severe COPD patient, respectively, but was nevertheless cost-effective for mild to severe COPD patients in most scenarios, even when hypothesizing expensive smoking cessation intervention programmes associated with low success rates. Considering a ten-year time horizon, the burden of continuous smoking in English COPD patients was estimated to cost a total of 1657 M£ while 452,516 QALYs would be simultaneously lost. The study results are a useful support for the setting of smoking cessation programmes specifically targeted to COPD patients.
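The engine of such an analysis is a Markov cohort cycle: multiply the state-occupancy vector by a transition matrix each year and accumulate discounted QALYs. A toy sketch follows; the states, transition probabilities, utilities, and discount rate are invented for illustration, not the study's parameters.

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.02],      # moderate -> {moderate, severe, dead}
              [0.00, 0.92, 0.08],      # severe   -> {moderate, severe, dead}
              [0.00, 0.00, 1.00]])     # dead is absorbing (all values assumed)
utility = np.array([0.75, 0.55, 0.0])  # QALY weight per state (assumed)

cohort = np.array([1.0, 0.0, 0.0])     # everyone starts in "moderate"
qalys, discount = 0.0, 0.035
for year in range(10):                 # ten-year horizon, as in the study
    qalys += cohort @ utility / (1 + discount) ** year
    cohort = cohort @ P                # advance the cohort one annual cycle
print(f"discounted QALYs per patient over 10 years: {qalys:.2f}")
```

Running two such cohorts (continued smoking versus cessation, each with its own transition matrix) and differencing costs and QALYs yields the ICER.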
Analysis of storm-tide impacts from Hurricane Sandy in New York
Schubert, Christopher E.; Busciolano, Ronald J.; Hearn, Paul P.; Rahav, Ami N.; Behrens, Riley; Finkelstein, Jason S.; Monti, Jack; Simonson, Amy E.
2015-07-21
Results of FEMA Hazus Program (HAZUS) flood loss analyses performed for New York counties were compared for extents of storm-tide inundation from Hurricane Sandy mapped (1) pre-storm, (2) on November 11, 2012, and (3) on February 14, 2013. The resulting depictions of estimated total building stock losses document how differing amounts of available USGS data affect the resolution and accuracy of storm-tide inundation extents. Using the most accurate results from the final (February 14, 2013) inundation extent, estimated losses range from $380 million to $5.9 billion for individual New York counties; total estimated aggregate losses are about $23 billion for all New York counties. Quality of the inundation extents used in HAZUS analyses has a substantial effect on final results. These findings can be used to inform future post-storm reconstruction planning and estimation of insurance claims.
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
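The matrix-selection method described above is straightforward to sketch: at each time step draw one whole observed matrix at random, project the population, and estimate the stochastic growth rate as the mean log growth. The two matrices below are invented stand-ins for observed annual transition matrices:

```python
import numpy as np

rng = np.random.default_rng(42)
matrices = [np.array([[0.10, 1.50],    # a "good" year (fecundity in top row)
                      [0.30, 0.80]]),
            np.array([[0.05, 0.60],    # a "bad" year
                      [0.15, 0.70]])]

n = np.array([50.0, 50.0])             # initial stage-structured population
log_growth = []
for t in range(5000):
    A = matrices[rng.integers(len(matrices))]  # whole-matrix selection
    n_next = A @ n
    log_growth.append(np.log(n_next.sum() / n.sum()))
    n = n_next / n_next.sum() * 100.0          # rescale to avoid over/underflow
print("stochastic log growth rate:", np.mean(log_growth))
```

Element-selection variants replace the whole-matrix draw with independent draws of each transition probability from a fitted distribution, which is where the distributional assumptions compared in the study enter.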
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
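A cumulative DVH itself is a one-liner: for each dose level, the fraction of the organ's voxels receiving at least that dose. A minimal sketch on a synthetic dose array:

```python
import numpy as np

rng = np.random.default_rng(7)
dose = rng.gamma(shape=5.0, scale=2.0, size=100_000)  # voxel doses (Gy, synthetic)

levels = np.array([5.0, 10.0, 20.0])
dvh = [(dose >= d).mean() for d in levels]            # volume fraction >= dose
for d, v in zip(levels, dvh):
    print(f"V{d:g}Gy = {v:.2%}")                      # e.g., coverage at 10 Gy
```

In the study's pipeline the dose array comes from the reconstructed activity image convolved with a voxel S kernel, so reconstruction noise and partial volume effects propagate directly into this histogram.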
Estimation of the chiral magnetic effect considering the magnetic field response of the QGP medium
NASA Astrophysics Data System (ADS)
Feng, Sheng-Qin; Ai, Xin; Pei, Lei; Sun, Fei; Zhong, Yang; Yin, Zhong-Bao
2018-05-01
The magnetic field plays a major role in searching for the chiral magnetic effect in relativistic heavy-ion collisions. If the lifetime of the magnetic field is too short, as predicted by simulations of the field in vacuum, the chiral magnetic effect will be largely suppressed. However, the lifetime of the magnetic field will become longer when the QGP medium response is considered. We give an estimate of the effect, especially considering the magnetic field response of the QGP medium, and compare it with the experimental results for the background-subtracted correlator H at RHIC and LHC energies. The results show that our method explains the experimental results better at the top RHIC energy than at the LHC energy. Supported by National Natural Science Foundation of China (11747115, 11475068), the CCNU-QLPL Innovation Fund (QLPL2016P01) and the Excellent Youth Foundation of Hubei Scientific Committee (2006ABB036)
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two observer and measurement error only problem.
Empirical mode decomposition-based facial pose estimation inside video sequences
NASA Astrophysics Data System (ADS)
Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing
2010-03-01
We describe a new pose-estimation algorithm via integration of the strength in both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effect of noise, expression changes, and illumination variations as such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparisons to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performances with robustness to noise corruption, illumination variation, and facial expressions.
Manca, Andrea; Lambert, Paul C; Sculpher, Mark; Rice, Nigel
2008-01-01
Healthcare cost-effectiveness analysis (CEA) often uses individual patient data (IPD) from multinational randomised controlled trials. Although designed to account for between-patient sampling variability in the clinical and economic data, standard analytical approaches to CEA ignore the presence of between-location variability in the study results. This is a restrictive limitation given that countries often differ in factors that could affect the results of CEAs, such as the availability of healthcare resources, their unit costs, clinical practice, and patient case-mix. We advocate the use of Bayesian bivariate hierarchical modelling to analyse multinational cost-effectiveness data. This analytical framework explicitly recognises that patient-level costs and outcomes are nested within countries. Using real life data, we illustrate how the proposed methods can be applied to obtain (a) more appropriate estimates of overall cost-effectiveness and associated measure of sampling uncertainty compared to standard CEA; and (b) country-specific cost-effectiveness estimates which can be used to assess the between-location variability of the study results, while controlling for differences in country-specific and patient-specific characteristics. It is demonstrated that results from standard CEA using IPD from multinational trials display a large degree of variability across the 17 countries included in the analysis, producing potentially misleading results. In contrast, ‘shrinkage estimates’ obtained from the modelling approach proposed here facilitate the appropriate quantification of country-specific cost-effectiveness estimates, while weighting the results based on the level of information available within each country. We suggest that the methods presented here represent a general framework for the analysis of economic data collected from different locations. PMID:17641141
Comparison of risk estimates using life-table methods.
Sullivan, R E; Weng, P S
1987-08-01
Risk estimates promulgated by various radiation protection authorities in recent years have become increasingly more complex. Early "integral" estimates in the form of health effects per 0.01 person-Gy (per person-rad) or per 10⁴ person-Gy (per 10⁶ person-rad) have tended to be replaced by "differential" estimates which are age- and sex-dependent and specify both minimum induction (latency) and duration of risk expression (plateau) periods. These latter types of risk estimate must be used in conjunction with a life table in order to reduce them to integral form. In this paper, the life table has been used to effect a comparison of the organ and tissue risk estimates derived in several recent reports. In addition, a brief review of life-table methodology is presented and some features of the models used in deriving differential coefficients are discussed. While the great number of permutations possible with dose-response models, detailed risk estimates and proposed projection models precludes any unique result, the reduced integral coefficients are required to conform to the linear, absolute-risk model recommended for use with the integral risk estimates reviewed.
Using population models to evaluate management alternatives for Gulf Striped Bass
Aspinwall, Alexander P.; Irwin, Elise R.; Lloyd, M. Clint
2017-01-01
Interstate management of Gulf Striped Bass Morone saxatilis has involved a thirty-year cooperative effort by Federal and State agencies in Georgia, Florida and Alabama (Apalachicola-Chattahoochee-Flint Gulf Striped Bass Technical Committee). The Committee has recently focused on developing an adaptive framework for conserving and restoring Gulf Striped Bass in the Apalachicola, Chattahoochee, and Flint River (ACF) system. To evaluate the consequences and tradeoffs among management activities, population models were used to inform management decisions. Stochastic matrix models were constructed with varying recruitment and stocking rates to simulate effects of management alternatives on Gulf Striped Bass population objectives. An age-classified matrix model that incorporated stock fecundity estimates and survival estimates was used to project population growth rate. In addition, combinations of management alternatives (stocking rates, Hydrilla control, harvest regulations) were evaluated with respect to how they influenced Gulf Striped Bass population growth. Annual survival and mortality rates were estimated from catch-curve analysis, while fecundity was estimated and predicted using a linear least squares regression analysis of fish length versus egg number from hatchery brood fish data. Stocking rates and stocked-fish survival rates were estimated from census data. Results indicated that management alternatives could be an effective approach to increasing the Gulf Striped Bass population. Population abundance was greatest under maximum stocking effort, maximum Hydrilla control and a moratorium. Conversely, population abundance was lowest under no stocking, no Hydrilla control and the current harvest regulation. Stocking rates proved to be an effective management strategy; however, low survival estimates of stocked fish (1%) limited the potential for population growth. Hydrilla control increased the survival rate of stocked fish and provided higher estimates of population abundances than maximizing the stocking rate. A change in the current harvest regulation (50% harvest regulation) was not an effective alternative for increasing the Gulf Striped Bass population size. Applying a moratorium to the Gulf Striped Bass fishery increased survival rates from 50% to 74% and resulted in the largest population growth of the individual management alternatives. These results could be used by the Committee to inform management decisions for other populations of Striped Bass in the Gulf Region.
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
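Under the normal, common-variance model the relation driving these results is π = Φ(δ), where δ is the standardized mean difference, so the simple estimator plugs the sample SMD d into the normal CDF. A sketch on simulated data (this plug-in estimator carries the small-sample bias analyzed above):

```python
import numpy as np
from scipy.stats import norm

def pi_hat(treat, control):
    """Estimate the proportion of treated observations above the control mean."""
    nt, nc = len(treat), len(control)
    sp = np.sqrt(((nt - 1) * np.var(treat, ddof=1) +
                  (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2))
    d = (np.mean(treat) - np.mean(control)) / sp   # standardized mean difference
    return norm.cdf(d)                             # plug-in estimate of pi

rng = np.random.default_rng(3)
# True pi here is Phi(0.5) ~= 0.69 by construction.
print(pi_hat(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)))
```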
Aguero-Valverde, Jonathan
2013-01-01
In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once the random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
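For a concrete starting point, a maximum-likelihood zero-inflated Poisson fit, a simpler frequentist stand-in rather than the full Bayes hierarchical models compared above, can be run on synthetic crash counts with statsmodels:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(11)
n = 2000
aadt = rng.lognormal(8, 0.5, n)                # synthetic traffic volumes
X = sm.add_constant(np.log(aadt))
mu = np.exp(-6.0 + 0.6 * np.log(aadt))         # latent crash rate (assumed)
always_zero = rng.random(n) < 0.25             # structural zero state
y = np.where(always_zero, 0, rng.poisson(mu))  # observed crash counts

fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)),
                          inflation='logit').fit(disp=False)
print(fit.summary())                           # count and inflation parameters
```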
Estimating the effective spatial resolution of an AVHRR time series
Meyer, D.J.
1996-01-01
A method is proposed to estimate the spatial degradation of geometrically rectified AVHRR data resulting from misregistration and off-nadir viewing, and to infer the cumulative effect of these degradations over time. Misregistrations are measured using high-resolution imagery as a geometric reference, and pixel sizes are computed directly from satellite zenith angles. The influence of neighbouring features on a nominal 1 km by 1 km pixel over a given site is estimated from the above information and expressed as a spatial distribution whose spatial frequency response is used to define an effective field-of-view (EFOV) for a time series. In a demonstration of the technique applied to images from the Conterminous U.S. AVHRR data set, an EFOV of 3.1 km in the east-west dimension and 19 km in the north-south dimension was estimated for a time series accumulated over a grasslands test site.
Austin, Peter C; Schuster, Tibor
2016-10-01
Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. © The Author(s) 2014.
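Of the three methods, IPTW translates most directly into code: weight each subject by the inverse probability of the treatment actually received, then fit weighted Kaplan-Meier curves in each arm and contrast them. A hedged sketch with lifelines and statsmodels; the file and column names (cohort.csv, time, event, treated, x1, x2) are hypothetical placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import KaplanMeierFitter

df = pd.read_csv("cohort.csv")

# Propensity scores from a logistic model, then ATE-type IPTW weights.
ps = smf.logit("treated ~ x1 + x2", data=df).fit(disp=False).predict(df)
df["w"] = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# Weighted Kaplan-Meier estimate of the marginal survival curve in each arm.
km = {}
for arm, sub in df.groupby("treated"):
    km[arm] = KaplanMeierFitter().fit(sub["time"], sub["event"],
                                      weights=sub["w"])

# Absolute effects follow directly, e.g., difference in survival at t = 5.
diff = km[1].predict(5.0) - km[0].predict(5.0)
print(f"absolute difference in survival probability at t=5: {diff:.3f}")
```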
Molar axis estimation from computed tomography images.
Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li
2016-08-01
Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods require segmenting the tooth volume from Computed Tomography (CT) images and then estimating the axis from that volume. However, they may fail when estimating molar axes because tooth segmentation from CT images is challenging, and current segmentation methods may produce poor results, especially for angulated molars, causing axis estimation to fail. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from the segmented volume, the method estimates it from two projection images. The method includes three steps. (1) The 3D images of each molar are projected onto two 2D image planes. (2) The molar contour is segmented and its 2D axis is extracted in each projection image; Principal Component Analysis (PCA) and a modified symmetry-axis detection algorithm are employed to extract the 2D axis from the segmented contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method was effective for estimating molar axes from CT images.
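Step (2)'s PCA component reduces to taking the leading principal axis of the segmented contour's pixel coordinates. A small sketch on a synthetic elongated point set standing in for a projected molar outline:

```python
import numpy as np

def pca_axis(points):
    """Return centroid and unit direction of the point set's principal axis."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return points.mean(axis=0), vt[0]          # first right-singular vector

rng = np.random.default_rng(5)
t = rng.uniform(-1, 1, 300)
pts = np.column_stack([3 * t + rng.normal(0, 0.1, 300),
                       1 * t + rng.normal(0, 0.1, 300)])
centroid, axis2d = pca_axis(pts)
print(centroid, axis2d)   # direction close to (3, 1) normalized
```

Two such 2D axes, one per projection plane, determine the 3D axis by back-projection, which is the combination step the abstract describes.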
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George
2018-04-01
Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
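The bias mechanism can be reproduced in a few lines of Monte Carlo: classical measurement error attenuates both the exposure-outcome and the negative-control-outcome associations, shrinking the gap between them that the design relies on. All values below are invented:

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 200_000
U = rng.normal(size=n)                 # unmeasured confounder
X = U + rng.normal(size=n)             # true exposure (causal effect 0.5 on Y)
Z = U + rng.normal(size=n)             # negative control (no effect on Y)
Y = 0.5 * X + U + rng.normal(size=n)

def ols_slope(a, y):
    return np.cov(a, y)[0, 1] / np.var(a)

for err_sd in (0.0, 1.0, 2.0):         # increasing classical measurement error
    Xm = X + rng.normal(0, err_sd, n)
    Zm = Z + rng.normal(0, err_sd, n)
    print(err_sd, round(ols_slope(Xm, Y), 3), round(ols_slope(Zm, Y), 3))
# Both coefficients attenuate, so the exposure-minus-control gap used for
# inference shrinks as error grows.
```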
Two new methods to fit models for network meta-analysis with random inconsistency effects.
Law, Martin; Jackson, Dan; Turner, Rebecca; Rhodes, Kirsty; Viechtbauer, Wolfgang
2016-07-28
Meta-analysis is a valuable tool for combining evidence from multiple studies. Network meta-analysis is becoming more widely used as a means to compare multiple treatments in the same analysis. However, a network meta-analysis may exhibit inconsistency, whereby the treatment effect estimates do not agree across all trial designs, even after taking between-study heterogeneity into account. We propose two new estimation methods for network meta-analysis models with random inconsistency effects. The model we consider is an extension of the conventional random-effects model for meta-analysis to the network meta-analysis setting and allows for potential inconsistency using random inconsistency effects. Our first new estimation method uses a Bayesian framework with empirically-based prior distributions for both the heterogeneity and the inconsistency variances. We fit the model using importance sampling and thereby avoid some of the difficulties that might be associated with using Markov Chain Monte Carlo (MCMC). However, we confirm the accuracy of our importance sampling method by comparing the results to those obtained using MCMC as the gold standard. The second new estimation method we describe uses a likelihood-based approach, implemented in the metafor package, which can be used to obtain (restricted) maximum-likelihood estimates of the model parameters and profile likelihood confidence intervals of the variance components. We illustrate the application of the methods using two contrasting examples. The first uses all-cause mortality as an outcome, and shows little evidence of between-study heterogeneity or inconsistency. The second uses "ear discharge" as an outcome, and exhibits substantial between-study heterogeneity and inconsistency. Both new estimation methods give results similar to those obtained using MCMC. The extent of heterogeneity and inconsistency should be assessed and reported in any network meta-analysis. Our two new methods can be used to fit models for network meta-analysis with random inconsistency effects. They are easily implemented using the accompanying R code in the Additional file 1. Using these estimation methods, the extent of inconsistency can be assessed and reported.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed and reveal that the commonly used hanning window leads to smaller interpolation error, which can also be significantly eliminated by the cubic spline interpolation method when estimating the FRF from the step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic decay of the transient error from the O(N⁻²) of the hanning window method to O(N⁻⁴) while only increasing the uncertainty slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum-phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the hanning window method for FRF estimation, and compared with the Gans method and LPM method, it has the advantages of simple computation, less time consumption, and short data requirement; the actual data calculation result of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
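The structural idea of a cosine-sum window is easy to verify numerically: a window of the form w[n] = a0 + a1 cos(2πn/N) + a3 cos(6πn/N) has non-zero DFT bins only at 0, ±1, and ±3, matching the bin pattern stated above. The coefficients below are illustrative assumptions, not the paper's values:

```python
import numpy as np

N = 64
n = np.arange(N)
a0, a1, a3 = 0.5, -0.4, -0.1          # assumed coefficients, chosen so w[0] = 0
w = a0 + a1 * np.cos(2 * np.pi * n / N) + a3 * np.cos(6 * np.pi * n / N)

W = np.fft.fft(w) / N
print(np.nonzero(np.abs(W) > 1e-12)[0])  # -> [0 1 3 61 63], i.e. bins 0, +/-1, +/-3
print(w[0])                              # small front-end value (~0 here)
```

The small front-end value is the property tied above to transient error suppression, since it de-emphasizes the start of the record where the transient lives.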
2013-01-01
Demographic estimates of population at risk often underpin epidemiologic research and public health surveillance efforts. Despite their central importance to epidemiology and public-health practice, little previous attention has been paid to evaluating the magnitude of errors associated with such estimates or the sensitivity of epidemiologic statistics to these errors. In spite of the well-known observation that accuracy in demographic estimates declines as the size of the population to be estimated decreases, demographers continue to face pressure to produce estimates for increasingly fine-grained population characteristics at ever-smaller geographic scales. Unfortunately, little guidance on the magnitude of errors that can be expected in such estimates is currently available in the literature for consideration in small-area epidemiology. This paper attempts to fill this gap by producing a Vintage 2010 set of single-year-of-age estimates for census tracts, then evaluating their accuracy and precision in light of the results of the 2010 Census. These estimates are produced and evaluated for 499 census tracts in New Mexico, for single years of age from 0 to 21 and for each sex individually. The error distributions associated with these estimates are characterized statistically using non-parametric statistics, including the median and the 2.5th and 97.5th percentiles. The impact of these errors is considered through simulations in which observed and estimated 2010 population counts are used as alternative denominators and simulated event counts are used to compute a realistic range of prevalence values. The implications of the results for small-area epidemiologic research in cancer and environmental health are considered. PMID:24359344
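The error-summary and denominator-sensitivity computations follow the pattern sketched below, with invented tract counts standing in for the New Mexico data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical tract-level data: estimated vs. census counts for one age/sex bin
census = rng.poisson(60, size=499).astype(float) + 1      # observed 2010 counts
estimate = census * rng.lognormal(0.0, 0.15, size=499)    # estimates with error

pct_error = 100 * (estimate - census) / census
summary = np.percentile(pct_error, [2.5, 50, 97.5])
print("error % (2.5th, median, 97.5th):", np.round(summary, 1))

# Sensitivity of a prevalence statistic to the choice of denominator
events = rng.poisson(0.05 * census)        # simulated event counts
prev_obs = 1000 * events / census          # per 1,000, census denominator
prev_est = 1000 * events / estimate        # per 1,000, estimated denominator
print("median abs. change in prevalence per 1,000:",
      round(np.median(np.abs(prev_est - prev_obs)), 2))
```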
Yue, Lishengsa; Abdel-Aty, Mohamed; Wu, Yina; Wang, Ling
2018-08-01
Connected Vehicle (CV) technologies, together with other Driving Assistance (DA) technologies, are believed to have great effects on traffic operation and safety, and they are expected to shape the future of our cities. However, little research has estimated the exact safety benefits when all vehicles are equipped with these technologies. This paper seeks to fill the gap by using a general crash avoidance effectiveness framework for major CV&DA technologies to make a comprehensive crash reduction estimate. Twenty technologies tested in recent studies are summarized, and sensitivity analysis is used to estimate their total crash avoidance effectiveness. The results show that the crash avoidance effectiveness of CV&DA technology is significantly affected by the vehicle type and the safety estimation methodology. A 70% crash avoidance rate appears to be the highest achievable effectiveness for CV&DA technologies operating in the real-world environment. Based on the 2005-2008 U.S. GES crash records, this research found that CV&DA technologies could reduce light-vehicle and heavy-truck crashes by at least 32.99% and 40.88%, respectively. Rear-end crashes, for both light vehicles and heavy trucks, show the largest expected benefits from the technologies. The paper also studies the effectiveness of Forward Collision Warning (FCW) technology under fog conditions; the results show that FCW could reduce near-crash events in fog by 35%. Copyright © 2018 Elsevier Ltd. All rights reserved.
Griffiths, Ulla Kou; Clark, Andrew; Hajjeh, Rana
2017-01-01
Objectives: To estimate the cost-effectiveness of Haemophilus influenzae type b (Hib) conjugate vaccine in low- and middle-income countries and to identify the model variables that are most important for the result. Study design: A static decision tree model was developed to predict incremental costs and health impacts. Estimates were generated for 4 country groups: countries eligible for funding by the GAVI Alliance in Africa and Asia, lower middle-income countries, and upper middle-income countries. Values, including disease incidence, case fatality rates, and treatment costs, were based on international country estimates and the scientific literature. Results: From the societal perspective, it is estimated that the probability of Hib conjugate vaccine being cost saving is 34%–53% in GAVI Alliance-eligible African and Asian countries, respectively. In middle-income countries, costs per discounted disability-adjusted life year averted are between US$37 and US$733. Variation in vaccine prices and in the risks of meningitis sequelae and mortality explains most of the difference in results. For all country groups, disease incidence causes the largest part of the uncertainty in the result. Conclusions: Hib conjugate vaccine is cost saving or highly cost-effective in low- and middle-income settings. This conclusion is especially influenced by the recent decline in Hib conjugate vaccine prices and new data revealing the high costs of lost productivity associated with meningitis sequelae. PMID:23773595
Zhang, P; Husten, C; Giovino, G
2000-01-01
OBJECTIVES: This study evaluated the direct effect of the tobacco price support program on domestic cigarette consumption. METHODS: We developed an economic model of demand and supply of US tobacco to estimate how much the price support program increases the price of tobacco. We calculated the resultant increase in cigarette prices from the change in the tobacco price and the quantity of domestic tobacco contained in US cigarettes. We then assessed the reduction in cigarette consumption attributable to the price support program by applying the estimated increase in the cigarette price to assumed price elasticities of demand for cigarettes. RESULTS: We estimated that the tobacco price support program increased the price of tobacco leaf by $0.36 per pound. This higher tobacco price translates to a $0.01 increase in the price of a pack of cigarettes and an estimated 0.21% reduction in cigarette consumption. CONCLUSION: Because the tobacco price support program increases the price of cigarettes minimally, its potential health benefit is likely to be small. The adverse political effect of the tobacco program might substantially outweigh the potential direct benefit of the program on cigarette consumption. PMID:10800423
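The demand-side arithmetic is a one-liner once the pass-through is known. In the sketch below, the per-pack leaf content, retail price and elasticity are plausible assumptions rather than the paper's exact inputs, but they reproduce the reported figures:

```python
# Worked example of the pass-through and elasticity logic; the per-pack
# domestic leaf content, retail price, and elasticity are assumed values.
leaf_price_rise = 0.36          # $/lb increase attributed to the program
domestic_leaf_per_pack = 0.028  # lb of domestic leaf per pack (assumed)
pack_price = 1.90               # retail price per pack, $ (assumed)
elasticity = -0.4               # price elasticity of demand (assumed)

pack_price_rise = leaf_price_rise * domestic_leaf_per_pack
pct_consumption_change = elasticity * pack_price_rise / pack_price * 100
print(f"price rise per pack: ${pack_price_rise:.3f}")      # ~ $0.01
print(f"consumption change: {pct_consumption_change:.2f}%")  # ~ -0.21%
```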
Generalizing Observational Study Results: Applying Propensity Score Methods to Complex Surveys
DuGoff, Eva H; Schuler, Megan; Stuart, Elizabeth A
2014-01-01
Objective: To provide a tutorial for using propensity score methods with complex survey data. Data Sources: Simulated data and the 2008 Medical Expenditure Panel Survey. Study Design: Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. Principal Findings: In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Conclusions: Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. PMID:23855598
Thin-Layering Effect on Estimating Seismic Attenuation in Methane Hydrate-Bearing Sediments
NASA Astrophysics Data System (ADS)
Lee, K.; Matsushima, J.
2012-12-01
Seismic attenuation is one of the important parameters providing information for both the detection and quantitative assessment of gas hydrates. We estimated seismic attenuation (1/Q) from surface seismic data acquired at the Nankai Trough in Japan. We adapted the Q-versus-offset (QVO) method to calculate robust and continuous interval attenuations from CMP gathers. We observed high attenuation in methane hydrate-bearing sediments over the BSR region. However, some negative 1/Q values also appear, which would mean that the amplitude of high-frequency components increases with depth. Such results may be due to the tuning effect. Here, we carried out numerical tests to examine how the thin-layering effect influences seismic attenuation estimates. The results showed that tuning considerably influences the attenuation results, causing lower 1/Q values (lower attenuation) and negative 1/Q values.
Chumney, Elinor C G; Biddle, Andrea K; Simpson, Kit N; Weinberger, Morris; Magruder, Kathryn M; Zelman, William N
2004-01-01
As cost-effectiveness analyses (CEAs) are increasingly used to inform policy decisions, there is a need for more information on how different cost determination methods affect cost estimates and the degree to which the resulting cost-effectiveness ratios (CERs) may be affected. The lack of specificity of diagnosis-related groups (DRGs) could mean that they are ill-suited for costing applications in CEAs. Yet the implications of using International Classification of Diseases, 9th edition (ICD-9) codes or a form of disease-specific risk-group stratification instead of DRGs have yet to be clearly documented. Our objective was to demonstrate the implications of different disease coding mechanisms for costs and the magnitude of error that could be introduced in head-to-head comparisons of resulting CERs. We based our analyses on a previously published Markov model for HIV/AIDS therapies. We used the Healthcare Cost and Utilisation Project Nationwide Inpatient Sample (HCUP-NIS) data release 6, which contains all-payer data on hospital inpatient stays from selected states. We added costs for the mean number of hospitalisations, derived from analyses based on either DRG or ICD-9 codes or risk-group stratification cost weights, to the standard outpatient and prescription drug costs to yield an estimate of total charges for each AIDS-defining illness (ADI). Finally, we estimated the Markov model three times with the appropriate ADI cost weights to obtain CERs specific to the use of either DRG or ICD-9 codes or risk group. Contrary to expectations, we found that disease-specific coding/grouping by DRG codes, ICD-9 codes or risk group resulted in very similar CER estimates for highly active antiretroviral therapy. The large variations in the specific ADI cost weights across the three coding approaches were especially notable. However, because no one approach produced consistently higher estimates than the others, the Markov model's weighted cost per event and the resulting CERs were remarkably close in value to one another. Although DRG codes are based on broader categories and contain less information than ICD-9 codes, in practice the choice between DRGs and ICD-9 codes may have little effect on CEA results in heterogeneous conditions such as HIV/AIDS.
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
Stochastic Gabor reflectivity and acoustic impedance inversion
NASA Astrophysics Data System (ADS)
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
To delineate subsurface lithology and estimate the petrophysical properties of a reservoir, acoustic impedance (AI), the result of seismic inversion, can be used. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal to obtain a reflectivity series and subsequently to transform those reflections to AI. To carry out seismic inversion correctly, it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are unavoidable; these are the pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods attempt to estimate the reflectivity series, their incorrect assumptions mean the estimates will not be exact, though they may still be useful. Converting those reflection series to AI, merged with a low-frequency initial model, can help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflectivity series to absolute AI using a bias from well logs. To this end, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and estimated wavelet properties in different windows. Working with different time windows made it possible to create a time-variant kernel matrix, which was used to remove wavelet effects from the seismic data. The result is a reflectivity series that does not rely on the stationarity assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to demonstrate the ability of the introduced method. The results highlight that the time cost of the inversion is negligible compared with general Gabor inversion in the frequency domain; in addition, the bias from well logs helps the method estimate reliable AI. To assess the effect of random noise on the deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio of 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set, even when a large number of regularization parameters is used. Despite the low signal level, stochastic Gabor inversion not only estimates the wavelet's properties correctly but also, because of the bias from well logs, yields an inversion result very close to the true AI. Comparing the deterministic and introduced inversion results on a real data set shows that the low-resolution results obtained with deterministic inversion, especially in the deeper parts of seismic sections, create significant reliability problems for seismic prospects, whereas this pitfall is resolved completely by stochastic Gabor inversion. The AI estimated using Gabor inversion in the time domain is much better and faster than that from general Gabor inversion in the frequency domain, owing to the extra number of windows the latter requires to analyze the time-frequency information and the temporal increment between windows. In contrast, stochastic Gabor inversion can estimate trustworthy physical properties close to the real characteristics. Applied to a real data set, it made it possible to detect the direction of a volcanic intrusion and to delineate the lithology distribution along the fan.
Comparing the inversion results highlights the efficiency of stochastic Gabor inversion to delineate lateral lithology changes because of the improved frequency content and zero phasing of the final inversion volume.
Jacobson, Eiren K; Forney, Karin A; Barlow, Jay
2017-01-01
Passive acoustic monitoring is a promising approach for monitoring long-term trends in harbor porpoise (Phocoena phocoena) abundance. Before passive acoustic monitoring can be implemented to estimate harbor porpoise abundance, information about the detectability of harbor porpoise is needed to convert recorded numbers of echolocation clicks to harbor porpoise densities. In the present study, paired data from a grid of nine passive acoustic click detectors (C-PODs, Chelonia Ltd., United Kingdom) and three days of simultaneous aerial line-transect visual surveys were collected over a 370 km² study area. The focus of the study was estimating the effective detection area of the passive acoustic sensors, defined as the product of the sound production rate of individual animals and the area within which those sounds are detected by the sensors. Visually estimated porpoise densities were used as informative priors in a Bayesian model to solve for the effective detection area of individual harbor porpoises. This model-based approach resulted in a posterior distribution of the effective detection area of individual harbor porpoises consistent with previously published values. This technique is a viable alternative for estimating the effective detection area of passive acoustic sensors when other experimental approaches are not feasible.
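The identity underlying the estimate — clicks ≈ density × click rate × effective detection area — can be inverted by Monte Carlo propagation of a density prior. This is a simplified stand-in for the full Bayesian model, and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# clicks = density * click_rate * effective_area, so given a visually informed
# density prior the effective detection area can be solved for.
clicks_per_day = 240.0                          # mean clicks at one C-POD (assumed)
click_rate = 6000.0                             # clicks/animal/day (assumed)
density_prior = rng.normal(0.8, 0.2, 100_000)   # porpoise/km^2, from visual survey
density_prior = density_prior[density_prior > 0]

effective_area = clicks_per_day / (density_prior * click_rate)  # km^2
lo, med, hi = np.percentile(effective_area, [2.5, 50, 97.5])
print(f"effective detection area: {med:.3f} km^2 (95% interval {lo:.3f}-{hi:.3f})")
```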
Precision and relative effectiveness of a purse seine for sampling age-0 river herring in lakes
Devine, Matthew T.; Roy, Allison; Whiteley, Andrew R.; Gahagan, Benjamin I.; Armstrong, Michael P.; Jordaan, Adrian
2018-01-01
Stock assessments for anadromous river herring, collectively Alewife Alosa pseudoharengus and Blueback Herring A. aestivalis, lack adequate demographic information, particularly with respect to early life stages. Although sampling adult river herring is increasingly common throughout their range, currently no standardized, field‐based, analytical methods exist for estimating juvenile abundance in freshwater lakes. The objective of this research was to evaluate the relative effectiveness and sampling precision of a purse seine for estimating densities of age‐0 river herring in freshwater lakes. We used a purse seine to sample age‐0 river herring in June–September 2015 and June–July 2016 in 16 coastal freshwater lakes in the northeastern USA. Sampling effort varied from two seine hauls to more than 50 seine hauls per lake. Catch rates were highest in June and July, and sampling precision was maximized in July. Sampling at night (versus day) in open water (versus littoral areas) was most effective for capturing newly hatched larvae and juveniles up to ca. 100 mm TL. Bootstrap simulation results indicated that sampling precision of CPUE estimates increased with sampling effort, and there was a clear threshold beyond which increased effort resulted in negligible increases in precision. The effort required to produce precise CPUE estimates, as determined by the CV, was dependent on lake size; river herring densities could be estimated with up to 10 purse‐seine hauls (one‐two nights) in a small lake (<50 ha) and 15–20 hauls (two‐three nights) in a large lake (>50 ha). Fish collection techniques using a purse seine as described in this paper are likely to be effective for estimating recruit abundance of river herring in freshwater lakes across their range.
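The bootstrap precision analysis can be reproduced in outline as below, using hypothetical catch-per-haul data; the CV flattens as hauls accumulate, which is the effort threshold described above:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical catch-per-haul data for one lake (fish per purse-seine haul)
catches = rng.negative_binomial(2, 0.05, size=50).astype(float)

def bootstrap_cv(sample_size, n_boot=2000):
    """CV of mean CPUE when `sample_size` hauls are resampled with replacement."""
    means = rng.choice(catches, (n_boot, sample_size), replace=True).mean(axis=1)
    return means.std() / means.mean()

for n in (2, 5, 10, 15, 20, 30):
    print(f"{n:>2} hauls: CV of CPUE = {bootstrap_cv(n):.2f}")
```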
Friesen, Melissa C; Bassig, Bryan A; Vermeulen, Roel; Shu, Xiao-Ou; Purdue, Mark P; Stewart, Patricia A; Xiang, Yong-Bing; Chow, Wong-Ho; Ji, Bu-Tian; Yang, Gong; Linet, Martha S; Hu, Wei; Gao, Yu-Tang; Zheng, Wei; Rothman, Nathaniel; Lan, Qing
2017-01-01
To provide insight into the contributions of exposure measurements to job exposure matrices (JEMs), we examined the robustness of an association between occupational benzene exposure and non-Hodgkin lymphoma (NHL) to varying exposure assessment methods. NHL risk was examined in a prospective population-based cohort of 73087 women in Shanghai. A mixed-effects model that combined a benzene JEM with >60000 short-term, area benzene inspection measurements was used to derive two sets of measurement-based benzene estimates: 'job/industry-specific' estimates (our presumed best approach) were derived from the model's fixed effects (year, JEM intensity rating) and random effects (occupation, industry); 'calibrated JEM' estimates were derived using only the fixed effects. 'Uncalibrated JEM' (using the ordinal JEM ratings) and exposure duration estimates were also calculated. Cumulative exposure for each subject was calculated for each approach based on varying exposure definitions defined using the JEM's probability ratings. We examined the agreement between the cumulative metrics and evaluated changes in the benzene-NHL associations. For our primary exposure definition, the job/industry-specific estimates were moderately to highly correlated with all other approaches (Pearson correlation 0.61-0.89; Spearman correlation > 0.99). All these metrics resulted in statistically significant exposure-response associations for NHL, with negligible gain in model fit from using measurement-based estimates. Using more sensitive or specific exposure definitions resulted in elevated but non-significant associations. The robust associations observed here with varying benzene assessment methods provide support for a benzene-NHL association. While incorporating exposure measurements did not improve model fit, the measurements allowed us to derive quantitative exposure-response curves. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
NASA Astrophysics Data System (ADS)
Quiroga, Sonia; Suárez, Cristina
2016-06-01
This paper examines the effects of climate change and drought on agricultural incomes in Spanish rural areas. The research focuses on the effects of these extreme climatological events through response functions, considering effects on crop productivity and average incomes. Among the impacts of droughts, we focus on potential effects on income distribution. The study of the effects of abnormally dry periods is therefore needed to analyze diverse long-term social aspects. We estimate crop production functions for a range of Mediterranean crops in Spain and use a decomposition-of-inequality measure to estimate the impact of climate change and drought on yield disparities. Certain adaptation measures may require a better public understanding of risks to achieve general acceptance. We provide empirical estimates of the marginal effects of the two impacts considered: farms' average income and income distribution. Our estimates consider crop production responses to both biophysical and socio-economic factors to analyse long-term implications for competitiveness and disparities. We find disparities in adaptation priorities depending on the crop and the region analysed.
Cost Effectiveness of Influenza Vaccine Choices in Children Aged 2–8 Years in the U.S.
Smith, Kenneth J.; Raviotta, Jonathan M.; DePasse, Jay V.; Brown, Shawn T.; Shim, Eunha; Nowalk, Mary Patricia; Zimmerman, Richard K.
2015-01-01
Introduction: Prior evidence found live attenuated influenza vaccine (LAIV) more effective than inactivated influenza vaccine (IIV) in children aged 2–8 years, leading CDC in 2014 to prefer LAIV use in this group. However, since 2013, LAIV has not proven superior, leading CDC in 2015 to rescind its LAIV preference statement. Here, the cost effectiveness of preferred LAIV use compared with IIV in children aged 2–8 years is estimated. Methods: A Markov model estimated vaccination strategy cost effectiveness in terms of cost per quality-adjusted life year gained. Base case assumptions were: equal vaccine uptake, IIV use when LAIV was not indicated (in 11.7% of the cohort), and no indirect vaccination effects. Sensitivity analyses included estimates of indirect effects from both equation- and agent-based models. Analyses were performed in 2014–2015. Results: Using prior effectiveness data in children aged 2–8 years (LAIV=83%, IIV=64%), preferred LAIV use was less costly and more effective than IIV (dominant), with results sensitive only to variation in LAIV and IIV effectiveness. Using 2014–2015 U.S. effectiveness data (LAIV=0%, IIV=15%), IIV was dominant. In two-way sensitivity analyses, LAIV use was cost saving over the entire range of IIV effectiveness (0%–81%) when absolute LAIV effectiveness was >7.1% higher than IIV, but never cost saving when absolute LAIV effectiveness was <3.5% higher than IIV. Conclusions: Results support CDC's decision to no longer prefer LAIV use and provide guidance on effectiveness differences between influenza vaccines that might lead to a preferential LAIV recommendation for children aged 2–8 years. PMID:26868283
DOT National Transportation Integrated Search
1998-02-01
A general methodology is developed for estimating the change in the number of injuries and fatalities expected as a result of a change in vehicle crashworthiness design. It is assumed that crash tests have provided information on dummy response measu...
NASA Astrophysics Data System (ADS)
Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad
2015-10-01
In this paper, vehicle longitudinal velocity during braking is estimated by measuring the wheel speeds. A new algorithm based on the unknown-input Kalman filter is developed to estimate the vehicle longitudinal velocity with minimum mean square error and without using the value of the braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. The effectiveness of the method is shown by designing a real experiment and comparing the estimation result with the actual longitudinal velocity computed from a three-axis accelerometer output.
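A minimal stand-in for such a filter is sketched below: the unknown braking input is absorbed by a random-walk model on acceleration, which is a simplification of the authors' unknown-input formulation, with four noisy wheel-speed measurements (wheel slip is ignored here):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 500

# Simulated braking: deceleration ramps in; wheel speeds observe velocity
# with noise (slip/lock-up is not modeled in this sketch).
a_true = -np.minimum(np.arange(n) * dt * 8.0, 6.0)          # m/s^2
v_true = 30.0 + np.cumsum(a_true) * dt                      # m/s
z = v_true[:, None] + rng.normal(0, 0.3, (n, 4))            # 4 wheel speeds

# Constant-velocity/acceleration model; the unknown braking torque is
# handled by inflated process noise on the acceleration state.
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.tile([1.0, 0.0], (4, 1))
Q = np.diag([1e-4, 5e-1])
R = np.eye(4) * 0.3**2
x, P = np.array([z[0].mean(), 0.0]), np.eye(2)

est = []
for k in range(n):
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z[k] - H @ x)                 # update
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

rmse = np.sqrt(np.mean((np.array(est) - v_true) ** 2))
print(f"velocity RMSE: {rmse:.3f} m/s")
```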
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Russell, P. B.
2000-01-01
We estimate solar radiative flux changes due to aerosols over the mid-latitude North Atlantic by combining optical depths from AVHRR measurements with aerosol properties from the recent TARFOX program. Results show that, over the ocean, the aerosol decreases the net radiative flux at the tropopause and therefore has a cooling effect. Cloud-free, 24-hour average flux changes range from -9 W/sq m near the eastern US coast in summer to -1 W/sq m in the mid-Atlantic during winter. Cloud-free North Atlantic regional averages range from -5.1 W/sq m in summer to -1.7 W/sq m in winter, with an annual average of -3.5 W/sq m. Cloud effects, estimated from ISCCP data, reduce the regional annual average to -0.8 W/sq m. All values are for the moderately absorbing TARFOX aerosol (omega(0.55 microns) = 0.9); values for a nonabsorbing aerosol are approx. 30% more negative. We compare our results to a variety of other calculations of aerosol radiative effects.
Estimating the coverage of mental health programmes: a systematic review
De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram
2014-01-01
Background: The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods: Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results: We identified 15 735 unique records, of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions: Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874
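Contact coverage itself is a simple ratio of routine utilization data to an estimated population in need, as the sketch below illustrates with invented inputs:

```python
# Contact coverage = service utilization / estimated population in need.
# All inputs below are illustrative.
district_pop = 500_000
prevalence = 0.04            # 12-month prevalence of the target disorder (assumed)
treated_12m = 3_400          # unique service users in 12 months (routine data)

in_need = district_pop * prevalence
contact_coverage = treated_12m / in_need
print(f"contact coverage: {contact_coverage:.1%}")   # -> 17.0%
```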
Do diabetes group visits lead to lower medical care charges?
Clancy, Dawn E; Dismuke, Clara E; Magruder, Kathryn Marley; Simpson, Kit N; Bradford, David
2008-01-01
To evaluate whether attending diabetes group visits (GVs) leads to lower medical care charges for inadequately insured patients with type 2 diabetes mellitus (DM). Randomized controlled clinical trial. Data were abstracted from financial records for 186 patients with uncontrolled type 2 DM randomized to receive care in GVs or usual care for 12 months. Mann-Whitney tests for differences of means for outpatient visits (primary and specialty care), emergency department (ED) visits, and inpatient stays were performed. Separate charge models were developed for primary and specialty outpatient visits. Because GV adherence is potentially dependent on unobserved patient characteristics, treatment effect models of outpatient charges and specialty care visits were estimated using maximum likelihood methods. Mann-Whitney test results indicated that GV patients had reduced ED and total charges but more outpatient charges than usual care patients. Ordinary least squares estimations confirmed that GVs increased outpatient visit charges; however, controlling for endogeneity by estimating a treatment effect model of outpatient visit charges showed that GVs statistically significantly reduced outpatient charges (P <.001). Estimation of a separate treatment effect model of specialty care visits confirmed that GV effects on outpatient visit charges occurred via a reduction in specialty care visits. After controlling for endogeneity via estimation of a treatment effect model, GVs statistically significantly reduced outpatient visit charges. Estimation of a separate treatment effect model of specialty care visits indicated that GVs likely substitute for more expensive specialty care visits.
The effect of e-cigarette indoor vaping restrictions on adult prenatal smoking and birth outcomes.
Cooper, Michael T; Pesko, Michael F
2017-12-01
We estimate the effect of county-level e-cigarette indoor vaping restrictions on adult prenatal smoking and birth outcomes using United States birth record data for 7 million pregnant women living in places already comprehensively banning the indoor use of traditional cigarettes. We use both cross-sectional and panel data to estimate our difference-in-differences models. Our panel model results suggest that adoption of a comprehensive indoor vaping restriction increased prenatal smoking by 2.0 percentage points, which is double the estimate obtained from a cross-sectional model. We also document heterogeneity in effect sizes by age, education, and type of insurance. Copyright © 2017 Elsevier B.V. All rights reserved.
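The difference-in-differences design reduces to an interaction term in a regression. A minimal synthetic version (linear probability model, county-clustered standard errors, invented effect sizes) might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 20_000

# Synthetic birth records: half the counties adopt a vaping restriction
# ("treated"), observed before/after adoption ("post").
df = pd.DataFrame({
    "county": rng.integers(0, 200, n),
    "post": rng.integers(0, 2, n),
})
df["treated"] = (df["county"] < 100).astype(int)
base = 0.10 + 0.02 * df["treated"] - 0.01 * df["post"]
effect = 0.02 * df["treated"] * df["post"]          # true DiD effect = +2 pp
df["smoked"] = rng.binomial(1, base + effect)

# The coefficient on treated:post is the difference-in-differences estimate.
m = smf.ols("smoked ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county"]})
print(m.params["treated:post"], m.bse["treated:post"])
```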
Using Internet search engines to estimate word frequency.
Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E
2002-05-01
The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
An error-based micro-sensor capture system for real-time motion estimation
NASA Astrophysics Data System (ADS)
Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li
2017-10-01
A wearable micro-sensor motion capture system, with 16 IMUs and an error-compensating complementary filter algorithm for real-time motion estimation, has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error, and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to enable accurate displacement estimation across different types of activities. The performance of the system was benchmarked against the VICON optical capture system. The experimental results demonstrate the effectiveness of the system in tracking daily activities, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
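A one-axis complementary filter conveys the core idea: integrate the gyro at high frequency and anchor to the accelerometer tilt at low frequency. The sketch below is a simplification of the paper's error-compensated multi-IMU algorithm, with invented signal parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.01, 2000
t = np.arange(n) * dt

# Simulated pitch motion with a gyro bias and accelerometer noise
pitch = 0.4 * np.sin(2 * np.pi * 0.5 * t)                      # rad
gyro = np.gradient(pitch, dt) + 0.05 + rng.normal(0, 0.02, n)  # rad/s, biased
acc_pitch = pitch + rng.normal(0, 0.05, n)                     # rad, noisy tilt

# First-order complementary filter: trust the integrated gyro at high
# frequency and the accelerometer tilt at low frequency.
alpha = 0.98
est = np.zeros(n)
for k in range(1, n):
    est[k] = alpha * (est[k-1] + gyro[k] * dt) + (1 - alpha) * acc_pitch[k]

print(f"RMS orientation error: {np.sqrt(np.mean((est - pitch)**2)):.4f} rad")
```

The fusion coefficient alpha sets the crossover frequency; the paper's contribution is, in effect, to estimate and remove the bias and disturbance terms rather than merely averaging them away.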
Effect of retransmission and retrodiction on estimation and fusion in long-haul sensor networks
Liu, Qiang; Wang, Xin; Rao, Nageswara S. V.; ...
2016-01-01
In a long-haul sensor network, sensors are remotely deployed over a large geographical area to perform certain tasks, such as target tracking. In this work, we study the scenario where sensors take measurements of one or more dynamic targets and send state estimates of the targets to a fusion center via satellite links. The severe loss and delay inherent in the satellite channels reduce the number of estimates successfully arriving at the fusion center, thereby limiting the potential fusion gain and resulting in suboptimal accuracy of the fused estimates. In addition, errors in target-sensor data association can also degrade the estimation performance. To mitigate the effect of imperfect communications on state estimation and fusion, we consider retransmission and retrodiction. The system adopts certain retransmission-based transport protocols so that lost messages can be recovered over time. In addition, retrodiction/smoothing techniques are applied so that the chances of incurring excess delay due to retransmission are greatly reduced. We analyze the extent to which retransmission and retrodiction can improve the performance of delay-sensitive target tracking tasks under variable communication loss and delay conditions. Finally, simulation results of a ballistic target tracking application are shown to demonstrate the validity of our analysis.
Effects of 16S rDNA sampling on estimates of the number of endosymbiont lineages in sucking lice
Burleigh, J. Gordon; Light, Jessica E.; Reed, David L.
2016-01-01
Phylogenetic trees can reveal the origins of endosymbiotic lineages of bacteria and detect patterns of co-evolution with their hosts. Although taxon sampling can greatly affect phylogenetic and co-evolutionary inference, most hypotheses of endosymbiont relationships are based on few available bacterial sequences. Here we examined how different sampling strategies of Gammaproteobacteria sequences affect estimates of the number of endosymbiont lineages in parasitic sucking lice (Insecta: Phthiraptera: Anoplura). We estimated the number of louse endosymbiont lineages using both newly obtained and previously sequenced 16S rDNA bacterial sequences and more than 42,000 16S rDNA sequences from other Gammaproteobacteria. We also performed parametric and nonparametric bootstrapping experiments to examine the effects of phylogenetic error and uncertainty on these estimates. Sampling of 16S rDNA sequences affects the estimates of endosymbiont diversity in sucking lice until a threshold of genetic diversity is reached, the size of which depends on the sampling strategy. Sampling by maximizing the diversity of 16S rDNA sequences is more efficient than randomly sampling available 16S rDNA sequences. Although simulation results validate estimates of multiple endosymbiont lineages in sucking lice, the bootstrap results suggest that the precise number of endosymbiont origins is still uncertain. PMID:27547523
Kassanjee, Reshma; De Angelis, Daniela; Farah, Marian; Hanson, Debra; Labuschagne, Jan Phillipus Lourens; Laeyendecker, Oliver; Le Vu, Stéphane; Tom, Brian; Wang, Rui; Welte, Alex
2017-03-01
The application of biomarkers for 'recent' infection in cross-sectional HIV incidence surveillance requires the estimation of critical biomarker characteristics. Various approaches have been employed for using longitudinal data to estimate the Mean Duration of Recent Infection (MDRI) - the average time in the 'recent' state. In this systematic benchmarking of MDRI estimation approaches, a simulation platform was used to measure accuracy and precision of over twenty approaches, in thirty scenarios capturing various study designs, subject behaviors and test dynamics that may be encountered in practice. Results highlight that assuming a single continuous sojourn in the 'recent' state can produce substantial bias. Simple interpolation provides useful MDRI estimates provided subjects are tested at regular intervals. Regression performs the best - while 'random effects' describe the subject-clustering in the data, regression models without random effects proved easy to implement, stable, and of similar accuracy in scenarios considered; robustness to parametric assumptions was improved by regressing 'recent'/'non-recent' classifications rather than continuous biomarker readings. All approaches were vulnerable to incorrect assumptions about subjects' (unobserved) infection times. Results provided show the relationships between MDRI estimation performance and the number of subjects, inter-visit intervals, missed visits, loss to follow-up, and aspects of biomarker signal and noise.
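One of the regression approaches can be sketched as follows: regress the 'recent'/'non-recent' classifications on time since infection and integrate the fitted probability curve up to the recency cutoff to obtain the MDRI. The quadratic logistic curve and the simulated panel below are illustrative conveniences, not one of the benchmarked estimators:

```python
import numpy as np
from scipy.integrate import trapezoid
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Simulated data: times since infection (years) and recent/non-recent
# classifications; the true P(recent | t) decays with mean duration 0.5 y.
t_obs = rng.uniform(0, 2.0, 3000)
recent = rng.binomial(1, np.exp(-t_obs / 0.5))

# Logistic regression of classification on time (quadratic for flexibility)
X = sm.add_constant(np.column_stack([t_obs, t_obs**2]))
fit = sm.GLM(recent, X, family=sm.families.Binomial()).fit()

# MDRI = integral of fitted P(recent | t) over [0, T], cutoff T = 2 years
grid = np.linspace(0, 2.0, 401)
Xg = sm.add_constant(np.column_stack([grid, grid**2]))
mdri_years = trapezoid(fit.predict(Xg), grid)
true_mdri = 0.5 * (1 - np.exp(-2.0 / 0.5))
print(f"estimated MDRI: {365 * mdri_years:.0f} days "
      f"(true ~ {365 * true_mdri:.0f})")
```

Regressing binary classifications rather than continuous biomarker readings is the variant the abstract reports as more robust to parametric assumptions.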
Inadequacy of internal covariance estimation for super-sample covariance
NASA Astrophysics Data System (ADS)
Lacasa, Fabien; Kunz, Martin
2017-08-01
We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been presented previously in the literature, because it is effectively a rescaling of the covariance of the subsample area. However, we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts, which allows the bias of each covariance estimator to be predicted exactly. We find significant effects for small survey areas or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected: biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on the data itself or on a single simulation, leaving analytical prediction and simulation suites as possible SSC predictors.
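The SSC underprediction is easy to demonstrate numerically: give all patches a shared background fluctuation and compare the delete-one jackknife variance of the mean count with its true sampling variance (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_patch, n_sim, mean_n = 20, 4000, 100.0

theta, jk_var = [], []
for _ in range(n_sim):
    # A survey-wide background mode modulates all patches coherently (the
    # SSC term); within-survey scatter is Poisson.
    delta_b = rng.normal(0, 0.05)
    counts = rng.poisson(mean_n * (1 + delta_b), n_patch).astype(float)
    theta.append(counts.mean())
    # Delete-one jackknife variance of the mean patch count
    loo = (counts.sum() - counts) / (n_patch - 1)
    jk_var.append((n_patch - 1) / n_patch * np.sum((loo - loo.mean())**2))

print(f"true var of mean count : {np.var(theta):.1f}")     # Poisson + SSC
print(f"mean jackknife variance: {np.mean(jk_var):.1f}")   # Poisson only
```

The jackknife recovers only the patch-to-patch scatter (here ~5); the coherent mode adds a term (here ~25) it cannot see, which is the bias the paper quantifies.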
A critique of recent economic evaluations of community water fluoridation
Ko, Lee; Thiessen, Kathleen M
2015-01-01
Background: Although community water fluoridation (CWF) results in a range of potential contaminant exposures, little attention has been given to many of the possible impacts. A central argument for CWF is its cost-effectiveness. The U.S. Government states that $1 spent on CWF saves $38 in dental treatment costs. Objective: To examine the reported cost-effectiveness of CWF. Methods: Methods and underlying data from the primary U.S. economic evaluation of CWF are analyzed and corrected calculations are described. Other recent economic evaluations are also examined. Results: Recent economic evaluations of CWF contain defective estimations of both costs and benefits. Incorrect handling of dental treatment costs and flawed estimates of effectiveness lead to overestimated benefits. The real-world costs to water treatment plants and communities are not reflected. Conclusions: Minimal correction reduced the savings to $3 per person per year (PPPY) for a best-case scenario, but this savings is eliminated by the estimated cost of treating dental fluorosis. PMID:25471729
Vertical variation of ice particle size in convective cloud tops.
van Diedenhoven, Bastiaan; Fridlind, Ann M; Cairns, Brian; Ackerman, Andrew S; Yorks, John E
2016-05-16
A novel technique is used to estimate derivatives of ice effective radius with respect to height near convective cloud tops (dr_e/dz) from airborne shortwave reflectance measurements and lidar. Values of dr_e/dz are about -6 μm/km for cloud tops below the homogeneous freezing level, increasing to near 0 μm/km above the estimated level of neutral buoyancy. Retrieved dr_e/dz compares well with previously documented remote sensing and in situ estimates. Effective radii decrease with increasing cloud top height, while cloud top extinction increases. This is consistent with weaker size sorting in high, dense cloud tops above the level of neutral buoyancy, where fewer large particles are present, and with stronger size sorting in lower cloud tops that are less dense. The results also confirm that cloud-top trends of effective radius can generally be used as surrogates for trends with height within convective cloud tops. These results provide valuable observational targets for model evaluation.
Conducting Meta-Analyses Based on p Values
van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.
2016-01-01
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
Gold, Heather Taffet; Sorbero, Melony E. S.; Griggs, Jennifer J.; Do, Huong T.; Dick, Andrew W.
2013-01-01
Analysis of observational cohort data is subject to bias from unobservable risk selection. We compared econometric models and treatment effectiveness estimates using the linked Surveillance, Epidemiology, and End Results (SEER)-Medicare claims data for women diagnosed with ductal carcinoma in situ. Treatment effectiveness estimates for mastectomy and breast conserving surgery (BCS) with or without radiotherapy were compared using three different models: simultaneous-equations model, discrete-time survival model with unobserved heterogeneity (frailty), and proportional hazards model. Overall trends in disease-free survival (DFS), or time to first subsequent breast event, by treatment are similar regardless of the model, with mastectomy yielding the highest DFS over 8 years of follow-up, followed by BCS with radiotherapy, and then BCS alone. Absolute rates and direction of bias varied substantially by treatment strategy. DFS was underestimated by single-equation and frailty models compared to the simultaneous-equations model and RCT results for BCS with RT and overestimated for BCS alone. PMID:21602195
NASA Technical Reports Server (NTRS)
Fauchez, T.; Platnick, S.; Meyer, K.; Sourdeval, O.; Cornet, C.; Zhang, Z.; Szczap, F.
2016-01-01
This study presents preliminary results on the effect of cirrus heterogeneities on top-of-atmosphere (TOA) simulated radiances or reflectances for MODIS channels centered at 0.86, 2.21, 8.56, 11.01 and 12.03 micrometers, and on cloud optical properties retrieved with a research-level optimal estimation method (OEM). Synthetic cirrus cloud fields are generated using a 3D cloud generator (3DCLOUD) and radiances/reflectances are simulated using a 3D radiative transfer code (3DMCPOL). We find significant differences between the heterogeneity effects on visible and near-infrared (VNIR) radiances and on thermal infrared (TIR) radiances. However, when both wavelength ranges are combined, heterogeneity effects are dominated by the VNIR horizontal radiative transport effect. As a result, small optical thicknesses are overestimated and large ones are underestimated. Retrieved effective diameters are found to be only slightly affected, in contrast to retrievals using TIR channels only.
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile
2017-03-01
Assessment of the effective connectivity among different brain regions during seizures is a crucial problem in neuroscience today. Consequently, a new model inversion framework for brain function imaging is introduced in this manuscript. The framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM), which cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. Once the epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
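A minimal particle swarm applied to a toy inverse problem conveys the estimation step; the damped oscillator below is a stand-in for the neural mass model, whose equations are not restated here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy inverse problem: recover two parameters of a damped oscillator from a
# noisy observed trace (stand-in for NMM connectivity parameters).
def simulate(params, t):
    gain, freq = params
    return gain * np.exp(-0.5 * t) * np.sin(2 * np.pi * freq * t)

t = np.linspace(0, 4, 400)
observed = simulate((2.0, 1.5), t) + rng.normal(0, 0.05, t.size)

def cost(p):
    return np.mean((simulate(p, t) - observed) ** 2)

# Minimal particle swarm: inertia w, cognitive c1, social c2
n_part, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform([0.1, 0.1], [5.0, 5.0], (n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 5.0)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("recovered (gain, freq):", np.round(gbest, 3))  # should be ~ (2.0, 1.5)
```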
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
2014-01-01
Background Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. Results We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Conclusions Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality. PMID:24980787
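The non-paranormal step itself is just a rank-to-normal-scores transform applied before the correlation-based (conditional) independence tests inside the PC-algorithm. A minimal sketch with invented data:

```python
import numpy as np
from scipy.stats import rankdata, norm, pearsonr

rng = np.random.default_rng(8)
n = 500

# Non-Gaussian data: y is a monotone transform of x plus noise, z independent
x = rng.normal(size=n)
y = np.exp(x + 0.5 * rng.normal(size=n))     # heavily skewed
z = rng.exponential(size=n)

def npn(v):
    """Non-paranormal (Gaussian copula) transform: ranks -> normal scores."""
    return norm.ppf(rankdata(v) / (len(v) + 1))

# Pearson correlations on raw vs. transformed data; the transform restores
# the Gaussian working assumptions of the PC-algorithm's tests.
print("raw x~y:", round(pearsonr(x, y)[0], 3))
print("npn x~y:", round(pearsonr(npn(x), npn(y))[0], 3))
print("npn x~z:", round(pearsonr(npn(x), npn(z))[0], 3))   # ~ 0
```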
Schmucker, Christine M; Blümle, Anette; Schell, Lisa K; Schwarzer, Guido; Oeller, Patrick; Cabrera, Laura; von Elm, Erik; Briel, Matthias; Meerpohl, Joerg J
2017-01-01
A meta-analysis as part of a systematic review aims to provide a thorough, comprehensive and unbiased statistical summary of data from the literature. However, relevant study results could be missing from a meta-analysis because of selective publication and inadequate dissemination. If missing outcome data differ systematically from published ones, a meta-analysis will be biased with an inaccurate assessment of the intervention effect. As part of the EU-funded OPEN project (www.open-project.eu) we conducted a systematic review that assessed whether the inclusion of data that were not published at all and/or published only in the grey literature influences pooled effect estimates in meta-analyses and leads to different interpretation. Systematic review of published literature (methodological research projects). Four bibliographic databases were searched up to February 2016 without restriction of publication year or language. Methodological research projects were considered eligible for inclusion if they reviewed a cohort of meta-analyses which (i) compared pooled effect estimates of meta-analyses of health care interventions according to publication status of data or (ii) examined whether the inclusion of unpublished or grey literature data impacts the result of a meta-analysis. Seven methodological research projects including 187 meta-analyses comparing pooled treatment effect estimates according to different publication status were identified. Two research projects showed that published data showed larger pooled treatment effects in favour of the intervention than unpublished or grey literature data (Ratio of ORs 1.15, 95% CI 1.04-1.28 and 1.34, 95% CI 1.09-1.66). In the remaining research projects pooled effect estimates and/or overall findings were not significantly changed by the inclusion of unpublished and/or grey literature data. The precision of the pooled estimate was increased with narrower 95% confidence interval. Although we may anticipate that systematic reviews and meta-analyses not including unpublished or grey literature study results are likely to overestimate the treatment effects, current empirical research shows that this is only the case in a minority of reviews. Therefore, currently, a meta-analyst should particularly consider time, effort and costs when adding such data to their analysis. Future research is needed to identify which reviews may benefit most from including unpublished or grey data.
Latimer, Nicholas R; Abrams, Keith R; Amonkar, Mayur M; Stapelkamp, Ceilidh; Swann, R Suzanne
2015-07-01
Patients with previously untreated BRAF V600E mutation-positive melanoma in BREAK-3 showed a median overall survival (OS) of 18.2 months for dabrafenib versus 15.6 months for dacarbazine (hazard ratio [HR], 0.76; 95% confidence interval, 0.48-1.21). Because patients receiving dacarbazine were allowed to switch to dabrafenib at disease progression, we attempted to adjust for the confounding effects on OS. Rank preserving structural failure time models (RPSFTMs) and the iterative parameter estimation (IPE) algorithm were used. Two analyses, "treatment group" (assumes the treatment effect could continue until death) and "on-treatment observed" (assumes the treatment effect disappears with discontinuation), were used to test the assumptions around the durability of the treatment effect. A total of 36 of 63 patients (57%) receiving dacarbazine switched to dabrafenib. The adjusted OS HRs ranged from 0.50 to 0.55, depending on the analysis. The RPSFTM and IPE "treatment group" and "on-treatment observed" analyses performed similarly well. RPSFTM and IPE analyses resulted in point estimates for the OS HR that indicate a substantial increase in the treatment effect compared with the unadjusted OS HR of 0.76. The results are uncertain because of the assumptions associated with the adjustment methods. The confidence intervals continued to cross 1.00; thus, the adjusted estimates did not provide statistically significant evidence of a treatment benefit on survival. However, it is clear that a standard intention-to-treat analysis will be confounded in the presence of treatment switching; a reliance on unadjusted analyses could lead to inappropriate practice. Adjustment analyses provide useful additional information on the estimated treatment effects to inform decision making. Treatment switching is common in oncology trials, and its implications for the interpretation of the clinical effectiveness and cost-effectiveness of the novel treatment are important to consider. If patients who switch treatments benefit from the experimental treatment and a standard intention-to-treat analysis is conducted, the overall survival advantage associated with the new treatment could be underestimated. The present study applied established statistical methods to adjust for treatment switching in a trial that compared dabrafenib and dacarbazine for metastatic melanoma. The results showed that this led to a substantially increased estimate of the overall survival treatment effect associated with dabrafenib. ©AlphaMed Press.
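A minimal sketch of the RPSFTM machinery referenced above: counterfactual untreated times are built as U = T_off + exp(psi) * T_on, and psi is chosen by g-estimation so that the randomized arms are balanced on U. Variable names and the search grid are hypothetical, re-censoring (which a real RPSFTM analysis requires) is omitted, and the log-rank test is taken from the lifelines package.

```python
import numpy as np
from lifelines.statistics import logrank_test

def counterfactual_time(t_off, t_on, psi):
    """RPSFTM: untreated time U = time off treatment + exp(psi) * time on."""
    return t_off + np.exp(psi) * t_on

def g_estimate(t_off, t_on, event, arm, grid=np.linspace(-1.5, 0.5, 81)):
    """Pick psi that makes counterfactual times most similar across arms
    (the randomization-respecting estimate); simplified, no re-censoring."""
    def stat(psi):
        u = counterfactual_time(t_off, t_on, psi)
        res = logrank_test(u[arm == 1], u[arm == 0],
                           event_observed_A=event[arm == 1],
                           event_observed_B=event[arm == 0])
        return res.test_statistic   # minimized when arms are balanced on U
    return min(grid, key=stat)
```

The adjusted HR is then obtained by comparing the observed experimental arm with the counterfactual control arm implied by the estimated psi.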
A comment on "Novel scavenger removal trials increase wind turbine-caused avian fatality estimates"
Huso, Manuela M.P.; Erickson, Wallace P.
2013-01-01
In a recent paper, Smallwood et al. (2010) conducted a study to compare their “novel” approach to conducting carcass removal trials with what they term the “conventional” approach and to evaluate the effects of the different methods on estimated avian fatality at a wind power facility in California. A quick glance at Table 3, which succinctly summarizes their results and provides estimated fatality rates and 80% confidence intervals calculated using the two methods, reveals a surprising result. The confidence intervals of all of their novel estimates and most of the conventional estimates extend below 0. These results imply that wind turbines may have the capacity to create live birds. But a more likely interpretation is that a serious error occurred in the calculation of the average fatality rate, its standard error, or both. Further evaluation of their methods reveals that the scientific basis for concluding that “many estimates of scavenger removal rates prior to [their] study were likely biased low due to scavenger swamping” and “previously reported estimates of avian fatality rates … should be adjusted upwards” was not evident in their analysis and results. Their comparison to conventional approaches was not applicable, their statistical models were questionable, and the conclusions they drew were unsupported.
Crop identification and area estimation over large geographic areas using LANDSAT MSS data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator)
1977-01-01
The author has identified the following significant results. LANDSAT MSS data was adequate to accurately identify wheat in Kansas; corn and soybean estimates in Indiana were less accurate. Computer-aided analysis techniques were effectively used to extract crop identification information from LANDSAT data. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels. Training statistics were successfully extended from one county to other counties having similar crops and soils if the training areas sampled the total variation of the area to be classified.
Microdetermination of calcium and magnesium in biological materials
Bowden, C. H.; Patston, Valerie J.
1963-01-01
The use of the dye calcon (1-(2 hydroxy-1-naphthylazo)-2-naphthol-4 sulphonic acid) for the estimation of calcium using E.D.T.A. and a commercial photoelectric titrimeter is described. The interfering effects of magnesium and phosphate have been overcome. The method has been extended to estimations on biological materials. Results on 55 sera show that the E.D.T.A./calcon method gave slightly lower results (—0·15 mg./100 ml. ± 0·029) than the oxalate precipitation method. Magnesium may also be estimated by incorporating the use of Eriochrome black T. PMID:14014590
Direct and indirect genetic and fine-scale location effects on breeding date in song sparrows.
Germain, Ryan R; Wolak, Matthew E; Arcese, Peter; Losdat, Sylvain; Reid, Jane M
2016-11-01
Quantifying direct and indirect genetic effects of interacting females and males on variation in jointly expressed life-history traits is central to predicting microevolutionary dynamics. However, accurately estimating sex-specific additive genetic variances in such traits remains difficult in wild populations, especially if related individuals inhabit similar fine-scale environments. Breeding date is a key life-history trait that responds to environmental phenology and mediates individual and population responses to environmental change. However, no studies have estimated female (direct) and male (indirect) additive genetic and inbreeding effects on breeding date, and estimated the cross-sex genetic correlation, while simultaneously accounting for fine-scale environmental effects of breeding locations, impeding prediction of microevolutionary dynamics. We fitted animal models to 38 years of song sparrow (Melospiza melodia) phenology and pedigree data to estimate sex-specific additive genetic variances in breeding date, and the cross-sex genetic correlation, thereby estimating the total additive genetic variance while simultaneously estimating sex-specific inbreeding depression. We further fitted three forms of spatial animal model to explicitly estimate variance in breeding date attributable to breeding location, overlap among breeding locations and spatial autocorrelation. We thereby quantified fine-scale location variances in breeding date and quantified the degree to which estimating such variances affected the estimated additive genetic variances. The non-spatial animal model estimated nonzero female and male additive genetic variances in breeding date (sex-specific heritabilities: 0·07 and 0·02, respectively) and a strong, positive cross-sex genetic correlation (0·99), creating substantial total additive genetic variance (0·18). Breeding date varied with female, but not male inbreeding coefficient, revealing direct, but not indirect, inbreeding depression. All three spatial animal models estimated small location variance in breeding date, but because relatedness and breeding location were virtually uncorrelated, modelling location variance did not alter the estimated additive genetic variances. Our results show that sex-specific additive genetic effects on breeding date can be strongly positively correlated, which would affect any predicted rates of microevolutionary change in response to sexually antagonistic or congruent selection. Further, we show that inbreeding effects on breeding date can also be sex specific and that genetic effects can exceed phenotypic variation stemming from fine-scale location-based variation within a wild population. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
Nelson, Suchitra; Albert, Jeffrey M.
2013-01-01
Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a non-zero total mediation effect increases as the correlation coefficient between two mediators increases, while power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. PMID:23650048
Wang, Wei; Nelson, Suchitra; Albert, Jeffrey M
2013-10-30
Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a nonzero total mediation effect increases as the correlation coefficient between two mediators increases, whereas power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. Copyright © 2013 John Wiley & Sons, Ltd.
Space Radiation Heart Disease Risk Estimates for Lunar and Mars Missions
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Chappell, Lori; Kim, Myung-Hee
2010-01-01
The NASA Space Radiation Program performs research on the risks of late effects from space radiation for cancer, neurological disorders, cataracts, and heart disease. For mortality risks, an aggregate over all risks should be considered, as well as a projection of the life loss per radiation-induced death. We report on a triple-detriment life-table approach to combine cancer and heart disease risks. Epidemiology results show extensive heterogeneity between populations for distinct components of the overall heart disease risk, including hypertension, ischaemic heart disease, stroke, and cerebrovascular diseases. We report on an update to our previous estimates for heart disease (ICD9 390-429) and stroke (ICD9 430-438), and other sub-groups, using recent meta-analysis results for various cohorts exposed to low-LET radiation. Results for multiplicative and additive risk transfer models are considered using baseline rates for US males and females. Uncertainty analysis indicated heart mortality risks as low as zero, assuming a threshold dose for deterministic effects, and projections approaching one-third of the overall cancer risk. Median life-loss per death estimates were significantly less than those for solid cancers and leukemias. Critical research questions for improving heart disease risk estimates are distinctions in mechanisms at high doses (>2 Gy) versus low to moderate doses (<2 Gy), and data and basic understanding of radiation dose-rate and quality effects, and of individual sensitivity.
Network meta-analysis, electrical networks and graph theory.
Rücker, Gerta
2012-12-01
Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
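The electrical-network computation described in this abstract can be sketched in a few lines: build the incidence matrix of the comparison graph, weight edges by inverse variance, and read consistent effects and contrast variances off the Moore-Penrose pseudoinverse of the Laplacian. The data below are hypothetical.

```python
import numpy as np

def nma_fixed_effect(edges, y, var, n_treatments):
    """Graph-theoretical network meta-analysis (fixed effect model).

    edges: (i, j) treatment pairs with observed effects y (j versus i)
    and variances var. Consistent edge effects come from the pseudoinverse
    of the weighted Laplacian; contrast variances are the analogue of
    effective resistances in an electrical network.
    """
    y, var = np.asarray(y, float), np.asarray(var, float)
    B = np.zeros((len(edges), n_treatments))      # incidence matrix
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = -1.0, 1.0
    W = np.diag(1.0 / var)                        # inverse-variance weights
    L = B.T @ W @ B                               # weighted graph Laplacian
    Lplus = np.linalg.pinv(L)                     # Moore-Penrose pseudoinverse
    fitted = B @ Lplus @ B.T @ W @ y              # consistent edge effects
    def contrast_var(i, j):                       # "effective resistance"
        return Lplus[i, i] + Lplus[j, j] - 2.0 * Lplus[i, j]
    return fitted, contrast_var

# Hypothetical triangle network with treatments A=0, B=1, C=2
fitted, cvar = nma_fixed_effect([(0, 1), (1, 2), (0, 2)],
                                y=[0.5, 0.3, 0.9], var=[0.04, 0.05, 0.06])
```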
Three estimates of the association between linear growth failure and cognitive ability.
Cheung, Y B; Lam, K F
2009-09-01
To compare three estimators of the association between growth stunting, as measured by height-for-age Z-score, and cognitive ability in children, and to examine the extent to which statistical adjustment for covariates is useful for removing confounding due to socio-economic status. Three estimators for panel data, namely the random-effects, within-cluster and between-cluster estimators, were used to estimate the association in a survey of 1105 pairs of siblings who were assessed for anthropometry and cognition. Furthermore, a 'combined' model was formulated to provide the within- and between-cluster estimates simultaneously. The random-effects and between-cluster estimators showed a strong association between linear growth and cognitive ability, even after adjustment for a range of socio-economic variables. In contrast, the within-cluster estimator showed a much more modest association: for every increase of one Z-score in linear growth, cognitive ability increased by about 0.08 standard deviation (P < 0.001). The combined model verified that the between-cluster estimate was significantly larger than the within-cluster estimate (P = 0.004). Residual confounding by socio-economic circumstances may explain a substantial proportion of the observed association between linear growth and cognition in studies that attempt to control the confounding by means of multivariable regression analysis. The within-cluster estimator provides more convincing and modest results about the strength of association.
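A minimal sketch of the within- and between-cluster estimators for sibling data (all numbers hypothetical; the paper's random-effects and combined models would be fitted with a mixed-model package instead):

```python
import numpy as np
import pandas as pd

# Hypothetical sibling pairs: family id, height-for-age Z, cognitive score
df = pd.DataFrame({"family": [1, 1, 2, 2, 3, 3],
                   "haz": [-1.2, -0.4, 0.1, 0.9, -2.0, -1.1],
                   "cog": [-0.5, -0.2, 0.3, 0.6, -0.9, -0.6]})

# Within-cluster estimator: demean within families, so any shared
# family-level (socio-economic) confounding cancels out.
dm = df[["haz", "cog"]] - df.groupby("family")[["haz", "cog"]].transform("mean")
beta_within = (dm["haz"] * dm["cog"]).sum() / (dm["haz"] ** 2).sum()

# Between-cluster estimator: regression on family means, which retains
# any between-family socio-economic confounding.
fm = df.groupby("family")[["haz", "cog"]].mean()
xb = fm["haz"] - fm["haz"].mean()
beta_between = (xb * (fm["cog"] - fm["cog"].mean())).sum() / (xb ** 2).sum()
```

A gap between beta_between and beta_within is the signature of residual family-level confounding that the abstract describes.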
Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine
Herdtweck, Christian; Wallraven, Christian
2013-01-01
We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. In Experiment 2, stimuli are presented for only a few hundred milliseconds and then masked to purge visual short-term memory, forcing estimates to rely on early processes alone. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies to estimate the horizon in the computer and compare human with machine “behavior” for different image manipulations and image scene types. PMID:24349073
NASA Astrophysics Data System (ADS)
Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang
2018-01-01
Spatially correlated errors are typically ignored in data assimilation, degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, is proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated both with a diagonal R matrix computed using TC and with a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that diagonal R assimilation results deteriorate when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R in the experiments, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
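The classical triple collocation step on which TC_Cov builds can be sketched as follows (hypothetical inputs; this recovers only the diagonal of R, whereas TC_Cov as described above also estimates the spatially correlated off-diagonal entries):

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Classical triple collocation: error variance of each of three
    collocated, independently erroneous estimates of the same field
    (series assumed already rescaled to a common reference)."""
    cov = lambda a, b: np.cov(a, b)[0, 1]
    return (cov(x - y, x - z),   # error variance of x
            cov(y - x, y - z),   # error variance of y
            cov(z - x, z - y))   # error variance of z

# e.g. x = satellite retrieval, y = model simulation, z = in situ series
```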
The effects of sample size on population genomic analyses--implications for the tests of neutrality.
Subramanian, Sankar
2016-02-20
One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population growth. It is well known that the estimation of θ is biased when these assumptions are violated. However, the effect of sample size in modulating this bias has not been well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much greater for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes. However, this difference was 2.5-fold for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to the sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes. In contrast, this difference was only 2-fold for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs. 10%).
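Watterson's estimator itself is a one-liner, which makes the sample-size effect easy to reproduce in outline (the segregating-site counts below are hypothetical, not the study's):

```python
def wattersons_theta(s, n):
    """Watterson's estimator: theta = S / a_n, with a_n = sum_{i=1}^{n-1} 1/i,
    where S is the number of segregating sites and n the sample size."""
    a_n = sum(1.0 / i for i in range(1, n))
    return s / a_n

# Under selection, larger samples keep uncovering rare variants, so S (and
# hence theta at constrained sites) climbs with sample size, as described.
print(wattersons_theta(120, 16), wattersons_theta(430, 512))
```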
Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang
2015-01-01
Background Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. Objective The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. Methods A search for Cochrane systematic reviews (SRs) including meta-analyses of RCTs published in the periodontology and implant dentistry fields was performed in the Cochrane Library in September 2014. Random-effects meta-analyses were performed by grouping RCTs with different levels of ROB in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Results Of the 24 initially screened SRs, 21 were excluded because they did not include at least 10 RCTs in the meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal significant differences in the relationship of ROB level with the size of treatment effect estimates, although a trend toward inflated estimates was observed in domains with unclear ROB. Conclusion In this sample of RCTs, high and (mainly) unclear risks of selection and detection bias did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association. PMID:26422698
Wilson, Ander; Chiu, Yueh-Hsiu Mathilda; Hsu, Hsiao-Hsien Leon; Wright, Robert O; Wright, Rosalind J; Coull, Brent A
2017-07-01
Epidemiological research supports an association between maternal exposure to air pollution during pregnancy and adverse children's health outcomes. Advances in exposure assessment and statistics allow for estimation of both critical windows of vulnerability and exposure effect heterogeneity. Simultaneous estimation of windows of vulnerability and effect heterogeneity can be accomplished by fitting a distributed lag model (DLM) stratified by subgroup. However, this can provide an incomplete picture of how effects vary across subgroups because it does not allow for subgroups to have the same window but different within-window effects or to have different windows but the same within-window effect. Because the timing of some developmental processes is common across subpopulations of infants while for others it differs across subgroups, both scenarios are important to consider when evaluating health risks of prenatal exposures. We propose a new approach that partitions the DLM into a constrained functional predictor that estimates windows of vulnerability and a scalar effect representing the within-window effect directly. The proposed method allows for heterogeneity in only the window, only the within-window effect, or both. In a simulation study we show that a model assuming a shared component across groups results in lower bias and mean squared error for the estimated windows and effects when that component is in fact constant across groups. We apply the proposed method to estimate windows of vulnerability in the association between prenatal exposures to fine particulate matter and each of birth weight and asthma incidence, and estimate how these associations vary by sex and maternal obesity status in a Boston-area prospective pre-birth cohort study. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A novel approach to neutron dosimetry.
Balmer, Matthew J I; Gamage, Kelum A A; Taylor, Graeme C
2016-11-01
Having been overlooked for many years, research is now starting to take into account the directional distribution of neutron workplace fields. Existing neutron dosimetry instrumentation does not account for this directional distribution, resulting in conservative estimates of dose in neutron workplace fields (by around a factor of 2, although this is heavily dependent on the type of field). This conservatism could influence epidemiological studies on the health effects of radiation exposure. This paper reports on the development of an instrument which can estimate the effective dose of a neutron field, accounting for both the direction and the energy distribution. A 6Li-loaded scintillator was used to perform neutron assays at a number of locations in a 20 × 20 × 17.5 cm³ water phantom. The variation in thermal and fast neutron response to different energies and field directions was exploited. The modeled response of the instrument to various neutron fields was used to train an artificial neural network (ANN) to learn the effective dose and ambient dose equivalent of these fields. All experimental data published in this work were measured at the National Physical Laboratory (UK). Experimental results were obtained for a number of radionuclide-source-based neutron fields to test the performance of the system. The results of experimental neutron assays at 25 locations in a water phantom were fed into the trained ANN. A correlation between neutron counting rates in the phantom and neutron fluence rates was found experimentally, providing dose rate estimates. A radionuclide source behind a shadow cone was used to create a more complex field in terms of energy and direction. For all fields, the resulting estimates of effective dose rate were within 45% or better of their calculated values, regardless of energy distribution or direction, for measurement times greater than 25 min. This work presents a novel, real-time approach to workplace neutron dosimetry. It is believed that in the research presented in this paper, for the first time, a single instrument has been able to estimate effective dose.
On the geodetic applications of simultaneous range-differencing to LAGEOS
NASA Technical Reports Server (NTRS)
Pablis, E. C.
1982-01-01
The possibility of improving the accuracy of geodetic results by using simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network-satellite-pass configurations. Least squares approximation with monomials and with Chebyshev polynomials is compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observations show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but further reduces the effects of model biases on the results, as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.
NASA Astrophysics Data System (ADS)
Novita, Mega; Nagoshi, Hikari; Sudo, Akiho; Ogasawara, Kazuyoshi
2018-01-01
In this study, we investigated the α-Al2O3:V3+ material, the so-called color-change sapphire, based on first-principles calculations without reference to any experimental parameter. The molecular orbital (MO) structure was estimated by one-electron MO calculations using the discrete variational-Xα (DV-Xα) method. Next, the absorption spectra were estimated by many-electron calculations using the discrete variational multi-electron (DVME) method. The effect of lattice relaxation on the crystal structures was estimated from first-principles band structure calculations. We performed geometry optimizations on pure α-Al2O3 and on α-Al2O3 with the impurity V3+ ion using the Cambridge Serial Total Energy Package (CASTEP) code. The effect of energy corrections, such as the configuration-dependence correction and the correlation correction, was also investigated in detail. The results revealed that the structural change in α-Al2O3:V3+ resulting from the geometry optimization improved the calculated absorption spectra. The combination of the lattice-relaxation effect and the energy-correction effect improves the agreement with experiment.
NASA Astrophysics Data System (ADS)
Mahmud, M. H.; Nordin, A. J.; Saad, F. F. Ahmad; Fattah Azman, A. Z.
2014-11-01
This study aims to estimate the radiation effective dose resulting from whole body fluorine-18 fluorodeoxyglucose Positron Emission Tomography (18F-FDG PET) scanning as compared to conventional Computed Tomography (CT) techniques in evaluating oncology patients. We reviewed 19 oncology patients who underwent 18F-FDG PET/CT at our centre for cancer staging. Internal and external doses were estimated from the radioactivity of injected FDG and the volume CT Dose Index (CTDIvol), respectively, with use of published and modified dose coefficients. The median differences in dose among the conventional CT and PET protocols were determined using the Kruskal-Wallis test, with p < 0.05 considered significant. The median (interquartile range, IQR) effective doses of the non-contrasted CT, contrasted CT and PET scanning protocols were 7.50 (9.35) mSv, 9.76 (3.67) mSv and 6.30 (1.20) mSv, respectively, giving a total dose of 21.46 (8.58) mSv. A statistically significant difference was observed in the median effective dose between the three protocols (p < 0.01). The effective dose of the whole body 18F-FDG PET technique may thus be the lowest amongst the conventional CT imaging techniques.
An Improved Aerial Target Localization Method with a Single Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2017-01-01
This paper focuses on the problems encountered in actual data processing with existing aerial target localization methods, analyzes the causes of the problems, and proposes an improved algorithm. Processing of the sea experiment data shows that the existing algorithms place high demands on the accuracy of the angle estimates. The improved algorithm relaxes the required angle estimation accuracy and obtains robust estimation results. A closest-distance matching estimation algorithm and a horizontal distance estimation compensation algorithm are proposed. Post-processing the data with a forward and backward two-direction double-filtering method improves the smoothing and allows the initial-stage data to be filtered as well, so that the filtering results retain more useful information. The aerial target height measurement methods are studied and estimation results for the aerial target are given, realizing three-dimensional localization of the aerial target and improving the underwater platform's awareness of the aerial target, so that the underwater platform has better mobility and concealment. PMID:29135956
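One common realization of the forward and backward double-filtering idea is zero-phase filtering, sketched below with hypothetical bearing data; scipy.signal.filtfilt runs the filter in both directions, which is why the initial-stage samples are smoothed as well. Filter order and cutoff are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 600)                    # 60 s of track data
bearing = 0.35 + 0.02 * t + 0.05 * rng.standard_normal(t.size)  # rad, noisy

b, a = butter(2, 0.05)               # low-order low-pass (normalized cutoff)
smoothed = filtfilt(b, a, bearing)   # zero-phase forward + backward pass
```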
Cost-effectiveness analysis of ibrutinib in patients with Waldenström macroglobulinemia in Italy.
Aiello, Andrea; D'Ausilio, Anna; Lo Muto, Roberta; Randon, Francesca; Laurenti, Luca
2017-01-01
Background and Objective: Ibrutinib has recently been approved in Europe for Waldenström Macroglobulinemia (WM) in symptomatic patients who have received at least one prior therapy, or as first-line treatment for patients unsuitable for chemo-immunotherapy. The aim of the study is to estimate the incremental cost-effectiveness ratio (ICER) of ibrutinib in relapsed/refractory WM, compared with the Italian current therapeutic pathways (CTP). Methods: A Markov model was adapted for Italy considering the National Health System perspective. Input data from the literature as well as global trials were used. The percentage use of therapies and healthcare resource consumption were estimated according to expert panel advice. Drug ex-factory prices and national tariffs were used to estimate costs. The model had a 15-year time horizon, with a 3.0% discount rate for both clinical and economic data. Deterministic and probabilistic sensitivity analyses were performed to test the robustness of the results. Results: Ibrutinib resulted in increased Life Years Gained (LYGs) and increased costs compared to CTP, with an ICER of €52,698/LYG. Sensitivity analyses confirmed the base-case results. Specifically, in the probabilistic analysis, at a willingness-to-pay threshold of €60,000/LYG, ibrutinib was cost-effective in 84% of simulations. Conclusions: Ibrutinib has demonstrated a positive cost-effectiveness profile in Italy.
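The headline figure is simply discounted incremental cost divided by discounted incremental life years. A minimal sketch with hypothetical per-patient streams (in the real model these come from the Markov state traces):

```python
def present_value(stream, rate=0.03):
    """Discount a yearly stream (costs in euro, or life years) at 3.0%."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(stream))

# Hypothetical per-patient streams over the 15-year horizon
cost_ibrutinib = present_value([65000] * 5 + [30000] * 10)
cost_ctp = present_value([25000] * 5 + [12000] * 10)
ly_ibrutinib = present_value([0.95] * 5 + [0.60] * 10)
ly_ctp = present_value([0.90] * 5 + [0.40] * 10)

icer = (cost_ibrutinib - cost_ctp) / (ly_ibrutinib - ly_ctp)  # euro per LYG
```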
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
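A minimal sketch of the estimation and inspection phases for a single control-system variable, using logistic regression fitted by maximum likelihood as described; the data, threshold, and names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Estimation phase (hypothetical induction data): values this memory
# variable has held, labelled 1 = normal, 0 = abnormal.
values = np.array([[10.0], [10.4], [11.1], [10.7], [55.0], [61.2], [58.3]])
labels = np.array([1, 1, 1, 1, 0, 0, 0])
model = LogisticRegression().fit(values, labels)   # MLE of the parameters

# Inspection phase: normalcy probability of the value a packet will write.
def normalcy_probability(v):
    return model.predict_proba([[v]])[0, 1]

if normalcy_probability(57.0) < 0.05:              # hypothetical threshold
    print("flag packet as anomalous")
```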
Correction of stream quality trends for the effects of laboratory measurement bias
Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.
1993-01-01
We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.
Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-03-16
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
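A minimal sketch of the least-squares separation described above, with simulated residuals standing in for the difference between measured gravity vectors and the EGM2008 reference (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residuals at 500 epochs; model: residual = constant
# accelerometer bias + zero-mean measurement noise.
true_bias = np.array([3e-4, -2e-4, 5e-4])                 # m/s^2
residuals = true_bias + 1e-4 * rng.standard_normal((500, 3))

# Least-squares separation of the bias: the design matrix stacks 3x3
# identities, so the estimate reduces to the per-axis mean residual.
A = np.tile(np.eye(3), (500, 1))
bias_hat, *_ = np.linalg.lstsq(A, residuals.reshape(-1), rcond=None)
```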
Magma ocean formation due to giant impacts
NASA Technical Reports Server (NTRS)
Tonks, W. B.; Melosh, H. J.
1993-01-01
The thermal effects of giant impacts are studied by estimating the melt volume generated by the initial shock wave and the corresponding magma ocean depths. Additionally, the effects of the planet's initial temperature on the generated melt volume are examined. The shock pressure required to completely melt the material is determined using the Hugoniot curve plotted in pressure-entropy space. Once the melting pressure is known, an impact melting model is used to estimate the radial distance from the impact site out to which melting occurred. The melt region's geometry then determines the associated melt volume. The model is also used to estimate the partial melt volume. Magma ocean depths resulting from both excavated and retained melt are calculated, and the melt fraction not excavated during the formation of the crater is estimated. The fraction of a planet melted by the initial shock wave is also estimated using the model.
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of the shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students aged 20 to 24 years (M = 22.3, SD = 1.3) participated. Analysis showed that the error in estimated size was significantly greater for the low-vision group than for the normal-vision and severe-myopia groups. Errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than square stimuli. The actual size of the standard stimulus significantly affected the error in estimated size: errors were significantly higher for smaller sizes than for larger ones. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Coley, Rebecca Yates; Brown, Elizabeth R.
2016-01-01
Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates
Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...
Estimating the avoided fuel-treatment costs of wildfire
Geoffrey H. Donovan; Thomas C. Brown
2008-01-01
Although the importance of wildfire to fire-adapted ecosystems is widely recognized, wildfire management has historically placed less emphasis on the beneficial effects of wildfire. We estimate the avoided fuel treatment cost for 10 ponderosa pine (Pinus ponderosa) stands on the Umatilla National Forest in the Pacific Northwest. Results show that...
Exploration of Heterogeneity in Distributed Research Network Drug Safety Analyses
ERIC Educational Resources Information Center
Hansen, Richard A.; Zeng, Peng; Ryan, Patrick; Gao, Juan; Sonawane, Kalyani; Teeter, Benjamin; Westrich, Kimberly; Dubois, Robert W.
2014-01-01
Distributed data networks representing large diverse populations are an expanding focus of drug safety research. However, interpreting results is difficult when treatment effect estimates vary across datasets (i.e., heterogeneity). In a previous study, risk estimates were generated for selected drugs and potential adverse outcomes. Analyses were…
College Quality and Early Adult Outcomes
ERIC Educational Resources Information Center
Long, Mark C.
2008-01-01
This paper estimates the effects of various college qualities on several early adult outcomes, using panel data from the National Education Longitudinal Study. I compare the results using ordinary least squares with three alternative methods of estimation, including instrumental variables, and the methods used by Dale and Krueger [(2002).…
Visual illusion in mass estimation of cut food.
Wada, Yuji; Tsuzuki, Daisuke; Kobayashi, Naoki; Hayakawa, Fumiyo; Kohyama, Kaoru
2007-07-01
We investigated the effect of the appearance of cut food on visual mass estimation. In this experiment, we manipulated the shape (e.g., a block, fine strips, or small cubes) of food samples of various masses and presented them on a CRT display as stimuli. Eleven subjects participated in tasks to choose the picture of the food sample which they felt matched a target mass. We used raw carrots and surimi (ground fish) gel as hard and soft samples, respectively. The results clearly confirm the existence of an illusion, indicating that the appearance of food interferes with visual mass estimation. Specifically, participants often overestimated the mass of finely cut food, especially fine strips, whereas they could accurately estimate the mass of block samples, regardless of the physical characteristics of the foods. The overestimation of the mass of cut food increased with the food's actual mass, and was particularly obvious with increases of apparent volume when cut into fine strips. These results suggest that the apparent volume of a food sample affects the visual estimation of its mass. Hence we can conclude that there are illusions associated with the visual presentation of food that may influence various food impressions, including satisfaction and eating behaviour.
Si, Weijian; Zhao, Pinjiao; Qu, Zhiyu
2016-01-01
This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method. PMID:27258271
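The coarse/fine disambiguation step can be sketched compactly: the phase-derived estimate is precise but known only modulo an ambiguity period, and the coarse steering-vector estimate selects the right branch. All numbers are hypothetical.

```python
import numpy as np

def disambiguate(coarse, fine, period):
    """Resolve cyclic ambiguity: the fine estimate (from the spatial phase
    factor) is known only modulo `period`; pick the candidate
    fine + k * period closest to the coarse, unambiguous reference."""
    k = np.round((coarse - fine) / period)
    return fine + k * period

# Hypothetical: fine DOA of 8.5 deg with a 20-deg ambiguity period and a
# coarse reference of 47 deg -> disambiguated estimate of 48.5 deg.
print(disambiguate(47.0, 8.5, 20.0))
```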
Neighborhood Effects in a Behavioral Randomized Controlled Trial
Pruitt, Sandi L.; Leonard, Tammy; Murdoch, James; Hughes, Amy; McQueen, Amy; Gupta, Samir
2015-01-01
Randomized controlled trials (RCTs) of interventions intended to modify health behaviors may be influenced by neighborhood effects, which can impede unbiased estimation of intervention effects. Examining an RCT designed to increase colorectal cancer (CRC) screening (N=5,628), we found statistically significant neighborhood effects: average CRC test use among neighboring study participants was significantly and positively associated with an individual patient's CRC test use. This potentially important spatially-varying covariate has not previously been considered in an RCT. Our results suggest that future RCTs of health behavior interventions should assess potential social interactions between participants, which may cause intervention arm contamination and may bias effect size estimation. PMID:25456014
Skrbinšek, Tomaž; Jelenčič, Maja; Waits, Lisette; Kos, Ivan; Jerina, Klemen; Trontelj, Peter
2012-02-01
The effective population size (Ne) could be the ideal parameter for monitoring populations of conservation concern as it conveniently summarizes both the evolutionary potential of the population and its sensitivity to genetic stochasticity. However, tracing its change through time is difficult in natural populations. We applied four new methods for estimating Ne from a single sample of genotypes to trace temporal change in Ne for bears in the Northern Dinaric Mountains. We genotyped 510 bears using 20 microsatellite loci and determined their age. The samples were organized into cohorts with regard to the year when the animals were born and yearly samples with age categories for every year when they were alive. We used the Estimator by Parentage Assignment (EPA) to directly estimate both Ne and the generation interval for each yearly sample. For cohorts, we estimated the effective number of breeders (Nb) using linkage disequilibrium, sibship assignment and approximate Bayesian computation methods and extrapolated these estimates to Ne using the generation interval. The Ne estimate by EPA is 276 (183-350 95% CI), meeting the inbreeding-avoidance criterion of Ne > 50 but short of the long-term minimum viable population goal of Ne > 500. The results obtained by the other methods are highly consistent with this result, and all indicate a rapid increase in Ne probably in the late 1990s and early 2000s. The new single-sample approaches to the estimation of Ne provide efficient means for including Ne in monitoring frameworks and will be of great importance for future management and conservation. © 2012 Blackwell Publishing Ltd.
Performance of Trajectory Models with Wind Uncertainty
NASA Technical Reports Server (NTRS)
Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.
2009-01-01
Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread among the various RUC time-lagged ensemble forecasts. This proof-of-concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
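A minimal sketch of the time-lagged-ensemble idea (wind values hypothetical): forecasts from successively older cycles, all valid at the same time and place, supply both a point forecast and a spread-based uncertainty estimate.

```python
import numpy as np

# Hypothetical u-wind forecasts (m/s), valid at the same time and place,
# from five successively older hourly RUC cycles: a time-lagged ensemble.
members = np.array([12.1, 13.0, 11.5, 12.8, 12.4])

u_forecast = members[0]          # newest cycle as the point forecast
u_spread = members.std(ddof=1)   # ensemble spread as the uncertainty proxy

# A trajectory predictor can then propagate u_forecast +/- u_spread to
# turn wind uncertainty into along-track position uncertainty.
```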
An application of model-fitting procedures for marginal structural models.
Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B
2005-08-15
Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and the evaluation of key assumptions, such as the experimental treatment assignment assumption, are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function, a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
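A minimal sketch of stabilized IPTW weight construction (simulated data; the study's actual weight models were selected with AIC and cross-validation as described, and the names below are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical data: one confounder (symptom severity), binary treatment
severity = rng.uniform(0, 1, 200)
treated = (rng.uniform(0, 1, 200) < 0.2 + 0.6 * severity).astype(int)

X = severity.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]  # P(A=1|X)

p_a = treated.mean()                                      # marginal P(A=1)
sw = np.where(treated == 1, p_a / ps, (1 - p_a) / (1 - ps))  # stabilized IPTW

# sw then weights the MSM outcome regression; inspecting min/max of sw is
# one quick check on the experimental treatment assignment assumption.
```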
2013-01-01
Background Two of the most prevalent causes of severe bacterial meningitis in children, Haemophilus influenzae type B (Hib) and Streptococcus pneumoniae, are preventable by existing vaccines increasingly available in developing countries. Our objective was to estimate the dose-specific effect of Hib and pneumococcal conjugate vaccines (PCV) on childhood meningitis mortality in low-income countries for use in the Lives Saved Tool (LiST). Methods We systematically searched and reviewed published vaccine efficacy trials and observational studies reporting the effect of Hib or PCV vaccines on organism-specific meningitis, bacterial meningitis and all-cause meningitis incidence and mortality among children less than five years old in low- and middle-income countries. Data collection and quality assessments were performed using standardized guidelines. For outcomes available across multiple studies (≥2) and approximating meningitis mortality, we pooled estimates reporting dose-specific effects using random effects meta-analytic methods, then combined these with meningitis etiology data to determine the preventable fraction of childhood meningitis mortality for inclusion in LiST. Results We identified 18 studies of Hib conjugate vaccines reporting relevant meningitis morbidity and mortality outcomes (2 randomized controlled trials [RCTs], 16 observational studies) but few provided dose-specific effects. A meta-analysis of four case-control studies examined the dose-specific effect of Hib conjugate vaccines on Hib meningitis morbidity (1 dose: RR=0.64, 95% CI 0.38-1.06; 2 doses: RR=0.09, 95% CI 0.03-0.27; 3 doses: RR=0.06, 95% CI 0.02-0.22), consistent with results from single RCTs. Pooled estimates of two RCTs provided evidence for the effect of three doses of PCV on vaccine-serotype meningitis morbidity (RR=0.16, 95% CI 0.02-1.20). We considered these outcomes of severe disease as proxy estimates for meningitis mortality and combined the estimates of protective effects with meningitis etiology data to provide an estimate of the preventable fraction of childhood meningitis mortality with three doses of Hib (38-43%) and pneumococcal conjugate vaccines (28-35%) for use in LiST. Conclusions Few RCTs or vaccine effectiveness studies evaluated the dose-specific impact of Hib and PCV vaccines on childhood meningitis mortality, necessitating use of proxy measures to estimate population impact in LiST. Our analysis indicates that approximately three-quarters of meningitis deaths are preventable with existing Hib and PCV vaccines. PMID:24564188
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach yields a valid alternative to sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.
Liinamo, A E; Karjalainen, L; Ojala, M; Vilva, V
1997-03-01
Data from field trials of Finnish Hounds between 1988 and 1992 in Finland were used to estimate genetic parameters and environmental effects for measures of hunting performance using REML procedures and an animal model. The original data set included 28,791 field trial records from 5,666 dogs. Males and females had equal hunting performance, whereas experience acquired by age improved trial results compared with results for young dogs (P < .001). Results were mostly better on snow than on bare ground (P < .001), and testing areas, years, months, and their interactions affected results (P < .001). Estimates of heritabilities and repeatabilities were low for most of the 28 measures, mainly due to large residual variances. The highest heritabilities were for frequency of tonguing (h2 = .15), pursuit score (h2 = .13), tongue score (h2 = .13), ghost trailing score (h2 = .12), and merit and final score (both h2 = .11). Estimates of phenotypic and genetic correlations were positive and moderate or high for search scores, pursuit scores, and final scores but lower for other studied measures. The results suggest that, due to low heritabilities, evaluation of breeding values for Finnish Hounds with respect to their hunting ability should be based on animal model BLUP methods instead of mere performance testing. The evaluation system of field trials should also be revised for more reliability.
Functional mixed effects spectral analysis
KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG
2011-01-01
In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
Borquis, Rusbel Raul Aspilcueta; Neto, Francisco Ribeiro de Araujo; Baldi, Fernando; Hurtado-Lugo, Naudin; de Camargo, Gregório M F; Muñoz-Berrocal, Milthon; Tonhati, Humberto
2013-09-01
In this study, genetic parameters for test-day milk, fat, and protein yield were estimated for the first lactation. The data analyzed consisted of 1,433 first lactations of Murrah buffaloes, daughters of 113 sires from 12 herds in the state of São Paulo, Brazil, with calvings from 1985 to 2007. Ten-month classes of lactation days were considered for the test-day yields. The (co)variance components for the 3 traits were estimated using random regression analyses by Bayesian inference, applying an animal model by Gibbs sampling. The contemporary groups were defined as herd-year-month of the test day. In the model, the random effects were additive genetic, permanent environment, and residual. The fixed effects were contemporary group and number of milkings (1 or 2), the linear and quadratic effects of the covariable age of the buffalo at calving, as well as the mean lactation curve of the population, which was modeled by orthogonal Legendre polynomials of fourth order. The random effects for the traits studied were modeled by Legendre polynomials of third and fourth order for additive genetic and permanent environment, respectively; the residual variances were modeled considering 4 residual classes. The heritability estimates for the traits were moderate (from 0.21-0.38), with higher estimates in the intermediate lactation phase. The genetic correlation estimates within and among the traits varied from 0.05 to 0.99. The results indicate that selection for any test-day trait will result in an indirect genetic gain for milk, fat, and protein yield in all periods of the lactation curve. The accuracy associated with estimated breeding values obtained using multi-trait random regression was slightly higher (around 8%) compared with single-trait random regression. This difference may be due to the greater amount of information available per animal. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Population dynamics of HIV-1 inferred from gene sequences.
Grassly, N C; Harvey, P H; Holmes, E C
1999-01-01
A method for the estimation of population dynamic history from sequence data is described and used to investigate the past population dynamics of HIV-1 subtypes A and B. Using both gag and env gene alignments the effective population size of each subtype is estimated and found to be surprisingly small. This may be a result of the selective sweep of mutations through the population, or may indicate an important role of genetic drift in the fixation of mutations. The implications of these results for the spread of drug-resistant mutations and transmission dynamics, and also the roles of selection and recombination in shaping HIV-1 genetic diversity, are discussed. A larger estimated effective population size for subtype A may be the result of differences in time of origin, transmission dynamics, and/or population structure. To investigate the importance of population structure a model of population subdivision was fitted to each subtype, although the improvement in likelihood was found to be nonsignificant. PMID:9927440
Cramer-Rao bound analysis of wideband source localization and DOA estimation
NASA Astrophysics Data System (ADS)
Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung
2002-12-01
In this paper, we derive the Cramér-Rao Bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristic and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be given by using some algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the sampling angular spacing for the maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidth. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results for the Approximated Maximum Likelihood (AML) algorithm are shown to match the CRB analysis well at high SNR.
Near real-time estimation of burned area using VIIRS 375 m active fire product
NASA Astrophysics Data System (ADS)
Oliva, P.; Schroeder, W.
2016-12-01
Every year, more than 300 million hectares of land burn globally, causing significant ecological and economic consequences, and associated climatological effects as a result of fire emissions. In recent decades, burned area estimates generated from satellite data have provided systematic global information for ecological analysis of fire impacts, climate and carbon cycle models, and fire regime studies, among many others. However, there is still a need for near real-time burned area estimation in order to assess the impacts of fire and estimate smoke and emissions. The enhanced characteristics of the Visible Infrared Imaging Radiometer Suite (VIIRS) 375 m channels on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite make possible the use of near real-time active fire detection data for burned area estimation. In this study, consecutive VIIRS 375 m active fire detections were aggregated to produce the VIIRS 375 m burned area (BA) estimation over ten ecologically diverse study areas. The accuracy of the BA estimations was assessed by comparison with Landsat-8 supervised burned area classification. The performance of the VIIRS 375 m BA estimates was dependent on the ecosystem characteristics and fire behavior. Higher accuracy was observed in forested areas characterized by large long-duration fires, while grasslands, savannas and agricultural areas showed the highest omission and commission errors. Complementing those analyses, we performed the burned area estimation of the largest fires in Oregon and Washington states during 2015 and the Fort McMurray fire in Canada in 2016. The results showed good agreement with NIROPs airborne fire perimeters, showing that the VIIRS 375 m BA estimations can be used for near real-time assessments of fire effects.
SURE Estimates for a Heteroscedastic Hierarchical Model
Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.
2014-01-01
Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
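As an informal illustration of the idea, the sketch below implements one simple member of this class: heteroscedastic observations are shrunk toward zero with per-coordinate factors indexed by a hyperparameter A, which is tuned by minimizing Stein's unbiased risk estimate over a grid. The estimator form and all data are assumptions for demonstration, not the paper's full semi-parametric construction.

```python
# SURE-tuned shrinkage in the heteroscedastic model X_i ~ N(theta_i, V_i),
# using theta_hat_i = (1 - V_i / (A + V_i)) * X_i as an illustrative estimator.
import numpy as np

rng = np.random.default_rng(1)
p = 500
theta = rng.normal(0, 1, p)            # unknown means
V = rng.uniform(0.1, 2.0, p)           # known, unequal variances
X = theta + rng.normal(0, np.sqrt(V))

def sure(A):
    c = V / (A + V)                    # per-coordinate shrinkage factor
    # Stein's unbiased risk estimate for theta_hat = X - c * X
    return np.sum(V + c**2 * X**2 - 2 * V * c)

grid = np.linspace(1e-3, 10, 1000)
A_hat = grid[np.argmin([sure(A) for A in grid])]
theta_hat = (1 - V / (A_hat + V)) * X
print(f"A_hat = {A_hat:.2f}; risk of X: {np.mean((X - theta)**2):.3f}; "
      f"risk of SURE estimate: {np.mean((theta_hat - theta)**2):.3f}")
```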
Congdon, Peter
2009-01-01
Background Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. Methods A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. Results To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Conclusion Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states) and for interaction between geographic and person variables. Thus an appropriate methodology to estimate prevalence at small area level should include geographic effects as well as person level demographic variables. PMID:19183458
Schäffer, Beat; Pieren, Reto; Mendolia, Franco; Basner, Mathias; Brink, Mark
2017-05-01
Noise exposure-response relationships are used to estimate the effects of noise on individuals or a population. Such relationships may be derived from independent or repeated binary observations, and modeled by different statistical methods. Depending on the method by which they were established, their application in population risk assessment or estimation of individual responses may yield different results, i.e., predict "weaker" or "stronger" effects. In the existing literature on noise effect studies, however, the statistical methodology underlying exposure-response relationships has not always received sufficient attention. This paper gives an overview of two statistical approaches (subject-specific and population-averaged logistic regression analysis) for establishing noise exposure-response relationships from repeated binary observations, and their appropriate applications. The considerations are illustrated with data from three noise effect studies, estimating also the magnitude of differences in results when applying exposure-response relationships derived from the two statistical approaches. Depending on the underlying data set and the probability range of the binary variable it covers, the two approaches yield similar to very different results. The adequate choice of a specific statistical approach and its application in subsequent studies, both depending on the research question, are therefore crucial.
Does Mother Know Best? Treatment Adherence as a Function of Anticipated Treatment Benefit
Glymour, M. Maria; Nguyen, Quynh; Matsouaka, Roland; Tchetgen Tchetgen, Eric J.; Schmidt, Nicole M.; Osypuk, Theresa L.
2016-01-01
Background We describe bias resulting from individualized treatment selection, which occurs when treatment has heterogeneous effects and individuals selectively choose treatments of greatest benefit to themselves. This pernicious bias may confound estimates from observational studies and lead to important misinterpretation of intent-to-treat analyses of randomized trials. Despite the potentially serious threat to inferences, individualized treatment selection has rarely been formally described or assessed. Methods The Moving to Opportunity (MTO) trial randomly assigned subsidized rental vouchers to low-income families in high-poverty public housing. We assessed the Kessler-6 psychological distress and Behavior Problems Index outcomes for 2,829 adolescents 4–7 years after randomization. Among families randomly assigned to receive vouchers, we estimated probability of moving (treatment), predicted by pre-randomization characteristics (c-statistic=0.63). We categorized families into tertiles of this estimated probability of moving, and compared instrumental variable effect estimates for moving on Behavior Problems Index and Kessler-6 across tertiles. Results Instrumental variable estimated effects of moving on the Behavior Problems Index were most adverse for boys least likely to move (b=0.93; 95% CI: 0.33, 1.53) compared to boys most likely to move (b=0.14; 95% CI: −0.15, 0.44; p=.02 for treatment*tertile interaction). Effects on Kessler-6 were more beneficial for girls least likely to move compared to girls most likely to move (−0.62 vs. 0.02; interaction p=.03). Conclusions Evidence of individualized treatment selection differed by child gender and outcome and should be evaluated in randomized trial reports, especially when heterogeneous treatment effects are likely and non-adherence is common. PMID:26628424
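The instrumental variable logic used here can be illustrated with a minimal Wald estimator on simulated data: randomized assignment serves as the instrument for actually moving, and the intent-to-treat effect on the outcome is scaled by the effect on uptake. The numbers and compliance model are hypothetical; the study's tertile-specific estimates apply the same ratio within subgroups.

```python
# Wald (ratio) instrumental variable estimator on simulated encouragement data.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
z = rng.binomial(1, 0.5, n)                 # randomized voucher offer (instrument)
compliers = rng.uniform(size=n) < 0.45      # latent compliance status
d = z * compliers                           # moved only if offered and a complier
y = 0.8 * d + rng.normal(size=n)            # true effect of moving = 0.8

# ITT effect on the outcome divided by ITT effect on uptake
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()
print(f"IV (Wald) estimate: {itt_y / itt_d:.2f} (true complier effect 0.8)")
```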
The evaluation of the National Long Term Care Demonstration. 2. Estimation methodology.
Brown, R S
1988-01-01
Channeling effects were estimated by comparing the post-application experience of the treatment and control groups using multiple regression. A variety of potential threats to the validity of the results, including sample composition issues, data issues, and estimation issues, were identified and assessed. Of all the potential problems examined, the only one determined to be likely to cause widespread distortion of program impact estimates was noncomparability of the baseline data. To avoid this distortion, baseline variables judged to be noncomparably measured were excluded from use as control variables in the regression equation. (Where they existed, screen counterparts to these noncomparable baseline variables were used as substitutes.) All of the other potential problems with the sample, data, or regression estimation approach were found to have little or no actual effect on impact estimates or their interpretation. Broad implementation of special procedures, therefore, was not necessary. The study did find that, because of the frequent use of proxy respondents, the estimated effects of channeling on clients' well-being actually may reflect impacts on the well-being of the informal caregiver rather than the client. This and other isolated cases in which there was some evidence of a potential problem for specific outcome variables were identified and examined in detail in technical reports dealing with those outcomes. Where appropriate, alternative estimates were presented. PMID:3130329
Estimating the effectiveness of further sampling in species inventories
Keating, K.A.; Quinn, J.F.; Ivie, M.A.; Ivie, L.L.
1998-01-01
Estimators of the number of additional species expected in the next Δn samples offer a potentially important tool for improving cost-effectiveness of species inventories but are largely untested. We used Monte Carlo methods to compare 11 such estimators, across a range of community structures and sampling regimes, and validated our results, where possible, using empirical data from vascular plant and beetle inventories from Glacier National Park, Montana, USA. We found that B. Efron and R. Thisted's 1976 negative binomial estimator was most robust to differences in community structure and that it was among the most accurate estimators when sampling was from model communities with structures resembling the large, heterogeneous communities that are the likely targets of major inventory efforts. Other estimators may be preferred under specific conditions, however. For example, when sampling was from model communities with highly even species-abundance distributions, estimates based on the Michaelis-Menten model were most accurate; when sampling was from moderately even model communities with S=10 species or communities with highly uneven species-abundance distributions, estimates based on Gleason's (1922) species-area model were most accurate. We suggest that use of such methods in species inventories can help improve cost-effectiveness by providing an objective basis for redirecting sampling to more-productive sites, methods, or time periods as the expectation of detecting additional species becomes unacceptably low.
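For a concrete sense of one of the compared estimators, the sketch below fits the Michaelis-Menten species-accumulation model to hypothetical inventory data and projects the expected number of new species from further sampling; the counts are invented, not Glacier National Park data.

```python
# Fit S(n) = S_max * n / (B + n) and project species gained from delta-n more samples.
import numpy as np
from scipy.optimize import curve_fit

samples = np.array([5, 10, 20, 40, 80, 160])     # cumulative sampling effort
species = np.array([22, 35, 51, 68, 83, 94])     # cumulative species observed

def michaelis_menten(n, s_max, b):
    return s_max * n / (b + n)

(s_max, b), _ = curve_fit(michaelis_menten, samples, species, p0=(120, 50))
n_now, delta_n = 160, 40
extra = michaelis_menten(n_now + delta_n, s_max, b) - michaelis_menten(n_now, s_max, b)
print(f"Estimated richness: {s_max:.0f}; "
      f"expected new species in next {delta_n} samples: {extra:.1f}")
```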
Covariate selection with group lasso and doubly robust estimation of causal effects
Koch, Brandon; Vock, David M.; Wolfson, Julian
2017-01-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
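GLiDeR itself is not reproduced here, but the standard doubly robust estimator into which its selected variables feed can be sketched on simulated data: an augmented inverse-probability-weighted (AIPW) contrast that is consistent if either the propensity or the outcome model is correct. All names and the data-generating process below are illustrative.

```python
# Standard AIPW doubly robust estimator of the average causal effect (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 5))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 1.5 * t + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]     # treatment model
m1 = LinearRegression().fit(X[t == 1], y[t == 1]).predict(X)  # outcome model, treated
m0 = LinearRegression().fit(X[t == 0], y[t == 0]).predict(X)  # outcome model, control

# AIPW: consistent if either the treatment or the outcome model is correct
mu1 = np.mean(t * (y - m1) / e + m1)
mu0 = np.mean((1 - t) * (y - m0) / (1 - e) + m0)
print(f"Doubly robust ACE estimate: {mu1 - mu0:.2f} (true effect 1.5)")
```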
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, so maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
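As a minimal illustration, the sketch below fits a two-component Gaussian mixture by maximum likelihood (via EM, as implemented in scikit-learn); the two simulated "regimes" merely stand in for the price and exchange-rate series, which are not reproduced here.

```python
# Maximum likelihood fit of a two-component Gaussian mixture (simulated regimes).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Two hypothetical regimes with opposite price/exchange-rate comovement
regime1 = rng.multivariate_normal([0.01, -0.02], [[1.0, -0.6], [-0.6, 1.0]], 300)
regime2 = rng.multivariate_normal([-0.03, 0.01], [[0.5, -0.2], [-0.2, 0.5]], 200)
data = np.vstack([regime1, regime2])

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
print("Mixing weights:", np.round(gm.weights_, 2))
print("Component means:\n", np.round(gm.means_, 3))
print("Log-likelihood per observation:", round(gm.score(data), 3))
```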
Garcés-Vega, Francisco; Marks, Bradley P
2014-08-01
In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility of directly quantifying the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts to yield the most accurate model parameters.
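The recommended practice translates into a very short sketch: fit the log-linear reduction model using only plate counts above the limit of detection, with no substitution for negative plates. The counts and times below are hypothetical.

```python
# Fit log10 N(t) = log10 N0 - t / D using only counts above the LOD.
import numpy as np

time = np.array([0, 1, 2, 3, 4, 5, 6])              # e.g. minutes of treatment
counts = np.array([1e6, 2e5, 3e4, 5e3, 8e2, 0, 0])  # CFU/ml; 0 = below LOD

keep = counts > 0                                   # drop below-LOD observations
slope, intercept = np.polyfit(time[keep], np.log10(counts[keep]), 1)
print(f"D-value (time for a 1-log reduction): {-1 / slope:.2f} min")
```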
A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys.
Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao
2016-01-01
In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto 3D images of monkeys in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the 3D MCS-assisted estimation and manual estimation based on visual inspection when a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation errors of the positions of body parts (3-14 cm) and of head rotation (35-43°) between the 3D MCS-assisted and manual estimation were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, and there were no false positive or false negative detections of actions compared with manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses of the above experiments were repeated by a different user. The estimation errors of positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. The results were comparable to previous human clinical data. Furthermore, estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999). The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing pharmacological interventions.
Principal axes estimation using the vibration modes of physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2008-06-01
This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features. The frequency-based features used by the proposed technique are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method for both the orientation and scaling estimations.
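For contrast with the deformable-model approach, the sketch below shows the classical second-moment baseline for principal-axis estimation: the orientation is taken from the eigenvectors of a binary object's coordinate covariance. The blob is simulated, and this is the standard baseline, not the paper's frequency-based method.

```python
# Second-moment (covariance) estimate of a 2-D object's principal axis.
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical binary object: an elongated blob rotated by 30 degrees
pts = rng.normal(size=(2000, 2)) * [3.0, 1.0]
a = np.deg2rad(30)
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
pts = pts @ R.T

cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
evals, evecs = np.linalg.eigh(cov)
major = evecs[:, np.argmax(evals)]          # eigenvector of the largest eigenvalue
angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
print(f"Estimated principal-axis orientation: {angle:.1f} deg (true 30.0)")
```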
Structural estimation of a principal-agent model: moral hazard in medical insurance.
Vera-Hernández, Marcos
2003-01-01
Despite the importance of principal-agent models in the development of modern economic theory, there are few estimations of these models. I recover the estimates of a principal-agent model and obtain an approximation to the optimal contract. The results show that out-of-pocket payments follow a concave profile with respect to costs of treatment. I estimate the welfare loss due to moral hazard, taking into account income effects. I also propose a new measure of moral hazard based on the conditional correlation between contractible and noncontractible variables.
The design of nonlinear observers for wind turbine dynamic state and parameter estimation
NASA Astrophysics Data System (ADS)
Ritter, B.; Schild, A.; Feldt, M.; Konigorski, U.
2016-09-01
This contribution addresses the dynamic state and parameter estimation problem which arises with more advanced wind turbine controllers. These control devices need precise information about the system's current state to outperform conventional industrial controllers effectively. First, the necessity of a profound scientific treatment of nonlinear observers for wind turbine applications is highlighted. Second, the full estimation problem is introduced and the variety of nonlinear filters is discussed. Finally, a tailored observer architecture is proposed and estimation results of an illustrative application example from a complex simulation set-up are presented.
The effect of oral rehydration solution and recommended home fluids on diarrhoea mortality
Munos, Melinda K.; Walker, Christa L Fischer; Black, Robert E
2010-01-01
Background Most diarrhoeal deaths can be prevented through the prevention and treatment of dehydration. Oral rehydration solution (ORS) and recommended home fluids (RHFs) have been recommended since the 1970s and 1980s to prevent and treat diarrhoeal dehydration. We sought to estimate the effects of these interventions on diarrhoea mortality in children aged <5 years. Methods We conducted a systematic review to identify studies evaluating the efficacy and effectiveness of ORS and RHFs and abstracted study characteristics and outcome measures into standardized tables. We categorized the evidence by intervention and outcome, conducted meta-analyses for all outcomes with two or more data points and graded the quality of the evidence supporting each outcome. The CHERG Rules for Evidence Review were used to estimate the effectiveness of ORS and RHFs against diarrhoea mortality. Results We identified 205 papers for abstraction, of which 157 were included in the meta-analyses of ORS outcomes and 12 were included in the meta-analyses of RHF outcomes. We estimated that ORS may prevent 93% of diarrhoea deaths. Conclusions ORS is effective against diarrhoea mortality in home, community and facility settings; however, there is insufficient evidence to estimate the effectiveness of RHFs against diarrhoea mortality. PMID:20348131
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
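A minimal search-curve FAST can be sketched directly: each parameter is driven by an incommensurate frequency along a space-filling curve, and the first-order index is the spectral power at that frequency and its harmonics divided by the total variance. The test model, driver frequencies, and harmonic count below are ad hoc choices for illustration, not the paper's implementation.

```python
# Search-curve FAST first-order sensitivity indices for a simple test model.
import numpy as np

N = 10001                                   # odd number of samples along the curve
s = np.linspace(-np.pi, np.pi, N, endpoint=False)
omega = {0: 11, 1: 35}                      # interference-free driver frequencies
X = np.column_stack([0.5 + np.arcsin(np.sin(w * s)) / np.pi
                     for w in omega.values()])   # uniform marginals on [0, 1]

y = 2 * X[:, 0] + X[:, 1] + 0.5 * X[:, 0] * X[:, 1]   # test model with interaction

F = np.fft.rfft(y) / N
power = 2 * np.abs(F[1:]) ** 2              # spectral power at frequencies 1, 2, ...
total_var = power.sum()

M = 4                                       # number of harmonics to accumulate
for i, w in omega.items():
    partial = power[[p * w - 1 for p in range(1, M + 1)]].sum()
    print(f"First-order index of x{i + 1}: {partial / total_var:.3f}")
```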
Eckermann, Simon; Coory, Michael; Willan, Andrew R
2011-02-01
Economic analysis and assessment of net clinical benefit often requires estimation of absolute risk difference (ARD) for binary outcomes (e.g. survival, response, disease progression) given baseline epidemiological risk in a jurisdiction of interest and trial evidence of treatment effects. Typically, the assumption is made that relative treatment effects are constant across baseline risk, in which case relative risk (RR) or odds ratios (OR) could be applied to estimate ARD. The objective of this article is to establish whether such use of RR or OR allows consistent estimates of ARD. ARD is calculated from alternative framing of effects (e.g. mortality vs survival) applying standard methods for translating evidence with RR and OR. For RR, the RR is applied to baseline risk in the jurisdiction to estimate treatment risk; for OR, the baseline risk is converted to odds, the OR applied and the resulting treatment odds converted back to risk. ARD is shown to be consistently estimated with OR but changes with framing of effects using RR wherever there is a treatment effect and epidemiological risk differs from trial risk. Additionally, in indirect comparisons, ARD is shown to be consistently estimated with OR, while calculation with RR allows inconsistency, with alternative framing of effects in the direction, let alone the extent, of ARD. OR ensures consistent calculation of ARD in translating evidence from trial settings and across trials in direct and indirect comparisons, avoiding inconsistencies from RR with alternative outcome framing and associated biases. These findings are critical for consistently translating evidence to inform economic analysis and assessment of net clinical benefit, as translation of evidence is proposed precisely where the advantages of OR over RR arise.
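The consistency argument can be verified numerically. In the sketch below, a hypothetical trial effect is translated to a jurisdiction with a different baseline risk under both mortality and survival framing; the OR route returns the same ARD either way, while the RR route does not.

```python
# Framing-consistency check: ARD from OR vs RR when jurisdiction risk differs
# from trial risk. All risks are illustrative.
def risk_from_or(baseline_risk, odds_ratio):
    odds = baseline_risk / (1 - baseline_risk) * odds_ratio
    return odds / (1 + odds)

trial_control, trial_treated = 0.20, 0.11   # hypothetical trial mortality risks
base = 0.40                                 # jurisdiction baseline mortality

rr_mort = trial_treated / trial_control
rr_surv = (1 - trial_treated) / (1 - trial_control)
or_mort = (trial_treated / (1 - trial_treated)) / (trial_control / (1 - trial_control))

# RR: mortality framing vs survival framing disagree
ard_rr_mort = base * rr_mort - base
ard_rr_surv = (1 - (1 - base) * rr_surv) - base
# OR: both framings agree (the survival OR is exactly 1 / or_mort)
ard_or_mort = risk_from_or(base, or_mort) - base
ard_or_surv = (1 - risk_from_or(1 - base, 1 / or_mort)) - base

print(f"RR-based ARD: {ard_rr_mort:+.4f} (mortality) vs {ard_rr_surv:+.4f} (survival)")
print(f"OR-based ARD: {ard_or_mort:+.4f} (mortality) vs {ard_or_surv:+.4f} (survival)")
```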
Schwindt, Adam R; Winkelman, Dana L
2016-09-01
Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals, including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L(-1) and produced stochastic population growth rates (λS) below 1 at the lowest concentration, indicating potential for population decline. Declines in λS compared to controls were evident in treatments that were lethal to adult males despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λS was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
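The construction of a stochastic growth rate from random vital rates can be sketched generically: project a stage-structured population with vital rates redrawn each step and average the log growth increments. The three-stage matrix and rate distributions below are invented, not the fathead minnow estimates.

```python
# Stochastic growth rate lambda_S from a random stage-structured projection matrix.
import numpy as np

rng = np.random.default_rng(6)
steps = 5000
n = np.array([0.7, 0.2, 0.1])             # egg/juvenile/adult fractions (illustrative)
log_growth = 0.0

for _ in range(steps):
    f = rng.lognormal(np.log(60), 0.3)    # random adult fecundity
    s_e = 0.05                            # fixed egg-to-juvenile survival
    s_j = rng.beta(20, 80)                # random juvenile survival (mean 0.2)
    s_a = rng.beta(50, 50)                # random adult survival (mean 0.5)
    A = np.array([[0.0, 0.0, f],
                  [s_e, 0.0, 0.0],
                  [0.0, s_j, s_a]])
    n = A @ n
    growth = n.sum()
    log_growth += np.log(growth)
    n /= growth                           # renormalize to avoid overflow

print(f"Stochastic growth rate lambda_S ~ {np.exp(log_growth / steps):.3f}")
```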
Cost-Effectiveness of Dapagliflozin versus Acarbose as a Monotherapy in Type 2 Diabetes in China
Gu, Shuyan; Mu, Yiming; Zhai, Suodi; Zeng, Yuhang; Zhen, Xuemei; Dong, Hengjin
2016-01-01
Objective To estimate the long-term cost-effectiveness of dapagliflozin versus acarbose as monotherapy in treatment-naïve patients with type 2 diabetes mellitus (T2DM) in China. Methods The Cardiff Diabetes Model, an economic model designed to evaluate the cost-effectiveness of comparator therapies in diabetes, was used to simulate disease progression and estimate the long-term effect of treatments on patients. Systematic literature reviews, hospital surveys, meta-analysis and indirect treatment comparison were conducted to obtain model-required patient profiles, clinical data and costs. Health insurance costs (in 2015 ¥) were estimated over 40 years from a healthcare payer perspective. Univariate and probabilistic sensitivity analyses were performed. Results The model predicted that dapagliflozin had lower incidences of cardiovascular events, hypoglycemia and mortality events, was associated with a mean incremental benefit of 0.25 quality-adjusted life-years (QALYs) and with a lower cost of ¥8,439 compared with acarbose. This resulted in a cost saving of ¥33,786 per QALY gained with dapagliflozin. Sensitivity analyses determined that the results are robust. Conclusion Dapagliflozin is dominant compared with acarbose as monotherapy for Chinese T2DM patients, with a small QALY gain and lower costs. Dapagliflozin offers a well-tolerated and cost-effective alternative medication for treatment-naïve patients in China, and may have a direct impact in reducing the disease burden of T2DM. PMID:27806087
Kalia, Sumeet; Klar, Neil; Donner, Allan
2016-12-30
Cluster randomized trials (CRTs) involve the random assignment of intact social units rather than independent subjects to intervention groups. Time-to-event outcomes often are endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time-to-event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred to estimate the ICC under minimum frequency of administrative censoring. To our knowledge, the existing literature provides no practical guidance on the estimation of ICC when a substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time-to-event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.
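One of the censoring-indicator approaches the literature suggests can be sketched with the standard one-way ANOVA estimator of the ICC; the clustered binary indicators below are simulated from a latent threshold model, not trial data.

```python
# One-way ANOVA ICC estimator applied to binary censoring indicators by cluster.
import numpy as np

rng = np.random.default_rng(7)
k, m = 50, 20                                  # clusters, subjects per cluster
cluster_effect = rng.normal(0, 0.5, k)
# Event indicators correlated within cluster via a latent threshold model
latent = cluster_effect[:, None] + rng.normal(0, 1, (k, m))
censored = (latent > 0.3).astype(float)        # 1 = administratively censored

grand = censored.mean()
cluster_means = censored.mean(axis=1)
msb = m * np.sum((cluster_means - grand) ** 2) / (k - 1)
msw = np.sum((censored - cluster_means[:, None]) ** 2) / (k * (m - 1))
icc = (msb - msw) / (msb + (m - 1) * msw)
print(f"ANOVA ICC estimate from censoring indicators: {icc:.3f}")
```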
The particulate-related health benefits of reducing power plant emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.
The report estimates the adverse human health effects due to exposure to particulate matter from power plants. Power plants are significant emitters of sulfur dioxide and nitrogen oxides. In many parts of the U.S., especially the Midwest, power plants are the largest contributors. These gases are harmful themselves, and they contribute to the formation of acid rain and particulate matter. Particulate matter reduces visibility, often producing a milky haze that blankets wide regions, and it is a serious public health problem. Over the past decade and more, numerous studies have linked particulate matter to a wide range of adverse health effects in people of all ages. Epidemiologists have consistently linked particulate matter with effects ranging from premature death, hospital admissions and asthma attacks to chronic bronchitis. This study documents the health impacts from power plant air pollution emissions. Using the best available emissions and air quality modeling programs, the study forecasts ambient air quality for a business-as-usual baseline scenario for 2007, assuming full implementation of the Acid Rain program and the U.S. Environmental Protection Agency's (EPA) Summer Smog rule (the 1999 NOx SIP Call). The study then estimates the attributable health impacts from all power plant emissions. Finally, the study estimates air quality for a specific policy alternative: reducing total power plant emissions of SO2 and NOx 75 percent from the levels emitted in 1997. The difference between this '75 percent reduction scenario' and the baseline provides an estimate of the health effects that would be avoided by this reduction in power plant emissions. In addition to the policy scenario, the work involved performing sensitivity analyses to examine alternative emission reductions and forecast ambient air quality using a second air quality model. EPA uses both air quality models extensively, and both suggest that power plants make a large contribution to ambient particulate matter levels in the Eastern U.S. To put the power plant results in context, air pollution from all on-road and off-road diesel engine emissions was also examined. The results suggest that both power plants and diesel engines make a large contribution to ambient particulate matter levels and the associated health effects. Chapter 2 describes the development of the emissions inventory. Chapter 3 describes the methods used to estimate changes in particulate matter concentrations. Chapter 4 describes general issues arising in estimating and valuing changes in adverse health effects associated with changes in particulate matter. Chapter 5 describes in some detail the methods used for estimating and valuing adverse health effects, and in Chapter 6, the results of the various analyses are presented. The study includes 6 appendices. Appendix A provides results of this analysis for all metropolitan areas in the U.S. and a list of the counties in each metropolitan area. Appendices B, C and D present a detailed examination of how the pollution emission estimates were derived and then translated into forecasts of ambient particulate matter levels.
Rath, J J; Veluvolu, K C; Defoort, M
2014-01-01
The estimation of the road excitation profile is important for evaluating vehicle stability and vehicle suspension performance in autonomous vehicle control systems. In this work, the nonlinear dynamics of the active automotive suspension system excited by the unknown road profile are considered for modeling. To address the estimation of the road profile, we develop an adaptive supertwisting observer for state and unknown road profile estimation. Under Lipschitz conditions on the nonlinear functions, the convergence of the estimation error is proven. Simulation results with a Ford Fiesta MK2 demonstrate the effectiveness of the proposed observer for state and unknown input estimation for a nonlinear active suspension system.
Combining QMRA and Epidemiology to Estimate Campylobacteriosis Incidence.
Evers, Eric G; Bouwknegt, Martijn
2016-10-01
The disease burden of pathogens as estimated by QMRA (quantitative microbial risk assessment) and EA (epidemiological analysis) often differs considerably. This is an unsatisfactory situation for policymakers and scientists. We explored methods to obtain a unified estimate using campylobacteriosis in the Netherlands as an example, where previous work resulted in estimates of 4.9 million (QMRA) and 90,600 (EA) cases per year. Using the maximum likelihood approach and considering EA the gold standard, the QMRA model could produce the original EA estimate by adjusting mainly the dose-infection relationship. Considering QMRA the gold standard, the EA model could produce the original QMRA estimate by adjusting mainly the probability that a gastroenteritis case is caused by Campylobacter. A joint analysis of QMRA and EA data and models assuming identical outcomes, using a frequentist or Bayesian approach (using vague priors), resulted in estimates of 102,000 or 123,000 campylobacteriosis cases per year, respectively. These were close to the original EA estimate, and this will be related to the dissimilarity in data availability. The Bayesian approach further showed that attenuating the condition of equal outcomes immediately resulted in very different estimates of the number of campylobacteriosis cases per year and that using more informative priors had little effect on the results. In conclusion, EA was dominant in estimating the burden of campylobacteriosis in the Netherlands. However, it must be noted that only statistical uncertainties were taken into account here. Taking all, usually difficult to quantify, uncertainties into account might lead to a different conclusion. © 2016 Society for Risk Analysis.
Increasing precision of turbidity-based suspended sediment concentration and load estimates.
Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E
2010-01-01
Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC estimates, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically at 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance and hydrologic variables that explained variability in those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
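The multivariate estimation idea reduces to an ordinary regression of log SSC on log turbidity plus continuously monitored hydrologic variables. The sketch below uses simulated stage and rainfall-proxy terms; the study's actual variable choices came from the Roanoke River data.

```python
# Multivariate turbidity-based SSC model: OLS on simulated hydrologic predictors.
import numpy as np

rng = np.random.default_rng(8)
n = 300
log_turb = rng.normal(2.0, 0.8, n)
stage = rng.normal(1.5, 0.4, n)
rain = rng.exponential(1.0, n)
log_ssc = 0.2 + 0.9 * log_turb + 0.5 * stage + 0.1 * rain + rng.normal(0, 0.15, n)

# Ordinary least squares with an intercept
Xd = np.column_stack([np.ones(n), log_turb, stage, rain])
beta, res, *_ = np.linalg.lstsq(Xd, log_ssc, rcond=None)
print("Coefficients (intercept, log turbidity, stage, rain):", np.round(beta, 2))
print(f"Residual SD of log SSC: {np.sqrt(res[0] / n):.3f}")
```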
NASA Technical Reports Server (NTRS)
Bamber, M J; House, R O
1937-01-01
Report presents the results of tests of a 1/10-scale model of the XN2Y-1 airplane tested in the NACA 5-foot vertical wind tunnel in which the six components of forces and moments were measured. The model was tested in 17 attitudes in which the full-scale airplane had been observed to spin, in order to determine the effects of scale, tunnel, and interference. In addition, a series of tests was made to cover the range of angles of attack, angles of sideslip, rates of rotation, and control setting likely to be encountered by a spinning airplane. The data were used to estimate the probable attitudes in steady spins of an airplane in flight and of a model in the free-spinning tunnel. The estimated attitudes of steady spin were compared with attitudes measured in flight and in the spinning tunnel. The results indicate that corrections for certain scale and tunnel effects are necessary to estimate full-scale spinning attitudes from model results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
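A minimal version of such an EM algorithm can be sketched for a simplified setting: known intensity shape and background, a known saturation ceiling, and a single amplitude to estimate. The E-step replaces censored counts with their conditional expectations under the current intensity; the M-step solves the one-dimensional score equation. The shape, background, and ceiling values are invented, and this is an illustration of the idea, not the authors' exact algorithm.

```python
# EM for the amplitude A of a Poisson intensity A*s_t + b_t under right-censoring at c.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import poisson

rng = np.random.default_rng(9)
T = 200
s = np.exp(-0.5 * ((np.arange(T) - 100) / 15.0) ** 2)  # known transient shape
b = np.full(T, 2.0)                                    # known background rate
A_true, c = 30.0, 25                                   # true amplitude, saturation count

n_obs = np.minimum(rng.poisson(A_true * s + b), c)     # right-censored observations
cens = n_obs >= c

A = 5.0                                                # initial amplitude guess
for _ in range(100):
    lam = A * s + b
    # E-step: for censored bins, E[N | N >= c] = lam * P(N >= c-1) / P(N >= c)
    e_n = n_obs.astype(float)
    e_n[cens] = lam[cens] * poisson.sf(c - 2, lam[cens]) / poisson.sf(c - 1, lam[cens])
    # M-step: solve the complete-data Poisson score equation for A
    score = lambda a: np.sum(s * (e_n / (a * s + b) - 1.0))
    A = brentq(score, 1e-8, 1e6)

print(f"EM amplitude estimate: {A:.1f} (true {A_true})")
```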
A robust measure of HIV-1 population turnover within chronically infected individuals.
Achaz, G; Palmer, S; Kearney, M; Maldarelli, F; Mellors, J W; Coffin, J M; Wakeley, J
2004-10-01
A simple nonparametric test for population structure was applied to temporally spaced samples of HIV-1 sequences from the gag-pol region within two chronically infected individuals. The results show that temporal structure can be detected for samples separated by about 22 months or more. The performance of the method, which was originally proposed to detect geographic structure, was tested for temporally spaced samples using neutral coalescent simulations. Simulations showed that the method is robust to variation in sample sizes and mutation rates, and to the presence/absence of recombination, and that the power to detect temporal structure is high. By comparing levels of temporal structure in simulations to the levels observed in real data, we estimate the effective intra-individual population size of HIV-1 to be between 10^3 and 10^4 viruses, which is in agreement with some previous estimates. Using this estimate and a simple measure of sequence diversity, we estimate an effective neutral mutation rate of about 5 × 10^-6 per site per generation in the gag-pol region. The definition and interpretation of estimates of such "effective" population parameters are discussed.
Balaev, Mikhail
2014-07-01
The author examines how time-delayed effects of economic development, education, and gender equality influence political democracy. A literature review shows inadequate understanding of lagged effects, which raises methodological and theoretical issues with current quantitative studies of democracy. Using country-years as the unit of analysis, the author estimates a series of OLS PCSE models for each predictor with a systematic analysis of the distributions of the lagged effects. A second set of multiple OLS PCSE regressions is then estimated including all three independent variables. The results show that economic development, education, and gender equality have three unique trajectories of time-delayed effects: economic development has long-term effects, education produces continuous effects regardless of the timing, and gender equality has the most prominent immediate and short-term effects. The results call for a reassessment of model specifications and theoretical setups in the quantitative studies of democracy. Copyright © 2014 Elsevier Inc. All rights reserved.
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because the re-estimation procedure in general requires an overestimate of the variance, and thus of the sample size, to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
Kaspar, Kai
2013-01-01
People's perception of diseases and pharmaceutical drugs is a critical issue in health research. Beliefs about disease severity influence compliance with recommendations for convalescence and the motivation to perform proper health behavior. The estimated effectiveness of drugs and severity of side effects influence medication adherence and contribute to placebo effects. The present paper closes the gap between these effects and the concept of embodied cognition from a metaphor-enriched perspective. In five studies, we demonstrate that the bodily sensation of weight influences our evaluations of diseases and drugs. The experience of heaviness enhanced the estimated seriousness of diseases and the estimated effectiveness of drugs. The perceived seriousness of drug side effects was also affected by weight, but only when drug effectiveness was not attended to. Moreover, the incidental sensation of weight shows a novel effect when evaluating weight-related drugs. The results are in line with the idea of embodied metaphors and reveal important boundary conditions that contribute to a better understanding of the underlying mechanisms. PMID:24244302
Estimating the cost-effectiveness of vaccination against herpes zoster in England and Wales.
van Hoek, A J; Gay, N; Melegaro, A; Opstelten, W; Edmunds, W J
2009-02-25
A live-attenuated vaccine against herpes zoster (HZ) has been approved for use, on the basis of a large-scale clinical trial that suggests that the vaccine is safe and efficacious. This study uses a Markov cohort model to estimate whether routine vaccination of the elderly (60+) would be cost-effective when compared with other uses of health care resources. Vaccine efficacy parameters are estimated by fitting a model to clinical trial data. Estimates of QALY losses due to acute HZ and post-herpetic neuralgia (PHN) were derived by fitting models to data on the duration of pain by severity and the QoL detriment associated with different severity categories, as reported in a number of different studies. Other parameters (such as cost and incidence estimates) were based on the literature or UK data sources. The results suggest that vaccination of 65 year olds is likely to be cost-effective (base-case ICER = £20,400 per QALY gained). If the vaccine does offer additional protection against either the severity of disease or the likelihood of developing PHN (as suggested by the clinical trial), then vaccination of all elderly age groups is highly likely to be deemed cost-effective. Vaccination at either 65 or 70 years (depending on assumptions about the vaccine's action) is most cost-effective. Including a booster dose at a later age is unlikely to be cost-effective.
Maas, Iris L; Nolte, Sandra; Walter, Otto B; Berger, Thomas; Hautzinger, Martin; Hohagen, Fritz; Lutz, Wolfgang; Meyer, Björn; Schröder, Johanna; Späth, Christina; Klein, Jan Philipp; Moritz, Steffen; Rose, Matthias
2017-02-01
To compare treatment effect estimates obtained from a regression discontinuity (RD) design with results from an actual randomized controlled trial (RCT). Data from an RCT (EVIDENT), which studied the effect of an Internet intervention on depressive symptoms measured with the Patient Health Questionnaire (PHQ-9), were used to perform an RD analysis in which treatment allocation was determined by a cutoff value at baseline (PHQ-9 = 10). A linear regression model was fitted to the data, selecting participants above the cutoff who had received the intervention (n = 317) and control participants below the cutoff (n = 187). The outcome was the PHQ-9 sum score 12 weeks after baseline. Robustness of the effect estimate was studied, and the estimate was compared with the RCT treatment effect. The final regression model showed a regression coefficient of -2.29 [95% confidence interval (CI): -3.72 to -0.85] compared with a treatment effect found in the RCT of -1.57 (95% CI: -2.07 to -1.07). Although the estimates obtained from the two designs are not equal, their confidence intervals overlap, suggesting that an RD design can be a valid alternative to RCTs. This finding is particularly important for situations where an RCT may not be feasible or ethical, as is often the case in clinical research settings. Copyright © 2016 Elsevier Inc. All rights reserved.
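For readers unfamiliar with the design, a minimal sharp-RD fit of this kind takes only a few lines; the common-slope specification and argument names below are illustrative assumptions, not the study's exact model.

```python
import numpy as np
import statsmodels.api as sm

def rd_effect(baseline_phq9, outcome_phq9, cutoff=10.0):
    """Sharp regression discontinuity: regress the 12-week score on a treatment
    indicator (baseline at/above cutoff) plus the centered baseline score."""
    treated = (baseline_phq9 >= cutoff).astype(float)
    centered = baseline_phq9 - cutoff
    X = sm.add_constant(np.column_stack([treated, centered]))
    fit = sm.OLS(outcome_phq9, X).fit()
    return fit.params[1], fit.conf_int()[1]  # effect estimate, 95% CI
```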
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
Pham, Quang Duy; Wilson, David P.; Kerr, Cliff C.; Shattock, Andrew J.; Do, Hoa Mai; Duong, Anh Thuy; Nguyen, Long Thanh; Zhang, Lei
2015-01-01
Introduction Vietnam has been largely reliant on international support in its HIV response. Over 2006-2010, a total of US$480 million was invested in its HIV programmes, more than 70% of which came from international sources. This study investigates the potential epidemiological impacts of these programmes and their cost-effectiveness. Methods We conducted a data synthesis of HIV programming, spending, epidemiological, and clinical outcomes. Counterfactual scenarios were defined based on assumed programme coverage and behaviours had the programmes not been implemented. An epidemiological model, calibrated to reflect the actual epidemiological trends, was used to estimate plausible ranges of programme impacts. The model was then used to estimate the costs per averted infection, death, and disability adjusted life-year (DALY). Results Based on observed prevalence reductions amongst most population groups, and plausible counterfactuals, modelling suggested that antiretroviral therapy (ART) and prevention programmes over 2006-2010 have averted an estimated 50,600 [95% uncertainty bound: 36,300–68,900] new infections and 42,600 [36,100–54,100] deaths, resulting in 401,600 [312,200–496,300] fewer DALYs across all population groups. HIV programmes in Vietnam have cost an estimated US$1,972 [1,447–2,747], US$2,344 [1,843–2,765], and US$248 [201–319] for each averted infection, death, and DALY, respectively. Conclusions Our evaluation suggests that HIV programmes in Vietnam have most likely had benefits that are cost-effective. ART and direct HIV prevention were the most cost-effective interventions in reducing HIV disease burden. PMID:26196290
Use of the Magnetic Field for Improving Gyroscopes’ Biases Estimation
Munoz Diaz, Estefania; de Ponte Müller, Fabian; García Domínguez, Juan Jesús
2017-01-01
An accurate orientation is crucial to a satisfactory position in pedestrian navigation. The orientation estimation, however, is greatly affected by errors like the biases of gyroscopes. In order to minimize the error in the orientation, the biases of gyroscopes must be estimated and subtracted. In the state of the art it has been proposed, but not proved, that the estimation of the biases can be accomplished using magnetic field measurements. The objective of this work is to evaluate the effectiveness of using magnetic field measurements to estimate the biases of medium-cost micro-electromechanical sensor (MEMS) gyroscopes. We carry out the evaluation with experiments that cover both quasi-error-free and medium-cost MEMS turn-rate and magnetic measurements. The impact of different homogeneous magnetic field distributions and magnetically perturbed environments is analyzed. Additionally, the effect of successful bias subtraction on the orientation and the estimated trajectory is detailed. Our results show that the use of magnetic field measurements is beneficial for correct bias estimation. Further, we show that different magnetic field distributions affect the bias estimation process differently. Moreover, the biases are likewise correctly estimated under perturbed magnetic fields. However, for indoor and urban scenarios the bias estimation process is very slow. PMID:28398232
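A toy version of the underlying idea, assuming a single (z) axis, a homogeneous field, and a constant bias; this is a deliberately simplified sketch, not the estimator evaluated in the paper.

```python
import numpy as np

def gyro_bias_from_magnetometer(gyro_z, mag_heading, dt):
    """Estimate a constant z-gyro bias as the mean difference between the
    gyro turn rate and the turn rate implied by magnetometer heading."""
    mag_rate = np.gradient(np.unwrap(mag_heading), dt)  # rad/s from heading
    return float(np.mean(gyro_z - mag_rate))
```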
Abiko, Hironobu; Furuse, Mitsuya; Takano, Tsuguo
2016-01-01
Objectives: In the use of activated carbon beds as adsorbents for various types of organic vapor in respirator gas filters, water adsorption of the bed and test gas humidity are expected to alter the accuracy in the estimation of breakthrough data. There is increasing interest in the effects of moisture on estimation methods, and this study has investigated the effects with actual breakthrough data. Methods: We prepared several activated carbon beds preconditioned by equilibration with moisture at different relative humidities (RH=40%-70%) and a constant temperature of 20°C. Then, we measured breakthrough curves in the early region of breakthrough time for 10 types of organic vapor, and investigated the effects of moisture on estimation using the Wheeler-Jonas equation, the simulation software NIOSH MultiVapor™ 2.2.3, and RBT (Relative Breakthrough Time) proposed by Tanaka et al. Results: The Wheeler-Jonas equation showed good accordance with breakthrough curves at all RH in this study. However, the correlation coefficient decreased gradually with increasing RH regardless of type of organic vapor. Estimation of breakthrough time by MultiVapor showed good accordance with experimental data at RH=50%. In contrast, it showed discordance at high RH (>50%). RBTs reported previously were consistent with experimental data at RH=50%. On the other hand, the values of RBT changed markedly with increasing RH. Conclusions: The results of each estimation method showed good accordance with experimental data under comparatively dry conditions (RH≤50%). However, there were discrepancies under high humidified conditions, and further studies are warranted. PMID:27725483
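For reference, the Wheeler-Jonas equation used in such breakthrough-time estimation is commonly written as

$$ t_b = \frac{W_e\,W}{Q\,C_0} \;-\; \frac{W_e\,\rho_B}{k_v\,C_0}\,\ln\!\left(\frac{C_0 - C_x}{C_x}\right), $$

where $t_b$ is the breakthrough time, $W_e$ the adsorption capacity, $W$ the carbon mass, $Q$ the volumetric flow rate, $C_0$ and $C_x$ the inlet and breakthrough concentrations, $\rho_B$ the bed bulk density, and $k_v$ the rate coefficient; humidity enters in practice mainly through its effect on $W_e$ and $k_v$.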
Green, Kerry M.; Stuart, Elizabeth A.
2014-01-01
Objective This study provides guidance on how propensity score methods can be combined with moderation analyses (i.e., effect modification) to examine subgroup differences in potential causal effects in non-experimental studies. As a motivating example, we focus on how depression may affect subsequent substance use differently for men and women. Method Using data from a longitudinal community cohort study (N=952) of urban African Americans with assessments in childhood, adolescence, young adulthood, and midlife, we estimate the influence of depression by young adulthood on substance use outcomes in midlife, and whether that influence varies by gender. We illustrate and compare five different techniques for estimating subgroup effects using propensity score methods, including separate propensity score models and matching for men and women; a joint propensity score model for men and women with matching separately and together by gender; and a joint male/female propensity score model that includes theoretically important gender interactions, with matching separately and together by gender. Results Analyses showed that estimating separate models for men and women yielded the best balance and, therefore, is a preferred technique when subgroup analyses are of interest, at least in these data. Results also showed substance use consequences of depression but no significant gender differences. Conclusions It is critical to prespecify subgroup effects before the estimation of propensity scores and to check balance within subgroups regardless of the type of propensity score model used. Results also suggest that depression may affect multiple substance use outcomes in midlife for both men and women relatively equally. PMID:24731233
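A minimal sketch of the best-performing strategy (separate propensity models and matching within each subgroup); the logistic model, 1:1 nearest-neighbour matching with replacement, and variable names are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_within_subgroups(X, treated, subgroup):
    """Fit a propensity model and do 1:1 nearest-neighbour matching (with
    replacement) separately within each subgroup; returns matched index pairs."""
    pairs = []
    for g in np.unique(subgroup):
        m = subgroup == g
        ps = LogisticRegression(max_iter=1000).fit(X[m], treated[m]).predict_proba(X[m])[:, 1]
        idx = np.where(m)[0]
        t_idx, c_idx = idx[treated[m] == 1], idx[treated[m] == 0]
        nn = NearestNeighbors(n_neighbors=1).fit(ps[treated[m] == 0].reshape(-1, 1))
        _, j = nn.kneighbors(ps[treated[m] == 1].reshape(-1, 1))
        pairs.extend(zip(t_idx, c_idx[j.ravel()]))
    return pairs
```

Covariate balance would then be checked within each subgroup before any outcome model is fit.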
On the estimation and detection of the Rees-Sciama effect
NASA Astrophysics Data System (ADS)
Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.
2017-02-01
Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code, HYDRA, and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42/h megaparsecs is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant for the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (i) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25, (ii) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin, and (iii) the RS anisotropy cannot be detected either directly (in temperature CMB maps) or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42/h megaparsecs, where the density contrast is not strongly non-linear, high-accuracy N-body simulations appear unnecessary. Simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.
Linden, Ariel; Yarnold, Paul R
2016-12-01
Program evaluations often utilize various matching approaches to emulate the randomization process for group assignment in experimental studies. Typically, the matching strategy is implemented, and then covariate balance is assessed before estimating treatment effects. This paper introduces a novel analytic framework utilizing a machine learning algorithm called optimal discriminant analysis (ODA) for assessing covariate balance and estimating treatment effects, once the matching strategy has been implemented. This framework holds several key advantages over the conventional approach: application to any variable metric and number of groups; insensitivity to skewed data or outliers; and use of accuracy measures applicable to all prognostic analyses. Moreover, ODA accepts analytic weights, thereby extending the methodology to any study design where weights are used for covariate adjustment or more precise (differential) outcome measurement. One-to-one matching on the propensity score was used as the matching strategy. Covariate balance was assessed using standardized difference in means (conventional approach) and measures of classification accuracy (ODA). Treatment effects were estimated using ordinary least squares regression and ODA. Using empirical data, ODA produced results highly consistent with those obtained via the conventional methodology for assessing covariate balance and estimating treatment effects. When ODA is combined with matching techniques within a treatment effects framework, the results are consistent with conventional approaches. However, given that it provides additional dimensions and robustness to the analysis versus what can currently be achieved using conventional approaches, ODA offers an appealing alternative. © 2016 John Wiley & Sons, Ltd.
Marseille, Elliot; Giganti, Mark J.; Mwango, Albert; Chisembele-Taylor, Angela; Mulenga, Lloyd; Over, Mead; Kahn, James G.; Stringer, Jeffrey S. A.
2012-01-01
Background We estimated the unit costs and cost-effectiveness of a government ART program in 45 sites in Zambia supported by the Centre for Infectious Disease Research in Zambia (CIDRZ). Methods We estimated per person-year costs at the facility level, and support costs incurred above the facility level, and used multiple regression to estimate variation in these costs. To estimate ART effectiveness, we compared mortality in this Zambian population to that of a cohort of rural Ugandan HIV patients receiving co-trimoxazole (CTX) prophylaxis. We used micro-costing techniques to estimate incremental unit costs, and calculated cost-effectiveness ratios with a computer model which projected results to 10 years. Results The program cost $69.7 million for 125,436 person-years of ART, or $556 per ART-year. Compared to CTX prophylaxis alone, the program averted 33.3 deaths or 244.5 disability-adjusted life-years (DALYs) per 100 person-years of ART. In the base-case analysis, the net cost per DALY averted was $833 compared to CTX alone. More than two-thirds of the variation in average incremental total and on-site cost per patient-year of treatment is explained by eight determinants, including the complexity of the patient case load, the degree of adherence among the patients, and institutional characteristics including experience, scale, scope, setting, and sector. Conclusions and Significance The 45 sites exhibited substantial variation in unit costs and cost-effectiveness and are in the mid-range of cost-effectiveness when compared to other ART programs studied in southern Africa. Early treatment initiation, large scale, and hospital setting are associated with statistically significantly lower costs, while other factors (rural location, private sector) are associated with shifting costs from on- to off-site. This study shows that ART programs can be significantly less costly or more cost-effective when they exploit economies of scale and scope, and initiate patients at higher CD4 counts. PMID:23284843
2012-01-01
Background The Generalized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with autoregressive terms (GAMAR) is introduced to fill this gap. Methods Parameters in GAMAR are estimated by maximum partial likelihood using a modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1. Results In the simulation studies, the bias of the mean estimates from GAM and GAMAR is similar, but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to those from GAMAR, the estimation procedure of GAMM is much slower than that of GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects differ between GAM and GAMAR. Conclusions GAMAR incorporates both explanatory variables and AR terms, so it can quantify the nonlinear impact of environmental factors on health outcomes as well as the serial correlation between observations. It can be a useful tool in environmental epidemiological studies. PMID:23110601
The competition between thermal contraction and differentiation in the stress history of the moon
NASA Astrophysics Data System (ADS)
Kirk, Randolph L.; Stevenson, David J.
1989-09-01
The stress history of the moon is discussed, taking into consideration the effects of thermal contraction and differentiation. The amount of expansion caused by extracting basalt from undifferentiated lunar material is estimated taking account of the uncertainty in the knowledge of the appropriate compositions, and the resulting estimate of the expansion is used to compare the relative importance of the thermal and differentiation effects in the moon's volumetric history. The results of calculations show that differentiation is likely to be of major importance and, thus, thermal expansion is not the sole possible contributor to evolutionary changes in the lunar radius.
NASA Technical Reports Server (NTRS)
Carnes, J. G.; Baird, J. E. (Principal Investigator)
1980-01-01
The classification procedure utilized in making crop proportion estimates for corn and soybeans using remotely sensed data was evaluated. The procedure was derived during the transition year of the Large Area Crop Inventory Experiment. Analysis of variance techniques were applied to classifications performed by 3 groups of analysts who processed 25 segments selected from 4 agrophysical units (APU's). Group and APU effects were assessed to determine factors which affected the quality of the classifications. The classification results were studied to determine the effectiveness of the procedure in producing corn and soybeans proportion estimates.
Comparison of different hydrological similarity measures to estimate flow quantiles
NASA Astrophysics Data System (ADS)
Rianna, M.; Ridolfi, E.; Napolitano, F.
2017-07-01
This paper aims to evaluate the influence of hydrological similarity measures on the definition of homogeneous regions. To this end, several attribute sets have been analyzed in the context of the Region of Influence (ROI) procedure. Several combinations of geomorphological, climatological, and geographical characteristics are also used to cluster potentially homogeneous regions. To verify the goodness of the resulting pooled sites, homogeneity tests are carried out. Through a Monte Carlo simulation and a jack-knife procedure, flow quantiles are estimated for the regions effectively resulting as homogeneous. The analysis is performed in both the so-called gauged and ungauged scenarios to analyze the effect of hydrological similarity measures on flow quantile estimation.
Long-term health and medical cost impact of smoking prevention in adolescence.
Wang, Li Yan; Michael, Shannon L
2015-02-01
To estimate smoking progression probabilities from adolescence to young adulthood and to estimate the long-term health and medical cost impacts of preventing smoking in today's adolescents. Using data from the National Longitudinal Study of Adolescent Health (Add Health), we first estimated smoking progression probabilities from adolescence to young adulthood. Then, using the predicted probabilities, we estimated the number of adolescents who were prevented from becoming adult daily smokers as a result of a hypothetical 1 percentage point reduction in the prevalence of ever smoking in today's adolescents. We further estimated lifetime medical costs saved and quality-adjusted life years (QALYs) gained as a result of preventing adolescents from becoming adult daily smokers. All costs were in 2010 dollars. Compared with never smokers, those who had tried smoking at baseline had higher probabilities of becoming current or former daily smokers at follow-up, regardless of baseline grade or sex. A hypothetical 1 percentage point reduction in the prevalence of ever smoking among the 24.5 million students in 7th-12th grades today could prevent 35,962 individuals from becoming former daily smokers and 44,318 individuals from becoming current daily smokers at ages 24-32 years. As a result, lifetime medical care costs are estimated to decrease by $1.2 billion and lifetime QALYs are estimated to increase by 98,590. Effective smoking prevention programs for adolescents go beyond reducing smoking prevalence in adolescence; they also reduce daily smoking in young adulthood, increase QALYs, and reduce medical costs substantially in later life. This finding indicates the importance of continued investment in effective youth smoking prevention programs. Published by Elsevier Inc.
Sugiura, Yoshito; Hatanaka, Yasuhiko; Arai, Tomoaki; Sakurai, Hiroaki; Kanada, Yoshikiyo
2016-04-01
We aimed to investigate whether a linear regression formula based on the relationship between joint torque and angular velocity measured using a high-speed video camera and image measurement software is effective for estimating 1 repetition maximum (1RM) and isometric peak torque in knee extension. Subjects comprised 20 healthy men (mean ± SD; age, 27.4 ± 4.9 years; height, 170.3 ± 4.4 cm; and body weight, 66.1 ± 10.9 kg). The exercise load ranged from 40% to 150% 1RM. Peak angular velocity (PAV) and peak torque were used to estimate 1RM and isometric peak torque. To elucidate the relationship between force and velocity in knee extension, the relationship between the relative proportion of 1RM (% 1RM) and PAV was examined using simple regression analysis. The concordance rate between the estimated value and actual measurement of 1RM and isometric peak torque was examined using intraclass correlation coefficients (ICCs). Reliability of the regression line of PAV and % 1RM was 0.95. The concordance rate between the actual measurement and estimated value of 1RM resulted in an ICC(2,1) of 0.93 and that of isometric peak torque had an ICC(2,1) of 0.87 and 0.86 for 6 and 3 levels of load, respectively. Our method for estimating 1RM was effective for decreasing the measurement time and reducing patients' burden. Additionally, isometric peak torque can be estimated using 3 levels of load, as we obtained the same results as those reported previously. We plan to expand the range of subjects and examine the generalizability of our results.
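The load-velocity extrapolation behind such estimates can be sketched as follows; the simple linear form and the velocity threshold at 1RM are placeholders for illustration, not the study's fitted values.

```python
import numpy as np

def estimate_1rm(loads_kg, peak_angular_velocities, v_at_1rm=0.3):
    """Fit the linear load-velocity relationship from submaximal sets and
    extrapolate to an assumed peak angular velocity at 100% 1RM."""
    slope, intercept = np.polyfit(peak_angular_velocities, loads_kg, 1)
    return slope * v_at_1rm + intercept
```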
Assessing effects of cholera vaccination in the presence of interference.
Perez-Heydrich, Carolina; Hudgens, Michael G; Halloran, M Elizabeth; Clemens, John D; Ali, Mohammad; Emch, Michael E
2014-09-01
Interference occurs when the treatment of one person affects the outcome of another. For example, in infectious diseases, whether one individual is vaccinated may affect whether another individual becomes infected or develops disease. Quantifying such indirect (or spillover) effects of vaccination could have important public health or policy implications. In this article we use recently developed inverse-probability weighted (IPW) estimators of treatment effects in the presence of interference to analyze an individually-randomized, placebo-controlled trial of cholera vaccination that targeted 121,982 individuals in Matlab, Bangladesh. Because these IPW estimators have not been employed previously, a simulation study was also conducted to assess the empirical behavior of the estimators in settings similar to the cholera vaccine trial. Simulation study results demonstrate the IPW estimators can yield unbiased estimates of the direct, indirect, total, and overall effects of vaccination when there is interference provided the untestable no unmeasured confounders assumption holds and the group-level propensity score model is correctly specified. Application of the IPW estimators to the cholera vaccine trial indicates the presence of interference. For example, the IPW estimates suggest on average 5.29 fewer cases of cholera per 1000 person-years (95% confidence interval 2.61, 7.96) will occur among unvaccinated individuals within neighborhoods with 60% vaccine coverage compared to neighborhoods with 32% coverage. Our analysis also demonstrates how not accounting for interference can render misleading conclusions about the public health utility of vaccination. © 2014, The International Biometric Society.
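To convey the structure of these estimators, here is a heavily simplified sketch of an IPW group-average potential outcome under counterfactual coverage alpha, using an independent-Bernoulli working propensity model in place of the paper's group-level model; it is illustrative only.

```python
import numpy as np

def ipw_mean_untreated(A, Y, p_hat, alpha):
    """IPW estimate, for one group, of the average outcome among the untreated
    had treatment been allocated independently with probability alpha.
    A: 0/1 treatments, Y: outcomes, p_hat: estimated propensities."""
    n = len(A)
    f_obs = np.prod(np.where(A == 1, p_hat, 1 - p_hat))  # prob. of observed vector
    total = 0.0
    for j in range(n):
        if A[j] == 0:
            s = A.sum() - A[j]                           # treated among the others
            pi_alpha = alpha ** s * (1 - alpha) ** (n - 1 - s)
            total += Y[j] * pi_alpha / f_obs
    return total / n
```

Indirect effects are then contrasts of such averages across coverage levels (e.g., 60% versus 32% coverage, as in the trial analysis above).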
Total Land Water Storage Change over 2003 - 2013 Estimated from a Global Mass Budget Approach
NASA Technical Reports Server (NTRS)
Dieng, H. B.; Champollion, N.; Cazenave, A.; Wada, Y.; Schrama, E.; Meyssignac, B.
2015-01-01
We estimate the total land water storage (LWS) change between 2003 and 2013 using a global water mass budget approach. To do so, we compare the ocean mass change (estimated from GRACE space gravimetry on the one hand, and from the satellite altimetry-based global mean sea level corrected for steric effects on the other hand) to the sum of the main water mass components of the climate system: glaciers, the Greenland and Antarctica ice sheets, atmospheric water, and LWS (the latter being the unknown quantity to be estimated). For glaciers and ice sheets, we use published estimates of ice mass trends based on various types of observations covering different time spans between 2003 and 2013. From the mass budget equation, we derive a net LWS trend over the study period. The mean trend amounts to +0.30 ± 0.18 mm/yr in sea level equivalent. This corresponds to a net decrease of -108 ± 64 km³/yr in LWS over the 2003-2013 decade. We also estimate the rate of change in LWS and find no significant acceleration over the study period. The computed mean global LWS trend over the study period is shown to be explained mainly by direct anthropogenic effects on land hydrology, i.e., the net effect of groundwater depletion and impoundment of water in man-made reservoirs, and to a lesser extent by the effect of naturally-forced land hydrology variability. Our results compare well with independent estimates of human-induced changes in global land hydrology.
Mean size estimation yields left-side bias: Role of attention on perceptual averaging.
Li, Kuei-An; Yeh, Su-Ling
2017-11-01
The human visual system can estimate mean size of a set of items effectively; however, little is known about whether information on each visual field contributes equally to the mean size estimation. In this study, we examined whether a left-side bias (LSB)-perceptual judgment tends to depend more heavily on left visual field's inputs-affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: A larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of LSB increased with stimulus-onset asynchrony (SOA), when spots on the left side were presented earlier than the right side. In contrast, the LSB vanished and then induced a reversed effect with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that LSB does have a significant influence on mean size estimation of a group of items, which is induced by a leftward attentional bias that enhances the prior entry effect on the left side.
NASA Technical Reports Server (NTRS)
Seshadri, Banavara R.; Smith, Stephen W.
2007-01-01
Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations, and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (Kmax) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens, were used in this simulation. The threshold simulations were conducted with different initial Kmax values to study the effect on estimated ΔCTOD. During each simulation, the ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant Kmax load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting Kmax values and different load reduction rates have indicated that ΔCTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured ΔCTOD for various load reduction procedures, and its implication for the estimation of fatigue crack growth threshold values, is discussed.
Hohl, Corinne Michèle; Nosyk, Bohdan; Sadatsafavi, Mohsen; Anis, Aslam Hayat
2008-01-01
To determine the incremental cost-effectiveness of using propofol versus midazolam for procedural sedation (PS) in adults in the emergency department (ED). The authors conducted a cost-effectiveness analysis from the perspective of the health care provider. The primary outcome was the incremental cost (or savings) to achieve one additional successful sedation with propofol compared to midazolam. A decision model was developed in which the clinical effectiveness and cost of a PS strategy using either agent was estimated. The authors derived estimates of clinical effectiveness and risk of adverse events (AEs) from a systematic review. The cost of each clinical outcome was determined by incorporating the baseline cost of the ED visit, the cost of the drug, the cost of labor of physicians and nurses, the cost and probability of an AE, and the cost and probability of a PS failure. A standard meta-analytic technique was used to calculate the weighted mean difference in recovery times and obtain mean drug doses from patient-level data from a randomized controlled trial. Probabilistic sensitivity analyses were conducted to examine the uncertainty around the estimated incremental cost-effectiveness ratio using Monte Carlo simulation. Choosing a sedation strategy with propofol resulted in average savings of $17.33 (95% confidence interval [CI] = $24.13 to $10.44) per sedation performed. This resulted in an incremental cost-effectiveness ratio of -$597.03 (95% credibility interval -$6,434.03 to $6,113.57) indicating savings of $597.03 per additional successful sedation performed with propofol. This result was driven by shorter recovery times and was robust to all sensitivity analyses performed. These results indicate that using propofol for PS in the ED is a cost-saving strategy.
Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.
2014-01-01
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
Effects of wing modification on an aircraft's aerodynamic parameters as determined from flight data
NASA Technical Reports Server (NTRS)
Hess, R. A.
1986-01-01
A study of the effects of four wing-leading-edge modifications on a general aviation aircraft's stability and control parameters is presented. Flight data from the basic aircraft configuration and configurations with wing modifications are analyzed to determine each wing geometry's stability and control parameters. The parameter estimates and aerodynamic model forms are obtained using the stepwise regression and maximum likelihood techniques. The resulting parameter estimates and aerodynamic models are verified using vortex-lattice theory and by analysis of each model's ability to predict aircraft behavior. Comparisons of the stability and control derivative estimates from the basic wing and the four leading-edge modifications are accomplished so that the effects of each modification on aircraft stability and control derivatives can be determined.
A Comparison of Agent-Based Models and the Parametric G-Formula for Causal Inference.
Murray, Eleanor J; Robins, James M; Seage, George R; Freedberg, Kenneth A; Hernán, Miguel A
2017-07-15
Decision-making requires choosing from treatments on the basis of correctly estimated outcome distributions under each treatment. In the absence of randomized trials, 2 possible approaches are the parametric g-formula and agent-based models (ABMs). The g-formula has been used exclusively to estimate effects in the population from which data were collected, whereas ABMs are commonly used to estimate effects in multiple populations, necessitating stronger assumptions. Here, we describe potential biases that arise when ABM assumptions do not hold. To do so, we estimated 12-month mortality risk in simulated populations differing in prevalence of an unknown common cause of mortality and a time-varying confounder. The ABM and g-formula correctly estimated mortality and causal effects when all inputs were from the target population. However, whenever any inputs came from another population, the ABM gave biased estimates of mortality-and often of causal effects even when the true effect was null. In the absence of unmeasured confounding and model misspecification, both methods produce valid causal inferences for a given population when all inputs are from that population. However, ABMs may result in bias when extrapolated to populations that differ on the distribution of unmeasured outcome determinants, even when the causal network linking variables is identical. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Austin, Peter C
2018-01-01
Propensity score methods are frequently used to estimate the effects of interventions using observational data. The propensity score was originally developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g. pack-years of cigarettes smoked, dose of medication, or years of education). We describe how the GPS can be used to estimate the effect of continuous exposures on survival or time-to-event outcomes. To do so we modified the concept of the dose-response function for use with time-to-event outcomes. We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of quantitative exposures on survival or time-to-event outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. The use of methods based on the GPS was compared with the use of conventional G-computation and weighted G-computation. Conventional G-computation resulted in estimates of the dose-response function that displayed the lowest bias and the lowest variability. Amongst the two GPS-based methods, covariate adjustment using the GPS tended to have the better performance. We illustrate the application of these methods by estimating the effect of average neighbourhood income on the probability of survival following hospitalization for an acute myocardial infarction.
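A sketch of GPS covariate adjustment for a time-to-event outcome, assuming a normal model for the continuous exposure and using the lifelines package; the column names and linear functional form are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm
from lifelines import CoxPHFitter

def gps_adjusted_cox(df, covariate_cols, exposure="exposure", time="time", event="event"):
    """Estimate the GPS as the conditional density of the continuous exposure
    given covariates, then adjust for it alongside the exposure in a Cox model."""
    X = sm.add_constant(df[covariate_cols])
    exp_model = sm.OLS(df[exposure], X).fit()
    sigma = np.sqrt(exp_model.mse_resid)
    df = df.copy()
    df["gps"] = norm.pdf(df[exposure], loc=exp_model.fittedvalues, scale=sigma)
    cph = CoxPHFitter()
    cph.fit(df[[exposure, "gps", time, event]], duration_col=time, event_col=event)
    return cph
```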
Effect of bandage thickness on interface pressure applied by compression bandages.
Al Khaburi, Jawad; Dehghani-Sanij, Abbas A; Nelson, E Andrea; Hutchinson, Jerry
2012-04-01
Medical compression bandages are widely used in the treatment of chronic venous disorders. In order to design effective compression bandages, researchers have attempted to describe the interface pressure applied by these bandages using mathematical models. This paper reports on work carried out to derive the mathematical model used to describe the interface pressure applied by a single-layer bandage using two different approaches. The first assumes that the bandage thickness is negligible, whereas the second model includes the bandage thickness. The estimated pressures using the two formulae are then compared, simulated over a 3D representation of a real leg, and validated experimentally. Both theoretical and experimental results have shown that taking bandage thickness into consideration while estimating the pressure applied by a medical compression bandage results in a more accurate estimate. However, the additional accuracy is clinically insignificant. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
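As context, thin-bandage models start from Laplace's law, $P = nT/(rw)$, for $n$ layers of tension $T$, limb radius $r$, and bandage width $w$; one plausible way to include a finite thickness $t_b$ (not necessarily the authors' exact derivation) is to let each successive layer act at an increased radius:

$$ P \;\approx\; \sum_{i=1}^{n} \frac{T}{w\,\bigl(r + (i-1)\,t_b\bigr)} , $$

which reduces to $nT/(rw)$ as $t_b \to 0$ and predicts slightly lower pressures for thicker bandages, consistent with the direction of the correction described above.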
Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y
1997-09-01
This paper proposes a method of localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise, but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.
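The core computation can be sketched as follows: a generalized eigendecomposition of the measured-data and noise covariance matrices yields a modified noise-subspace projector, which is then scanned over candidate lead-field vectors. The localizer normalization used here is one common choice and may differ from the paper's.

```python
import numpy as np
from scipy.linalg import eigh

def music_scan(R_data, R_noise, leadfields, n_sources):
    """Noise-robust MUSIC sketch. leadfields: array (n_grid, n_sensors) of
    candidate source gain vectors; returns a localizer value per grid point."""
    w, V = eigh(R_data, R_noise)              # generalized eigvecs, ascending eigvals
    E_noise = V[:, : R_data.shape[0] - n_sources]
    P = E_noise @ E_noise.T                   # modified noise-subspace projector
    scores = np.empty(len(leadfields))
    for i, L in enumerate(leadfields):
        scores[i] = (L @ L) / (L @ P @ L)     # peaks near true source locations
    return scores
```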
Siegel, P Z; Qualters, J R; Mowery, P D; Campostrini, S; Leutzinger, C; McQueen, D V
2001-01-01
OBJECTIVES: This study investigated whether an apparent downturn in prevalence rates of mammography use reported in the 1992 Behavioral Risk Factor Surveillance System (BRFSS) questionnaire resulted from a change in questionnaire wording. METHODS: In a pretest-posttest design (1990-1991 vs 1992), piecewise linear regression analyses were based on monthly prevalence estimates of mammography use among female BRFSS respondents 40 years or older. RESULTS: Self-reported mammography use was lower by 3.5 percentage points (95% confidence interval [CI] = 1.5, 5.5) overall, and lower by 13.6 percentage points (95% CI = 2.6, 24.6) among Black women with less than a high school education, when predicted from 1992 data than when predicted from 1990-1991 data. CONCLUSIONS: A change in questionnaire wording in the BRFSS caused demographic-specific effects in population-based estimates of mammography use. PMID:11344896
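The pretest-posttest idea reduces to a segmented regression with a level shift at the wording change; a minimal sketch (monthly data, common slope) under assumed variable names:

```python
import numpy as np
import statsmodels.api as sm

def wording_shift(month_index, prevalence, change_at):
    """Piecewise linear regression: linear time trend plus a step at the month
    the questionnaire wording changed; returns the estimated shift and 95% CI."""
    step = (month_index >= change_at).astype(float)
    X = sm.add_constant(np.column_stack([month_index, step]))
    fit = sm.OLS(prevalence, X).fit()
    return fit.params[2], fit.conf_int()[2]
```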
Tian, Guoqiang; Liu, Feng
2011-06-01
Economic literature in developed countries suggests that demand for alcoholic beverages is sensitive to price, with estimated price elasticities ranging from -0.38 for beer to -0.7 for liquor. However, few studies have been conducted in developing countries. We employ a large individual-level dataset in China to estimate the effects of price on alcohol demand. Using data from the China Health and Nutrition Survey for the years 1993, 1997, 2000, 2004, and 2006, we estimate two-part models of alcohol demand. Results show the price elasticity is virtually zero for beer and only -0.12 for liquor, which is far smaller in magnitude than those derived from developed countries. Separate regressions by gender reveal that the results are mainly driven by men. The central implication of this study is that, while alcohol tax increases can raise government revenue, taxation alone is not an effective policy to reduce alcohol-related problems in China.
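A stylized two-part model of this kind (logit participation, log-linear consumption among drinkers) might look like the following; the elasticity decomposition shown is a standard approximation, and the variable layout is assumed rather than taken from the study.

```python
import numpy as np
import statsmodels.api as sm

def two_part_elasticity(log_price, covariates, quantity):
    """Two-part alcohol demand: logit for any drinking, OLS of log quantity
    among drinkers; total price elasticity combines both margins."""
    Z = sm.add_constant(np.column_stack([log_price, covariates]))
    drinks = (quantity > 0).astype(float)
    part1 = sm.Logit(drinks, Z).fit(disp=0)
    pos = quantity > 0
    part2 = sm.OLS(np.log(quantity[pos]), Z[pos]).fit()
    p_bar = part1.predict(Z).mean()
    # d ln E[q] / d ln price ~ participation margin + conditional margin
    return part1.params[1] * (1 - p_bar) + part2.params[1]
```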
Urbanization, malaria transmission and disease burden in Africa
Hay, Simon I.; Guerra, Carlos A.; Tatem, Andrew J.; Atkinson, Peter M.; Snow, Robert W.
2011-01-01
Many attempts have been made to quantify Africa's malaria burden, but none has addressed how urbanization will affect disease transmission and outcome, and therefore mortality and morbidity estimates. In 2003, 39% of Africa's 850 million people lived in urban settings; by 2030, 54% of Africans are expected to do so. We present the results of a series of entomological, parasitological, and behavioural meta-analyses of studies that have investigated the effect of urbanization on malaria in Africa. We describe the effect of urbanization both on malaria transmission and on the concomitant improvements in access to preventative and curative measures. Using these data, we have recalculated estimates of populations at risk of malaria and the resulting mortality. We find there were 1,068,505 malaria deaths in Africa in 2000, a modest 6.7% reduction over previous iterations. The public-health implications of these findings and revised estimates are discussed. PMID:15608702
Scammacca, Nancy; Roberts, Greg; Stuebing, Karla K.
2013-01-01
Previous research has shown that treating dependent effect sizes as independent inflates the variance of the mean effect size and introduces bias by giving studies with more effect sizes more weight in the meta-analysis. This article summarizes the different approaches to handling dependence that have been advocated by methodologists, some of which are more feasible to implement with education research studies than others. A case study using effect sizes from a recent meta-analysis of reading interventions is presented to compare the results obtained from different approaches to dealing with dependence. Overall, mean effect sizes and variance estimates were found to be similar, but estimates of indexes of heterogeneity varied. Meta-analysts are advised to explore the effect of the method of handling dependence on the heterogeneity estimates before conducting moderator analyses and to choose the approach to dependence that is best suited to their research question and their data set. PMID:25309002
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
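A bare-bones bootstrap SISR filter for count data conveys the mechanics (state propagation, importance weighting, resampling); the process and observation noise values here are illustrative, and the kernel-smoothing step for parameter particles is omitted.

```python
import numpy as np

def sisr_filter(counts, n_particles=5000, sigma_proc=0.1, sigma_obs=20.0, seed=0):
    """Bootstrap particle filter for a log-abundance random walk observed
    through noisy counts; returns filtered population-size estimates."""
    rng = np.random.default_rng(seed)
    x = np.log(max(counts[0], 1.0)) + rng.normal(0.0, 0.5, n_particles)
    estimates = []
    for y in counts:
        x = x + rng.normal(0.0, sigma_proc, n_particles)         # propagate state
        w = np.exp(-0.5 * ((y - np.exp(x)) / sigma_obs) ** 2) + 1e-300
        w /= w.sum()                                             # importance weights
        x = rng.choice(x, size=n_particles, p=w)                 # resample
        estimates.append(float(np.exp(x).mean()))
    return np.array(estimates)
```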
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed
2014-01-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects stemming from the thermal motion of lattice atoms and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly into the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show an increase in the values of Doppler temperature feedback coefficients of up to -10% for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the results obtained on this topic to date.
Impact of Autocorrelation on Functional Connectivity
Arbabshirani, Mohammad R.; Damaraju, Eswar; Phlypo, Ronald; Plis, Sergey; Allen, Elena; Ma, Sai; Mathalon, Daniel; Preda, Adrian; Vaidya, Jatin G.; Adali, Tülay; Calhoun, Vince D.
2014-01-01
Although the impact of serial correlation (autocorrelation) in residuals of general linear models for fMRI time-series has been studied extensively, the effect of autocorrelation on functional connectivity studies has been largely neglected until recently. Some recent studies based on results from economics have questioned the conventional estimation of functional connectivity and argue that not correcting for autocorrelation in fMRI time-series results in “spurious” correlation coefficients. In this paper, first we assess the effect of autocorrelation on Pearson correlation coefficient through theoretical approximation and simulation. Then we present this effect on real fMRI data. To our knowledge this is the first work comprehensively investigating the effect of autocorrelation on functional connectivity estimates. Our results show that although FC values are altered, even following correction for autocorrelation, results of hypothesis testing on FC values remain very similar to those before correction. In real data we show this is true for main effects and also for group difference testing between healthy controls and schizophrenia patients. We further discuss model order selection in the context of autoregressive processes, effects of frequency filtering and propose a preprocessing pipeline for connectivity studies. PMID:25072392
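The core phenomenon is easy to reproduce: two independent AR(1) series show correlation estimates whose spread is far wider than the white-noise approximation suggests. A small simulation (parameter values arbitrary):

```python
import numpy as np

def ar1(n, phi, rng):
    """Generate an AR(1) series with unit-variance innovations."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(1)
r = [np.corrcoef(ar1(200, 0.8, rng), ar1(200, 0.8, rng))[0, 1] for _ in range(1000)]
# For white noise the SD of r would be ~ 1/sqrt(200) = 0.071; autocorrelation
# roughly doubles it here, so nominal p-values on r are badly miscalibrated.
print(np.std(r))
```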
NASA Astrophysics Data System (ADS)
Loranty, Michael M.; Mackay, D. Scott; Ewers, Brent E.; Traver, Elizabeth; Kruger, Eric L.
2010-12-01
We have used an ecosystem model, TREES (Terrestrial Regional Ecosystem Exchange Simulator), to test the hypothesis that competition for light limits reference canopy stomatal conductance (GSref; conductance at 1 kPa vapor pressure deficit) for individual tree crowns. Sap flux (JS) data were collected at an aspen-dominated, unmanaged, early-successional site and at a sugar maple-dominated, mid-successional site managed for timber production. Using a Monte Carlo approach, JS-scaled canopy transpiration (EC) estimates were used to parameterize two versions of the model for each tree individually; a control model treated trees as isolated individuals, and a modified version incorporated the shading effects of neighboring individuals on incident radiation. Agreement between simulated and observed EC was better for maple than for aspen using the control model. Accounting for canopy heterogeneity using a three-dimensional canopy representation had minimal effects on estimates of GSref or model performance for individual maples. At the aspen site, the modified model resulted in improved EC estimates, particularly for trees with lower GSref and more shading by neighboring individuals. Our results imply a link between photosynthetic capacity, as mediated by the competitive light environment, and GSref. We conclude that accounting for the effects of canopy heterogeneity on incident radiation improves modeled estimates of canopy carbon and water fluxes, especially for shade-intolerant species. Furthermore, our results imply a link between ecosystem structure and function that may be exploited to elucidate the impacts of forest structural heterogeneity on ecosystem fluxes of carbon and water via LiDAR remote sensing.
Chiu, Su-Chin; Lin, Te-Ming; Lin, Jyh-Miin; Chung, Hsiao-Wen; Ko, Cheng-Wen; Büchert, Martin; Bock, Michael
2017-09-01
To investigate possible errors in T1 and T2 quantification via MR fingerprinting with balanced steady-state free precession (bSSFP) readout in the presence of intra-voxel phase dispersion and RF pulse profile imperfections, we used computer simulations based on the Bloch equations. A pulse sequence with TR varying in a Perlin noise pattern and a nearly sinusoidal pattern of flip angles following an initial 180-degree inversion pulse was employed. Gaussian distributions of off-resonance frequency were assumed for intra-voxel phase dispersion effects. Slice profiles of sinc-shaped RF pulses were computed to investigate flip angle profile influences. After identifying the best fit between the acquired signals and those in the dictionary established from known parameters, estimation errors were reported. In vivo experiments were performed at 3T to examine the results. Slight intra-voxel phase dispersion, with standard deviations from 1 to 3 Hz, resulted in prominent T2 underestimation, particularly at large T2 values. T1 and off-resonance frequencies were relatively unaffected. Slice profile imperfections led to underestimation of T1, which grew as regional off-resonance frequencies increased but could be corrected by including slice profile effects in the dictionary. Results from in vivo brain imaging experiments agreed qualitatively with the simulation results. MR fingerprinting using bSSFP readout in the presence of intra-voxel phase dispersion and an imperfect slice profile leads to inaccuracies in quantitative estimation of the relaxation times. Copyright © 2017 Elsevier Inc. All rights reserved.
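As an illustration of the dictionary-matching step that underlies the error analysis above, here is a toy Python sketch. The Bloch-equation simulation of the bSSFP fingerprint is replaced by a placeholder signal model, and all parameter grids are illustrative assumptions; only the matching logic (maximum normalized inner product over candidate T1/T2 pairs) reflects the general MR fingerprinting procedure:

import numpy as np

rng = np.random.default_rng(1)
n_readouts = 500                       # number of acquired time points (assumed)
t = np.arange(n_readouts) * 0.01       # pseudo-time axis in seconds

def toy_signal(t1, t2):
    # Placeholder for a Bloch simulation of the bSSFP fingerprint evolution.
    return np.exp(-t / t2) * (1.0 - np.exp(-t / t1))

t1_grid = np.linspace(0.2, 3.0, 60)    # candidate T1 values in seconds
t2_grid = np.linspace(0.02, 0.4, 60)   # candidate T2 values in seconds
dictionary = np.array([toy_signal(t1, t2)
                       for t1 in t1_grid for t2 in t2_grid])
# Normalize entries so the inner product acts as a correlation score.
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# "Measured" signal: a true (T1, T2) pair plus noise; matching should
# recover grid values near (1.0, 0.1).
measured = toy_signal(1.0, 0.1) + 0.01 * rng.standard_normal(n_readouts)
scores = dictionary @ (measured / np.linalg.norm(measured))
best = int(np.argmax(scores))
print("estimated T1, T2:", t1_grid[best // len(t2_grid)],
      t2_grid[best % len(t2_grid)])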
Ali, Mohammad; You, Young Ae; Sur, Dipika; Kanungo, Suman; Kim, Deok Ryun; Deen, Jacqueline; Lopez, Anna Lena; Wierzba, Thomas F; Bhattacharya, Sujit K; Clemens, John D
2016-01-20
The test-negative design (TND) has emerged as a simple method for evaluating vaccine effectiveness (VE). Its utility for evaluating oral cholera vaccine (OCV) effectiveness is unknown. We examined this method's validity in assessing OCV effectiveness by comparing the results of TND analyses with those of conventional cohort analyses. Randomized controlled trials of OCV were conducted in Matlab (Bangladesh) and Kolkata (India), and an observational cohort design was used in Zanzibar (Tanzania). For all three studies, VE using the TND was estimated from the odds ratio (OR) relating vaccination status to fecal test status (Vibrio cholerae O1 positive or negative) among diarrheal patients enrolled during surveillance: VE = (1 − OR) × 100%. In cohort analyses of these studies, we employed the Cox proportional hazards model, estimating VE = (1 − hazard ratio) × 100%. OCV effectiveness estimates obtained using the TND (Matlab: 51%, 95% CI: 37-62%; Kolkata: 67%, 95% CI: 57-75%) were similar to those from the cohort analyses of these RCTs (Matlab: 52%, 95% CI: 43-60%; Kolkata: 66%, 95% CI: 55-74%). The TND VE estimate for the Zanzibar data was 94% (95% CI: 84-98%), compared with 82% (95% CI: 58-93%) in the cohort analysis. After adjusting for residual confounding in the cohort analysis of the Zanzibar study, using a bias indicator condition, we observed almost no difference between the two estimates. Our findings suggest that the TND is a valid approach for evaluating OCV effectiveness in routine vaccination programs. Copyright © 2015 Elsevier Ltd. All rights reserved.
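For readers unfamiliar with the test-negative calculation, a minimal worked example in Python (hypothetical counts, not data from these studies) of VE = (1 − OR) × 100%:

# Hypothetical 2x2 counts among enrolled surveillance patients.
vacc_pos, vacc_neg = 40, 160        # vaccinated: test-positive / test-negative
unvacc_pos, unvacc_neg = 100, 150   # unvaccinated: test-positive / test-negative

odds_ratio = (vacc_pos / vacc_neg) / (unvacc_pos / unvacc_neg)
ve = (1 - odds_ratio) * 100
print(f"OR = {odds_ratio:.3f}, VE = {ve:.1f}%")  # OR = 0.375 -> VE = 62.5%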
Assessment of virus interference in a test-negative study of influenza vaccine effectiveness
Feng, Shuo; Fowlkes, Ashley L.; Steffens, Andrea; Finelli, Lyn; Cowling, Benjamin J.
2017-01-01
Background: The observational test-negative study design is used to estimate vaccine effectiveness against influenza virus infection. An important assumption of the test-negative design is that vaccination does not affect the risk of infection with another virus. If such virus interference occurred, detection of other respiratory viruses would be more common among influenza vaccine recipients, and vaccine effectiveness estimates could differ. We evaluated the potential for virus interference using data from the Influenza Incidence Surveillance Project. Methods: From 2010 to 2013, outpatients presenting to clinics in 13 US jurisdictions with acute respiratory infections were tested for influenza and other respiratory viruses. We investigated whether virus interference might affect vaccine effectiveness estimates, first by evaluating the sensitivity of estimates to alternative control groups that included or excluded patients with other respiratory virus detections, stratified by age group and by early, middle, and late stages of the influenza seasons. Second, we evaluated the association between receipt of influenza vaccination and detection of other respiratory viruses among influenza test-negative patients. Results: Influenza was detected in 3,743/10,650 patients (35%), and overall vaccine effectiveness was 47% (95% CI: 42%, 52%). Estimates using each control group were consistent overall and when stratified by age group, and there were no differences among the early, middle, and late phases of the influenza season. We found no association between detection of other respiratory viruses and receipt of influenza vaccination. Conclusions: In this 3-year test-negative design study in an outpatient setting in the United States, we found no evidence of virus interference or of an impact on influenza vaccine effectiveness estimation. PMID:28362642
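A minimal Python sketch of the second check described above, using hypothetical counts: under no virus interference, vaccination should be unassociated with detection of another respiratory virus among influenza-negative patients:

from scipy.stats import chi2_contingency

# Rows: vaccinated / unvaccinated influenza-negative patients (hypothetical).
# Columns: other respiratory virus detected / not detected.
table = [[310, 1890],
         [520, 3180]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # a large p is consistent with no interference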
Instrumental Variable Analysis with a Nonlinear Exposure–Outcome Relationship
Davies, Neil M.; Thompson, Simon G.
2014-01-01
Background: Instrumental variable methods can estimate the causal effect of an exposure on an outcome using observational data. Many instrumental variable methods assume that the exposure–outcome relation is linear, but in practice this assumption is often in doubt, or perhaps the shape of the relation is a target for investigation. We investigate this issue in the context of Mendelian randomization, the use of genetic variants as instrumental variables. Methods: Using simulations, we demonstrate the performance of a simple linear instrumental variable method when the true shape of the exposure–outcome relation is not linear. We also present a novel method for estimating the effect of the exposure on the outcome within strata of the exposure distribution. This enables the estimation of localized average causal effects within quantile groups of the exposure or as a continuous function of the exposure using a sliding window approach. Results: Our simulations suggest that linear instrumental variable estimates approximate a population-averaged causal effect. This is the average difference in the outcome if the exposure for every individual in the population is increased by a fixed amount. Estimates of localized average causal effects reveal the shape of the exposure–outcome relation for a variety of models. These methods are used to investigate the relations between body mass index and a range of cardiovascular risk factors. Conclusions: Nonlinear exposure–outcome relations should not be a barrier to instrumental variable analyses. When the exposure–outcome relation is not linear, either a population-averaged causal effect or the shape of the exposure–outcome relation can be estimated. PMID:25166881
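The population-averaged behavior of the linear instrumental variable estimator can be illustrated with a short simulation. This Python sketch (an illustrative data-generating model, not the authors' code) uses the Wald ratio with a simulated genetic instrument, showing that the IV estimate recovers the true effect while the naive regression is confounded:

import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.binomial(2, 0.3, n).astype(float)   # instrument: allele count 0/1/2
u = rng.standard_normal(n)                  # unmeasured confounder
x = 0.5 * z + u + rng.standard_normal(n)    # exposure
y = 0.3 * x + u + rng.standard_normal(n)    # outcome; true causal effect 0.3

def cov(a, b):
    return float(np.cov(a, b)[0, 1])

beta_iv = cov(z, y) / cov(z, x)    # Wald ratio: consistent despite u
beta_ols = cov(x, y) / np.var(x)   # naive regression: confounded
print(f"IV estimate : {beta_iv:.3f}")   # ~0.30
print(f"OLS estimate: {beta_ols:.3f}")  # ~0.78, biased upward by u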
BME Estimation of Residential Exposure to Ambient PM10 and Ozone at Multiple Time Scales
Yu, Hwa-Lung; Chen, Jiu-Chiuan; Christakos, George; Jerrett, Michael
2009-01-01
Background: Long-term human exposure to ambient pollutants can be an important contributing or etiologic factor in many chronic diseases. Spatiotemporal estimation (mapping) of long-term exposure at residential areas based on field observations recorded in the U.S. Environmental Protection Agency's Air Quality System often suffers from missing data due to the sparse monitoring network across space and the inconsistent recording periods at different monitors. Objective: We developed and compared two upscaling methods, UM1 (data aggregation followed by exposure estimation) and UM2 (exposure estimation followed by data aggregation), for long-term PM10 (particulate matter with aerodynamic diameter ≤ 10 μm) and ozone exposure estimation, and applied them at multiple time scales to estimate PM and ozone exposures for the residential areas of the Health Effects of Air Pollution on Lupus (HEAPL) study. Methods: We used Bayesian maximum entropy (BME) analysis for both upscaling methods and performed spatiotemporal cross-validation at multiple time scales for UM1 and UM2 to assess estimation accuracy across space and time. Results: Compared with the kriging method, the integration of soft information by the BME method effectively increased estimation accuracy for both pollutants. The spatiotemporal distributions of estimation errors from UM1 and UM2 were similar. The cross-validation results indicated that UM2 is generally better than UM1 for exposure estimation at multiple time scales in terms of predictive accuracy and lack of bias. For yearly PM10 estimation, both approaches had comparable performance, but UM1 carried a much lower computational burden. Conclusion: The BME-based upscaling methods UM1 and UM2 can assimilate core and site-specific knowledge bases of different formats for long-term exposure estimation. This study shows that UM1 can perform reasonably well when the aggregation process does not alter the spatiotemporal structure of the original data set; otherwise, UM2 is preferable. PMID:19440491
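To clarify the difference between the two upscaling orders, here is a schematic Python sketch. It substitutes simple inverse-distance weighting for BME (an illustrative stand-in, not the authors' method), with made-up monitor locations and data:

import numpy as np

rng = np.random.default_rng(3)
monitors = rng.uniform(0, 10, size=(5, 2))   # monitor coordinates (assumed)
daily = rng.gamma(2.0, 10.0, size=(5, 365))  # daily PM10 at each monitor
home = np.array([4.0, 6.0])                  # residence location

def idw(values, points, target, p=2):
    # Inverse-distance-weighted estimate at the target location.
    w = 1.0 / (np.linalg.norm(points - target, axis=1) ** p + 1e-9)
    return float((w @ values) / w.sum())

um1 = idw(daily.mean(axis=1), monitors, home)    # aggregate, then estimate
um2 = np.mean([idw(daily[:, d], monitors, home)
               for d in range(365)])             # estimate, then aggregate
print(f"UM1 yearly estimate: {um1:.2f}")
print(f"UM2 yearly estimate: {um2:.2f}")
# The two agree here because IDW is linear and the data are complete;
# BME's nonlinearity and missing records are what make the orders differ.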
Adequacy of satellite derived rainfall data for stream flow modeling
Artan, G.; Gadain, Hussein; Smith, Jodie; Asante, Kwasi; Bandaragoda, C.J.; Verdin, J.P.
2007-01-01
Floods are the most common and widespread climate-related hazard on Earth, and flood forecasting can reduce the associated death toll. Satellites offer an effective and economical means of deriving areal rainfall estimates in sparsely gauged regions. However, satellite-based rainfall estimates have seen limited use in flood forecasting and hydrologic stream flow modeling because they were considered unreliable. In this study we present calibration and validation results from a spatially distributed hydrologic model driven by daily satellite-based rainfall estimates for sub-basins of the Nile and Mekong Rivers. The results demonstrate the usefulness of remotely sensed precipitation data for hydrologic modeling when the model is calibrated with such data. However, remotely sensed rainfall estimates cannot be used confidently with hydrologic models calibrated with rain gauge measurements unless the model is recalibrated. © Springer Science+Business Media, Inc. 2007.