Doherty, Kathleen; Essajee, Shaffiq; Penazzato, Martina; Holmes, Charles; Resch, Stephen; Ciaranello, Andrea
2014-05-02
Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimates of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations, which change as children grow. We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0–13 years. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were comparable to or slightly higher than available published ARV drug or regimen estimates. The methodology described here can provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts projecting the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of expanding pediatric ART availability in constrained budget environments.
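The weight-band costing approach described above can be sketched in a few lines: weight each band's monthly drug cost by the time a child spends in that band. A minimal illustration; the weight bands, dwell times, doses, and unit prices below are hypothetical placeholders, not the paper's data.

```python
# (weight_band_kg, months_spent_in_band, tablets_per_day, price_per_tablet_usd)
# All numbers are illustrative assumptions, not published dosing or prices.
BANDS = [
    ((3, 6), 8, 1.0, 0.05),
    ((6, 10), 16, 1.5, 0.05),
    ((10, 14), 24, 2.0, 0.05),
]

def average_monthly_cost(bands):
    """Average monthly ARV cost across childhood: each weight band's
    monthly cost, weighted by the months a child spends in that band."""
    total_months = sum(months for _, months, _, _ in bands)
    total_cost = sum(months * tabs_per_day * 30 * price
                     for _, months, tabs_per_day, price in bands)
    return total_cost / total_months
```

For the placeholder numbers above, the band-weighted average works out to $2.50 per month; swapping in actual weight-band dosing tables and unit prices gives regimen-specific estimates.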
THOMAS J. BRANDEIS; MARIA DEL ROCIO SUAREZ ROZO
2005-01-01
Total aboveground live tree biomass in Puerto Rican lower montane wet, subtropical wet, subtropical moist and subtropical dry forests was estimated using data from two forest inventories and published regression equations. Multiple potentially applicable published biomass models existed for some forested life zones, and their estimates tended to diverge with increasing...
van Assen, Marcel A L M; van Aert, Robbie C M; Nuijten, Michèle B; Wicherts, Jelte M
2014-01-01
De Winter and Happee examined whether science based on selective publishing of significant results may be effective in accurate estimation of population effects, and whether this is even more effective than a science in which all results are published (i.e., a science without publication bias). Based on their simulation study they concluded that "selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective" (p.4). Using their scenario with a small to medium population effect size, we show that publishing everything is more effective for the scientific collective than selective publishing of significant results. Additionally, we examined a scenario with a null effect, which provides a more dramatic illustration of the superiority of publishing everything over selective publishing. Publishing everything is more effective than only reporting significant outcomes.
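The contrast the authors draw can be illustrated with a small Monte Carlo sketch: simulate many studies of a null effect, then pool either everything or only the "significant" results. The study size, study count, and one-sided file-drawer rule below are assumptions for illustration, not the original simulation design.

```python
import math
import random
import statistics

def simulate(true_effect=0.0, n=30, studies=2000, selective=False, seed=1):
    """Return the naive meta-analytic (unweighted mean) estimate over
    whatever ends up published."""
    rng = random.Random(seed)
    published = []
    for _ in range(studies):
        xs = [rng.gauss(true_effect, 1.0) for _ in range(n)]
        mean = statistics.fmean(xs)
        se = statistics.stdev(xs) / math.sqrt(n)
        # File-drawer rule (an assumption for illustration): only
        # positive, significant results get published.
        if not selective or mean / se > 1.96:
            published.append(mean)
    return statistics.fmean(published)
```

With a true null effect, publishing everything keeps the pooled estimate near zero, while the selective rule inflates it to roughly 0.4 standard deviations at this study size, the "more dramatic illustration" the reanalysis refers to.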
A web-based rapid assessment tool for production publishing solutions
NASA Astrophysics Data System (ADS)
Sun, Tong
2010-02-01
Solution assessment is a critical first step in understanding and measuring the business-process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually an expensive and time-consuming task that requires substantial domain knowledge, collection and analysis of the customer's specific operational context, definition of validation scenarios, and estimation of the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of estimated performance metrics (e.g., throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net-based workflow simulation engine, the tool allows users to quickly evaluate potential digital publishing solutions side by side within their desired operational contexts, providing a low-cost, rapid assessment before an organization commits to any purchase. The tool also benefits solution providers by shortening sales cycles, helping establish trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.
ERIC Educational Resources Information Center
Epstein, Diana; Miller, Raegen T.
2011-01-01
In August 2010 the "Los Angeles Times" published a special report on their website featuring performance ratings for nearly 6,000 Los Angeles Unified School District teachers. The move was controversial because the ratings were based on so-called value-added estimates of teachers' contributions to student learning. As with most…
Wolfson, Julian; Vock, David M; Bandyopadhyay, Sunayan; Kottke, Thomas; Vazquez-Benitez, Gabriela; Johnson, Paul; Adomavicius, Gediminas; O'Connor, Patrick J
2017-04-24
Clinicians who are using the Framingham Risk Score (FRS) or the American College of Cardiology/American Heart Association Pooled Cohort Equations (PCE) to estimate risk for their patients based on electronic health data (EHD) face 4 questions. (1) Do published risk scores applied to EHD yield accurate estimates of cardiovascular risk? (2) Are FRS risk estimates, which are based on data that are up to 45 years old, valid for a contemporary patient population seeking routine care? (3) Do the PCE make the FRS obsolete? (4) Does refitting the risk score using EHD improve the accuracy of risk estimates? Data were extracted from the EHD of 84 116 adults aged 40 to 79 years who received care at a large healthcare delivery and insurance organization between 2001 and 2011. We assessed calibration and discrimination for 4 risk scores: published versions of FRS and PCE and versions obtained by refitting models using a subset of the available EHD. The published FRS was well calibrated (calibration statistic K=9.1, miscalibration ranging from 0% to 17% across risk groups), but the PCE displayed modest evidence of miscalibration (calibration statistic K=43.7, miscalibration from 9% to 31%). Discrimination was similar in both models (C-index=0.740 for FRS, 0.747 for PCE). Refitting the published models using EHD did not substantially improve calibration or discrimination. We conclude that published cardiovascular risk models can be successfully applied to EHD to estimate cardiovascular risk; the FRS remains valid and is not obsolete; and model refitting does not meaningfully improve the accuracy of risk estimates. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
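The two model checks used above, discrimination (C-index) and calibration (observed vs. predicted risk by group), can be sketched generically. This is a toy illustration of the statistics, not the study's K statistic, cohort, or risk-group definitions.

```python
from itertools import combinations

def c_index(pred, event):
    """Fraction of usable (event, non-event) pairs in which the case with
    the event received the higher predicted risk; ties count one half."""
    concordant = usable = 0.0
    for i, j in combinations(range(len(pred)), 2):
        if event[i] == event[j]:
            continue  # pairs with the same outcome are uninformative
        usable += 1
        hi = i if event[i] else j   # index of the case with the event
        lo = j if event[i] else i
        if pred[hi] > pred[lo]:
            concordant += 1
        elif pred[hi] == pred[lo]:
            concordant += 0.5
    return concordant / usable

def calibration_by_group(pred, event, cuts=(0.1, 0.2)):
    """Mean predicted vs. observed event rate within risk strata
    defined by the cutpoints (illustrative strata, not the study's)."""
    groups = {}
    for p, e in zip(pred, event):
        g = sum(p >= c for c in cuts)
        groups.setdefault(g, []).append((p, e))
    return {g: (sum(p for p, _ in v) / len(v), sum(e for _, e in v) / len(v))
            for g, v in sorted(groups.items())}
```

A well-calibrated score shows small gaps between the two numbers per stratum; discrimination near 0.74-0.75, as reported above, means the score orders event/non-event pairs correctly about three times out of four.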
Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen
2014-01-01
In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues.
Fairchild, J.F.; Allert, A.L.; Feltz, K.P.; Nelson, K.J.; Valle, J.A.
2009-01-01
Clopyralid (3,6-dichloro-2-pyridinecarboxylic acid) is a pyridine herbicide frequently used to control invasive, noxious weeds in the northwestern United States. Clopyralid exhibits low acute toxicity to fish, including the rainbow trout (Oncorhynchus mykiss) and the threatened bull trout (Salvelinus confluentus). However, there are no published chronic toxicity data for clopyralid and fish that can be used in ecological risk assessments. We conducted 30-day chronic toxicity studies with juvenile rainbow trout exposed to the acid form of clopyralid. The 30-day maximum acceptable toxicant concentration (MATC) for growth, calculated as the geometric mean of the no observable effect concentration (68 mg/L) and the lowest observable effect concentration (136 mg/L), was 96 mg/L. No mortality was measured at the highest chronic concentration tested (273 mg/L). The acute:chronic ratio, calculated by dividing the previously published 96-h acutely lethal concentration (96-h ALC50; 700 mg/L) by the MATC, was 7.3. Toxicity values were compared to a four-tiered exposure assessment profile assuming an application rate of 1.12 kg/ha. The Tier 1 exposure estimate, based on direct overspray of a 2-m deep pond, was 0.055 mg/L. The Tier 2 maximum exposure estimate, based on the Generic Exposure Estimate Concentration model (GEENEC), was 0.057 mg/L. The Tier 3 maximum exposure estimate, based on previously published results of the Groundwater Loading Effects of Agricultural Management Systems model (GLEAMS), was 0.073 mg/L. The Tier 4 exposure estimate, based on published edge-of-field monitoring data, was 0.008 mg/L. Comparison of toxicity data to estimated environmental concentrations of clopyralid indicates that the safety factor for rainbow trout exposed to clopyralid at labeled use rates exceeds 1000. Therefore, the herbicide presents little to no risk to rainbow trout or other salmonids such as the threatened bull trout. © 2009 US Government.
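The MATC and acute:chronic ratio above are simple arithmetic on the reported endpoint concentrations, which a few lines can reproduce from the stated NOEC, LOEC, and 96-h ALC50:

```python
import math

noec, loec = 68.0, 136.0        # mg/L, growth endpoint (NOEC, LOEC)
matc = math.sqrt(noec * loec)   # geometric mean of NOEC and LOEC
alc50 = 700.0                   # previously published 96-h ALC50, mg/L
acute_chronic_ratio = alc50 / matc

print(round(matc), round(acute_chronic_ratio, 1))  # 96 7.3
```

The same pattern (toxicity endpoint divided by estimated exposure) underlies the >1000 safety factor quoted for the tiered exposure estimates.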
ERIC Educational Resources Information Center
White, Margaret
2010-01-01
In March of each year, the ministry publishes the Operating Grants Manual showing estimated funding allocations for school districts for the upcoming school year. These estimates are based on enrolment projections. On September 30 of the new school year, enrolment is counted and the grants are recalculated based on actual enrolment. The ministry…
Effects of linking a soil-water-balance model with a groundwater-flow model
Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.
2013-01-01
A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects on groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and the estimated or measured target values were relatively similar for the previously published model and the linked models and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.
Height intercept for estimating site index in young ponderosa pine plantations and natural stands
William W. Oliver
1972-01-01
Site index is difficult to estimate with any reliability in ponderosa pine (Pinus ponderosa Laws.) stands below 20 years old. A method of estimating site index based on 4-year height intercepts (total length of the first four internodes above breast height) is described. Equations based on two sets of published site-index curves were developed. They...
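A height-intercept predictor of this kind reduces to evaluating a fitted regression on the measured intercept. The linear form and coefficient values below are hypothetical stand-ins for illustration, not Oliver's published equations:

```python
def site_index_from_intercept(intercept_ft, a=20.0, b=12.0):
    """Predict site index (dominant height at base age, ft) from the
    total length of the first four internodes above breast height (ft).
    The intercept `a` and slope `b` are illustrative, not the fitted
    values from the published site-index curves."""
    return a + b * intercept_ft
```

For example, with these placeholder coefficients a 5-ft four-year height intercept maps to a predicted site index of 80 ft; in practice the coefficients come from fitting against an established set of site-index curves.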
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a ‘generalised Cochran between-study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
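As a concrete reference point, the DerSimonian and Laird estimator that the review identifies as the widely used default can be written directly from the inverse-variance weights; this is the standard method-of-moments formula, shown on toy inputs:

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2, truncated at zero."""
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)  # fixed-effect mean
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)
```

When the studies agree exactly, the estimate is truncated to zero; when Q exceeds its degrees of freedom, the excess is attributed to between-study variance. The Paule-Mandel and REML alternatives recommended above solve for tau^2 iteratively instead.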
Ferro, Ana; Morais, Samantha; Rota, Matteo; Pelucchi, Claudio; Bertuccio, Paola; Bonzi, Rossella; Galeone, Carlotta; Zhang, Zuo-Feng; Matsuo, Keitaro; Ito, Hidemi; Hu, Jinfu; Johnson, Kenneth C; Yu, Guo-Pei; Palli, Domenico; Ferraroni, Monica; Muscat, Joshua; Malekzadeh, Reza; Ye, Weimin; Song, Huan; Zaridze, David; Maximovitch, Dmitry; Fernández de Larrea, Nerea; Kogevinas, Manolis; Vioque, Jesus; Navarrete-Muñoz, Eva M; Pakseresht, Mohammadreza; Pourfarzi, Farhad; Wolk, Alicja; Orsini, Nicola; Bellavia, Andrea; Håkansson, Niclas; Mu, Lina; Pastorino, Roberta; Kurtz, Robert C; Derakhshan, Mohammad H; Lagiou, Areti; Lagiou, Pagona; Boffetta, Paolo; Boccia, Stefania; Negri, Eva; La Vecchia, Carlo; Peleteiro, Bárbara; Lunet, Nuno
2018-05-01
Individual participant data pooled analyses allow access to non-published data and statistical reanalyses based on more homogeneous criteria than meta-analyses based on systematic reviews. We quantified the impact of publication-related biases and of heterogeneity in data analysis and presentation on summary estimates of the association between alcohol drinking and gastric cancer. We compared estimates obtained from conventional meta-analyses, using only data available in published reports from studies that take part in the Stomach Cancer Pooling (StoP) Project, with individual participant data pooled analyses including the same studies. A total of 22 studies from the StoP Project assessed the relation between alcohol intake and gastric cancer; 19 had specific data for levels of consumption and 18 for cancer location. Published reports addressing these associations were available from 18, 5 and 5 studies, respectively. The summary odds ratio [OR (95% CI)] estimate obtained with published data for drinkers vs. non-drinkers was 10% higher than the one obtained with individual StoP data [18 vs. 22 studies: 1.21 (1.07-1.36) vs. 1.10 (0.99-1.23)] and more heterogeneous (I²: 63.6% vs. 54.4%). In general, published data yielded less precise summary estimates (standard errors up to 2.6 times higher). Funnel plot analysis suggested publication bias. Meta-analyses of the association between alcohol drinking and gastric cancer tended to overestimate the magnitude of the effects, possibly due to publication bias. Additionally, individual participant data pooled analyses yielded more precise estimates for different levels of exposure or cancer subtypes. Copyright © 2018 Elsevier Ltd. All rights reserved.
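The heterogeneity comparison quoted above (I²) derives from Cochran's Q under inverse-variance pooling. A minimal fixed-effect sketch on toy log odds ratios, not the StoP data:

```python
def pooled_log_or(log_ors, ses):
    """Fixed-effect (inverse-variance) pooled log-OR and the I^2
    heterogeneity statistic computed from Cochran's Q."""
    w = [1.0 / se ** 2 for se in ses]
    pooled = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # fraction of Q beyond chance
    return pooled, i2
```

Comparing the pooled estimate and I² computed once from published-report data and once from the pooled individual data is, in essence, the comparison the study performs.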
Halder, Amal K.; Streatfield, Peter K.; Sazzad, Hossain M.S.; Nurul Huda, Tarique M.; Hossain, M. Jahangir; Luby, Stephen P.
2012-01-01
Objectives. We estimated the population-based incidence of maternal and neonatal mortality associated with hepatitis E virus (HEV) in Bangladesh. Methods. We analyzed verbal autopsy data from 4 population-based studies in Bangladesh to calculate the maternal and neonatal mortality ratios associated with jaundice during pregnancy. We then reviewed the published literature to estimate the proportion of maternal deaths associated with liver disease during pregnancy that were the result of HEV in hospitals. Results. We found that 19% to 25% of all maternal deaths and 7% to 13% of all neonatal deaths in Bangladesh were associated with jaundice in pregnant women. In the published literature, 58% of deaths in pregnant women with acute liver disease in hospitals were associated with HEV. Conclusions. Jaundice is frequently associated with maternal and neonatal deaths in Bangladesh, and the published literature suggests that HEV may cause many of these deaths. HEV is preventable, and studies to estimate the burden of HEV in endemic countries are urgently needed. PMID:23078501
Grey literature in meta-analyses.
Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J
2003-01-01
In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.
Global sulfur emissions from 1850 to 2000.
Stern, David I
2005-01-01
The ASL database provides continuous time series of sulfur emissions for most countries in the world from 1850 to 1990, but academic and official estimates for the 1990s either do not cover all years or all countries. This paper develops continuous time series of sulfur emissions by country for the period 1850-2000, with a particular focus on developments in the 1990s. The global estimates for 1996-2000 are the first that are based on actual observed data. Raw estimates are obtained in two ways. For countries and years with existing published data, I compile and integrate that data. Previously published data cover the majority of emissions, and almost all countries have published emissions for at least 1995. For the remaining countries, and for missing years in countries with some published data, I interpolate or extrapolate estimates using either an econometric emissions frontier model, an environmental Kuznets curve model, or a simple extrapolation, depending on the availability of data. Finally, I discuss the main movements in global and regional emissions in the 1990s and earlier decades and compare the results to other studies. Global emissions peaked in 1989 and declined rapidly thereafter. The locus of emissions shifted towards East and South Asia, but even this region peaked in 1996. My estimates for the 1990s show a much more rapid decline than other global studies, reflecting the view that technological progress in reducing sulfur-based pollution has been rapid and is beginning to diffuse worldwide.
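Of the gap-filling strategies named, the simplest, filling missing interior years between published estimates, can be sketched as plain linear interpolation. This is an illustrative reading of that step, not the paper's econometric models:

```python
def fill_gaps(series):
    """series: dict mapping year -> emissions (e.g. Gg S), with gaps.
    Returns a copy with missing interior years linearly interpolated
    between the nearest published years."""
    years = sorted(series)
    out = dict(series)
    for y0, y1 in zip(years, years[1:]):
        for y in range(y0 + 1, y1):
            frac = (y - y0) / (y1 - y0)
            out[y] = series[y0] + frac * (series[y1] - series[y0])
    return out
```

For countries with no anchor observations at all, the paper instead falls back on the frontier or Kuznets-curve models, which this sketch does not attempt.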
Regression model estimation of early season crop proportions: North Dakota, some preliminary results
NASA Technical Reports Server (NTRS)
Lin, K. K. (Principal Investigator)
1982-01-01
To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both county and crop reporting district levels. In terms of planted acreage, the model underpredicted 9.1 percent of the 1980 published data on planted acreage at the county level. It predicted almost exactly the 1980 published data on planted acreage at the crop reporting district level and overpredicted the planted acreage by just 0.92 percent.
Generalizability of Evidence-Based Assessment Recommendations for Pediatric Bipolar Disorder
Jenkins, Melissa M.; Youngstrom, Eric A.; Youngstrom, Jennifer Kogos; Feeny, Norah C.; Findling, Robert L.
2013-01-01
Bipolar disorder is frequently clinically diagnosed in youths who do not actually satisfy DSM-IV criteria, yet cases that would satisfy full DSM-IV criteria often go undetected clinically. Evidence-based assessment methods that incorporate Bayesian reasoning have demonstrated improved diagnostic accuracy and consistency; however, their clinical utility is largely unexplored. The present study examines the effectiveness of promising evidence-based decision-making compared to the clinical gold standard. Participants were 562 youth, ages 5-17 and predominantly African American, drawn from a community mental health clinic. Research diagnoses combined semi-structured interview with youths’ psychiatric, developmental, and family mental health histories. Independent Bayesian estimates, relying on published risk estimates from other samples, discriminated bipolar diagnoses (Area Under Curve=.75, p<.00005). The Bayes and confidence ratings correlated at rs=.30. Agreement about an evidence-based assessment intervention “threshold model” (wait/assess/treat) had K=.24, p<.05. No potential moderators of agreement between the Bayesian estimates and confidence ratings, including type of bipolar illness, were significant. Bayesian risk estimates were highly correlated with logistic regression estimates using optimal sample weights (r=.81, p<.0005). Clinical and Bayesian approaches agree in terms of overall concordance and in deciding the next clinical action, even when Bayesian predictions are based on published estimates from clinically and demographically different samples. Evidence-based assessment methods may be useful in settings that cannot routinely employ gold-standard assessments, and they may help decrease rates of overdiagnosis while promoting earlier identification of true cases. PMID:22004538
Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992 - 2009
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Giovinetto, Mario B.
2011-01-01
Published mass balance estimates for the Antarctic Ice Sheet (AIS) lie between approximately +50 to -250 Gt/year for 1992 to 2009, which span a range equivalent to 15% of the annual mass input and 0.8 mm/year Sea Level Equivalent (SLE). Two estimates from radar-altimeter measurements of elevation change by European Remote-sensing Satellites (ERS) (+28 and -31 Gt/year) lie in the upper part, whereas estimates from the Input-minus-Output Method (IOM) and the Gravity Recovery and Climate Experiment (GRACE) lie in the lower part (-40 to -246 Gt/year). We compare the various estimates, discuss the methodology used, and critically assess the results. Although recent reports of large and accelerating rates of mass loss from GRACE=based studies cite agreement with IOM results, our evaluation does not support that conclusion. We find that the extrapolation used in the published IOM estimates for the 15 % of the periphery for which discharge velocities are not observed gives twice the rate of discharge per unit of associated ice-sheet area than the 85% faster-moving parts. Our calculations show that the published extrapolation overestimates the ice discharge by 282 Gt/yr compared to our assumption that the slower moving areas have 70% as much discharge per area as the faster moving parts. Also, published data on the time-series of discharge velocities and accumulation/precipitation do not support mass output increases or input decreases with time, respectively. Our modified IOM estimate, using the 70% discharge assumption and substituting input from a field-data compilation for input from an atmospheric model over 6% of area, gives a loss of only 13 Gt/year (versus 136 Gt/year) for the period around 2000. Two ERS-based estimates, our modified IOM, and a GRACE-based estimate for observations within 1992 to 2005 lie in a narrowed range of +27 to - 40 Gt/year, which is about 3% of the annual mass input and only 0.2 mm/year SLE. 
Our preferred estimate for 1992-2001 is -47 Gt/year for West Antarctica, +16 Gt/year for East Antarctica, and -31 Gt/year overall (+0.1 mm/year SLE), not including part of the Antarctic Peninsula (1.07% of the AIS area).
Modeling global mangrove soil carbon stocks: filling the gaps in coastal environments
NASA Astrophysics Data System (ADS)
Rovai, A.; Twilley, R.
2017-12-01
We provide an overview of contemporaneous global mangrove soil organic carbon (SOC) estimates, focusing on a framework to explain disproportionate differences among observed data as a way to improve global estimates. This framework is based on a former conceptual model, the coastal environmental setting, in contrast to the more popular latitude-based hypotheses largely believed to explain hemispheric variation in mangrove ecosystem properties. To demonstrate how local and regional estimates of SOC linked to coastal environmental settings can render more realistic global mangrove SOC extrapolations, we combined published and unpublished data, yielding a total of 106 studies reporting on 552 sites from 43 countries. These sites were classified into distinct coastal environmental setting types according to two concurrent worldwide typologies of nearshore coastal systems. Mangrove SOC density varied substantially across coastal environmental settings, ranging from 14.9 ± 0.8 mg cm-3 in river-dominated (deltaic) soils to 53.9 ± 1.6 mg cm-3 (mean ± SE) in karstic coastlines. Our findings reveal striking differences between published values and contemporary global mangrove SOC extrapolations based on country-level mean reference values, particularly for karstic-dominated coastlines, where mangrove SOC stocks have been underestimated by up to 50%. Correspondingly, climate-based global estimates predicted lower mangrove SOC density values (32-41 mg C cm-3) for mangroves in karstic environments, differing from published (21-126 mg C cm-3) and unpublished (47-58 mg C cm-3) values. Moreover, climate-based projections yielded higher SOC density values (27-70 mg C cm-3) for river-dominated mangroves compared to the lower ranges reported in the literature (11-24 mg C cm-3). 
We argue that this inconsistent reporting of SOC stock estimates between river-dominated and karstic coastal environmental settings is likely due to the omission of the geomorphological and geophysical environmental drivers that control C storage in coastal wetlands. We encourage the science community to make closer use of coastal environmental settings and new inventories of geomorphological typologies to build more robust local and regional estimates of SOC that can be extrapolated to global C estimates.
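The contrast the abstract draws — scaling stocks by coastal environmental setting rather than by a single mean reference value — can be sketched numerically. The two SOC densities below echo the abstract's reported range; the mangrove areas and the 1 m soil depth are invented for illustration.

```python
# Sketch of the extrapolation contrast described above: scale SOC density by
# coastal environmental setting instead of a single flat mean density.
# Densities echo the abstract's ranges; areas and depth are assumptions.

soc_density = {"deltaic": 14.9, "karstic": 53.9}     # mg C cm^-3 (from abstract)
area_km2 = {"deltaic": 900.0, "karstic": 400.0}      # hypothetical mangrove areas
depth_cm = 100.0                                     # assume a 1 m soil profile

def stock_mg_c(density, area, depth):
    # mg C cm^-3 * cm depth * (km^2 -> cm^2) -> Mg C (1e10 cm^2/km^2, 1e9 mg/Mg)
    return density * depth * area * 1e10 / 1e9

by_setting = sum(stock_mg_c(soc_density[s], area_km2[s], depth_cm) for s in area_km2)
flat_mean = sum(soc_density.values()) / 2
flat = sum(stock_mg_c(flat_mean, area_km2[s], depth_cm) for s in area_km2)
print(f"setting-based {by_setting / 1e6:.1f} Tg C vs flat-mean {flat / 1e6:.1f} Tg C")
```

With these (invented) areas the flat-mean extrapolation overshoots because the lower-density deltaic setting covers more area; the direction of the error depends entirely on how area is distributed among settings.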
Mogasale, Vittal; Mogasale, Vijayalaxmi V; Ramani, Enusa; Lee, Jung Seok; Park, Ju Yeon; Lee, Kang Sung; Wierzba, Thomas F
2016-01-29
Because the control of typhoid fever is an important public health concern in low- and middle-income countries, improving typhoid surveillance will help in planning and implementing typhoid control activities such as deployment of new-generation Vi conjugate typhoid vaccines. We conducted a systematic literature review of longitudinal, population-based, blood culture-confirmed typhoid fever studies from low- and middle-income countries published from 1st January 1990 to 31st December 2013. We quantitatively summarized typhoid fever incidence rates and qualitatively reviewed study methodology that could have influenced rate estimates. We used a meta-analysis approach based on a random-effects model to summarize the hospitalization rates. Twenty-two papers presented longitudinal, population-based, blood culture-confirmed typhoid fever incidence estimates from 20 distinct sites in low- and middle-income countries. The reported incidence and hospitalization rates, as well as the study methodology, were heterogeneous across the sites. We elucidated how the incidence rates were underestimated in published studies. We summarized six categories of underestimation bias observed in these studies and presented potential solutions. Published longitudinal typhoid fever studies in low- and middle-income countries are geographically clustered, and the methodology employed has a potential for underestimation. Future studies should account for these limitations.
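The random-effects pooling the abstract mentions is commonly done with the DerSimonian-Laird estimator. The sketch below implements it on hypothetical site-level estimates (the paper's actual data are not reproduced here).

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical log-rate estimates from four sites (not the paper's data).
rates = [4.1, 3.2, 5.0, 2.5]
var = [0.20, 0.15, 0.30, 0.25]
pooled, se, tau2 = dersimonian_laird(rates, var)
print(f"pooled = {pooled:.2f}, SE = {se:.2f}, tau^2 = {tau2:.2f}")
```

When the sites disagree more than their within-site variances explain, tau^2 grows, widening the pooled standard error — the behavior one wants given the heterogeneity the review reports.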
Afzali, Anita; Ogden, Kristine; Friedman, Michael L; Chao, Jingdong; Wang, Anthony
2017-04-01
Inflammatory bowel disease (IBD) (e.g. ulcerative colitis [UC] and Crohn's disease [CD]) severely impacts patient quality of life. Moderate-to-severe disease is often treated with biologics requiring infusion therapy, adding incremental costs beyond drug costs. This study evaluates US hospital-based infusion service costs for the treatment of UC or CD patients receiving infliximab or vedolizumab therapy. A model was developed to estimate the annual costs of providing monitored infusions using an activity-based costing framework. Multiple sources (published literature, treatment product inserts) informed the base-case model input estimates. The total modeled per-patient infusion therapy costs in Year 1 with infliximab and vedolizumab were $38,782 and $41,320, respectively, and in Year 2+, $49,897 and $36,197, respectively. Drug acquisition cost was the largest driver of total costs (90-93%), followed by the costs of hospital-based infusion provision: labor (53-56% of non-drug costs), allocated overhead (23% of non-drug costs), non-labor (23% of non-drug costs), and laboratory (7-10% of non-drug costs). Limitations included reliance on published estimates, base-case cost estimates for the infusion drug and supplies that do not account for volume pricing, the assumption of a small hospital infusion center, and, because the model adopts the hospital perspective, the exclusion of costs to the patient from base-case infusion administration cost estimates. This model is an early step towards a framework to fully analyze the costs associated with infusion therapies. Given the lack of published data, it would be beneficial for hospital administrators to assess total costs and trade-offs with alternative means of providing biologic therapies. This analysis highlights the value to hospital administrators of assessing the costs associated with the infusion patient mix to make more informed resource allocation decisions. 
As the landscape for reimbursement changes, tools for evaluating the costs of infusion therapy may help hospital administrators make informed choices and weigh trade-offs associated with providing infusion services for IBD patients.
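An activity-based costing model of this kind boils down to summing per-infusion cost drivers. The sketch below uses invented dollar figures whose shares loosely mirror the abstract's categories; none of the inputs are the study's values.

```python
# Minimal activity-based costing sketch for hospital infusion therapy.
# Category shares loosely mirror the abstract; all dollar figures are invented.

drug_cost_per_infusion = 4300.0     # assumed acquisition cost, USD
infusions_per_year = 8              # assumed maintenance schedule

non_drug_per_infusion = {
    "labor": 260.0,                 # nursing/pharmacy time
    "overhead": 110.0,              # allocated facility overhead
    "non_labor_supplies": 110.0,    # tubing, pumps, consumables
    "laboratory": 40.0,             # monitoring labs
}

annual_drug = drug_cost_per_infusion * infusions_per_year
annual_non_drug = sum(non_drug_per_infusion.values()) * infusions_per_year
total = annual_drug + annual_non_drug
print(f"annual total: ${total:,.0f} (drug share {annual_drug / total:.0%})")
```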
Griesenbeck, John S; Steck, Michelle D; Huber, John C; Sharkey, Joseph R; Rene, Antonio A; Brender, Jean D
2009-04-06
Studies have suggested that nitrates, nitrites, and nitrosamines have an etiologic role in adverse pregnancy outcomes and chronic diseases such as cancer. Although an extensive body of literature exists on estimates of these compounds in foods, the extant data varies in quality, quantified estimates, and relevance. We developed estimates of nitrates, nitrites, and nitrosamines for food items listed in the Short Willet Food Frequency Questionnaire (WFFQ) as adapted for use in the National Birth Defects Prevention Study. Multiple reference databases were searched for published literature reflecting nitrate, nitrite, and nitrosamine values in foods. Relevant published literature was reviewed; only publications reporting results for items listed on the WFFQ were selected for inclusion. The references selected were prioritized according to relevance to the U.S. population. Based on our estimates, vegetable products contain the highest levels of nitrate, contributing as much as 189 mg/serving. Meat and bean products contain the highest levels of nitrites with values up to 1.84 mg/serving. Alcohol, meat and dairy products contain the highest values of nitrosamines with a maximum value of 0.531 microg/serving. The estimates of dietary nitrates, nitrites, and nitrosamines generated in this study are based on the published values currently available. To our knowledge, these are the only estimates specifically designed for use with the adapted WFFQ and generated to represent food items available to the U.S. population. The estimates provided may be useful in other research studies, specifically in those exploring the relation between exposure to these compounds in foods and adverse health outcomes.
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of this study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
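The bootstrap arm of the comparison works by resampling patients and recomputing the quantity of interest each time. A real analysis would re-run the full RPSFT adjustment on every resample; the sketch below bootstraps only a mean over hypothetical per-patient life-expectancy gains to show the mechanics.

```python
import random

random.seed(1)

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for `stat` over `data`."""
    stats = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        stats.append(stat(resample))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-patient life-expectancy gains (years) after a switching
# adjustment; these values are invented for illustration.
gains = [0.8, 1.1, 0.4, 1.6, 0.9, 1.3, 0.2, 1.0, 0.7, 1.2]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(gains, mean)
print(f"mean gain {mean(gains):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The point the paper makes is that this interval is wider than one computed as if the adjusted data were observed directly — the adjustment step itself contributes uncertainty.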
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for estimating biomass concentrations in a production-scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; and v. estimation based on differential evaluation. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method appears to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
David C. Chojnacky; Jennifer C. Jenkins; Amanda K. Holland
2009-01-01
Thousands of published equations purport to estimate biomass of individual trees. These equations are often based on very small samples, however, and can provide widely different estimates for trees of the same species. We addressed this issue in a previous study by devising 10 new equations that estimated total aboveground biomass for all species in North America (...
Journal: A Review of Some Tracer-Test Design Equations for ...
The necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult quantities to estimate for a proposed tracer test before it is conducted. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may reasonably be selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-
Antoniou, A; Pharoah, P; Narod, S; Risch, H; Eyfjord, J; Hopper, J; Olsson, H; Johannsson, O; Borg, A; Pasini, B; Radice, P; Manoukian, S; Eccles, D; Tang, N; Olah, E; Anton-Culver, H; Warner, E; Lubinski, J; Gronwald, J; Gorski, B; Tulinius, H; Thorlacius, S; Eerola, H; Nevanlinna, H; Syrjakoski, K; Kallioniemi, O; Thompson, D; Evans, C; Peto, J; Lalloo, F; Evans, D; Easton, D
2005-01-01
A recent report estimated the breast cancer risks in carriers of the three Ashkenazi founder mutations to be higher than previously published estimates derived from population-based studies. In an attempt to confirm this, the breast and ovarian cancer risks associated with the three Ashkenazi founder mutations were estimated using families included in a previous meta-analysis of population-based studies. The estimated breast cancer risks for each of the founder BRCA1 and BRCA2 mutations were similar to the corresponding estimates based on all BRCA1 or BRCA2 mutations in the meta-analysis. These estimates appear to be consistent with the observed prevalence of the mutations in the Ashkenazi Jewish population. PMID:15994883
Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi
2014-03-01
We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors and comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimate the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed-effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.
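The intuition behind model-based small area estimation can be shown with the classic composite (shrinkage) estimator, which weights the direct survey estimate against a synthetic one by relative precision. This is a generic sketch, not the paper's mixed-effects model; the variances below are invented and only the 50%/24% figures echo the abstract.

```python
def composite_estimate(direct, var_direct, synthetic, var_between):
    """Shrinkage (composite) small-area estimate: weight the direct survey
    estimate against a synthetic one by their relative precision."""
    gamma = var_between / (var_between + var_direct)
    return gamma * direct + (1.0 - gamma) * synthetic

direct = 50.0        # direct small-area survey estimate of prevalence, %
synthetic = 24.0     # indirect synthetic estimate, %

# A noisy direct estimate is shrunk strongly toward the synthetic value;
# a precise one is trusted almost as-is. Variances here are assumptions.
est_noisy = composite_estimate(direct, var_direct=9.0, synthetic=synthetic, var_between=16.0)
est_precise = composite_estimate(direct, var_direct=1.0, synthetic=synthetic, var_between=16.0)
print(f"noisy direct -> {est_noisy:.1f}%, precise direct -> {est_precise:.1f}%")
```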
Statistical analysis of the determinations of the Sun's Galactocentric distance
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
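The kind of metrological check described above — a weighted mean, a trend test, and a consistency statistic over published measurements — can be sketched in a few lines. The (year, R0, sigma) tuples below are hypothetical stand-ins, not the 53 published values.

```python
# Minimal version of the consistency/trend analysis described above, on
# hypothetical (year, R0 in kpc, sigma) measurements.

data = [(1995, 8.1, 0.4), (1999, 7.9, 0.35), (2003, 8.3, 0.3),
        (2007, 8.0, 0.25), (2011, 8.2, 0.2)]

w = [1 / s ** 2 for _, _, s in data]            # inverse-variance weights
r0 = [r for _, r, _ in data]
yr = [t for t, _, _ in data]

wmean = sum(wi * ri for wi, ri in zip(w, r0)) / sum(w)

# Weighted least-squares slope of R0 against publication year (the "trend").
t_bar = sum(wi * ti for wi, ti in zip(w, yr)) / sum(w)
slope = sum(wi * (ti - t_bar) * (ri - wmean) for wi, ti, ri in zip(w, yr, r0)) \
        / sum(wi * (ti - t_bar) ** 2 for wi, ti in zip(w, yr))

# Reduced chi-square as a consistency check (~1 means quoted errors look honest).
chi2_red = sum(wi * (ri - wmean) ** 2 for wi, ri in zip(w, r0)) / (len(data) - 1)
print(f"weighted mean {wmean:.2f} kpc, trend {slope:+.4f} kpc/yr, chi2/dof {chi2_red:.2f}")
```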
Density estimates of monarch butterflies overwintering in central Mexico
Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena
2017-01-01
Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
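A mixture distribution over a handful of published estimates, like the one described above, is straightforward to simulate. The six (estimate, SE) pairs below are stand-ins, not the actual studies' values, so only the qualitative behavior (right skew, median below mean) should be read from the output.

```python
import random, statistics

random.seed(0)

# Six hypothetical published overwinter density estimates (millions of
# monarchs per hectare) with rough SEs -- stand-ins, not the actual studies.
estimates = [(10.2, 2.0), (60.9, 9.0), (21.1, 4.0), (6.9, 1.5), (28.0, 5.0), (44.0, 7.0)]

# Equal-weight mixture: pick a study at random, then draw from its
# (truncated-at-zero) normal sampling distribution.
draws = []
for _ in range(20000):
    mu, se = random.choice(estimates)
    draws.append(max(0.01, random.gauss(mu, se)))

mean_d = statistics.mean(draws)
median_d = statistics.median(draws)
print(f"mixture mean {mean_d:.1f}, median {median_d:.1f} million ha^-1")
```

As in the paper, the mixture is right-skewed, so the median sits below the mean and is the more representative summary.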
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In this study, the allometric equation W = a(D²H)^b was used to analyze tree biomass of Chinese fir. Common methods for estimating the allometric model have taken the classical approach, based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that its parameters are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, a Bayesian method was used to estimate the Chinese fir biomass model. In the Bayesian framework, two kinds of priors were introduced: non-informative and informative. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature, and the parameter distributions from that literature were regarded as prior distributions in the Bayesian model. The Bayesian method with informative priors performed better than both the non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass.
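On the log scale the allometric model ln W = ln a + b ln(D²H) is linear, so an informative Gaussian prior on the exponent b turns the fit into a one-parameter ridge (MAP) problem. The sketch below shows only that shrinkage idea, not the paper's full Bayesian model; the tree data, prior, and residual variance are all invented.

```python
import math

# Log-transformed allometric fit ln W = ln a + b * ln(D^2 H) with a Gaussian
# prior on b (mean/sd loosely standing in for values pooled from published
# equations). The MAP slope under this prior is a shrunken least-squares fit.
# Data are invented for illustration.

trees = [(12.0, 9.0, 35.0), (18.0, 13.0, 110.0), (25.0, 17.0, 290.0),
         (8.0, 6.5, 12.0), (30.0, 20.0, 480.0)]   # (D cm, H m, W kg)

x = [math.log(d * d * h) for d, h, _ in trees]
y = [math.log(w) for _, _, w in trees]
prior_b, prior_sd = 0.9, 0.1        # assumed informative prior on exponent b
sigma2 = 0.05                       # assumed residual variance on log scale

xbar = sum(x) / len(x)
ybar = sum(y) / len(y)
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

# MAP slope: the least-squares slope shrunk toward the prior mean.
b_map = (sxy / sigma2 + prior_b / prior_sd ** 2) / (sxx / sigma2 + 1 / prior_sd ** 2)
a_map = math.exp(ybar - b_map * xbar)
print(f"b_map = {b_map:.3f}, a_map = {a_map:.4f}")
```

The MAP exponent lands between the prior mean (0.9) and the unpenalized least-squares slope, with the balance set by how informative the prior is relative to the data.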
Asteroid mass estimation using Markov-chain Monte Carlo
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2017-11-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
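The MCMC idea reduces nicely to one dimension for illustration. The sketch below is a toy analogue, not the authors' 13-dimensional solver: synthetic "perturbation" observations are assumed proportional to the perturber's mass plus Gaussian noise, and a random-walk Metropolis chain recovers the mass and its uncertainty.

```python
import random, math, statistics

random.seed(42)

# Toy 1-D analogue of MCMC mass estimation: observed "orbital perturbations"
# are proportional to the perturber's mass plus Gaussian noise. Everything
# here is synthetic; a real solver would fit the orbital elements jointly.
true_mass, noise_sd = 5.0, 1.0
obs = [true_mass + random.gauss(0, noise_sd) for _ in range(40)]

def log_likelihood(m):
    return -sum((o - m) ** 2 for o in obs) / (2 * noise_sd ** 2)

chain, m = [], 1.0                      # deliberately poor starting guess
for step in range(6000):
    proposal = m + random.gauss(0, 0.3) # symmetric random-walk proposal
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(m):
        m = proposal                    # Metropolis accept
    chain.append(m)

posterior = chain[1000:]                # discard burn-in
print(f"posterior mean {statistics.mean(posterior):.2f} "
      f"+/- {statistics.pstdev(posterior):.2f}")
```

The posterior spread here is the full (possibly skewed) sampled distribution, which is the point the abstract makes about linearized methods understating uncertainty.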
A Policy Impact Analysis of the Mandatory NCAA Sickle Cell Trait Screening Program
Tarini, Beth A; Brooks, Margaret Alison; Bundy, David G
2012-01-01
Objective: To estimate the impact of the mandatory National Collegiate Athletic Association (NCAA) sickle cell trait (SCT) screening policy on the identification of sickle cell carriers and the prevention of sudden death. Data Source: We used NCAA reports, population-based SCT prevalence estimates, and published risks for exercise-related sudden death attributable to SCT. Study Design: We estimated the number of sickle cell carriers identified and the number of potentially preventable sudden deaths under mandatory SCT screening of NCAA Division I athletes. We calculated the number of student-athletes with SCT using a conditional probability based upon SCT prevalence data and self-identified race/ethnicity status. We estimated sudden deaths over 10 years based on the published attributable risk of exercise-related sudden death due to SCT. Principal Findings: We estimate that over 2,000 NCAA Division I student-athletes with SCT will be identified under this screening policy and that, without intervention, about seven NCAA Division I student-athletes would die suddenly as a complication of SCT over a 10-year period. Conclusion: Universal sickle cell screening of NCAA Division I student-athletes will identify a substantial number of sickle cell carriers. A successful intervention could prevent about seven deaths over a decade. PMID:22150647
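The study design reduces to two arithmetic steps: a prevalence-weighted sum over self-identified groups, then an attributable-risk multiplication. The sketch below uses invented group counts and prevalences, not the NCAA data.

```python
# Back-of-envelope version of the screening arithmetic: expected carriers via
# a prevalence-weighted sum over self-identified race/ethnicity groups, then
# expected deaths from an attributable-risk rate. All inputs are invented.

athletes_by_group = {"A": 20000, "B": 120000, "C": 30000}   # hypothetical counts
sct_prevalence = {"A": 0.073, "B": 0.003, "C": 0.007}       # hypothetical rates

expected_carriers = sum(n * sct_prevalence[g] for g, n in athletes_by_group.items())

deaths_per_carrier_year = 1 / 2500     # assumed attributable risk, per year
years = 10
expected_deaths = expected_carriers * deaths_per_carrier_year * years
print(f"{expected_carriers:.0f} carriers, {expected_deaths:.1f} deaths / {years} yr")
```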
Battery Calendar Life Estimator Manual Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
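The workflow both manuals describe — fit a degradation model to aging data, then extrapolate to an end-of-life limit — can be shown with a toy example. The sqrt-of-time fade model and all the data below are assumptions for illustration, not the USABC statistical models.

```python
import math

# Toy calendar-life estimate: fit fade = a * sqrt(t) to (invented) storage
# data by least squares, then solve for the time at which fade hits an
# assumed end-of-life limit.

time_wk = [4, 8, 16, 32, 52, 78]                # storage time, weeks
fade_pct = [2.1, 2.9, 4.2, 5.8, 7.4, 9.1]       # capacity fade, percent

# Single-parameter linear least squares in s = sqrt(t): a = sum(s*f)/sum(s^2).
s = [math.sqrt(t) for t in time_wk]
a = sum(si * fi for si, fi in zip(s, fade_pct)) / sum(si * si for si in s)

eol_fade = 20.0                                  # assumed end-of-life at 20% fade
life_weeks = (eol_fade / a) ** 2                 # invert fade = a * sqrt(t)
print(f"a = {a:.3f}, projected calendar life ~ {life_weeks / 52:.1f} years")
```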
Joint; Groom
2000-07-30
A new generation of ocean colour satellites is now operational, with frequent observation of the global ocean. This paper reviews the potential to estimate marine primary production from satellite images. The procedures involved in retrieving estimates of phytoplankton biomass, as pigment concentrations, are discussed. Algorithms are applied to SeaWiFS ocean colour data to indicate seasonal variations in phytoplankton biomass in the Celtic Sea, on the continental shelf to the south west of the UK. Algorithms to estimate primary production rates from chlorophyll concentration are compared, and their advantages and disadvantages are discussed. The simplest algorithms utilise correlations between chlorophyll concentration and production rate, and one equation is used to estimate daily primary production rates for the western English Channel and Celtic Sea; these estimates compare favourably with published values. Primary production for the central Celtic Sea in the period April to September inclusive is estimated from SeaWiFS data to be 102 gC m(-2) in 1998 and 93 gC m(-2) in 1999; published estimates, based on in situ incubations, are ca. 80 gC m(-2). The satellite data demonstrate large variations in primary production between 1998 and 1999, with a significant increase in late summer in 1998 which did not occur in 1999. Errors are quantified for the estimation of primary production from simple algorithms based on satellite-derived chlorophyll concentration. These data show the potential to obtain better estimates of marine primary production than are possible with ship-based methods, with the ability to detect short-lived phytoplankton blooms. In addition, the potential to estimate new production from satellite data is discussed.
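The simplest class of algorithm mentioned above is an empirical power-law correlation between surface chlorophyll and daily production, integrated over the season. The coefficients and chlorophyll series below are placeholders, not the paper's fitted values or SeaWiFS data.

```python
# Sketch of the simplest chlorophyll-to-production algorithm class: an
# empirical power law. Coefficients are placeholders, not fitted values.

def daily_production(chl_mg_m3, a=400.0, b=0.55):
    """Daily primary production (mgC m^-2 d^-1) from chlorophyll (mg m^-3)."""
    return a * chl_mg_m3 ** b

# Integrate a hypothetical April-September series of monthly mean chlorophyll.
monthly_chl = [1.8, 2.6, 1.2, 0.8, 1.5, 2.1]        # mg m^-3, invented
days = [30, 31, 30, 31, 31, 30]
season_gc_m2 = sum(daily_production(c) * d for c, d in zip(monthly_chl, days)) / 1000.0
print(f"seasonal production ~ {season_gc_m2:.0f} gC m^-2")
```

With these invented inputs the seasonal total lands in the same general range as the abstract's 80-102 gC m(-2), but that is a consequence of the chosen placeholder coefficients, not a validation.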
Decision-Making Accuracy of CBM Progress-Monitoring Data
ERIC Educational Resources Information Center
Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.
2018-01-01
This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…
DOT National Transportation Integrated Search
2015-01-01
Traditionally, the Iowa Department of Transportation has used the Iowa Runoff Chart and single-variable regional-regression equations (RREs) from a U.S. Geological Survey report (published in 1987) as the primary methods to estimate annual exce...
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both the discrete modes and the continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM-based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM-based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on the two-tank hybrid system and a switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM-based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In both the presence and the absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
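The discrete-mode half of any IMM scheme of the kind described can be sketched in a few lines: mix the mode probabilities through a Markov transition matrix, then update them with each mode's measurement likelihood. The transition matrix and likelihood values below are illustrative, not taken from the paper.

```python
# Minimal sketch of the discrete-mode update in an IMM filter. The
# per-mode continuous filters (EKF/UKF) are omitted; only the mode
# probability recursion is shown. All numbers are illustrative.

def imm_mode_update(mu, trans, likelihoods):
    """One IMM cycle for the mode probabilities.

    mu          -- current mode probabilities, e.g. [0.5, 0.5]
    trans       -- trans[i][j] = P(mode j at k+1 | mode i at k)
    likelihoods -- measurement likelihood produced by each mode's filter
    """
    n = len(mu)
    # mixing step: predicted mode probabilities under the Markov chain
    c = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # Bayes update with each mode's measurement likelihood, then normalise
    post = [likelihoods[j] * c[j] for j in range(n)]
    total = sum(post)
    return [p / total for p in post]

# two modes with sticky transitions; mode 0's filter explains the
# measurement better, so its probability should grow
mu = imm_mode_update([0.5, 0.5],
                     [[0.95, 0.05], [0.05, 0.95]],
                     likelihoods=[0.8, 0.1])
```

In a full IMM-EKF or IMM-UKF, the same mixing weights also blend the per-mode state estimates and covariances before each filter's predict step.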
NASA Astrophysics Data System (ADS)
Vergino, Eileen S.
Soviet seismologists have published descriptions of 96 nuclear explosions conducted from 1961 through 1972 at the Semipalatinsk test site, in Kazakhstan, central Asia [Bocharov et al., 1989]. With the exception of releasing news about some of their peaceful nuclear explosions (PNEs), the Soviets have never before published such a body of information. To estimate the seismic yield of a nuclear explosion it is necessary to obtain a calibrated magnitude-yield relationship based on events with known yields and with a consistent set of seismic magnitudes. U.S. estimation of Soviet test yields has been done by applying relationships derived from U.S. experience at the Nevada Test Site (NTS) to the Soviet sites, with some correction for differences due to attenuation and near-source coupling of seismic waves.
Grewal, Jagteshwar; Zhang, Jun; Mikolajczyk, Rafael T; Ford, Jessie
2010-08-01
Estimates of gestational age based on early second-trimester ultrasound often differ from those based on the last menstrual period (LMP), even when a woman is certain about her LMP. Discrepancies in these gestational age estimates may be associated with an increased risk of cesarean section and low birth weight. We analyzed 7228 singleton, low-risk, white women from The Routine Antenatal Diagnostic Imaging with Ultrasound trial. The women were recruited at less than 14 weeks of gestation and received ultrasound exams between 15 and 22 weeks. Our results indicate that among nulliparous women, the risk of cesarean section increased from 10% when the ultrasound-based gestational age exceeded the LMP-based estimate by 4 days to 60% when the discrepancy increased to 21 days. Moreover, for each additional day the ultrasound-based estimate exceeded the LMP-based estimate, birth weight was higher by 9.6 g. Our findings indicate that a positive discrepancy (i.e., ultrasound-based estimate exceeds LMP-based estimate) in gestational age is associated with an increased risk of cesarean section. A negative discrepancy, by contrast, may reflect early intrauterine growth restriction and an increased risk of low birth weight. Copyright Thieme Medical Publishers.
Several key issues on using 137Cs method for soil erosion estimation
USDA-ARS?s Scientific Manuscript database
This work was to examine several key issues of using the cesium-137 method to estimate soil erosion rates in order to improve and standardize the method. Based on the comprehensive review and synthesis of a large body of published literature and the author’s extensive research experience, several k...
11 CFR 110.18 - Voting age population.
Code of Federal Regulations, 2011 CFR
2011-01-01
There is annually published by the Department of Commerce in the Federal Register an estimate of the voting age population based on an estimate of the voting age...
QFASAR: Quantitative fatty acid signature analysis with R
Bromaghin, Jeffrey F.
2017-01-01
Knowledge of predator diets provides essential insights into their ecology, yet diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) is a popular method of estimating diet composition that continues to be investigated and extended. However, software to implement QFASA has only recently become publicly available. I summarize a new R package, qfasar, for diet estimation using QFASA methods. The package also provides functionality to evaluate and potentially improve the performance of a library of prey signature data, compute goodness-of-fit diagnostics, and support simulation-based research. Several procedures in the package have not previously been published. qfasar makes traditional and recently published QFASA diet estimation methods accessible to ecologists for the first time. Use of the package is illustrated with signature data from Chukchi Sea polar bears and potential prey species.
Li, Jinhui; Ji, Yifei; Zhang, Yongsheng; Zhang, Qilei; Huang, Haifeng; Dong, Zhen
2018-04-10
Spaceborne synthetic aperture radar (SAR) missions operating at low frequencies, such as L-band or P-band, are significantly influenced by the ionosphere. As one of the serious ionospheric effects, Faraday rotation (FR) is a significant distortion source for polarimetric SAR (PolSAR) applications. Various published FR estimators, along with an improved one, have been introduced to address this issue, all of which are implemented by processing a set of PolSAR real data. The improved estimator exhibits optimal robustness based on performance analysis, especially in terms of system noise. However, all published estimators, including the improved one, suffer from a potential FR angle (FRA) ambiguity. A novel strategy for correcting this ambiguity in those FR estimators is proposed and shown as a flow process, divided into pixel-level and image-level correction. The former has not previously been recognized and thus is considered in particular. Finally, validation experiments show the prominent performance of the proposed strategy.
Stature estimation equations for South Asian skeletons based on DXA scans of contemporary adults.
Pomeroy, Emma; Mushrif-Tripathy, Veena; Wells, Jonathan C K; Kulkarni, Bharati; Kinra, Sanjay; Stock, Jay T
2018-05-03
Stature estimation from the skeleton is a classic anthropological problem, and recent years have seen the proliferation of population-specific regression equations. Many rely on the anatomical reconstruction of stature from archaeological skeletons to derive regression equations based on long bone lengths, but this requires a collection with very good preservation. In some regions, for example South Asia, typical environmental conditions preclude the sufficient preservation of skeletal remains. Large-scale epidemiological studies that include medical imaging of the skeleton by techniques such as dual-energy X-ray absorptiometry (DXA) offer new potential datasets for developing such equations. We derived estimation equations based on known height and bone lengths measured from DXA scans from the Andhra Pradesh Children and Parents Study (Hyderabad, India). Given debates on the most appropriate regression model to use, multiple methods were compared, and the performance of the equations was tested on a published skeletal dataset of individuals with known stature. The equations have standard errors of estimate and prediction errors similar to those derived using anatomical reconstruction or from cadaveric datasets. As measured by the number of significant differences between true and estimated stature, and the prediction errors, the new equations perform as well as, and generally better than, published equations commonly used on South Asian skeletons or based on Indian cadaveric datasets. This study demonstrates the utility of DXA scans as a data source for developing stature estimation equations and offers a new set of equations for use with South Asian datasets. © 2018 Wiley Periodicals, Inc.
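Equations of the kind this study derives are applied as a simple linear regression with a reported standard error of estimate (SEE). The sketch below shows the general form only; the slope, intercept, and SEE are invented for illustration and are not the published DXA-derived coefficients.

```python
# Applying a long-bone stature regression of the general form
# stature (cm) = intercept + slope * bone length (cm), with an
# approximate 95% interval built from the SEE. Coefficients are
# hypothetical placeholders, NOT the published equations.

def estimate_stature(bone_cm, slope, intercept, see):
    """Return the point estimate and an approximate 95% interval (cm)."""
    est = intercept + slope * bone_cm
    return est, (est - 1.96 * see, est + 1.96 * see)

# hypothetical femur-length equation applied to a 44 cm femur
est, (lo, hi) = estimate_stature(44.0, slope=2.4, intercept=60.0, see=3.5)
```

The ±1.96 × SEE interval is the usual quick approximation; a full prediction interval would also widen with distance from the reference sample's mean bone length.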
David. C. Chojnacky
2012-01-01
An update of the Jenkins et al. (2003) biomass estimation equations for North American tree species resulted in 35 generalized equations developed from published equations. These 35 equations, which predict aboveground biomass of individual species grouped according to a taxa classification (based on genus or family and sometimes specific gravity), generally predicted...
Moreau, Marjory; Leonard, Jeremy; Phillips, Katherine A; Campbell, Jerry; Pendse, Salil N; Nicolas, Chantel; Phillips, Martin; Yoon, Miyoung; Tan, Yu-Mei; Smith, Sherrie; Pudukodu, Harish; Isaacs, Kristin; Clewell, Harvey
2017-10-01
A few different exposure prediction tools were evaluated for use in the new in vitro-based safety assessment paradigm using di-2-ethylhexyl phthalate (DEHP) and dibutyl phthalate (DnBP) as case compounds. Daily intake of each phthalate was estimated using both high-throughput (HT) prediction models such as the HT Stochastic Human Exposure and Dose Simulation model (SHEDS-HT) and the ExpoCast heuristic model and non-HT approaches based on chemical specific exposure estimations in the environment in conjunction with human exposure factors. Reverse dosimetry was performed using a published physiologically based pharmacokinetic (PBPK) model for phthalates and their metabolites to provide a comparison point. Daily intakes of DEHP and DnBP were estimated based on the urinary concentrations of their respective monoesters, mono-2-ethylhexyl phthalate (MEHP) and monobutyl phthalate (MnBP), reported in NHANES (2011-2012). The PBPK-reverse dosimetry estimated daily intakes at the 50th and 95th percentiles were 0.68 and 9.58 μg/kg/d and 0.089 and 0.68 μg/kg/d for DEHP and DnBP, respectively. For DEHP, the estimated median from PBPK-reverse dosimetry was about 3.6-fold higher than the ExpoCast estimate (0.68 and 0.18 μg/kg/d, respectively). For DnBP, the estimated median was similar to that predicted by ExpoCast (0.089 and 0.094 μg/kg/d, respectively). The SHEDS-HT prediction of DnBP intake from consumer product pathways alone was higher at 0.67 μg/kg/d. The PBPK-reverse dosimetry-estimated median intake of DEHP and DnBP was comparable to values previously reported for US populations. These comparisons provide insights into establishing criteria for selecting appropriate exposure prediction tools for use in an integrated modeling platform to link exposure to health effects. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Evaluation of the field relevance of several injury risk functions.
Prasad, Priya; Mertz, Harold J; Dalmotas, Danius J; Augenstein, Jeffrey S; Diggs, Kennerly
2010-11-01
An evaluation of the four injury risk curves proposed in the NHTSA NCAP for estimating the risk of AIS >= 3 injuries to the head, neck, and chest and AIS >= 2 injury to the Knee-Thigh-Hip (KTH) complex has been conducted. The predicted injury risks to the four body regions based on driver dummy responses in over 300 frontal NCAP tests were compared against those to drivers involved in real-world crashes of similar severity as represented in the NASS. The results of the study show that the predicted injury risks to the head and chest were slightly below those in NASS, and the predicted risk for the knee-thigh-hip complex was substantially below that observed in the NASS. The predicted risk for the neck by the Nij curve was greater than the observed risk in NASS by an order of magnitude, because the Nij risk curve predicts a non-zero risk when Nij = 0. An alternative, published Nte risk curve produced a risk estimate consistent with the NASS estimate of neck injury. Similarly, an alternative, published chest injury risk curve produced a risk estimate that was within the bounds of the NASS estimates. No published risk curve for femur compressive load could be found that would give risk estimates consistent with the range of the NASS estimates. Additional work on developing a femur compressive load risk curve is recommended.
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.
Gil, Manuel
2014-01-01
Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
New, national bottom-up estimate for tree-based biological ...
Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha⁻¹ yr⁻¹) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating
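The first of the two scaling approaches described, multiplying a published species-level ecosystem rate by the inventory area that species occupies and summing, reduces to a short calculation. All rates and areas below are made-up placeholders, not FIA-derived values.

```python
# Bottom-up upscaling sketch: sum over species of (ecosystem-scale BNF
# rate) x (area occupied in the inventory). Rates and areas are
# illustrative assumptions, NOT values from the FIA database.
rates_kg_ha_yr = {"Robinia": 30.0, "Alnus": 50.0}   # assumed kg N ha^-1 yr^-1
area_ha = {"Robinia": 1.0e6, "Alnus": 0.5e6}        # assumed inventory areas

total_kg = sum(rates_kg_ha_yr[s] * area_ha[s] for s in rates_kg_ha_yr)
total_tg = total_kg / 1e9  # kg N yr^-1 -> Tg N yr^-1
```

The second approach in the abstract replaces the fixed ecosystem rate with %Ndfa multiplied by an inventory-derived nitrogen accumulation rate for each species, but the summation over species and area is the same.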
The internal dosimetry of Rubidium-82 based on dynamic PET/CT imaging in humans
NASA Astrophysics Data System (ADS)
Hunter, Chad R.
Rubidium-82 (Rb-82) is a useful blood flow tracer, and has become important in recent years due to the shutdown of the Chalk River reactor. Published effective dose estimates for Rb-82 vary widely; as yet no comprehensive study in man has been conducted with PET/CT, and no effective dose estimates for Rb-82 during pharmacological stress testing have been published. 30 subjects were recruited for rest, and 25 subjects were recruited for stress. The subjects consisted of both cardiac patients and normal subjects. For rest, a total of 283 organs were measured across 60 scans. For stress, a total of 171 organs were measured across 25 scans. Effective dose estimates were calculated using the ICRP 60, 80, and 103 tissue weighting factors. Relative differences between this study and the published in-vivo estimates showed agreement for the lungs. Relative differences between this study and the blood flow models showed differences of more than 5 times in the thyroid contribution to the effective dose, demonstrating a limitation in these models. Comparisons between rest and stress effective dose estimates revealed no significant difference. The average 'adult' effective dose for Rb-82 was found to be 0.00084 +/- 0.00018 mSv/MBq. The highest dose organs were the lungs, kidneys and stomach wall. These dose estimates for Rb-82 are the first to be measured directly with PET/CT in humans, and are 4 times lower than previous ICRP 60 values based on a theoretical blood flow model. The total adult effective dose from a typical Rb-82 study including CT for attenuation correction and potential Sr-85 breakthrough is 1.5 +/- 0.4 mSv.
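The effective-dose calculation behind numbers like these is a weighted sum: E = Σ w_T · H_T over organs, with w_T the ICRP tissue weighting factors and H_T the organ equivalent dose per unit activity. The organ doses below, and the truncated set of weights, are illustrative numbers only, not the study's measurements.

```python
# Sketch of E = sum over tissues of w_T * H_T (mSv/MBq). Only a few
# ICRP 103 weighting factors are shown, and the organ doses are
# illustrative assumptions, NOT values measured in the study.

ICRP103_W = {"lungs": 0.12, "stomach": 0.12, "kidneys": 0.01}  # partial set

def effective_dose(organ_dose_msv_per_mbq, weights):
    """Weighted sum of organ equivalent doses over the organs supplied."""
    return sum(weights[t] * h for t, h in organ_dose_msv_per_mbq.items())

E = effective_dose({"lungs": 0.004, "stomach": 0.003, "kidneys": 0.008},
                   ICRP103_W)
```

A full calculation would include every tissue in the ICRP 103 table (the weights sum to 1), which is why a partial sum like this one underestimates E.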
Perry, Jonathan M G; Cooke, Siobhán B; Runestad Connour, Jacqueline A; Burgess, M Loring; Ruff, Christopher B
2018-02-01
Body mass is an important component of any paleobiological reconstruction. Reliable skeletal dimensions for making estimates are desirable but extant primate reference samples with known body masses are rare. We estimated body mass in a sample of extinct platyrrhines and Fayum anthropoids based on four measurements of the articular surfaces of the humerus and femur. Estimates were based on a large extant reference sample of wild-collected individuals with associated body masses, including previously published and new data from extant platyrrhines, cercopithecoids, and hominoids. In general, scaling of joint dimensions is positively allometric relative to expectations of geometric isometry, but negatively allometric relative to expectations of maintaining equivalent joint surface areas. Body mass prediction equations based on articular breadths are reasonably precise, with %SEEs of 17-25%. The breadth of the distal femoral articulation yields the most reliable estimates of body mass because it scales similarly in all major anthropoid taxa. Other joints scale differently in different taxa; therefore, locomotor style and phylogenetic affinity must be considered when calculating body mass estimates from the proximal femur, proximal humerus, and distal humerus. The body mass prediction equations were applied to 36 Old World and New World fossil anthropoid specimens representing 11 taxa, plus two Haitian specimens of uncertain taxonomic affinity. Among the extinct platyrrhines studied, only Cebupithecia is similar to large, extant platyrrhines in having large humeral (especially distal) joints. Our body mass estimates differ from each other and from published estimates based on teeth in ways that reflect known differences in relative sizes of the joints and teeth. We prefer body mass estimators that are biomechanically linked to weight-bearing, and especially those that are relatively insensitive to differences in locomotor style and phylogenetic history. 
Whenever possible, extant reference samples should be chosen to match target fossils in joint proportionality. Copyright © 2017 Elsevier Ltd. All rights reserved.
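Prediction equations of the kind this study derives are typically fit on log-transformed data and back-transformed for use. The sketch below shows that general ln-ln form only; the coefficients are invented placeholders, not the equations derived in the paper.

```python
import math

# Body-mass prediction from an articular dimension, in the usual
# log-log allometric form ln(mass) = a + b * ln(breadth). The
# coefficients are hypothetical, NOT the published equations.

def predict_mass_kg(breadth_mm, a, b):
    """Back-transform a ln-ln regression to a body mass in kg."""
    return math.exp(a + b * math.log(breadth_mm))

# hypothetical distal-femur articular breadth equation
mass = predict_mass_kg(30.0, a=-4.0, b=2.5)
```

A reported %SEE of 17-25%, as quoted in the abstract, translates to a multiplicative uncertainty band around such a back-transformed estimate rather than an additive one.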
Global estimate of the incidence of clinical pneumonia among children under five years of age.
Rudan, Igor; Tomaskovic, Lana; Boschi-Pinto, Cynthia; Campbell, Harry
2004-01-01
OBJECTIVE: Clinical pneumonia (defined as respiratory infections associated with clinical signs of pneumonia, principally pneumonia and bronchiolitis) in children under five years of age is still the leading cause of childhood mortality in the world. In this paper we aim to estimate the worldwide incidence of clinical pneumonia in young children. METHODS: Our estimate for the developing world is based on an analysis of published data on the incidence of clinical pneumonia from community-based longitudinal studies. Among more than 2000 studies published since 1961, we identified 46 studies that reported the incidence of clinical pneumonia, and 28 of these met pre-defined quality criteria. FINDINGS: The estimate of the median incidence from those studies was 0.28 episodes per child-year (e/cy). The 25-75% interquartile range was 0.21-0.71. We assessed the plausibility of this estimate using estimates of global mortality from acute respiratory infections, case-fatality rates reported for all episodes of clinical pneumonia in community-based studies (or for severe cases only), and estimates of the proportion of severe cases occurring in a defined population or community. CONCLUSION: The overlap between the ranges of the estimates implies that a plausible incidence estimate of clinical pneumonia for developing countries is 0.29 e/cy. This equates to an annual incidence of 150.7 million new cases, 11-20 million (7-13%) of which are severe enough to require hospital admission. In the developed world no comparable data are available. However, large population-based studies report that the incidence of community-acquired pneumonia among children less than five years old is approximately 0.026 e/cy, suggesting that more than 95% of all episodes of clinical pneumonia in young children worldwide occur in developing countries. PMID:15654403
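The scale-up arithmetic in the conclusion can be reproduced directly. The population denominator is back-calculated here from the quoted 150.7 million annual cases and 0.29 e/cy (roughly 520 million children); that implied figure is an assumption of this sketch, not a number stated in the abstract.

```python
# Incidence scale-up: episodes per child-year x children at risk.
# The denominator is back-calculated from the abstract's own totals,
# so it is an implied figure, not reported data.
incidence = 0.29                        # episodes per child-year
children_under5 = 150.7e6 / incidence   # implied population at risk
annual_cases = incidence * children_under5

# 7-13% of episodes severe enough to require hospital admission
severe_low = 0.07 * annual_cases
severe_high = 0.13 * annual_cases
```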
Boehler, Christian E H; Lord, Joanne
2016-01-01
Published cost-effectiveness estimates can vary considerably, both within and between countries. Despite extensive discussion, little is known empirically about factors relating to these variations. To use multilevel statistical modeling to integrate cost-effectiveness estimates from published economic evaluations to investigate potential causes of variation. Cost-effectiveness studies of statins for cardiovascular disease prevention were identified by systematic review. Estimates of incremental costs and effects were extracted from reported base case, sensitivity, and subgroup analyses, with estimates grouped in studies and in countries. Three bivariate models were developed: a cross-classified model to accommodate data from multinational studies, a hierarchical model with multinational data allocated to a single category at country level, and a hierarchical model excluding multinational data. Covariates at different levels were drawn from a long list of factors suggested in the literature. We found 67 studies reporting 2094 cost-effectiveness estimates relating to 23 countries (6 studies reporting for more than 1 country). Data and study-level covariates included patient characteristics, intervention and comparator cost, and some study methods (e.g., discount rates and time horizon). After adjusting for these factors, the proportion of variation attributable to countries was negligible in the cross-classified model but moderate in the hierarchical models (14%-19% of total variance). Country-level variables that improved the fit of the hierarchical models included measures of income and health care finance, health care resources, and population risks. Our analysis suggested that variability in published cost-effectiveness estimates is related more to differences in study methods than to differences in national context. Multinational studies were associated with much lower country-level variation than single-country studies. 
These findings are for a single clinical question and may be atypical. © The Author(s) 2015.
Bruise chromophore concentrations over time
NASA Astrophysics Data System (ADS)
Duckworth, Mark G.; Caspall, Jayme J.; Mappus, Rudolph L., IV; Kong, Linghua; Yi, Dingrong; Sprigle, Stephen H.
2008-03-01
During investigations of potential child and elder abuse, clinicians and forensic practitioners are often asked to offer opinions about the age of a bruise. A commonality between existing methods of bruise aging is analysis of bruise color or estimation of chromophore concentration. Relative chromophore concentration is an underlying factor that determines bruise color. We investigate a method of chromophore concentration estimation that can be employed in a handheld imaging spectrometer with a small number of wavelengths. The method, based on absorbance properties defined by the Beer-Lambert law, allows estimation of the differential chromophore concentration between bruised and normal skin. Absorption coefficient data for each chromophore are required to make the estimation. Two different sources of these data are used in the analysis: coefficients generated using Independent Component Analysis, and coefficients taken from published values. Differential concentration values over time, generated using both sources, show correlation to published models of bruise color change over time and total chromophore concentration over time.
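The Beer-Lambert inversion described reduces, for two chromophores measured at two wavelengths, to a 2x2 linear system: the bruise-minus-normal absorbance difference at each wavelength is a linear combination of the chromophore concentration differences. All coefficient and absorbance values below are illustrative placeholders, not data from the paper.

```python
# Differential Beer-Lambert inversion: solve eps @ dc = dA, where eps
# holds absorption coefficients (rows = wavelengths, columns =
# chromophores) and dA is the bruised-minus-normal absorbance
# difference. Every number here is an illustrative assumption.

def solve_2x2(eps, dA):
    """Solve the 2x2 system eps @ dc = dA by Cramer's rule."""
    (a, b), (c, d) = eps
    det = a * d - b * c
    return ((d * dA[0] - b * dA[1]) / det,
            (a * dA[1] - c * dA[0]) / det)

eps = ((0.9, 0.2),   # coefficients of chromophores 1 and 2 at wavelength 1
       (0.3, 0.8))   # and at wavelength 2
delta_A = (0.5, 0.4)  # bruise-minus-normal absorbance at each wavelength
dc1, dc2 = solve_2x2(eps, delta_A)
```

With more wavelengths than chromophores, as in a real spectrometer, the same relation is solved in the least-squares sense instead.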
First Nuclear DNA Amounts in more than 300 Angiosperms
ZONNEVELD, B. J. M.; LEITCH, I. J.; BENNETT, M. D.
2005-01-01
• Background and Aims Genome size (DNA C-value) data are key biodiversity characters of fundamental significance used in a wide variety of biological fields. Since 1976, Bennett and colleagues have made scattered published and unpublished genome size data more widely accessible by assembling them into user-friendly compilations. Initially these were published as hard copy lists, but since 1997 they have also been made available electronically (see the Plant DNA C-values database www.kew.org/cval/homepage.html). Nevertheless, at the Second Plant Genome Size Meeting in 2003, Bennett noted that as many as 1000 DNA C-value estimates were still unpublished and hence unavailable. Scientists were strongly encouraged to communicate such unpublished data. The present work combines the databasing experience of the Kew-based authors with the unpublished C-values produced by Zonneveld to make a large body of valuable genome size data available to the scientific community. • Methods C-values for angiosperm species, selected primarily for their horticultural interest, were estimated by flow cytometry using the fluorochrome propidium iodide. The data were compiled into a table whose form is similar to previously published lists of DNA amounts by Bennett and colleagues. • Key Results and Conclusions The present work contains C-values for 411 taxa including first values for 308 species not listed previously by Bennett and colleagues. Based on a recent estimate of the global published output of angiosperm DNA C-value data (i.e. 200 first C-value estimates per annum) the present work equals 1·5 years of average global published output; and constitutes over 12 % of the latest 5-year global target set by the Second Plant Genome Size Workshop (see www.kew.org/cval/workshopreport.html). Hopefully, the present example will encourage others to unveil further valuable data which otherwise may lie forever unpublished and unavailable for comparative analyses. PMID:15905300
Yi, Qitao; Li, Hui; Lee, Jin-Woo; Kim, Youngchul
2015-09-01
The Publisher regrets that this article is an accidental duplication of an article that has already been published in Desalination Water Treat., 27:1-3, 175-188, http://dx.doi.org/10.5004/dwt.2011.2736. The duplicate article has therefore been withdrawn. The full Elsevier Policy on Article Withdrawal can be found at http://www.elsevier.com/locate/withdrawalpolicy. Copyright © 2015. Published by Elsevier B.V.
Inference of pCO2 Levels during the Late Cretaceous Using Fossil Lauraceae
NASA Astrophysics Data System (ADS)
Richey, J. D.; Upchurch, G. R.
2011-12-01
Botanical estimates of pCO2 for the Late Cretaceous have most commonly used Stomatal Index (SI) in fossil Ginkgo. Recently, SI in fossil Lauraceae has been used to infer changes in pCO2 across the Cenomanian-Turonian boundary, based on the relation between SI and pCO2 in extant Laurus and Hypodaphnis. To provide a broad-scale picture of pCO2 based on fossil Lauraceae, we examined dispersed cuticle of the leaf macrofossil genus Pandemophyllum from: 1) the early to middle Cenomanian of the Potomac Group of Maryland (Mauldin Mountain locality, lower Zone III) and 2) the Maastrichtian of southern Colorado (Raton Basin, Starkville South and Berwind Canyon localities). These samples fall within the Late Cretaceous decline in pCO2 inferred from geochemical modeling and other proxies. SI was calculated from fossil cuticle fragments using ImageJ and counts of up to 56,000 cells per sample, a far greater number of cells than are counted in most studies. CO2 levels were estimated using the relations between SI and CO2 published for Laurus nobilis and Hypodaphnis zenkeri. Early to middle Cenomanian atmospheric pCO2 is estimated at 362-536 parts per million (ppm). This range represents the absolute minimum and maximum estimated CO2 levels from the ±95% confidence intervals (CI) of the relation between SI and CO2 for the modern equivalents, and SI ± 1 standard deviation (SD) in the fossil genus Pandemophyllum. Late Maastrichtian atmospheric pCO2 is estimated at 358-534 ppm. The Maastrichtian estimate falls within the range of published estimates from other proxies. The Cenomanian estimate, in contrast, is low relative to most other estimates. The 95% confidence intervals of our pCO2 estimates overlap each other and those of many of the assemblages published by Barclay et al. (2010) for Lauraceae across the Cenomanian-Turonian boundary.
This could indicate that 1) pCO2 did not undergo a major long-term decline during the Late Cretaceous, 2) Lauraceae show low sensitivity to high pCO2, or 3) additional sampling is necessary to find the mid-Cretaceous pCO2 maximum inferred by other proxy methods.
Estimating the Reliability of the CITAR Computer Courseware Evaluation System.
ERIC Educational Resources Information Center
Micceri, Theodore
In today's complex computer-based teaching (CBT)/computer-assisted instruction market, flashy presentations frequently prove the most important purchasing element, while instructional design and content are secondary to form. Courseware purchasers must base decisions upon either a vendor's presentation or some published evaluator rating.…
Ortendahl, Jesse D; Pulgar, Sonia J; Mirakhur, Beloo; Cox, David; Bentley, Tanya Gk; Phan, Alexandria T
2017-01-01
With the introduction of new therapies, hospitals must plan how to spend limited resources in a cost-effective manner. To assist in identifying the optimal treatment for patients with locally advanced or metastatic gastroenteropancreatic neuroendocrine tumors, budget impact modeling was used to estimate the financial implications of adoption and diffusion of somatostatin analogs (SSAs). A hypothetical cohort of 500 gastroenteropancreatic neuroendocrine tumor patients was assessed in an economic model, with the proportion with metastatic disease treated with an SSA estimated using published data. Drug acquisition, preparation, and administration costs were based on national pricing databases and published literature. Octreotide dosing was based on published estimates of real-world data, whereas for lanreotide, real-world dosing was unavailable and we therefore used the highest indicated dosing. Alternative scenarios reflecting the proportion of patients receiving lanreotide or octreotide were considered to estimate the incremental budget impact to the hospital. In the base case, 313 of the initial 500 gastroenteropancreatic neuroendocrine tumor patients were treated with an SSA. The model-predicted per-patient cost was US$83,473 for lanreotide and US$89,673 for octreotide. With a hypothetical increase in lanreotide utilization from 5% to 30% of this population, the annual model-projected hospital costs decreased by US$488,615. When varying the inputs in one-way sensitivity analyses, the results were most sensitive to changes in dosing assumptions. Results suggest that factors beyond drug acquisition cost can influence the budget impact to a hospital. When considering preparation and administration time, and real-world dosing, use of lanreotide has the potential to reduce health care expenditures associated with metastatic gastroenteropancreatic neuroendocrine tumor treatments.
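The headline saving can be roughly reproduced from the figures quoted in this abstract. The sketch below is a back-of-envelope check, not the study's model: it multiplies the shifted share of treated patients by the per-patient cost difference, whereas the full model also varies preparation and administration costs, so the simple product only approximates the published US$488,615.

```python
# Back-of-envelope check of the budget impact arithmetic described above.
# Inputs come from the abstract; this is NOT the study's full model.
TREATED = 313                # SSA-treated patients in the base case
COST_LANREOTIDE = 83_473     # model-predicted per-patient cost (US$)
COST_OCTREOTIDE = 89_673

def incremental_savings(share_before: float, share_after: float) -> float:
    """Annual savings when the lanreotide share of treated patients rises."""
    patients_shifted = TREATED * (share_after - share_before)
    return patients_shifted * (COST_OCTREOTIDE - COST_LANREOTIDE)

print(incremental_savings(0.05, 0.30))  # roughly 485,150, near the published US$488,615
```

The residual gap versus US$488,615 is consistent with the non-acquisition cost components the full model includes.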
Shi, Ting; McAllister, David A; O'Brien, Katherine L; Simoes, Eric A F; Madhi, Shabir A; Gessner, Bradford D; Polack, Fernando P; Balsells, Evelyn; Acacio, Sozinho; Aguayo, Claudia; Alassani, Issifou; Ali, Asad; Antonio, Martin; Awasthi, Shally; Awori, Juliet O; Azziz-Baumgartner, Eduardo; Baggett, Henry C; Baillie, Vicky L; Balmaseda, Angel; Barahona, Alfredo; Basnet, Sudha; Bassat, Quique; Basualdo, Wilma; Bigogo, Godfrey; Bont, Louis; Breiman, Robert F; Brooks, W Abdullah; Broor, Shobha; Bruce, Nigel; Bruden, Dana; Buchy, Philippe; Campbell, Stuart; Carosone-Link, Phyllis; Chadha, Mandeep; Chipeta, James; Chou, Monidarin; Clara, Wilfrido; Cohen, Cheryl; de Cuellar, Elizabeth; Dang, Duc-Anh; Dash-Yandag, Budragchaagiin; Deloria-Knoll, Maria; Dherani, Mukesh; Eap, Tekchheng; Ebruke, Bernard E; Echavarria, Marcela; de Freitas Lázaro Emediato, Carla Cecília; Fasce, Rodrigo A; Feikin, Daniel R; Feng, Luzhao; Gentile, Angela; Gordon, Aubree; Goswami, Doli; Goyet, Sophie; Groome, Michelle; Halasa, Natasha; Hirve, Siddhivinayak; Homaira, Nusrat; Howie, Stephen R C; Jara, Jorge; Jroundi, Imane; Kartasasmita, Cissy B; Khuri-Bulos, Najwa; Kotloff, Karen L; Krishnan, Anand; Libster, Romina; Lopez, Olga; Lucero, Marilla G; Lucion, Florencia; Lupisan, Socorro P; Marcone, Debora N; McCracken, John P; Mejia, Mario; Moisi, Jennifer C; Montgomery, Joel M; Moore, David P; Moraleda, Cinta; Moyes, Jocelyn; Munywoki, Patrick; Mutyara, Kuswandewi; Nicol, Mark P; Nokes, D James; Nymadawa, Pagbajabyn; da Costa Oliveira, Maria Tereza; Oshitani, Histoshi; Pandey, Nitin; Paranhos-Baccalà, Gláucia; Phillips, Lia N; Picot, Valentina Sanchez; Rahman, Mustafizur; Rakoto-Andrianarivelo, Mala; Rasmussen, Zeba A; Rath, Barbara A; Robinson, Annick; Romero, Candice; Russomando, Graciela; Salimi, Vahid; Sawatwong, Pongpun; Scheltema, Nienke; Schweiger, Brunhilde; Scott, J Anthony G; Seidenberg, Phil; Shen, Kunling; Singleton, Rosalyn; Sotomayor, Viviana; Strand, Tor A; Sutanto, Agustinus; Sylla, 
Mariam; Tapia, Milagritos D; Thamthitiwat, Somsak; Thomas, Elizabeth D; Tokarz, Rafal; Turner, Claudia; Venter, Marietjie; Waicharoen, Sunthareeya; Wang, Jianwei; Watthanaworawit, Wanitda; Yoshida, Lay-Myint; Yu, Hongjie; Zar, Heather J; Campbell, Harry; Nair, Harish
2017-09-02
We have previously estimated that respiratory syncytial virus (RSV) was associated with 22% of all episodes of (severe) acute lower respiratory infection (ALRI) resulting in 55 000 to 199 000 deaths in children younger than 5 years in 2005. In the past 5 years, major research activity on RSV has yielded substantial new data from developing countries. With a considerably expanded dataset from a large international collaboration, we aimed to estimate the global incidence, hospital admission rate, and mortality from RSV-ALRI episodes in young children in 2015. We estimated the incidence and hospital admission rate of RSV-associated ALRI (RSV-ALRI) in children younger than 5 years stratified by age and World Bank income regions from a systematic review of studies published between Jan 1, 1995, and Dec 31, 2016, and unpublished data from 76 high-quality population-based studies. We estimated the RSV-ALRI incidence for 132 developing countries using a risk factor-based model and 2015 population estimates. We estimated the in-hospital RSV-ALRI mortality by combining in-hospital case fatality ratios with hospital admission estimates from hospital-based (published and unpublished) studies. We also estimated overall RSV-ALRI mortality by identifying studies reporting monthly data for ALRI mortality in the community and RSV activity. We estimated that globally in 2015, 33·1 million (uncertainty range [UR] 21·6-50·3) episodes of RSV-ALRI resulted in about 3·2 million (2·7-3·8) hospital admissions and 59 600 (48 000-74 500) in-hospital deaths in children younger than 5 years. In children younger than 6 months, 1·4 million (UR 1·2-1·7) hospital admissions and 27 300 (UR 20 700-36 200) in-hospital deaths were due to RSV-ALRI. We also estimated that the overall RSV-ALRI mortality could be as high as 118 200 (UR 94 600-149 400). Incidence and mortality varied substantially from year to year in any given population.
Globally, RSV is a common cause of childhood ALRI and a major cause of hospital admissions in young children, resulting in a substantial burden on health-care services. About 45% of hospital admissions and in-hospital deaths due to RSV-ALRI occur in children younger than 6 months. An effective maternal RSV vaccine or monoclonal antibody could have a substantial effect on disease burden in this age group. The Bill & Melinda Gates Foundation. Copyright © 2017 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license. All rights reserved.
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates.
Kostich, Mitchell S; Flick, Robert W; Batt, Angela L; Mash, Heath E; Boone, J Scott; Furlong, Edward T; Kolpin, Dana W; Glassmeyer, Susan T
2017-02-01
We describe screening-level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes may be warranted. Published by Elsevier B.V.
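The screening comparison described above reduces to flagging each analyte whose measured concentration exceeds its EC estimate, or 1/10th of it. A minimal sketch of that decision rule follows; all concentrations and EC values below are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of the screening comparison: flag analytes whose maximum
# measured concentration exceeds the effect concentration (EC) estimate, or
# 1/10th of it. All numbers are hypothetical, not data from the study.
SAMPLES = {  # analyte -> (max measured concentration, EC estimate), same units
    "copper":      (12.0, 9.0),
    "atrazine":    (0.30, 2.0),
    "venlafaxine": (0.05, 40.0),
}

def screen(samples):
    above_ec, above_tenth_ec = [], []
    for analyte, (measured, ec) in samples.items():
        if measured > ec:
            above_ec.append(analyte)
        elif measured > ec / 10:
            above_tenth_ec.append(analyte)
    return above_ec, above_tenth_ec

print(screen(SAMPLES))  # (['copper'], ['atrazine'])
```

The second list corresponds to the paper's "in excess of 1/10th of their respective EC" category, which marks candidates for more detailed characterization rather than exceedances.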
How EIA Estimates Natural Gas Production
2004-01-01
The Energy Information Administration (EIA) publishes monthly and annual estimates of natural gas production in the United States. The estimates are based on data EIA collects from gas-producing states and data collected by the U.S. Minerals Management Service (MMS) in the Department of the Interior. The states and MMS collect this information from producers of natural gas for various reasons, most often for revenue purposes. Because the information is not sufficiently complete or timely for inclusion in EIA's Natural Gas Monthly (NGM), EIA has developed estimation methodologies, described in this document, to generate monthly production estimates.
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
United States pulpwood receipts : softwood and hardwood, roundwood and residues, 1950-1996
C. Denise Ingram; Peter J. Ince; Ryan L. Mehlberg
1999-01-01
This report shows pulpwood receipts at wood pulp mills in the United States for the period 1950 to 1996. It is an update of the General Technical Report FPL-GTR-73, "United States Pulpwood Receipts: Softwood and Hardwood, Roundwood and Residues, 1950-1989," published in 1993. This report continues as a compilation of published and estimated data based on information...
NASA Astrophysics Data System (ADS)
Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios
2016-06-01
Based on multiscale agent-based computations, we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. To approximate the epidemic dynamics, we developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network, as well as the per-contact transmission probabilities by age group, we exploited the so-called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO), as well as to recently published data on key epidemiological variables, such as the mean times to death and recovery and the case fatality rate.
Vijaya Raghavan, S R; Radhakrishnan, T K; Srinivasan, K
2011-01-01
In this research work, the authors present the design and implementation of a recurrent neural network (RNN)-based inferential state estimation scheme for an ideal reactive distillation column. Decentralized PI controllers are designed and implemented. The reactive distillation process is controlled by controlling the composition, which is estimated from the available temperature measurements using a type of RNN called the Time Delayed Neural Network (TDNN). The performance of the RNN-based state estimation scheme under both open-loop and closed-loop operation has been compared with a standard Extended Kalman filter (EKF) and a Feedforward Neural Network (FNN). Online training/correction is performed for both the RNN and FNN schemes every ten minutes, whenever new untrained measurements become available from a conventional composition analyzer. The RNN shows better state estimation capability than the other schemes in terms of qualitative and quantitative performance indices. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
User's guide: RPGrow$: a red pine growth and analysis spreadsheet for the Lake States.
Carol A. Hyldahl; Gerald H. Grossman
1993-01-01
Describes RPGrow$, a stand-level, interactive spreadsheet for projecting growth and yield and estimating financial returns of red pine plantations in the Lake States. This spreadsheet is based on published growth models for red pine. Financial analyses are based on discounted cash flow methods.
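The financial analyses mentioned above rest on standard discounted cash flow arithmetic. The sketch below shows the core net-present-value calculation with hypothetical plantation cash flows (costs negative, revenues positive); it illustrates the method only and is not RPGrow$'s actual growth or financial model.

```python
# Core discounted cash flow (DCF) calculation underlying financial analyses
# like those in RPGrow$. Cash flows below are hypothetical illustrations.
def npv(rate: float, cash_flows: dict) -> float:
    """Net present value of {year: cash flow} at a given annual discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in cash_flows.items())

# Hypothetical red pine rotation: planting cost, a thinning, a final harvest.
plantation = {0: -500.0, 20: 300.0, 40: 4000.0}
print(round(npv(0.04, plantation), 2))  # positive NPV at a 4% discount rate
```

Raising the discount rate shrinks the weight of the distant final harvest, which is why rotation-length decisions are sensitive to the chosen rate.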
Anzehaee, Mohammad Mousavi; Haeri, Mohammad
2011-07-01
New estimators are designed based on the modified force balance model to estimate the detaching droplet size, detached droplet size, and mean value of droplet detachment frequency in a gas metal arc welding process. The proper droplet size for the process to be in the projected spray transfer mode is determined based on the modified force balance model and the designed estimators. Finally, the droplet size and the melting rate are controlled using two proportional-integral (PI) controllers to achieve high weld quality by retaining the transfer mode and generating appropriate signals as inputs of the weld geometry control loop. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Mapako, Tonderai; Janssen, Mart P; Mvere, David A; Emmanuel, Jean C; Rusakaniko, Simbarashe; Postma, Maarten J; van Hulst, Marinus
2016-06-01
Various models for estimating the residual risk (RR) of transmission of infections by blood transfusion have been published, based mainly on data from high-income countries. However, obtaining the data required for such an assessment remains challenging in most developing settings. The National Blood Service Zimbabwe (NBSZ) adapted a published incidence-window period (IWP) model, which has less demanding data requirements. In this study we assess the impact of various definitions of blood donor subpopulations and models on RR estimates. We compared the outcomes of two published models and an adapted NBSZ model. The Schreiber IWP model (Model 1), an amended version (Model 2), and an adapted NBSZ model (Model 3) were applied. The three models variably include prevalence, incidence, preseroconversion intervals, mean lifetime risk, and person-years at risk. Annual mean RR estimates and 95% confidence intervals for each of the three models for human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV) were determined using NBSZ blood donor data from 2002 through 2011. The annual mean RR estimates for Models 1 through 3 were 1 in 6542, 5805, and 6418, respectively, for HIV; 1 in 1978, 2027, and 1628 for HBV; and 1 in 9588, 15,126, and 7750 for HCV. The adapted NBSZ model provided results comparable to those of the published methods, and these highlight the high occurrence of HBV in Zimbabwe. The adapted NBSZ model could be used as an alternative for estimating RRs in settings where two repeat donations are not available. © 2016 AABB.
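The core of the Schreiber-style incidence/window-period model (Model 1 above) is a single product: residual risk is approximately the incidence rate among donors multiplied by the infectious (preseroconversion) window period. The sketch below shows that calculation with illustrative inputs; the incidence and window values are placeholders, not NBSZ surveillance data.

```python
# Core of the incidence/window-period (IWP) residual risk model:
#   residual risk ~= incidence rate among donors x infectious window period.
# Inputs below are illustrative placeholders, not NBSZ data.
def residual_risk(incidence_per_100k_py: float, window_days: float) -> float:
    """Probability that a donation is drawn in the preseroconversion window."""
    incidence_py = incidence_per_100k_py / 100_000      # per person-year
    return incidence_py * (window_days / 365.25)        # window as fraction of a year

risk = residual_risk(incidence_per_100k_py=250.0, window_days=22.0)
print(f"1 in {1 / risk:,.0f} donations")
```

With these placeholder inputs the result lands in the same order of magnitude as the HIV estimates reported above, which is the point of the model's appeal: it needs only incidence and window-period data.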
Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.
Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne
2016-04-01
We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT-scans varied from <1Gy to >8Gy, and maximum dose from 5 to 50Gy for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT-scans. For right-sided breast cancer patients, mean heart dose based on individual CT-scans was always <1Gy and maximum dose always <5Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT-scans. Copyright © 2016. Published by Elsevier Ireland Ltd.
A comparative review of estimates of the proportion unchanged genes and the false discovery rate
Broberg, Per
2005-01-01
Background In the analysis of microarray data one generally produces a vector of p-values that for each gene give the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR) and how to make inferences based on these concepts. Six published methods for estimating the proportion unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value is illustrated with examples. Five published estimates of the FDR and one new are presented and tested. Implementations in R code are available. Results A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed a varying performance, and were sometimes misleading. The new method had a very low error. Conclusion The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. 
The article advocates the use of the conjoint information regarding false positive and negative rates as well as the proportion unchanged when identifying changed genes. PMID:16086831
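The quantities reviewed above can be made concrete with one widely used nonparametric estimator of the proportion unchanged (Storey's lambda method) together with Benjamini-Hochberg adjusted p-values. This is a sketch of the general concepts only; it is not one of the paper's own proposed estimators.

```python
# Sketch of two standard quantities from the p-value mixture setting reviewed
# above: Storey's lambda estimate of the proportion of unchanged (null) genes,
# and Benjamini-Hochberg adjusted p-values. Not the paper's new estimators.
def pi0_storey(pvals, lam=0.5):
    """Estimate the null proportion from the p-values above the cutoff `lam`."""
    m = len(pvals)
    return min(1.0, sum(p > lam for p in pvals) / ((1 - lam) * m))

def bh_qvalues(pvals):
    """Benjamini-Hochberg step-up adjusted p-values (implicitly assumes pi0 = 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # enforce monotonicity from the top down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

pvals = [0.001, 0.009, 0.04, 0.51, 0.62, 0.77, 0.88, 0.95]
print(pi0_storey(pvals))   # 1.0: 5 of 8 p-values exceed 0.5, so 1.25 is capped at 1
print([round(q, 3) for q in bh_qvalues(pvals)])
```

Multiplying the BH-adjusted values by an estimate of the null proportion below 1 is what tightens them into q-values, which is one reason accurate estimation of the proportion unchanged matters.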
Air pollution interventions and their impact on public health.
Henschel, Susann; Atkinson, Richard; Zeka, Ariana; Le Tertre, Alain; Analitis, Antonis; Katsouyanni, Klea; Chanel, Olivier; Pascal, Mathilde; Forsberg, Bertil; Medina, Sylvia; Goodman, Patrick G
2012-10-01
Numerous epidemiological studies have found a link between air pollution and health. We review a collection of published intervention studies, with particular focus on studies assessing both improvements in air quality and associated health effects. Interventions from the 1960s onwards were considered for inclusion, defined as events aimed at reducing air pollution or where reductions occurred as a side effect (e.g. strikes, German reunification). This review is not a complete record of all existing air pollution interventions. In total, 28 studies published in English were selected based on a systematic search of internet databases. Overall, air pollution interventions have succeeded at improving air quality. Published evidence consistently suggests that most of these interventions have been associated with health benefits, mainly by way of reduced cardiovascular and/or respiratory mortality and/or morbidity. The decrease in mortality from the majority of the reviewed interventions has been estimated to exceed the figures predicted from time-series studies. There is consistent evidence that decreased air pollution levels following an intervention resulted in health benefits for the assessed population.
El-Jaby, Samy
2016-06-01
A recent paper published in Life Sciences in Space Research (El-Jaby and Richardson, 2015) presented estimates of the secondary neutron ambient and effective dose equivalent rates, in air, from surface altitudes up to suborbital altitudes and low Earth orbit. These estimates were based on MCNPX (LANL, 2011) (Monte Carlo N-Particle eXtended) radiation transport simulations of galactic cosmic radiation passing through Earth's atmosphere. During a recent review of the input decks used for these simulations, a systematic error was discovered that is addressed here. After reassessment, the estimated neutron ambient and effective dose equivalent rates are found to differ by 10 to 15%, though the essence of the conclusions drawn remains unchanged. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Park, Benjamin J; Wannemuehler, Kathleen A; Marston, Barbara J; Govender, Nelesh; Pappas, Peter G; Chiller, Tom M
2009-02-20
Cryptococcal meningitis is one of the most important HIV-related opportunistic infections, especially in the developing world. In order to help develop global strategies and priorities for prevention and treatment, it is important to estimate the burden of cryptococcal meningitis. Global burden of disease estimation using published studies. We used the median incidence rate of available studies in a geographic region to estimate the region-specific cryptococcal meningitis incidence; this was multiplied by the 2007 United Nations Programme on HIV/AIDS HIV population estimate for each region to estimate cryptococcal meningitis cases. To estimate deaths, we assumed a 9% 3-month case-fatality rate among high-income regions, a 55% rate among low-income and middle-income regions, and a 70% rate in sub-Saharan Africa, based on studies published in these areas and expert opinion. Published incidence ranged from 0.04 to 12% per year among persons with HIV. Sub-Saharan Africa had the highest yearly burden estimate (median incidence 3.2%, 720 000 cases; range, 144 000-1.3 million). Median incidence was lowest in Western and Central Europe and Oceania (≤0.1% each). Globally, approximately 957 900 cases (range, 371 700-1 544 000) of cryptococcal meningitis occur each year, resulting in 624 700 deaths (range, 125 000-1 124 900) by 3 months after infection. This study, the first attempt to estimate the global burden of cryptococcal meningitis, finds the number of cases and deaths to be very high, with most occurring in sub-Saharan Africa. Further work is needed to better define the scope of the problem and track the epidemiology of this infection, in order to prioritize prevention, diagnosis, and treatment strategies.
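The burden arithmetic described above is a two-step product per region: cases = median incidence × HIV-infected population, and deaths = cases × the region-specific 3-month case-fatality rate. The sketch below applies it to sub-Saharan Africa; note the 22.5 million HIV population is inferred from the abstract's own figures (3.2% of it yields 720 000 cases), not quoted directly from the paper.

```python
# Regional burden arithmetic from the study above:
#   cases  = HIV-infected population x median incidence
#   deaths = cases x region-specific 3-month case-fatality rate
# The 22.5 million figure is inferred from the abstract (3.2% -> 720 000 cases).
def region_burden(hiv_population: float, incidence: float, cfr: float):
    cases = hiv_population * incidence
    return cases, cases * cfr

cases, deaths = region_burden(hiv_population=22_500_000, incidence=0.032, cfr=0.70)
print(f"{cases:,.0f} cases, {deaths:,.0f} deaths within 3 months")
# -> 720,000 cases, 504,000 deaths within 3 months
```

Summing the same product over all regions, with their respective incidence and case-fatality assumptions, yields the global totals of roughly 957 900 cases and 624 700 deaths reported above.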
Why Was Kelvin's Estimate of the Earth's Age Wrong?
ERIC Educational Resources Information Center
Lovatt, Ian; Syed, M. Qasim
2014-01-01
This is a companion to our previous paper in which we give a published example, based primarily on Perry's work, of a graph of ln "y" versus "t" when "y" is an exponential function of "t". This work led us to the idea that Lord Kelvin's (William Thomson's) estimate of the Earth's age was…
Probabilistic Mass Growth Uncertainties
NASA Technical Reports Server (NTRS)
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both Earth-orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters of publishing similar results, using a variety of cost-driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
Preliminary Multivariable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts, and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multivariable space telescope cost model. The validity of previously published models is tested, cost estimating relationships that are and are not significant cost drivers are identified, and interrelationships between variables are explored.
Zhong, Hui; Zhang, Wei; Qin, Min; Gou, ZhongPing; Feng, Ping
2017-06-01
Residual renal function needs to be assessed frequently in patients on continuous ambulatory peritoneal dialysis (CAPD). A commonly used method is to measure creatinine (Cr) and urea clearance in urine collected over 24 h, but collection can be cumbersome and difficult to manage. A faster, simpler alternative is to measure levels of cystatin C (CysC) in serum, but the accuracy and reliability of this method is controversial. Our study aims to validate published CysC-based equations for estimating residual renal function in patients on CAPD. Residual renal function was measured by calculating average clearance of urea and Cr in 24-h urine as well as by applying CysC- or Cr-based equations published by Hoek and Yang. We then compared the performance of the equations against the 24-h urine results. In our sample of 255 patients ages 47.9 ± 15.6 years, the serum CysC level was 6.43 ± 1.13 mg/L. Serum CysC level was not significantly associated with age, gender, height, weight, body mass index, hemoglobin, intact parathyroid hormone, normalized protein catabolic rate or the presence of diabetes. In contrast, serum CysC levels did correlate with peritoneal clearance of CysC and with levels of prealbumin and high-sensitivity C-reactive protein. Residual renal function was 2.56 ± 2.07 mL/min/1.73 m2 based on 24-h urine sampling, compared with estimates (mL/min/1.73 m2) of 2.98 ± 0.66 for Hoek's equation, 2.03 ± 0.97 for Yang's CysC-based equation and 2.70 ± 1.30 for Yang's Cr-based equation. Accuracies within 30%/50% of measured residual renal function for the three equations were 29.02%/48.24%, 34.90%/56.86% and 31.37%/54.90%, respectively. The three equations for estimating residual renal function showed similar limits of agreement and differed significantly from the measured value. Published CysC-based equations do not appear to be particularly reliable for patients on CAPD.
Further development and validation of CysC-based equations should take into account peritoneal clearance of CysC and other relevant factors. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Point estimation following two-stage adaptive threshold enrichment clinical trials.
Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel
2018-05-31
Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Larson, R L; Step, D L
2012-03-01
Bovine respiratory disease complex is the leading cause of morbidity and mortality in feedlot cattle. A number of vaccines against bacterial respiratory pathogens are commercially available and researchers have studied their impact on morbidity, mortality, and other disease outcome measures in feedlot cattle. A systematic review will provide veterinarians with a rigorous and transparent evaluation of the published literature to estimate the extent of vaccine effect. Unfortunately, the published body of evidence does not provide a consistent estimate of the direction and magnitude of effectiveness in feedlot cattle vaccination against Mannheimia haemolytica, Pasteurella multocida, or Histophilus somni.
Economics of Team-based Care in Controlling Blood Pressure: A Community Guide Systematic Review
Jacob, Verughese; Chattopadhyay, Sajal K.; Thota, Anilkrishna B.; Proia, Krista K.; Njie, Gibril; Hopkins, David P.; Finnie, Ramona K.C.; Pronk, Nicolaas P.; Kottke, Thomas E.
2015-01-01
Context: High blood pressure is an important risk factor for cardiovascular disease (CVD) and stroke, the leading cause of death in the U.S., and a substantial national burden through lost productivity and medical care. A recent Community Guide systematic review found strong evidence of effectiveness of team-based care in improving blood pressure control. The objective of the present review was to determine from the economic literature whether team-based care for blood pressure control is cost-beneficial and/or cost-effective. Evidence acquisition: Electronic databases of papers published January 1980 – May 2012 were searched to find economic evaluations of team-based care interventions to improve blood pressure outcomes, yielding 31 studies for inclusion. Evidence synthesis: In analyses conducted in 2012, intervention cost, healthcare cost averted, benefit-to-cost ratios, and cost-effectiveness were abstracted from the studies. The quality of estimates for intervention and healthcare cost from each study was assessed using three elements: intervention focus on blood pressure control; incremental estimates in the intervention group relative to a control group; and inclusion of major cost-driving elements in estimates. Intervention cost per unit reduction in systolic blood pressure was converted to lifetime intervention cost per quality-adjusted life-year (QALY) saved using algorithms from published trials. Conclusion: Team-based care to improve blood pressure control is cost-effective based on evidence that 26 of 28 estimates of $/QALY gained from 10 studies were below a conservative threshold of $50,000. This finding is salient to recent health care reforms in the U.S. and coordinated patient-centered care through formation of Accountable Care Organizations (ACOs). PMID:26477804
Prevalence of Intellectual Disability: A Meta-Analysis of Population-Based Studies
ERIC Educational Resources Information Center
Maulik, Pallab K.; Mascarenhas, Maya N.; Mathers, Colin D.; Dua, Tarun; Saxena, Shekhar
2011-01-01
Intellectual disability is an extremely stigmatizing condition and involves utilization of large public health resources, but most data about its burden is based on studies conducted in developed countries. The aim of this meta-analysis was to collate data from published literature and estimate the prevalence of intellectual disability across all…
Curriculum-Based Measurement of Oral Reading: Passage Equivalence and Probe-Set Development
ERIC Educational Resources Information Center
Christ, Theodore J.; Ardoin, Scott P.
2009-01-01
Curriculum-based measurement of reading (CBM-R) is used to estimate oral reading fluency. Unlike many traditional published tests, CBM-R materials are often comprised of 20 to 30 alternate forms/passages. Historically, CBM-R assessment materials were sampled from curricular materials. Recent research has documented the potentially deleterious…
van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T
2012-10-01
Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.
Nonlinear system identification based on Takagi-Sugeno fuzzy modeling and unscented Kalman filter.
Vafamand, Navid; Arefi, Mohammad Mehdi; Khayatian, Alireza
2018-03-01
This paper proposes two novel Kalman-based learning algorithms for online Takagi-Sugeno (TS) fuzzy model identification. The proposed approaches are designed based on the unscented Kalman filter (UKF) and the concept of dual estimation. Unlike the extended Kalman filter (EKF), which requires derivatives of the nonlinear functions, the UKF employs the unscented transformation. Consequently, non-differentiable membership functions can be considered in the structure of the TS models. This makes the proposed algorithms applicable to online parameter estimation for wider classes of TS models than the recently published approaches to the same problem. Furthermore, because of the great capability of the UKF in handling severe nonlinear dynamics, the proposed approaches can effectively approximate nonlinear systems. Finally, numerical and practical examples are provided to show the advantages of the proposed approaches. Simulation results reveal the effectiveness of the proposed methods and performance improvement based on the root mean square (RMS) of the estimation error compared to the existing results. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
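The derivative-free mechanism at the core of the UKF is the unscented transform: deterministic sigma points are propagated through the nonlinearity and reweighted. The sketch below shows only that mechanic with standard scaled-UT parameters; it is not the paper's dual-estimation scheme.

```python
import numpy as np

# Minimal sketch of the unscented transform used inside a UKF update.
# Because only function evaluations are needed (no Jacobians), f may be
# non-differentiable, e.g. a TS membership function.

def unscented_transform(mu, P, f, alpha=1e-1, beta=2.0, kappa=0.0):
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root
    sigma = np.vstack([mu, mu + S.T, mu - S.T])  # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(x) for x in sigma])          # propagate each point
    mean = Wm @ Y
    cov = sum(w * np.outer(y - mean, y - mean) for w, y in zip(Wc, Y))
    return mean, cov

mu = np.array([1.0]); P = np.array([[0.5]])
m, c = unscented_transform(mu, P, lambda x: 2 * x + 1)
print(m, c)  # exact for a linear f: mean ≈ [3.0], cov ≈ [[2.0]]
```

For a linear map the transform is exact, which makes a convenient sanity check before applying it to a genuinely nonlinear function.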
Nair, Harish; Nokes, D James; Gessner, Bradford D; Dherani, Mukesh; Madhi, Shabir A; Singleton, Rosalyn J; O'Brien, Katherine L; Roca, Anna; Wright, Peter F; Bruce, Nigel; Chandran, Aruna; Theodoratou, Evropi; Sutanto, Agustinus; Sedyaningsih, Endang R; Ngama, Mwanajuma; Munywoki, Patrick K; Kartasasmita, Cissy; Simões, Eric A F; Rudan, Igor; Weber, Martin W; Campbell, Harry
2010-05-01
The global burden of disease attributable to respiratory syncytial virus (RSV) remains unknown. We aimed to estimate the global incidence of and mortality from episodes of acute lower respiratory infection (ALRI) due to RSV in children younger than 5 years in 2005. We estimated the incidence of RSV-associated ALRI in children younger than 5 years, stratified by age, using data from a systematic review of studies published between January, 1995, and June, 2009, and ten unpublished population-based studies. We estimated possible boundaries for RSV-associated ALRI mortality by combining case fatality ratios with incidence estimates from hospital-based reports from published and unpublished studies and identifying studies with population-based data for RSV seasonality and monthly ALRI mortality. In 2005, an estimated 33.8 (95% CI 19.3-46.2) million new episodes of RSV-associated ALRI occurred worldwide in children younger than 5 years (22% of ALRI episodes), with at least 3.4 (2.8-4.3) million episodes representing severe RSV-associated ALRI necessitating hospital admission. We estimated that 66 000-199 000 children younger than 5 years died from RSV-associated ALRI in 2005, with 99% of these deaths occurring in developing countries. Incidence and mortality can vary substantially from year to year in any one setting. Globally, RSV is the most common cause of childhood ALRI and a major cause of admission to hospital as a result of severe ALRI. Mortality data suggest that RSV is an important cause of death in childhood from ALRI, after pneumococcal pneumonia and Haemophilus influenzae type b. The development of novel prevention and treatment strategies should be accelerated as a priority. WHO; Bill & Melinda Gates Foundation. Copyright 2010 Elsevier Ltd. All rights reserved.
Boehler, Christian E. H.; Lord, Joanne
2016-01-01
Background. Published cost-effectiveness estimates can vary considerably, both within and between countries. Despite extensive discussion, little is known empirically about factors relating to these variations. Objectives. To use multilevel statistical modeling to integrate cost-effectiveness estimates from published economic evaluations to investigate potential causes of variation. Methods. Cost-effectiveness studies of statins for cardiovascular disease prevention were identified by systematic review. Estimates of incremental costs and effects were extracted from reported base case, sensitivity, and subgroup analyses, with estimates grouped in studies and in countries. Three bivariate models were developed: a cross-classified model to accommodate data from multinational studies, a hierarchical model with multinational data allocated to a single category at country level, and a hierarchical model excluding multinational data. Covariates at different levels were drawn from a long list of factors suggested in the literature. Results. We found 67 studies reporting 2094 cost-effectiveness estimates relating to 23 countries (6 studies reporting for more than 1 country). Data and study-level covariates included patient characteristics, intervention and comparator cost, and some study methods (e.g., discount rates and time horizon). After adjusting for these factors, the proportion of variation attributable to countries was negligible in the cross-classified model but moderate in the hierarchical models (14%−19% of total variance). Country-level variables that improved the fit of the hierarchical models included measures of income and health care finance, health care resources, and population risks. Conclusions. Our analysis suggested that variability in published cost-effectiveness estimates is related more to differences in study methods than to differences in national context. 
Multinational studies were associated with much lower country-level variation than single-country studies. These findings are for a single clinical question and may be atypical. PMID:25878194
Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.
Aftab, Muhammad Saleheen; Shafiq, Muhammad
2015-11-01
This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Braun, Cordula; Handoll, Helen H
2018-06-01
Knowledge about Minimal Important Differences (MIDs) is essential for the interpretation of continuous outcomes, especially patient-reported outcome measures (PROMs). The aim of this study was to estimate the MID for the Western Ontario Rotator Cuff Index (WORC: score 0 (best) to 2100 (worst disability)) in adults with shoulder pain associated with partial-thickness rotator cuff tears, 'symptomatic PTTs', undergoing conservative treatment with physiotherapy. We conducted a prospectively designed anchor-based MID analysis using data from a prospective prognostic study with a three-month follow-up conducted within an outpatient care setting in Germany. The MID was estimated using data from 64 adults with atraumatic symptomatic PTTs who underwent three months of conservative treatment with physiotherapy. The anchor was a seven-point Global Perceived Change (GPC) scale. Based on a definition of the MID being the threshold of "being (at least slightly) improved" with a probability nearest to 0.90 (i.e. 9 of 10 patients achieving the MID), the MID for the WORC was estimated as -300 for 'improved' shoulder-related disability in 9 out of 10 patients (95% CI 8 out of 10 patients to everyone) undergoing three months of exercise-based physiotherapy for symptomatic PTTs. This is the first published MID estimate for the WORC in adults with symptomatic PTTs of the rotator cuff undergoing typical treatment comprising conservative treatment with physiotherapy. The conceptual framework for interpretation facilitates its use in similar clinical contexts. Copyright © 2018. Published by Elsevier Ltd.
Bhalla, Kavi; Harrison, James E
2016-04-01
Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. Our aim was to provide a simple and open-source software tool that allows estimation of incidence-based DALYs due to injury, given data on incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALY estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited.
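The arithmetic such a calculator performs per age/sex/cause cell is the standard undiscounted DALY sum of years of life lost (YLL) and years lived with disability (YLD). The disability weight, duration, and case counts below are placeholders, not the tool's default parameter set.

```python
# Sketch of the incidence-DALY arithmetic a spreadsheet burden calculator
# implements (undiscounted, no age weights):
#   YLL = deaths x standard life expectancy at age of death
#   YLD = incident cases x disability weight x average duration

def dalys(deaths, life_expectancy_at_death, cases, disability_weight, duration_years):
    yll = deaths * life_expectancy_at_death           # years of life lost
    yld = cases * disability_weight * duration_years  # years lived with disability
    return yll + yld

# e.g. one external-cause/age/sex cell of the input table (placeholder values):
print(dalys(deaths=10, life_expectancy_at_death=40.0,
            cases=500, disability_weight=0.2, duration_years=0.5))  # ≈ 450 DALYs
```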
Prostate-Specific Antigen (PSA)–Based Population Screening for Prostate Cancer: An Economic Analysis
Tawfik, A
2015-01-01
Background The prostate-specific antigen (PSA) blood test has become widely used in Canada to test for prostate cancer (PC), the most common cancer among Canadian men. Data suggest that population-based PSA screening may not improve overall survival. Objectives This analysis aimed to review existing economic evaluations of population-based PSA screening, determine current spending on opportunistic PSA screening in Ontario, and estimate the cost of introducing a population-based PSA screening program in the province. Methods A systematic literature search was performed to identify economic evaluations of population-based PSA screening strategies published from 1998 to 2013. Studies were assessed for their methodological quality and applicability to the Ontario setting. An original cost analysis was also performed, using data from Ontario administrative sources and from the published literature. One-year costs were estimated for 4 strategies: no screening, current (opportunistic) screening of men aged 40 years and older, current (opportunistic) screening of men aged 50 to 74 years, and population-based screening of men aged 50 to 74 years. The analysis was conducted from the payer perspective. Results The literature review demonstrated that, overall, population-based PSA screening is costly and cost-ineffective but may be cost-effective in specific populations. Only 1 Canadian study, published 15 years ago, was identified. Approximately $119.2 million is being spent annually on PSA screening of men aged 40 years and older in Ontario, including close to $22 million to screen men younger than 50 and older than 74 years of age (i.e., outside the target age range for a population-based program). A population-based screening program in Ontario would cost approximately $149.4 million in the first year. Limitations Estimates were based on the synthesis of data from a variety of sources, requiring several assumptions and causing uncertainty in the results. 
For example, where Ontario-specific data were unavailable, data from the United States were used. Conclusions PSA screening is associated with significant costs to the health care system when the cost of the PSA test itself is considered in addition to the costs of diagnosis, staging, and treatment of screen-detected PCs. PMID:26366237
Shear strength of clay and silt embankments.
DOT National Transportation Integrated Search
2009-09-01
Highway embankment is one of the most common large-scale geotechnical facilities constructed in Ohio. In the past, the design of these embankments was largely based on soil shear strength properties that had been estimated from previously published e...
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
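The sampler mechanics behind such an analysis can be illustrated on a deliberately simplified problem: random-walk Metropolis applied to a one-parameter exponential viral-decay model fit to noisy log viral load. The full study uses a multi-parameter ODE model; everything below (data, noise level, proposal width) is a toy stand-in.

```python
import numpy as np

# Minimal random-walk Metropolis sketch of MCMC parameter estimation:
# fit the decay rate d of a toy model V(t) = V0 * exp(-d t) to synthetic
# log10 viral-load data, starting from a crude initial estimate.

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 20)
true_d = 0.5
y = np.log10(1e5 * np.exp(-true_d * t)) + rng.normal(0, 0.1, t.size)

def log_post(d, sigma=0.1):
    if d <= 0:
        return -np.inf                       # flat prior on d > 0
    pred = np.log10(1e5 * np.exp(-d * t))
    return -0.5 * np.sum((y - pred) ** 2) / sigma**2

samples, d = [], 1.0                         # crude initial estimate
for _ in range(5000):
    prop = d + rng.normal(0, 0.05)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(d):
        d = prop                             # accept; otherwise keep d
    samples.append(d)
post = np.array(samples[1000:])              # drop burn-in
print(post.mean())  # posterior mean close to true_d = 0.5
```

In practice the nonlinear least-squares fit mentioned in the abstract would supply the starting value, and the posterior would be summarised per patient as in the study.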
Subregional Nowcasts of Seasonal Influenza Using Search Trends.
Kandula, Sasikiran; Hsu, Daniel; Shaman, Jeffrey
2017-11-06
Limiting the adverse effects of seasonal influenza outbreaks at state or city level requires close monitoring of localized outbreaks and reliable forecasts of their progression. Whereas forecasting models for influenza or influenza-like illness (ILI) are becoming increasingly available, their applicability to localized outbreaks is limited by the nonavailability of real-time observations of the current outbreak state at local scales. Surveillance data collected by various health departments are widely accepted as the reference standard for estimating the state of outbreaks, and in the absence of surveillance data, nowcast proxies built using Web-based activities such as search engine queries, tweets, and access of health-related webpages can be useful. Nowcast estimates of state and municipal ILI were previously published by Google Flu Trends (GFT); however, validations of these estimates were seldom reported. The aim of this study was to develop and validate models to nowcast ILI at subregional geographic scales. We built nowcast models based on autoregressive (autoregressive integrated moving average; ARIMA) and supervised regression methods (Random forests) at the US state level using regional weighted ILI and Web-based search activity derived from Google's Extended Trends application programming interface. We validated the performance of these methods using actual surveillance data for the 50 states across six seasons. We also built state-level nowcast models using state-level estimates of ILI and compared the accuracy of these estimates with the estimates of the regional models extrapolated to the state level and with the nowcast estimates published by GFT. Models built using regional ILI extrapolated to state level had a median correlation of 0.84 (interquartile range: 0.74-0.91) and a median root mean square error (RMSE) of 1.01 (IQR: 0.74-1.50), with noticeable variability across seasons and by state population size. 
Model forms that hypothesize the availability of timely state-level surveillance data show significantly lower errors of 0.83 (0.55-0.23). Compared with GFT, the latter model forms have lower errors but also lower correlation. These results suggest that the proposed methods may be an alternative to the discontinued GFT and that further improvements in the quality of subregional nowcasts may require increased access to more finely resolved surveillance data. ©Sasikiran Kandula, Daniel Hsu, Jeffrey Shaman. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.11.2017.
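The structure of such a nowcast, current ILI regressed on its own recent lags plus a same-week search-activity proxy, can be sketched on synthetic data. The study used ARIMA and random-forest models; to stay dependency-light, a plain least-squares autoregression with an exogenous search input stands in for them here, and all data are simulated.

```python
import numpy as np

# Toy nowcast sketch: regress ILI_t on [ILI_{t-1}, ILI_{t-2}, search_t].
# Synthetic ground truth is generated from the same lag structure.

rng = np.random.default_rng(1)
weeks = 200
search = rng.uniform(0, 1, weeks)            # synthetic search activity
ili = np.zeros(weeks)
for k in range(2, weeks):
    ili[k] = 0.5*ili[k-1] + 0.2*ili[k-2] + 1.0*search[k] + rng.normal(0, 0.05)

# Design matrix: [ILI_{t-1}, ILI_{t-2}, search_t, intercept]
X = np.column_stack([ili[1:-1], ili[:-2], search[2:], np.ones(weeks - 2)])
y = ili[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
nowcast = X @ coef
rmse = np.sqrt(np.mean((nowcast - y) ** 2))
print(coef[:3], rmse)  # coefficients near (0.5, 0.2, 1.0), RMSE near the noise level
```

A real deployment would replace the synthetic series with surveillance ILI and Google Extended Trends features, and validate out of sample by season, as the study does.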
Ronald E. McRoberts; Steen Magnussen; Erkki O. Tomppo; Gherardo Chirici
2011-01-01
Nearest neighbors techniques have been shown to be useful for estimating forest attributes, particularly when used with forest inventory and satellite image data. Published reports of positive results have been truly international in scope. However, for these techniques to be more useful, they must be able to contribute to scientific inference which, for sample-based...
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved.
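The E-step/M-step structure the article relies on can be illustrated on a generic latent-variable problem. The sketch below is not the Copas-like selection model; it is a two-component normal mixture with known unit variances, used only to show the alternation between computing posterior responsibilities for the latent labels (E-step) and maximizing the expected complete-data log-likelihood (M-step).

```python
import numpy as np

# Generic EM skeleton on a stand-in latent-variable problem
# (two-component normal mixture, known variances).

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

def normal_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

w, mu1, mu2 = 0.5, -1.0, 1.0                  # initial guesses
for _ in range(100):
    # E-step: posterior responsibility of component 1 for each point
    r = w * normal_pdf(x, mu1)
    r = r / (r + (1 - w) * normal_pdf(x, mu2))
    # M-step: maximize the expected complete-data log-likelihood
    w = r.mean()
    mu1 = (r * x).sum() / r.sum()
    mu2 = ((1 - r) * x).sum() / (1 - r).sum()
print(w, mu1, mu2)  # ≈ 0.3, -2, 3
```

In the Copas-like setting the latent quantity is the selection variable rather than a mixture label, but the alternation of these two steps is the same.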
NEW STUDIES OF URBAN FLOOD FREQUENCY IN THE SOUTHEASTERN UNITED STATES.
Sauer, Vernon B.
1986-01-01
Five reports dealing with flood magnitude and frequency in urban areas in the southeastern United States have been published during the past 2 years by the U. S. Geological Survey (USGS). These reports are based on data collected in Tampa and Tallahassee, Florida; Atlanta, Georgia; and several cities in Alabama and Tennessee. Each report contains regression equations useful for estimating flood peaks for selected recurrence intervals at ungauged urban sites. A nationwide study of urban flood characteristics by the USGS published in 1983 contains equations for estimating urban peak discharges for ungauged sites. At the time that the nationwide study was conducted, data from only 35 sites in the southeastern United States were available. The five new reports contain data for 88 additional sites. These new data show that the seven-parameter estimating equations developed in the nationwide study are unbiased and have prediction errors less than those described in the nationwide report.
Sarilita, Erli; Rynn, Christopher; Mossey, Peter A; Black, Sue; Oscandar, Fahmi
2018-05-01
This study investigated nose profile morphology and its relationship to the skull in Scottish subadult and Indonesian adult populations, with the aim of improving the accuracy of forensic craniofacial reconstruction. Samples of 86 lateral head cephalograms from Dundee Dental School (mean age, 11.8 years) and 335 lateral head cephalograms from the Universitas Padjadjaran Dental Hospital, Bandung, Indonesia (mean age, 24.2 years), were measured. The method of nose profile estimation based on skull morphology previously proposed by Rynn and colleagues in 2010 (FSMP 6:20-34) was tested in this study. Following this method, three nasal aperture-related craniometrics and six nose profile dimensions were measured from the cephalograms. To assess the accuracy of the method, six nose profile dimensions were estimated from the three craniometric parameters using the published method and then compared to the actual nose profile dimensions. In the Scottish subadult population, no sexual dimorphism was evident in the measured dimensions. In contrast, sexual dimorphism of the Indonesian adult population was evident in all craniometric and nose profile dimensions; notably, males exhibited statistically significantly larger values than females. The published method by Rynn and colleagues (FSMP 6:20-34, 2010) performed better in the Scottish subadult population (mean difference of maximum, 2.35 mm) compared to the Indonesian adult population (mean difference of maximum, 5.42 mm in males and 4.89 mm in females). In addition, regression formulae were derived to estimate nose profile dimensions based on the craniometric measurements for the Indonesian adult population. The published method is not sufficiently accurate for use on the Indonesian population, so the derived method should be used. The accuracy of the published method by Rynn and colleagues (FSMP 6:20-34, 2010) was sufficiently reliable to be applied in the Scottish subadult population.
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
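The reported figures are internally consistent: 19 incorrect assessments out of 360 DNA-confirmed cases reproduce the stated 94.7% overall accuracy.

```python
# Consistency check of the reported accuracy figures.
total, incorrect = 360, 19
accuracy = (total - incorrect) / total
print(round(100 * accuracy, 1))  # → 94.7
```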
van Aert, Robbie C M; Jackson, Dan
2018-04-26
A wide variety of estimators of the between-study variance are available in random-effects meta-analysis. Many, but not all, of these estimators are based on the method of moments. The DerSimonian-Laird estimator is widely used in applications, but the Paule-Mandel estimator is an alternative that is now recommended. Recently, DerSimonian and Kacker have developed two-step moment-based estimators of the between-study variance. We extend these two-step estimators so that multiple (more than two) steps are used. We establish the surprising result that the multistep estimator tends towards the Paule-Mandel estimator as the number of steps becomes large. Hence, the iterative scheme underlying our new multistep estimator provides a hitherto unknown relationship between two-step estimators and the Paule-Mandel estimator. Our analysis suggests that two-step estimators are not necessarily distinct estimators in their own right; instead, they are quantities that are closely related to the usual iterative scheme that is used to calculate the Paule-Mandel estimate. The relationship that we establish between the multistep and Paule-Mandel estimators is another justification for the use of the latter estimator. Two-step and multistep estimators are perhaps best conceptualized as approximate Paule-Mandel estimators. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
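The relationship described above can be demonstrated numerically: one generalized moment update with inverse-variance weights gives DerSimonian-Laird, and iterating the same update (the multistep scheme) converges to the Paule-Mandel root. The effect sizes and variances below are illustrative toy values, not data from any study.

```python
import numpy as np

# Toy effect sizes y with within-study variances v.
y = np.array([0.10, 0.30, 0.35, 0.65, 0.45, 0.15])
v = np.array([0.03, 0.07, 0.05, 0.01, 0.07, 0.02])
k = len(y)

def moment_step(tau2):
    """One generalized method-of-moments update with weights 1/(v + tau2)."""
    a = 1.0 / (v + tau2)
    ybar = np.sum(a * y) / np.sum(a)
    q = np.sum(a * (y - ybar) ** 2)
    num = q - (np.sum(a * v) - np.sum(a**2 * v) / np.sum(a))
    den = np.sum(a) - np.sum(a**2) / np.sum(a)
    return max(0.0, num / den)

tau2_dl = moment_step(0.0)   # tau2 = 0 recovers the classic DerSimonian-Laird

def pm(tol=1e-12):
    """Paule-Mandel: solve generalized Q(tau2) = k - 1 by bisection."""
    f = lambda t: np.sum((y - np.sum(y/(v+t))/np.sum(1/(v+t)))**2 / (v+t)) - (k-1)
    lo, hi = 0.0, 10.0
    if f(lo) < 0:
        return 0.0               # estimate truncated at zero
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return lo

tau2 = tau2_dl
for _ in range(200):             # multistep: keep iterating the moment update
    tau2 = moment_step(tau2)
print(tau2_dl, pm(), tau2)       # multistep estimate ≈ Paule-Mandel estimate
```

At a fixed point of `moment_step`, the weights satisfy the Paule-Mandel estimating equation exactly, which is why the iterates approach the Paule-Mandel value.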
Comparing estimates of climate change impacts from process-based and statistical crop models
NASA Astrophysics Data System (ADS)
Lobell, David B.; Asseng, Senthold
2017-01-01
The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. 
At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally requiring fewer resources to produce robust estimates, especially when applied to crops beyond the major grains.
Page, Matthew J; McKenzie, Joanne E; Kirkham, Jamie; Dwan, Kerry; Kramer, Sharon; Green, Sally; Forbes, Andrew
2014-10-01
Systematic reviews may be compromised by selective inclusion and reporting of outcomes and analyses. Selective inclusion occurs when there are multiple effect estimates in a trial report that could be included in a particular meta-analysis (e.g. from multiple measurement scales and time points) and the choice of effect estimate to include in the meta-analysis is based on the results (e.g. statistical significance, magnitude or direction of effect). Selective reporting occurs when the reporting of a subset of outcomes and analyses in the systematic review is based on the results (e.g. a protocol-defined outcome is omitted from the published systematic review). To summarise the characteristics and synthesise the results of empirical studies that have investigated the prevalence of selective inclusion or reporting in systematic reviews of randomised controlled trials (RCTs), investigated the factors (e.g. statistical significance or direction of effect) associated with the prevalence and quantified the bias. We searched the Cochrane Methodology Register (to July 2012), Ovid MEDLINE, Ovid EMBASE, Ovid PsycINFO and ISI Web of Science (each up to May 2013), and the US Agency for Healthcare Research and Quality (AHRQ) Effective Healthcare Program's Scientific Resource Center (SRC) Methods Library (to June 2013). We also searched the abstract books of the 2011 and 2012 Cochrane Colloquia and the article alerts for methodological work in research synthesis published from 2009 to 2011 and compiled in Research Synthesis Methods. We included both published and unpublished empirical studies that investigated the prevalence and factors associated with selective inclusion or reporting, or both, in systematic reviews of RCTs of healthcare interventions. 
We included empirical studies assessing any type of selective inclusion or reporting, such as investigations of how frequently RCT outcome data are selectively included in systematic reviews based on the results, how often outcomes and analyses are discrepant between the protocol and the published review, or how often non-significant outcomes are only partially reported in the full text or summary of systematic reviews. Two review authors independently selected empirical studies for inclusion, extracted the data and performed a risk of bias assessment. A third review author resolved any disagreements about inclusion or exclusion of empirical studies, data extraction and risk of bias. We contacted authors of included studies for additional unpublished data. Primary outcomes included overall prevalence of selective inclusion or reporting, association between selective inclusion or reporting and the statistical significance of the effect estimate, and association between selective inclusion or reporting and the direction of the effect estimate. We combined prevalence estimates and risk ratios (RRs) using a random-effects meta-analysis model. Seven studies met the inclusion criteria. No studies had investigated selective inclusion of results in systematic reviews, or discrepancies in outcomes and analyses between systematic review registry entries and published systematic reviews. Based on a meta-analysis of four studies (including 485 Cochrane Reviews), 38% (95% confidence interval (CI) 23% to 54%) of systematic reviews added, omitted, upgraded or downgraded at least one outcome between the protocol and published systematic review. The association between statistical significance and discrepant outcome reporting between protocol and published systematic review was uncertain. The meta-analytic estimate suggested an increased risk of adding or upgrading (i.e.
changing a secondary outcome to primary) when the outcome was statistically significant, although the 95% CI included no association and a decreased risk as plausible estimates (RR 1.43, 95% CI 0.71 to 2.85; two studies, n = 552 meta-analyses). Also, the meta-analytic estimate suggested an increased risk of downgrading (i.e. changing a primary outcome to secondary) when the outcome was statistically significant, although the 95% CI included no association and a decreased risk as plausible estimates (RR 1.26, 95% CI 0.60 to 2.62; two studies, n = 484 meta-analyses). None of the included studies had investigated whether the association between statistical significance and adding, upgrading or downgrading of outcomes was modified by the type of comparison, direction of effect or type of outcome; or whether there is an association between direction of the effect estimate and discrepant outcome reporting. Several secondary outcomes were reported in the included studies. Two studies found that reasons for discrepant outcome reporting were infrequently reported in published systematic reviews (6% in one study and 22% in the other). One study (including 62 Cochrane Reviews) found that 32% (95% CI 21% to 45%) of systematic reviews did not report all primary outcomes in the abstract. Another study (including 64 Cochrane and 118 non-Cochrane reviews) found that statistically significant primary outcomes were more likely to be completely reported in the systematic review abstract than non-significant primary outcomes (RR 2.66, 95% CI 1.81 to 3.90). None of the studies included systematic reviews published after 2009 when reporting standards for systematic reviews (Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement, and Methodological Expectations of Cochrane Intervention Reviews (MECIR)) were disseminated, so the results might not be generalisable to more recent systematic reviews. 
Discrepant outcome reporting between the protocol and published systematic review is fairly common, although the association between statistical significance and discrepant outcome reporting is uncertain. Complete reporting of outcomes in systematic review abstracts is associated with statistical significance of the results for those outcomes. Systematic review outcomes and analysis plans should be specified prior to seeing the results of included studies to minimise post-hoc decisions that may be based on the observed results. Modifications that occur once the review has commenced, along with their justification, should be clearly reported. Effect estimates and CIs should be reported for all systematic review outcomes regardless of the results. The lack of research on selective inclusion of results in systematic reviews needs to be addressed and studies that avoid the methodological weaknesses of existing research are also needed.
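The review above pools prevalence estimates with a random-effects meta-analysis model. A minimal sketch of how such pooling works, using the DerSimonian-Laird estimator on the logit scale; the study counts below are hypothetical and the estimator stands in for whatever implementation the review authors used:

```python
import math

def pool_prevalence_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale."""
    # Per-study logit prevalence and within-study variance (delta method)
    thetas, variances = [], []
    for e, n in zip(events, totals):
        p = e / n
        thetas.append(math.log(p / (1 - p)))
        variances.append(1 / e + 1 / (n - e))
    w = [1 / v for v in variances]
    theta_fixed = sum(wi * t for wi, t in zip(w, thetas)) / sum(w)
    # Cochran's Q and the DL between-study variance tau^2
    q = sum(wi * (t - theta_fixed) ** 2 for wi, t in zip(w, thetas))
    df = len(thetas) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights, pooled logit, back-transformed to a proportion
    w_re = [1 / (v + tau2) for v in variances]
    theta_re = sum(wi * t for wi, t in zip(w_re, thetas)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    inv = lambda x: 1 / (1 + math.exp(-x))
    return inv(theta_re), (inv(theta_re - 1.96 * se), inv(theta_re + 1.96 * se))

# Hypothetical counts of reviews with at least one discrepant outcome
pooled, ci = pool_prevalence_dl([40, 55, 30, 60], [120, 130, 110, 125])
print(f"pooled prevalence {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

The random-effects weights shrink toward equality as between-study heterogeneity (tau^2) grows, which is why heterogeneous prevalence estimates like these get wide confidence intervals.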
ERIC Educational Resources Information Center
Arvedson, Joan; Clark, Heather; Lazarus, Cathy; Schooling, Tracy; Frymark, Tobi
2010-01-01
Purpose: To conduct an evidence-based systematic review and provide an estimate of the effects of oral motor interventions (OMIs) on feeding/swallowing outcomes (both physiological and functional) and pulmonary health in preterm infants. Method: A systematic search of the literature published from 1960 to 2007 was conducted. Articles meeting the…
Ortiz-Hernández, Luis; Vega López, A Valeria; Ramos-Ibáñez, Norma; Cázares Lara, L Joana; Medina Gómez, R Joab; Pérez-Salgado, Diana
To develop and validate equations to estimate the percentage of body fat of children and adolescents from Mexico using anthropometric measurements. A cross-sectional study was carried out with 601 children and adolescents from Mexico aged 5-19 years. The participants were randomly divided into the following two groups: the development sample (n=398) and the validation sample (n=203). The validity of previously published equations (e.g., Slaughter) was also assessed. The percentage of body fat was estimated by dual-energy X-ray absorptiometry. The anthropometric measurements included height, sitting height, weight, waist and arm circumferences, skinfolds (triceps, biceps, subscapular, supra-iliac, and calf), and elbow and bitrochanteric breadth. Linear regression models were estimated with the percentage of body fat as the dependent variable and the anthropometric measurements as the independent variables. Equations were created based on combinations of six to nine anthropometric variables and had coefficients of determination (r 2 ) equal to or higher than 92.4% for boys and 85.8% for girls. In the validation sample, the developed equations had high r 2 values (≥85.6% in boys and ≥78.1% in girls) in all age groups, low standard errors (SE≤3.05% in boys and ≤3.52% in girls), and the intercepts were not different from the origin (p>0.050). Using the previously published equations, the coefficients of determination were lower, and/or the intercepts were different from the origin. The equations developed in this study can be used to assess the percentage of body fat of Mexican schoolchildren and adolescents, as they demonstrate greater validity and lower error compared with previously published equations. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
C:n:p Stoichiometry of New Production In The North Atlantic
NASA Astrophysics Data System (ADS)
Koeve, W.
Recently and independently published estimates of global net community production, based on seasonal changes of either nutrients (NO3 and PO4) or dissolved inorganic carbon (DIC) in the surface ocean, indicate that the stoichiometry of new production strongly differs from the well-established remineralisation ratios in the deep ocean (the Redfield ratio). This difference appears to be most pronounced in the North Atlantic Ocean. Data quality issues, as well as methodological differences in the data analysis applied in the published studies, however, make this comparison of nutrient- and carbon-based estimates ambiguous. In this presentation, historical data (World Ocean Atlas and Data 1998), data from the World Ocean Circulation Experiment and empirical approaches are combined in a consistent way to provide a reassessment of the C:N:P elemental ratio of new (export) production in the North Atlantic. It is found that published nutrient budgets are severe underestimates and hence apparent C:N:P ratios were overestimated. At least in the North Atlantic, the uncertainty of the wintertime distribution of nutrients (and DIC) is a major source of the uncertainty of the C:N:P ratio of net community production.
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AER, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions reflecting within- and between-city differences, helping reduce error in estimates of air pollutant exposure. Published in the Journal of
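The infiltration model underlying this algorithm combines stack (temperature-driven) and wind-driven airflow through an effective leakage area. A minimal sketch of an LBL-style calculation is below; the stack and wind coefficients are illustrative values of the kind tabulated by house height and shielding class, not the parameterization used in the study, and the window-opening adjustment is omitted:

```python
import math

def lbl_air_exchange_rate(leakage_area_cm2, volume_m3, delta_t_k, wind_m_s,
                          stack_coef=0.000145, wind_coef=0.000104):
    """LBL-style infiltration estimate, in air changes per hour (ACH).

    Stack and wind driving forces combine in quadrature; coefficients here
    are illustrative placeholders for the tabulated model values."""
    # Airflow in L/s from effective leakage area in cm^2
    q_l_s = leakage_area_cm2 * math.sqrt(stack_coef * abs(delta_t_k)
                                         + wind_coef * wind_m_s ** 2)
    # L/s -> m^3/h, then normalize by house volume
    return q_l_s * 3.6 / volume_m3

# The same hypothetical house on a cold, windy day vs. a mild, calm day
aer_winter = lbl_air_exchange_rate(500, 400, delta_t_k=25, wind_m_s=6)
aer_mild = lbl_air_exchange_rate(500, 400, delta_t_k=5, wind_m_s=1)
print(f"winter {aer_winter:.2f} ACH, mild {aer_mild:.2f} ACH")
```

Because both drivers sit under the square root, the same house can span a wide AER range across seasons, which is the temporal variability the probabilistic algorithm is designed to capture.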
Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir
2010-09-01
Our objectives included calculating the permeability coefficient and dermal penetration rates (flux value) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. There are many studies available where formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate permeability coefficients. Dermal penetration rate was estimated as the ratio of the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, estimated dermal penetration rates were used to assign the skin notation to chemicals. Critical fluxes defined from the literature were recommended as reference values for skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis in calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of calculated K(p) values with data obtained earlier from other models showed that LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone. Both lipophilic and polar pathways of permeation exist across the stratum corneum. 
It is feasible to predict skin notation on the basis of the LFER and other published models; from among 112 chemicals 94 (84%) should have the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified to have significant absorption and 65 (58%) the potential for dermal toxicity. We found major differences between alternative published analytical models and their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
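The LFER calculation described above can be sketched as follows. The Abraham solvation equation is linear in the five descriptors (E, S, A, B, V); the regression coefficients and descriptor values below are illustrative placeholders of roughly the magnitudes reported for skin permeation, not the coefficients fitted in this study:

```python
def log_kp_lfer(E, S, A, B, V,
                c=-5.43, e=0.11, s=-0.47, a=-0.47, b=-3.00, v=2.30):
    """Abraham-type LFER: log Kp (cm/h) from the five solute descriptors.

    E: excess molar refractivity, S: dipolarity/polarisability,
    A/B: hydrogen-bond acidity/basicity, V: McGowan volume.
    Coefficients are illustrative placeholders, not the study's fit."""
    return c + e * E + s * S + a * A + b * B + v * V

def flux_mg_cm2_h(log_kp, solubility_mg_l):
    # Penetration rate from a saturated aqueous solution: J = Kp * C_sat
    kp_cm_h = 10 ** log_kp
    return kp_cm_h * solubility_mg_l / 1000.0  # mg/L -> mg/cm^3

# Hypothetical descriptors for a small non-hydrogen-bond-donating aromatic
lkp = log_kp_lfer(E=0.61, S=0.52, A=0.0, B=0.14, V=0.716)
flux = flux_mg_cm2_h(lkp, solubility_mg_l=1800)
print(f"log Kp = {lkp:.2f} cm/h, flux ~ {flux:.2e} mg/cm^2/h")
```

A skin notation would then follow from comparing the computed flux against a critical flux threshold taken from the literature, as the abstract describes.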
A toy model for the yield of a tamped fission bomb
NASA Astrophysics Data System (ADS)
Reed, B. Cameron
2018-02-01
A simple expression is developed for estimating the yield of a tamped fission bomb, that is, a basic nuclear weapon comprising a fissile core jacketed by a surrounding neutron-reflecting tamper. This expression is based on modeling the nuclear chain reaction as a geometric progression in combination with a previously published expression for the threshold-criticality condition for such a core. The derivation is especially straightforward, as it requires no knowledge of diffusion theory and should be accessible to students of both physics and policy. The calculation can be set up as a single page spreadsheet. Application to the Little Boy and Fat Man bombs of World War II gives results in reasonable accord with published yield estimates for these weapons.
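In the spirit of the toy model above, the chain reaction can be treated as a geometric progression of fission generations. The sketch below is not Reed's published expression (his paper derives the multiplication factor and generation count from the core and tamper parameters); here both are simply taken as illustrative inputs:

```python
# Energy per fission ~180 MeV, converted to joules
E_FISSION_J = 180e6 * 1.602e-19

def yield_kt(k, generations, n0=1.0):
    """Yield if each generation multiplies neutrons by k for a fixed number
    of generations before the core disassembles (all inputs illustrative)."""
    # Total fissions in the geometric progression n0*(1 + k + k^2 + ... + k^g)
    fissions = n0 * (k ** (generations + 1) - 1) / (k - 1)
    joules = fissions * E_FISSION_J
    return joules / 4.184e12  # joules -> kilotons of TNT equivalent

# With k = 2, ~80 generations gives a yield in the Little Boy ballpark
print(f"{yield_kt(2.0, 80):.1f} kt")
```

The strong sensitivity to the generation count (each extra generation roughly doubles the yield at k = 2) is exactly why the disassembly condition dominates such estimates.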
Using flow cytometry to estimate pollen DNA content: improved methodology and applications
Kron, Paul; Husband, Brian C.
2012-01-01
Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81 % of species and the higher best practice standards for cell cycle analysis in 51 %. In 41 % of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1·5 % or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2·5 %. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. 
PMID:22875815
Freeman, Matthew C; Stocks, Meredith E; Cumming, Oliver; Jeandron, Aurelie; Higgins, Julian P T; Wolf, Jennyfer; Prüss-Ustün, Annette; Bonjour, Sophie; Hunter, Paul R; Fewtrell, Lorna; Curtis, Valerie
2014-08-01
To estimate the global prevalence of handwashing with soap and derive a pooled estimate of the effect of hygiene on diarrhoeal diseases, based on a systematic search of the literature. Studies with data on observed rates of handwashing with soap published between 1990 and August 2013 were identified from a systematic search of PubMed, Embase and ISI Web of Knowledge. A separate search was conducted for studies on the effect of hygiene on diarrhoeal disease that included randomised controlled trials, quasi-randomised trials with control group, observational studies using matching techniques and observational studies with a control group where the intervention was well defined. The search used Cochrane Library, Global Health, BIOSIS, PubMed, and Embase databases supplemented with reference lists from previously published systematic reviews to identify studies published between 1970 and August 2013. Results were combined using multilevel modelling for handwashing prevalence and meta-regression for risk estimates. From the 42 studies reporting handwashing prevalence we estimate that approximately 19% of the world population washes hands with soap after contact with excreta (i.e. use of a sanitation facility or contact with children's excreta). Meta-regression of risk estimates suggests that handwashing reduces the risk of diarrhoeal disease by 40% (risk ratio 0.60, 95% CI 0.53-0.68); however, when we included an adjustment for unblinded studies, the effect estimate was reduced to 23% (risk ratio 0.77, 95% CI 0.32-1.86). Our results show that handwashing after contact with excreta is poorly practiced globally, despite the likely positive health benefits. © 2014 John Wiley & Sons Ltd.
2008-02-01
1994, Chiou and Kile (USGS, 2000) 3.2.2 Source Characteristics The source area was based on estimates of the locations where contaminants were...values for Koc and solubility for some of the SVOCs appear in published literature (Chiou and Kile, 2000), which suggests a larger range of...Monitored Natural Attenuation. United States Geological Survey Water Resources Investigations Report 03-4057. Chiou, C.T. and Kile, D.E., 2000
Location Estimation of Urban Images Based on Geographical Neighborhoods
NASA Astrophysics Data System (ADS)
Huang, Jie; Lo, Sio-Long
2018-04-01
Estimating the location of an image is a challenging computer vision problem, and the recent decade has witnessed increasing research efforts towards the solution of this problem. In this paper, we propose a new approach to the location estimation of images taken in urban environments. Experiments are conducted to quantitatively compare the estimation accuracy of our approach against three representative approaches in the existing literature, using a recently published dataset of over 150,000 Google Street View images and 259 user-uploaded images as queries. According to the experimental results, our approach outperforms the three baseline approaches and shows its robustness across different distance thresholds.
Sartorius, B; Sartorius, K; Aldous, C; Madiba, T E; Stefan, C; Noakes, T
2016-01-01
Introduction Linkages between carbohydrates, obesity and cancer continue to demonstrate conflicting results. Evidence suggests inconclusive direct linkages between carbohydrates and specific cancers. Conversely, obesity has been strongly linked to a wide range of cancers. The purpose of the study is to explore linkages between carbohydrate intake and cancer types using a two-step approach. First, the study will evaluate the linkages between carbohydrate intake and obesity, potentially stratified by metabolic syndrome status. Second, the estimated attributable fraction of obesity ascribed to carbohydrate intake will be multiplied against obesity attributable fractions for cancer types to give an estimated overall attributable fraction for carbohydrate versus cancer type. Methods and analysis We will perform a comprehensive search to identify all possible published and unpublished studies that have assessed risk factors for obesity including dietary carbohydrate intake. Scientific databases, namely PubMed MEDLINE, EMBASE, EBSCOhost and ISI Web of Science, will be searched. Following study selection, paper/data acquisition, and data extraction and synthesis, we will appraise the quality of studies and risk of bias, as well as assess heterogeneity. Meta-weighted attributable fractions of obesity due to carbohydrate intake will be estimated after adjusting for other potential confounding factors (eg, physical inactivity, other dietary intake). Furthermore, previously published systematic reviews assessing the cancer-specific risk associated with obesity will also be drawn upon. These estimates will be linked with the attributability of carbohydrate intake in part 1 to estimate the cancer-specific burden that can be attributed to dietary carbohydrates. This systematic review protocol has been developed according to the ‘Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) 2015’. 
Ethics and dissemination The current study will be based on published literature and data, and, as such, ethics approval is not required. The final results of this two part systematic review (plus multiplicative calculations) will be published in a relevant international peer-reviewed journal. Trial registration number PROSPERO CRD42015023257. PMID:26729382
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at parts and components level such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) to assess risks, perform design tradeoffs, and therefore, to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages when the system design is still in development and hard failure data are not yet available, or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
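The weighting-plus-bootstrap logic evaluated in this study can be illustrated with a deliberately simplified simulation: a single binary confounder (so the propensity score is just the treated fraction within each stratum), a continuous outcome in place of the study's weighted Cox survival model, and a bootstrap that resamples subjects and repeats the whole estimation. All variable names and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data: confounder x raises both treatment probability and outcome
n = 2000
x = rng.binomial(1, 0.5, n)
treat = rng.binomial(1, np.where(x == 1, 0.7, 0.3))
y = 1.0 * treat + 2.0 * x + rng.normal(0, 1, n)  # true treatment effect = 1.0

def iptw_ate(x, treat, y):
    # Propensity score P(treat=1 | x), estimated within each confounder stratum
    p1, p0 = treat[x == 1].mean(), treat[x == 0].mean()
    ps = np.where(x == 1, p1, p0)
    # Inverse probability of the treatment actually received (ATE weights)
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
    return (np.average(y[treat == 1], weights=w[treat == 1])
            - np.average(y[treat == 0], weights=w[treat == 0]))

# Bootstrap the entire procedure, resampling subjects with replacement,
# so the variance reflects estimation of the weights as well as the effect
boot = [iptw_ate(*(a[rng.integers(0, n, n)] for a in (x, treat, y)))
        for _ in range(200)]
ate, se = iptw_ate(x, treat, y), np.std(boot, ddof=1)
print(f"IPTW ATE = {ate:.2f} (bootstrap SE {se:.2f})")
```

The study's finding, in these terms, is that a bootstrap like this yields approximately correct standard errors for weighted survival models, whereas naïve model-based variance estimators that ignore the estimated weights do not.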
Bauman, Adrian; Milton, Karen; Kariuki, Maina; Fedel, Karla; Lewicka, Mary
2017-11-28
The proliferation of studies using motivational signs to promote stair use continues unabated, with their oft-cited potential for increasing population-level physical activity participation. This study examined all stair use promotional signage studies since 1980, calculating pre-estimates and post-estimates of stair use. The aim of this project was to conduct a sequential meta-analysis to pool intervention effects, in order to determine when the evidence base was sufficient for population-wide dissemination. Using comparable data from 50 stair-promoting studies (57 unique estimates) we pooled data to assess the effect sizes of such interventions. At baseline, median stair usage across interventions was 8.1%, with an absolute median increase of 2.2% in stair use following signage-based interventions. The overall pooled OR indicated that participants were 52% more likely to use stairs after exposure to promotional signs (adjusted OR 1.52, 95% CI 1.37 to 1.70). Incremental (sequential) meta-analyses using z-score methods identified that sufficient evidence for stair use interventions has existed since 2006, with recent studies providing no further evidence on the effect sizes of such interventions. This analysis has important policy and practice implications. Researchers continue to publish stair use interventions without connection to policymakers' needs, and few stair use interventions are implemented at a population level. Researchers should move away from repeating short-term, small-scale, stair sign interventions, to investigating their scalability, adoption and fidelity. Only such research translation efforts will provide sufficient evidence of external validity to inform their scaling up to influence population physical activity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
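The sequential (cumulative) meta-analysis idea in this study, namely adding one study at a time in chronological order and asking when the pooled z-statistic first crosses the conventional 1.96 threshold, can be sketched as follows. The study data are hypothetical triples of (year, log OR, SE), and fixed-effect pooling is used for brevity rather than the adjusted pooling of the paper:

```python
import math

# Hypothetical (year, log OR, SE) triples for stair-sign interventions
studies = [
    (1998, 0.55, 0.30), (2001, 0.30, 0.25), (2003, 0.45, 0.22),
    (2005, 0.40, 0.18), (2006, 0.42, 0.15), (2008, 0.38, 0.20),
]

# Cumulative inverse-variance pooling: add one study at a time, in year order
w_sum = wy_sum = 0.0
first_conclusive = None
for year, log_or, se in sorted(studies):
    w = 1 / se ** 2
    w_sum += w
    wy_sum += w * log_or
    pooled, pooled_se = wy_sum / w_sum, math.sqrt(1 / w_sum)
    z = pooled / pooled_se
    if first_conclusive is None and z > 1.96:
        first_conclusive = year
    print(f"{year}: pooled OR {math.exp(pooled):.2f}, z = {z:.2f}")
print("evidence sufficient from", first_conclusive)
```

Once the cumulative z stays above the threshold, later small studies mostly narrow the interval without changing the conclusion, which is the paper's argument for redirecting research effort toward scalability and implementation.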
Stuckey, Rwth; LaMontagne, Anthony D; Glass, Deborah C; Sim, Malcolm R
2010-04-01
To estimate occupational light vehicle (OLV) fatality numbers using vehicle registration and crash data and compare these with previous estimates based on workers' compensation data. New South Wales (NSW) Roads and Traffic Authority (RTA) vehicle registration and crash data were obtained for 2004. NSW is the only Australian jurisdiction with mandatory work-use registration, which was used as a proxy for work-relatedness. OLV fatality rates based on registration data as the denominator were calculated and comparisons made with published 2003/04 fatalities based on workers' compensation data. Thirty-four NSW RTA OLV-user fatalities were identified, a rate of 4.5 deaths per 100,000 organisationally registered OLV, whereas the Australian Safety and Compensation Council (ASCC), reported 28 OLV deaths Australia-wide. More OLV user fatalities were identified from vehicle registration-based data than those based on workers' compensation estimates and the data are likely to provide an improved estimate of fatalities specific to OLV use. OLV-use is an important cause of traumatic fatalities that would be better identified through the use of vehicle-registration data, which provides a stronger evidence base from which to develop policy responses. © 2010 The Authors. Journal Compilation © 2010 Public Health Association of Australia.
NASA Astrophysics Data System (ADS)
Weidner, E. F.; Weber, T. C.; Mayer, L. A.
2017-12-01
Quantifying methane flux originating from marine seep systems in climatically sensitive regions is of critically importance for current and future climate studies. Yet, the methane contribution from these systems has been difficult to estimate given the broad spatial scale of the ocean and the heterogeneity of seep activity. One such region is the Eastern Siberian Arctic Sea (ESAS), where bubble release into the shallow water column (<40 meters average depth) facilitates transport of methane to the atmosphere without oxidation. Quantifying the current seep methane flux from the ESAS is necessary to understand not only the total ocean methane budget, but also to provide baseline estimates against which future climate-induced changes can be measured. At the 2016 AGU fall meeting, we presented a new acoustic-based flux methodology using a calibrated broadband split-beam echosounder. The broad (14-24 kHz) bandwidth provides a vertical resolution of 10 cm, making possible the identification of single bubbles. After calibration using 64 mm copper sphere of known backscatter, the acoustic backscatter of individual bubbles is measured and compared to analytical models to estimate bubble radius. Additionally, bubbles are precisely located and traced upwards through the water column to estimate rise velocity. The combination of radius and rise velocity allows for gas flux estimation. Here, we follow up with the completed implementation of this methodology applied to the Herald Canyon region of the western ESAS. From the 68 recognized seeps, bubble radii and rise velocity were computed for more than 550 individual bubbles. The range of bubble radii, 1-6 mm, is comparable to those published by other investigators, while the radius dependent rise velocities are consistent with published models. Methane flux for the Herald Canyon region was estimated by extrapolation from individual seep flux values.
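The core conversion behind an acoustic flux estimate of this kind is from measured bubble radius to methane content per bubble, via the ideal gas law at in-situ pressure, scaled by how often bubbles are released. The sketch below uses illustrative numbers within the 1-6 mm radius range reported above; the bubble release rate is hypothetical:

```python
import math

R_GAS = 8.314          # J/(mol K)
RHO_SEAWATER = 1025.0  # kg/m^3

def bubble_methane_mol(radius_m, depth_m, temp_c=4.0):
    """Moles of methane in one spherical bubble at in-situ pressure (ideal gas)."""
    pressure = 101325.0 + RHO_SEAWATER * 9.81 * depth_m  # atmospheric + hydrostatic
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return pressure * volume / (R_GAS * (temp_c + 273.15))

# A 3 mm-radius bubble released at 40 m depth (upper end of ESAS depths)
mol_per_bubble = bubble_methane_mol(0.003, 40.0)
# Hypothetical seep releasing 10 bubbles per second, integrated over a day
mol_per_day = mol_per_bubble * 10 * 86400
print(f"{mol_per_bubble:.2e} mol per bubble, ~{mol_per_day:.1f} mol/day per seep")
```

Because bubble volume scales with the cube of radius, accurate per-bubble radius estimates (here provided by the broadband backscatter inversion) dominate the accuracy of the total flux far more than the rise-velocity term.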
Herdağdelen, Amaç; Marelli, Marco
2017-05-01
Corpus-based word frequencies are one of the most important predictors in language processing tasks. Frequencies based on conversational corpora (such as movie subtitles) are shown to better capture the variance in lexical decision tasks compared to traditional corpora. In this study, we show that frequencies computed from social media are currently the best frequency-based estimators of lexical decision reaction times (up to 3.6% increase in explained variance). The results are robust (observed for Twitter- and Facebook-based frequencies on American English and British English datasets) and are still substantial when we control for corpus size. © 2016 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
ERIC Educational Resources Information Center
Christ, Theodore J.; Desjardins, Christopher David
2018-01-01
Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR's lack of validity and reliability, and…
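The OLSR slope this abstract refers to is simply the least-squares slope of a student's scores over time, interpreted as rate of improvement (ROI). A minimal sketch with hypothetical weekly scores:

```python
import numpy as np

# Hypothetical weekly words-correct-per-minute (WCPM) scores for one student
weeks = np.arange(10)
wcpm = np.array([42, 45, 43, 48, 50, 49, 53, 55, 54, 58])

# OLS slope = rate of improvement (WCPM gained per week)
slope, intercept = np.polyfit(weeks, wcpm, 1)
print(f"ROI = {slope:.2f} WCPM/week (intercept {intercept:.1f})")
```

The reliability concern raised in the abstract stems from fitting such a slope to few, noisy data points: with only a handful of weekly probes, the slope's standard error can be large relative to the ROI itself.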
Evolution of accelerometer methods for physical activity research.
Troiano, Richard P; McClain, James J; Brychta, Robert J; Chen, Kong Y
2014-07-01
The technology and application of current accelerometer-based devices in physical activity (PA) research allow the capture and storage or transmission of large volumes of raw acceleration signal data. These rich data not only provide opportunities to improve PA characterisation, but also bring logistical and analytic challenges. We discuss how researchers and developers from multiple disciplines are responding to the analytic challenges and how advances in data storage, transmission and big data computing will minimise logistical challenges. These new approaches also bring the need for several paradigm shifts for PA researchers, including a shift from count-based approaches and regression calibrations for PA energy expenditure (PAEE) estimation to activity characterisation and EE estimation based on features extracted from raw acceleration signals. Furthermore, a collaborative approach towards analytic methods is proposed to facilitate PA research, which requires a shift away from multiple independent calibration studies. Finally, we make the case for a distinction between PA represented by accelerometer-based devices and PA assessed by self-report. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Hori, Megumi; Matsuda, Tomohiro; Shibata, Akiko; Katanoda, Kota; Sobue, Tomotaka; Nishimoto, Hiroshi
2015-09-01
The Japan Cancer Surveillance Research Group aimed to estimate the cancer incidence in Japan in 2009 based on data collected from 32 of 37 population-based cancer registries, as part of the Monitoring of Cancer Incidence in Japan (MCIJ) project. The incidence of only primary invasive cancer in Japan for 2009 was estimated to be 775 601. Stomach cancer and breast cancer were the leading types of cancer in males and females, respectively. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Assessing uncertainty in published risk estimates using ...
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective is to characterize model uncertainty by evaluating estimates across published epidemiologic studies of the same cohort.Methods: This analysis was based on 5 studies analyzing a cohort of 2,357 workers employed from 1950-74 in a chromate production plant in Maryland. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag, however other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability within and between model forms. A total of 5 similarly parameterized analyses were considered across model form, and 16 analyses with alternative parameterizations were considered within model form (10 Cox; 6 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients (betas) for 5 similar analyses ranged from 2.47 to 4.33 (mean=2.97, σ2=0.63). Within the 10 Cox models, coefficients ranged from 2.53 to 4.42 (mean=3.29, σ2=0.
Failure of self-consistency in the discrete resource model of visual working memory.
Bays, Paul M
2018-06-03
The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
Kozloski, G V; Stefanello, C M; Oliveira, L; Filho, H M N Ribeiro; Klopfenstein, T J
2017-02-01
A data set of individual observations was compiled from digestibility trials to examine the relationship between the duodenal purine bases (PB) flow and urinary purine derivatives (PD) excretion and the validity of different equations for estimating rumen microbial N (Nm) supply based on urinary PD in comparison with estimates based on duodenal PB. Trials (8 trials, n = 185) were conducted with male sheep fitted with a duodenal T-type cannula, housed in metabolic cages, and fed forage alone or with supplements. The amount of PD excreted in urine was linearly related to the amount of PB flowing to the duodenum (P < 0.05). The intercept of the linear regression was 0.180 mmol/(d·kg), representing the endogenous excretion of PD, and the slope was lower than 1 (P < 0.05), indicating that only 43% of the PB in the duodenum was excreted as PD in urine. The Nm supply estimated by either approach was linearly related (P < 0.05) to the digestible OM intake. However, the Nm supply estimated through any of 3 published PD-based equations probably underestimated the Nm supply in sheep.
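The reported regression (endogenous intercept of 0.180 mmol/(d·kg), slope below 1) is an ordinary least-squares fit of urinary PD excretion on duodenal PB flow. The sketch below illustrates the calculation on synthetic, noise-free data generated from the reported coefficients; it uses invented PB values and is not the trial data or the authors' statistical procedure.

```python
# Illustrative ordinary least-squares fit of PD excretion against PB flow.
# Data are synthetic, generated from the reported intercept/slope.

def ols(x, y):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical PB flows (mmol/(d·kg)) and noise-free PD excretions:
pb = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
pd_ = [0.180 + 0.43 * v for v in pb]

intercept, slope = ols(pb, pd_)
print(round(intercept, 3), round(slope, 2))  # → 0.18 0.43
```

With noise-free data the fit recovers the generating coefficients exactly; real trial data would scatter around the line.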
Asteroid mass estimation using Markov-Chain Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2016-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass; an implementation of the Nelder-Mead simplex method; and, most significantly, a Markov-Chain Monte Carlo (MCMC) approach. We will introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans, particularly in connection with ESA's Gaia mission.
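As a toy illustration of the MCMC idea (not the 13-dimensional orbital inversion described above), the sketch below runs a random-walk Metropolis sampler to recover a single parameter: the mean of synthetic Gaussian "observations". All data and tuning constants are invented for the example.

```python
# Minimal 1-D random-walk Metropolis sampler (toy example, not OpenOrb).
import math, random

random.seed(1)
data = [2.1, 1.9, 2.0, 2.2, 1.8]   # synthetic "observations"
sigma = 0.1                         # assumed measurement noise

def log_post(mu):
    # Flat prior; Gaussian likelihood, so this is the log-posterior up to a constant.
    return -sum((d - mu) ** 2 for d in data) / (2 * sigma ** 2)

samples, mu = [], 0.0
for _ in range(20000):
    prop = mu + random.gauss(0, 0.05)                    # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                                        # accept
    samples.append(mu)

burned = samples[5000:]              # discard burn-in
est = sum(burned) / len(burned)
print(round(est, 2))                 # close to the sample mean of 2.0
```

The posterior sample also yields an uncertainty estimate (the spread of `burned`), which is the point of preferring MCMC over linearized methods.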
Calibrating random forests for probability estimation.
Dankowski, Theresa; Ziegler, Andreas
2016-09-30
Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
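A minimal sketch of logistic re-calibration, in the spirit of the strategy described above (though not the authors' exact terminal-node procedure): probabilities from an existing model are mapped through a logistic regression on their logits, fitted to outcomes observed at a new center. All numbers are invented for illustration.

```python
# Re-calibrate existing probabilities via logistic regression on their logits.
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def recalibrate(probs, outcomes, lr=0.1, epochs=2000):
    """Fit P(y = 1) = sigmoid(a + b * logit(p)) by batch gradient descent."""
    a, b = 0.0, 1.0
    x = [logit(p) for p in probs]
    n = len(x)
    for _ in range(epochs):
        ga = gb = 0.0
        for xi, yi in zip(x, outcomes):
            err = sigmoid(a + b * xi) - yi   # gradient of the log-loss
            ga += err / n
            gb += err * xi / n
        a -= lr * ga
        b -= lr * gb
    return lambda p: sigmoid(a + b * logit(p))

# Probabilities from the "old" model and outcomes seen at a new center:
old_probs = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 0, 0, 0]
cal = recalibrate(old_probs, outcomes)
print(cal(0.1) < cal(0.9))  # recalibrated probabilities stay ordered → True
```

The mapping preserves the ranking of the original probabilities while shifting their level and slope to match the new center's outcome frequencies.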
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Estimating Divergence Dates and Substitution Rates in the Drosophila Phylogeny
Obbard, Darren J.; Maclennan, John; Kim, Kang-Wook; Rambaut, Andrew; O’Grady, Patrick M.; Jiggins, Francis M.
2012-01-01
An absolute timescale for evolution is essential if we are to associate evolutionary phenomena, such as adaptation or speciation, with potential causes, such as geological activity or climatic change. Timescales in most phylogenetic studies use geologically dated fossils or phylogeographic events as calibration points, but more recently, it has also become possible to use experimentally derived estimates of the mutation rate as a proxy for substitution rates. The large radiation of drosophilid taxa endemic to the Hawaiian islands has provided multiple calibration points for the Drosophila phylogeny, thanks to the "conveyor belt" process by which this archipelago forms and is colonized by species. However, published date estimates for key nodes in the Drosophila phylogeny vary widely, and many are based on simplistic models of colonization and coalescence or on estimates of island age that are not current. In this study, we use new sequence data from seven species of Hawaiian Drosophila to examine a range of explicit coalescent models and estimate substitution rates. We use these rates, along with a published experimentally determined mutation rate, to date key events in drosophilid evolution. Surprisingly, our estimate for the date for the most recent common ancestor of the genus Drosophila based on mutation rate (25–40 Ma) is closer to being compatible with independent fossil-derived dates (20–50 Ma) than are most of the Hawaiian-calibration models and also has smaller uncertainty. We find that Hawaiian-calibrated dates are extremely sensitive to model choice and give rise to point estimates that range between 26 and 192 Ma, depending on the details of the model. Potential problems with the Hawaiian calibration may arise from systematic variation in the molecular clock due to the long generation time of Hawaiian Drosophila compared with other Drosophila and/or uncertainty in linking island formation dates with colonization dates. 
As either source of error will bias estimates of divergence time, we suggest mutation rate estimates be used until better models are available. PMID:22683811
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Background: Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results: Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions: The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598
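The core dynamic time warping recurrence that DTW-S extends can be stated compactly; the sketch below computes the optimal alignment cost between two series and omits all of DTW-S's interpolation and significance machinery.

```python
# Classic DTW recurrence: D[i][j] = cost(i, j) + min of the three predecessors.
def dtw(a, b):
    """Cost of the optimal warping alignment between series a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a[i-1] maps to several b's
                                 D[i][j - 1],      # b[j-1] maps to several a's
                                 D[i - 1][j - 1])  # one-to-one match
    return D[n][m]

# A series and a one-step time-shifted copy of itself align almost perfectly:
x = [0, 1, 2, 3, 2, 1, 0]
y = [0, 0, 1, 2, 3, 2, 1]
print(dtw(x, y))  # → 1.0
```

It is this tolerance of time shifts in the warping path that DTW-S exploits to estimate the shift itself, and then to attach a significance level to it via simulation.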
Distance measures and optimization spaces in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.; Rode, Karyn D.; Budge, Suzanne M.; Thiemann, Gregory W.
2015-01-01
Quantitative fatty acid signature analysis has become an important method of diet estimation in ecology, especially marine ecology. Controlled feeding trials to validate the method and estimate the calibration coefficients necessary to account for differential metabolism of individual fatty acids have been conducted with several species from diverse taxa. However, research into potential refinements of the estimation method has been limited. We compared the performance of the original method of estimating diet composition with that of five variants based on different combinations of distance measures and calibration-coefficient transformations between prey and predator fatty acid signature spaces. Fatty acid signatures of pseudopredators were constructed using known diet mixtures of two prey data sets previously used to estimate the diets of polar bears Ursus maritimus and gray seals Halichoerus grypus, and their diets were then estimated using all six variants. In addition, previously published diets of Chukchi Sea polar bears were re-estimated using all six methods. Our findings reveal that the selection of an estimation method can meaningfully influence estimates of diet composition. Among the pseudopredator results, which allowed evaluation of bias and precision, differences in estimator performance were rarely large, and no one estimator was universally preferred, although estimators based on the Aitchison distance measure tended to have modestly superior properties compared to estimators based on the Kullback-Leibler distance measure. However, greater differences were observed among estimated polar bear diets, most likely due to differential estimator sensitivity to assumption violations. Our results, particularly the polar bear example, suggest that additional research into estimator performance and model diagnostics is warranted.
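The Aitchison distance compared above is the Euclidean distance between centered log-ratio (clr) transforms of two compositions. A minimal sketch, using two made-up three-part "signatures" rather than real fatty acid data:

```python
# Aitchison distance between two compositions (proportions summing to 1).
import math

def aitchison(p, q):
    """Euclidean distance between the clr-transforms of p and q."""
    def clr(v):
        g = math.exp(sum(math.log(x) for x in v) / len(v))  # geometric mean
        return [math.log(x / g) for x in v]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clr(p), clr(q))))

# Hypothetical three-part fatty acid signatures:
sig1 = [0.5, 0.3, 0.2]
sig2 = [0.4, 0.4, 0.2]
print(aitchison(sig1, sig1))  # → 0.0
print(round(aitchison(sig1, sig2), 4))
```

Because the clr transform works on log-ratios, the distance treats relative (rather than absolute) differences between signature components symmetrically, which is one reason it can behave differently from the Kullback-Leibler measure in diet estimation.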
Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.
2010-01-01
Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles, based on the regional flood-frequency equations most recently published as of 2010 (Soong and others, 2004), at any rural stream location in Illinois. Limited streamflow statistics including general statistics, flow durations, and base flows also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrographic Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS then were compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile.
The total number of stations in region 1 was small (21), and the mean difference is not large (less than one-tenth of the average prediction error for the regression-equation estimates). The sensitivity of the flood-quantile estimates to differences in the computed basin characteristics is determined and presented in tables. A test of usage consistency was conducted by having at least 7 new users compute flood-quantile estimates at 27 locations. The average maximum deviation of the estimate from the mode value at each site was 1.31 percent after four mislocated sites were removed. A comparison of manual 100-year flood-quantile computations with ILSS at 34 sites indicated no statistically significant difference. ILSS appears to be an accurate, reliable, and effective tool for flood-quantile estimates.
NURD: an implementation of a new method to estimate isoform expression from non-uniform RNA-seq data
2013-01-01
Background: RNA-Seq technology has been used widely in transcriptome study, and one of the most important applications is to estimate the expression level of genes and their alternative splicing isoforms. There have been several algorithms published to estimate the expression based on different models. Recently, Wu et al. published a method that can accurately estimate isoform level expression by considering position-related sequencing biases using nonparametric models. The method has advantages in handling different read distributions, but there hasn't been an efficient program to implement this algorithm. Results: We developed an efficient implementation of the algorithm in the program NURD. It uses a binary interval search algorithm. The program can correct both the global tendency of sequencing bias in the data and local sequencing bias specific to each gene. The correction makes the isoform expression estimation more reliable under various read distributions. And the implementation is computationally efficient in both the memory cost and running time and can be readily scaled up for huge datasets. Conclusion: NURD is an efficient and reliable tool for estimating the isoform expression level. Given the reads mapping result and gene annotation file, NURD will output the expression estimation result. The package is freely available for academic use at http://bioinfo.au.tsinghua.edu.cn/software/NURD/. PMID:23837734
A robust approach for ECG-based analysis of cardiopulmonary coupling.
Zheng, Jiewen; Wang, Weidong; Zhang, Zhengbo; Wu, Dalei; Wu, Hao; Peng, Chung-Kang
2016-07-01
Deriving a respiratory signal from a surface electrocardiogram (ECG) measurement has the advantage of simultaneous monitoring of cardiac and respiratory activities. ECG-based cardiopulmonary coupling (CPC) analysis estimated by heart period variability and ECG-derived respiration (EDR) shows promising applications in the medical field. The aim of this paper is to provide a quantitative analysis of the ECG-based CPC, and further improve its performance. Two conventional strategies were tested to obtain the EDR signal: R-S wave amplitude and area of the QRS complex. An adaptive filter was utilized to extract the common component of the inter-beat interval (RRI) and EDR, generating enhanced versions of the EDR signal. CPC is assessed through probing the nonlinear phase interactions between the RRI series and the respiratory signal. Respiratory oscillations present in both RRI series and respiratory signals were extracted by ensemble empirical mode decomposition for coupling analysis via a phase synchronization index. The results demonstrated that CPC estimated from conventional EDR series exhibits constant and proportional biases, while that estimated from enhanced EDR series is more reliable. Adaptive filtering can improve the accuracy of the ECG-based CPC estimation significantly and achieve robust CPC analysis. The improved ECG-based CPC estimation may provide additional prognostic information for both sleep medicine and autonomic function analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Yellman, Merissa A; Peterson, Cora; McCoy, Mary A; Stephens-Stidham, Shelli; Caton, Emily; Barnard, Jeffrey J; Padgett, Ted O; Florence, Curtis; Istre, Gregory R
2018-02-01
Operation Installation (OI), a community-based smoke alarm installation programme in Dallas, Texas, targets houses in high-risk urban census tracts. Residents of houses that received OI installation (or programme houses) had 68% fewer medically treated house fire injuries (non-fatal and fatal) compared with residents of non-programme houses over an average of 5.2 years of follow-up during an effectiveness evaluation conducted from 2001 to 2011. The objective of this study was to estimate the cost-benefit of OI. A mathematical model incorporated programme cost and effectiveness data as directly observed in OI. The estimated cost per smoke alarm installed was based on a retrospective analysis of OI expenditures from administrative records, 2006-2011. Injury incidence assumptions for a population that had the OI programme compared with the same population without the OI programme were based on the previous OI effectiveness study, 2001-2011. Unit costs for medical care and lost productivity associated with fire injuries were from a national public database. From a combined payers' perspective limited to direct programme and medical costs, the estimated incremental cost per fire injury averted through the OI installation programme was $128,800 (2013 US$). When a conservative estimate of lost productivity among victims was included, the incremental cost per fire injury averted was negative, suggesting long-term cost savings from the programme. The OI programme from 2001 to 2011 resulted in an estimated net savings of $3.8 million, or a $3.21 return on investment for every dollar spent on the programme using a societal cost perspective. Community smoke alarm installation programmes could be cost-beneficial in high-fire-risk neighbourhoods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Devji, Tahira; Guyatt, Gordon H; Lytvyn, Lyubov; Brignardello-Petersen, Romina; Foroutan, Farid; Sadeghirad, Behnam; Buchbinder, Rachelle; Poolman, Rudolf W; Harris, Ian A; Carrasco-Labra, Alonso; Siemieniuk, Reed A C; Vandvik, Per O
2017-05-11
To identify the most credible anchor-based minimal important differences (MIDs) for patient important outcomes in patients with degenerative knee disease, and to inform BMJ Rapid Recommendations for arthroscopic surgery versus conservative management. Design: Systematic review. Main outcome measures: Estimates of anchor-based MIDs, and their credibility, for knee symptoms and health-related quality of life (HRQoL). Data sources: MEDLINE, EMBASE and PsycINFO. We included original studies documenting the development of anchor-based MIDs for patient-reported outcomes (PROs) reported in randomised controlled trials included in the linked systematic review and meta-analysis and judged by the parallel BMJ Rapid Recommendations panel as critically important for informing their recommendation: measures of pain, function and HRQoL. 13 studies reported 95 empirically estimated anchor-based MIDs for 8 PRO instruments and/or their subdomains that measure knee pain, function or HRQoL. All studies used a transition rating (global rating of change) as the anchor to ascertain the MID. Among PROs with more than 1 estimated MID, we found wide variation in MID values. Many studies suffered from serious methodological limitations. We identified the following most credible MIDs: Western Ontario and McMaster University Osteoarthritis Index (WOMAC; pain: 12, function: 13), Knee injury and Osteoarthritis Outcome Score (KOOS; pain: 12, activities of daily living: 8) and EuroQol five dimensions Questionnaire (EQ-5D; 0.15). We were able to distinguish between more and less credible MID estimates and provide best estimates for key instruments that informed evidence presentation in the associated systematic review and judgements made by the Rapid Recommendation panel. PROSPERO registration number: CRD42016047912. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Fetal QRS detection and heart rate estimation: a wavelet-based approach.
Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula
2014-08-01
Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR.
Huang, Liping; Crino, Michelle; Wu, Jason H Y; Woodward, Mark; Barzi, Federica; Land, Mary-Anne; McLean, Rachael; Webster, Jacqui; Enkhtungalag, Batsaikhan; Neal, Bruce
2016-02-01
Estimating equations based on spot urine samples have been identified as a possible alternative approach to 24-h urine collections for determining mean population salt intake. This review compares estimates of mean population salt intake based upon spot and 24-h urine samples. We systematically searched for all studies that reported estimates of daily salt intake based upon both spot and 24-h urine samples for the same population. The associations between the two were quantified and compared overall and in subsets of studies. A total of 538 records were identified, 108 were assessed as full text and 29 were included. The included studies involved 10,414 participants from 34 countries and made 71 comparisons available for the primary analysis. Overall average population salt intake estimated from 24-h urine samples was 9.3 g/day compared with 9.0 g/day estimated from the spot urine samples. Estimates based upon spot urine samples had excellent sensitivity (97%) and specificity (100%) at classifying mean population salt intake as above or below the World Health Organization maximum target of 5 g/day. Compared with the 24-h samples, estimates based upon spot urine overestimated intake at lower levels of consumption and underestimated intake at higher levels of consumption. Estimates of mean population salt intake based upon spot urine samples can provide countries with a good indication of mean population salt intake and whether action on salt consumption is required. Published by Oxford University Press on behalf of the International Epidemiological Association 2015. This work is written by US Government employees and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.
2008-05-01
Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10⁻¹² yr⁻¹. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10⁻¹⁸ s⁻¹ at 95% confidence, and derive a pulsar mass, mpsr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.
Potential health economic benefits of vitamin supplementation.
Bendich, A; Mallick, R; Leader, S
1997-01-01
This study used published relative risk estimates for birth defects, premature birth, and coronary heart disease associated with vitamin intake to project potential annual cost reductions in U.S. hospitalization charges. Epidemiological and intervention studies with relative risk estimates were identified via MEDLINE. Preventable fraction estimates were derived from data on the percentage of at-risk Americans with daily vitamin intake levels lower than those associated with disease risk reduction. Hospitalization rates were obtained from the 1992 National Hospital Discharge Survey. Charge data from the 1993 California Hospital Discharge Survey were adjusted to 1995 national charges using the medical component of the Consumer Price Index. Based on published risk reductions, annual hospital charges for birth defects, low-birth-weight premature births, and coronary heart disease could be reduced by about 40, 60, and 38%, respectively. For the conditions studied, nearly $20 billion in hospital charges were potentially avoidable with daily use of folic acid and zinc-containing multivitamins by all women of childbearing age and daily vitamin E supplementation by those over 50. PMID:9217432
Sliding mode output feedback control based on tracking error observer with disturbance estimator.
Xiao, Lingfei; Zhu, Yue
2014-07-01
For a class of systems who suffers from disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with disturbance estimator. The mathematical models of the systems are not required to be with high accuracy, and the disturbances can be vanishing or nonvanishing, while the bounds of disturbances are unknown. By constructing a differential sliding surface and employing reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a creative tracking error observer is produced. By using the observation of tracking error and the estimation of disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. The simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effect of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
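The reaching-law idea behind sliding mode control can be shown on a first-order toy plant. This is a deliberately simplified sketch (no observer, no differential sliding surface, invented gains), not the observer-based design described above: the control u = -k·sign(s) drives the sliding variable s toward zero despite a bounded disturbance the controller never measures.

```python
# Toy reaching-law sliding mode control of x' = u + d (Euler integration).
def simulate(k=2.0, dt=0.001, steps=5000):
    """Drive x to x_ref despite a disturbance the controller never sees."""
    x, x_ref = 1.0, 0.0
    d = 0.5                       # unknown bounded disturbance, |d| < k
    for _ in range(steps):
        s = x - x_ref             # sliding variable
        u = -k * (1 if s > 0 else -1 if s < 0 else 0)  # u = -k * sign(s)
        x += (u + d) * dt         # one Euler step of the plant dynamics
    return x

print(abs(simulate()) < 0.01)  # → True: x chatters in a small band around x_ref
```

As long as the switching gain k exceeds the disturbance bound, the state reaches the surface in finite time and then chatters in a band whose width shrinks with the step size; this robustness to unmeasured disturbances is what the paper's disturbance estimator refines.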
1974-07-01
automated manufacturing processes and a rough technoeconomic evaluation of those concepts. Our evaluation is largely based on estimates; therefore, the...must be subjected to thorough analysis and experimental verification before they can be considered definitive. They are being published at this time...hardware and sensor technology, manufacturing engineering, automation, and economic analysis . Members of this team inspected over thirty manufacturing
Estimating prevalence of osteoporosis: examples from industrialized countries.
Wade, S W; Strader, C; Fitzpatrick, L A; Anthony, M S; O'Malley, C D
2014-01-01
In nine industrialized countries in North America, Europe, Japan, and Australia, country-specific osteoporosis prevalence (estimated from published data) at the total hip or hip/spine ranged from 9 to 38 % for women and 1 to 8 % for men. In these countries, osteoporosis affects up to 49 million individuals. Standardized country-specific prevalence estimates are scarce, limiting our ability to anticipate the potential global impact of osteoporosis. This study estimated the prevalence of osteoporosis in several industrialized countries (USA, Canada, five European countries, Australia, and Japan) using the World Health Organization (WHO) bone mineral density (BMD)-based definition of osteoporosis: BMD T-score assessed by dual-energy x-ray absorptiometry ≤-2.5. Osteoporosis prevalence was estimated for males and females aged 50 years and above using total hip BMD and then either total hip or spine BMD. We compiled published location-specific data, using the National Health and Nutrition Examination Survey (NHANES) III age and BMD reference groups, and adjusted for differences in disease definitions across sources. Relevant NHANES III ratios (e.g., male to female osteoporosis at the total hip) were applied where data were missing for countries outside the USA. Data were extrapolated from geographically similar countries as needed. Population counts for 2010 were used to estimate the number of individuals with osteoporosis in each country. For females, osteoporosis prevalence ranged from 9 % (UK) to 15 % (France and Germany) based on total hip BMD and from 16 % (USA) to 38 % (Japan) when spine BMD data were included. For males, prevalence ranged from 1 % (UK) to 4 % (Japan) based on total hip BMD and from 3 % (Canada) to 8 % (France, Germany, Italy, and Spain) when spine BMD data were included. Up to 49 million individuals met the WHO osteoporosis criteria in a number of industrialized countries in North America, Europe, Japan, and Australia.
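The ratio-based imputation described above (applying an NHANES-derived male:female ratio where male data are missing) reduces to simple arithmetic. The prevalence, ratio, and population figures below are hypothetical, not the study's.

```python
# Where male data were missing, an NHANES III male:female osteoporosis
# ratio at the total hip is applied to the published female prevalence.
female_prev = 0.12            # hypothetical country-specific estimate
mf_ratio = 0.25               # hypothetical NHANES III-derived ratio
male_prev = female_prev * mf_ratio

# 2010 population counts aged 50+ (hypothetical)
pop_f_50plus, pop_m_50plus = 10_000_000, 9_000_000
affected = female_prev * pop_f_50plus + male_prev * pop_m_50plus
```

Here `affected` comes to 1.47 million individuals meeting the BMD criterion in this hypothetical country.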
Nair, Harish; Nokes, D James; Gessner, Bradford D; Dherani, Mukesh; Madhi, Shabir A; Singleton, Rosalyn J; O'Brien, Katherine L; Roca, Anna; Wright, Peter F; Bruce, Nigel; Chandran, Aruna; Theodoratou, Evropi; Sutanto, Agustinus; Sedyaningsih, Endang R; Ngama, Mwanajuma; Munywoki, Patrick K; Kartasasmita, Cissy; Simões, Eric AF; Rudan, Igor; Weber, Martin W; Campbell, Harry
2010-01-01
Background The global burden of disease attributable to respiratory syncytial virus (RSV) remains unknown. We aimed to estimate the global incidence of and mortality from episodes of acute lower respiratory infection (ALRI) due to RSV in children younger than 5 years in 2005. Methods We estimated the incidence of RSV-associated ALRI in children younger than 5 years, stratified by age, using data from a systematic review of studies published between January, 1995, and June, 2009, and ten unpublished population-based studies. We estimated possible boundaries for RSV-associated ALRI mortality by combining case fatality ratios with incidence estimates from hospital-based reports from published and unpublished studies and identifying studies with population-based data for RSV seasonality and monthly ALRI mortality. Findings In 2005, an estimated 33.8 (95% CI 19.3-46.2) million new episodes of RSV-associated ALRI occurred worldwide in children younger than 5 years (22% of ALRI episodes), with at least 3.4 (2.8-4.3) million episodes representing severe RSV-associated ALRI necessitating hospital admission. We estimated that 66,000-199,000 children younger than 5 years died from RSV-associated ALRI in 2005, with 99% of these deaths occurring in developing countries. Incidence and mortality can vary substantially from year to year in any one setting. Interpretation Globally, RSV is the most common cause of childhood ALRI and a major cause of admission to hospital as a result of severe ALRI. Mortality data suggest that RSV is an important cause of death in childhood from ALRI, after pneumococcal pneumonia and Haemophilus influenzae type b. The development of novel prevention and treatment strategies should be accelerated as a priority. Funding WHO; Bill & Melinda Gates Foundation. PMID:20399493
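Combining case fatality ratios with incidence, as the Methods describe, is a one-line calculation. The severe-episode count is from the abstract; the case fatality ratio bounds below are hypothetical stand-ins chosen only to show the mechanics.

```python
severe_alri = 3.4e6                 # hospital-admitted RSV-ALRI episodes, 2005
cfr_low, cfr_high = 0.019, 0.059    # hypothetical case-fatality-ratio bounds
deaths_low = severe_alri * cfr_low
deaths_high = severe_alri * cfr_high
# yields a range of roughly the same magnitude as the abstract's
# 66,000-199,000 deaths
```

The study's actual bounds also account for deaths occurring outside hospital, which this sketch omits.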
Saatkamp, Arne; Affre, Laurence; Dutoit, Thierry; Poschlod, Peter
2009-09-01
Seed survival in the soil contributes to population persistence and community diversity, creating a need for reliable measures of soil seed bank persistence. Several methods estimate soil seed bank persistence, most of which count seedlings emerging from soil samples. Seasonality, depth distribution and presence (or absence) in vegetation are then used to classify a species' soil seed bank into persistent or transient, often synthesized into a longevity index. This study aims to determine if counts of seedlings from soil samples yield reliable seed bank persistence estimates and if this is correlated to seed production. Seeds of 38 annual weeds taken from arable fields were buried in the field and their viability tested by germination and tetrazolium tests at 6 month intervals for 2.5 years. This direct measure of soil seed survival was compared with indirect estimates from the literature, which use seedling emergence from soil samples to determine seed bank persistence. Published databases were used to explore the generality of the influence of reproductive capacity on seed bank persistence estimates from seedling emergence data. There was no relationship between a species' soil seed survival in the burial experiment and its seed bank persistence estimate from published data using seedling emergence from soil samples. The analysis of complementary data from published databases revealed that while seed bank persistence estimates based on seedling emergence from soil samples are generally correlated with seed production, estimates of seed banks from burial experiments are not. The results can be explained in terms of the seed size-seed number trade-off, which suggests that the higher number of smaller seeds is compensated after germination. Soil seed bank persistence estimates correlated to seed production are therefore not useful for studies on population persistence or community diversity. 
Confusion of soil seed survival and seed production can be avoided by separate use of soil seed abundance and experimental soil seed survival.
Grosse, Scott D; Berry, Robert J; Mick Tilford, J; Kucik, James E; Waitzman, Norman J
2016-05-01
Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997-1998. Estimates of annual numbers of live-born spina bifida cases in 1995-1996 relative to 1999-2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. Published by Elsevier Inc.
Zhang, Han; Wheeler, William; Song, Lei; Yu, Kai
2017-07-07
As meta-analysis results published by consortia of genome-wide association studies (GWASs) become increasingly available, many association summary statistics-based multi-locus tests have been developed to jointly evaluate multiple single-nucleotide polymorphisms (SNPs) to reveal novel genetic architectures of various complex traits. The validity of these approaches relies on the accurate estimate of z-score correlations at considered SNPs, which in turn requires knowledge on the set of SNPs assessed by each study participating in the meta-analysis. However, this exact SNP coverage information is usually unavailable from the meta-analysis results published by GWAS consortia. In the absence of the coverage information, researchers typically estimate the z-score correlations by making oversimplified coverage assumptions. We show through real studies that such a practice can generate highly inflated type I errors, and we demonstrate the proper way to incorporate correct coverage information into multi-locus analyses. We advocate that consortia should make SNP coverage information available when posting their meta-analysis results, and that investigators who develop analytic tools for joint analyses based on summary data should pay attention to the variation in SNP coverage and adjust for it appropriately. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
External Validation of Early Weight Loss Nomograms for Exclusively Breastfed Newborns.
Schaefer, Eric W; Flaherman, Valerie J; Kuzniewicz, Michael W; Li, Sherian X; Walsh, Eileen M; Paul, Ian M
2015-12-01
Nomograms that show hour-by-hour percentiles of weight loss during the birth hospitalization were recently developed to aid clinical care of breastfeeding newborns. The nomograms for breastfed neonates were based on a sample of 108,907 newborns delivered at 14 Kaiser Permanente medical centers in Northern California (United States). The objective of this study was to externally validate the published nomograms for newborn weight loss using data from a geographically distinct population. Data were compiled from the Penn State Milton S. Hershey Medical Center located in Hershey, PA. For singleton neonates delivered at ≥36 weeks of gestation between January 2013 and September 2014, weights were obtained between 6 hours and 48 hours (vaginal delivery) or 60 hours (cesarean delivery) for neonates who were exclusively breastfeeding. Quantile regression methods appropriate for repeated measures were used to estimate 50th, 75th, 90th, and 95th percentiles of weight loss as a function of time after birth. These percentile estimates were compared with the published nomograms. Of the 1,587 newborns who met inclusion criteria, 1,148 were delivered vaginally, and 439 were delivered via cesarean section. These newborns contributed 1,815 weights for vaginal deliveries (1.6 per newborn) and 893 weights for cesarean deliveries (2.0 per newborn). Percentile estimates from this Penn State sample were similar to the published nomograms. Deviations in percentile estimates for the Penn State sample were similar to deviations observed after fitting the same model separately to each medical center that made up the Kaiser Permanente sample. The published newborn weight loss nomograms for breastfed neonates were externally validated in a geographically distinct population.
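The percentile curves at a fixed time point can be sketched with plain empirical percentiles on simulated weights; the study itself used repeated-measures quantile regression, and the data below are hypothetical.

```python
import random
random.seed(0)

# hypothetical % weight loss at 48 h for exclusively breastfed newborns
losses = sorted(random.gauss(7.0, 2.0) for _ in range(1_500))

def percentile(sorted_x, p):
    """Linear-interpolation percentile of pre-sorted data."""
    i = p / 100 * (len(sorted_x) - 1)
    lo, frac = int(i), i - int(i)
    hi = min(lo + 1, len(sorted_x) - 1)
    return sorted_x[lo] * (1 - frac) + sorted_x[hi] * frac

p50, p75, p90, p95 = (percentile(losses, p) for p in (50, 75, 90, 95))
assert p50 < p75 < p90 < p95   # the nomogram's percentile curves are ordered
```

Repeating this at each hour after birth, and smoothing across time, gives curves of the kind the nomograms display.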
Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation
NASA Astrophysics Data System (ADS)
Medina, H.; Mutu, R.
2017-07-01
An algorithm to estimate internal loads for single-row angular contact ball bearings due to externally applied thrust loads and high operating speeds is presented. A new runtime-adaptive relaxation procedure and blending function is proposed which ensures algorithm stability while also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time of approximately 80%. The model is validated on a 218 angular contact bearing and shows excellent agreement with published results.
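Runtime-adaptive relaxation can be illustrated on a generic fixed-point problem; the oscillation-detection rule below is a simple stand-in for the paper's blending function, not its actual scheme.

```python
import math

def solve_fixed_point(g, x0, w=1.0, tol=1e-10, itmax=1000):
    """Fixed-point iteration x <- x + w*(g(x) - x) with runtime-adaptive
    relaxation: shrink w when successive updates oscillate in sign, grow
    it when they agree (an illustrative stand-in for the paper's blending
    function)."""
    x, prev_dx = x0, 0.0
    for it in range(itmax):
        dx = g(x) - x
        if dx * prev_dx < 0:          # oscillation detected -> damp
            w *= 0.5
        else:                         # smooth progress -> accelerate
            w = min(1.0, w * 1.1)
        x += w * dx
        if abs(dx) < tol:
            return x, it
        prev_dx = dx
    return x, itmax

root, iters = solve_fixed_point(math.cos, 1.0)   # Dottie number, ~0.739
assert abs(root - math.cos(root)) < 1e-8
```

Adapting the relaxation factor at runtime avoids hand-tuning a fixed under-relaxation constant, which is the source of the iteration savings the abstract reports.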
Estimating the absolute wealth of households.
Hruschka, Daniel J; Gerkey, Drew; Hadley, Craig
2015-07-01
To estimate the absolute wealth of households using data from demographic and health surveys. We developed a new metric, the absolute wealth estimate, based on the rank of each surveyed household according to its material assets and the assumed shape of the distribution of wealth among surveyed households. Using data from 156 demographic and health surveys in 66 countries, we calculated absolute wealth estimates for households. We validated the method by comparing the proportion of households defined as poor using our estimates with published World Bank poverty headcounts. We also compared the accuracy of absolute versus relative wealth estimates for the prediction of anthropometric measures. The median absolute wealth estimates of 1,403,186 households were 2056 international dollars per capita (interquartile range: 723-6103). The proportion of poor households based on absolute wealth estimates were strongly correlated with World Bank estimates of populations living on less than 2.00 United States dollars per capita per day (R(2) = 0.84). Absolute wealth estimates were better predictors of anthropometric measures than relative wealth indexes. Absolute wealth estimates provide new opportunities for comparative research to assess the effects of economic resources on health and human capital, as well as the long-term health consequences of economic change and inequality.
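Mapping a household's wealth-index rank onto an assumed national wealth distribution can be sketched as follows. The log-normal form, the Gini-to-sigma mapping, and the country inputs are illustrative assumptions; the paper's exact distributional choice may differ.

```python
from statistics import NormalDist
import math

def absolute_wealth(rank_pct, mean_pc, gini):
    """Map a household's wealth-index rank (0-1) to international dollars,
    assuming a log-normal national wealth distribution with the given
    per-capita mean and Gini coefficient (a sketch of the paper's idea)."""
    # For a log-normal distribution, Gini = 2*Phi(sigma/sqrt(2)) - 1
    sigma = math.sqrt(2) * NormalDist().inv_cdf((gini + 1) / 2)
    mu = math.log(mean_pc) - sigma**2 / 2   # ensures the mean equals mean_pc
    z = NormalDist().inv_cdf(rank_pct)      # rank percentile -> z-score
    return math.exp(mu + sigma * z)

# hypothetical country: mean wealth $3000 per capita, Gini 0.45
w_median = absolute_wealth(0.50, 3000, 0.45)
assert w_median < 3000   # the log-normal median lies below the mean
```

Because the mapping is monotone in rank, it preserves the ordering of the usual relative wealth index while attaching an absolute dollar scale to it.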
Burden of serious fungal infections in Bangladesh.
Gugnani, H C; Denning, D W; Rahim, R; Sadat, A; Belal, M; Mahbub, M S
2017-06-01
In Bangladesh there are several published papers on superficial mycoses. Deep mycoses are also recognized as an important emerging problem. Here, we estimate the annual incidence and prevalence of serious fungal infections in Bangladesh. Demographic data were obtained from world population reports, and the data on TB and HIV were extracted from the online publications on tuberculosis in Bangladesh and the Asia Pacific research statistical data information resources AIDS Data HUB. All published papers on fungal infections in Bangladesh were identified through an extensive literature search. We estimated the number of affected people from populations at risk and local epidemiological data. Bangladesh has a population of ∼162.6 million, 31% children and only 6% over the age of 60 years. The pulmonary TB caseload reported in 2014 was 119,520, and we estimate a prevalence of 30,178 people with chronic pulmonary aspergillosis, 80% attributable to TB. An estimated 90,262 patients have allergic bronchopulmonary aspergillosis and 119,146 have severe asthma with fungal sensitization. Only 8,000 people are estimated to be HIV-infected, of whom 2,900 are not on ART with a CD4 count <350 cells/μL; Pneumocystis pneumonia and cryptococcal meningitis are therefore rare. Superficial mycoses are very common, with Trichophyton rubrum as the predominant etiological agent (80.6%). Numerous cases of mycotic keratitis have been reported from several parts of Bangladesh. Candida bloodstream infection was estimated based on a rate of 5 per 100,000 (8,100 cases), and invasive aspergillosis, based primarily on leukemia and COPD rates, at 5,166 cases. Histoplasmosis was documented in 16 cases, mostly with disseminated disease, and presumed in 21 with HIV infection. This study constitutes the first attempt to estimate the burden of several types of serious fungal infections in Bangladesh.
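The rate-based burden arithmetic is a single multiplication; both numbers below are taken from the abstract.

```python
population = 162_600_000                 # Bangladesh population (abstract)
candidaemia = population * 5 / 100_000   # 5 cases per 100,000 population
# = 8,130 cases, reported in the abstract rounded to ~8,100
```

The same pattern (population at risk times a locally or internationally derived rate) underlies the other burden estimates in the study.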
van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew
2015-04-01
Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that large-scale treatment programs can be established. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.
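The dose-to-cost logic can be sketched as follows. The daily doses are the licensed adult doses; the per-gram API costs are hypothetical placeholders, and the sketch applies the 40% formulation margin on top of them, which may differ from how the study's per-gram figures were constructed.

```python
def course_cost(daily_dose_mg, per_gram_usd, weeks=12, margin=1.40):
    """Cost of one treatment course: grams of API times per-gram cost,
    with a formulation margin applied on top (an assumption)."""
    grams = daily_dose_mg / 1000 * weeks * 7
    return grams * per_gram_usd * margin

# hypothetical per-gram API costs, not the study's estimates
sof = course_cost(400, 2.0)   # sofosbuvir, 400 mg/day
dac = course_cost(60, 3.0)    # daclatasvir, 60 mg/day
total = sof + dac             # ~US$115 per 12-week course with these inputs
```

With these placeholder inputs the combination lands in the same low-hundreds-of-dollars range as the abstract's US$122 estimate for sofosbuvir+daclatasvir.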
Phylesystem: a git-based data store for community-curated phylogenetic estimates.
McTavish, Emily Jane; Hinchliff, Cody E; Allman, James F; Brown, Joseph W; Cranston, Karen A; Holder, Mark T; Rees, Jonathan A; Smith, Stephen A
2015-09-01
Phylogenetic estimates from published studies can be archived using general platforms like Dryad (Vision, 2010) or TreeBASE (Sanderson et al., 1994). Such services fulfill a crucial role in ensuring transparency and reproducibility in phylogenetic research. However, digital tree data files often require some editing (e.g. rerooting) to improve the accuracy and reusability of the phylogenetic statements. Furthermore, establishing the mapping between tip labels used in a tree and taxa in a single common taxonomy dramatically improves the ability of other researchers to reuse phylogenetic estimates. As the process of curating a published phylogenetic estimate is not error-free, retaining a full record of the provenance of edits to a tree is crucial for openness, allowing editors to receive credit for their work and making errors introduced during curation easier to correct. Here, we report the development of software infrastructure to support the open curation of phylogenetic data by the community of biologists. The backend of the system provides an interface for the standard database operations of creating, reading, updating and deleting records by making commits to a git repository. The record of the history of edits to a tree is preserved by git's version control features. Hosting this data store on GitHub (http://github.com/) provides open access to the data store using tools familiar to many developers. We have deployed a server running the 'phylesystem-api', which wraps the interactions with git and GitHub. The Open Tree of Life project has also developed and deployed a JavaScript application that uses the phylesystem-api and other web services to enable input and curation of published phylogenetic statements. Source code for the web service layer is available at https://github.com/OpenTreeOfLife/phylesystem-api. The data store can be cloned from: https://github.com/OpenTreeOfLife/phylesystem. 
A web application that uses the phylesystem web services is deployed at http://tree.opentreeoflife.org/curator. Code for that tool is available from https://github.com/OpenTreeOfLife/opentree. mtholder@gmail.com. © The Author 2015. Published by Oxford University Press.
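The create/update-as-commit pattern the record describes can be sketched directly against git. The study identifier, file layout, and JSON payloads below are invented for illustration; the real phylesystem stores NexSON documents and wraps these git operations behind the phylesystem-api web service.

```python
import subprocess, tempfile, pathlib

# Each "update" to a study file becomes a git commit, so the full
# provenance of curator edits is preserved, as in phylesystem.
repo = tempfile.mkdtemp()

def git(*args):
    return subprocess.run(["git", "-C", repo, *args], check=True,
                          capture_output=True, text=True).stdout

git("init", "-q")
git("config", "user.email", "curator@example.org")
git("config", "user.name", "curator")

def put(study_id, text, message):
    """Create or update a study record and commit the change."""
    path = pathlib.Path(repo, f"{study_id}.json")
    path.write_text(text)
    git("add", path.name)
    git("commit", "-q", "-m", message)

put("study_1", '{"tree": "(A,(B,C));"}', "initial import")
put("study_1", '{"tree": "((A,B),C);"}', "reroot tree")
history = git("log", "--oneline").strip().splitlines()
assert len(history) == 2   # both edits retained as provenance
```

Deleting a record is likewise a commit that removes the file, so every CRUD operation leaves an auditable trace in the history.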
Lin, Mei; Zhang, Xingyou; Holt, James B; Robison, Valerie; Li, Chien-Hsun; Griffin, Susan O
2018-06-01
Because conducting population-based oral health screening is resource intensive, oral health data at small-area levels (e.g., county level) are not commonly available. We applied the multilevel logistic regression and poststratification method to estimate county-level prevalence of untreated dental caries among children aged 6-9 years in the United States, using data from the National Health and Nutrition Examination Survey (NHANES) 2005-2010 linked with various area-level data at the census tract, county, and state levels. We validated model-based national estimates against direct estimates from NHANES. We also compared model-based estimates with direct estimates from select State Oral Health Surveys (SOHS) at state and county levels. The model with individual-level covariates only and the model with individual-, census tract-, and county-level covariates explained 7.2% and 96.3%, respectively, of overall county-level variation in untreated caries. Model-based county-level prevalence estimates ranged from 4.9% to 65.2%, with a median of 22.1%. The model-based national estimate (19.9%) matched the NHANES direct estimate (19.8%). We found significantly positive correlations between model-based estimates for 8-year-olds and direct estimates from the third-grade SOHS at the state level for 34 states (Pearson coefficient: 0.54, P=0.001) and SOHS estimates at the county level for 53 New York counties (Pearson coefficient: 0.38, P=0.006). This methodology could be a useful tool to characterize county-level disparities in untreated dental caries among children aged 6-9 years and complement oral health surveillance to inform public health programs, especially when local-level data are not available, although the lack of external validation due to data unavailability should be acknowledged. Published by Elsevier Inc.
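The poststratification step reduces to a population-weighted average of model-predicted cell prevalences. The cell prevalences and child counts below are hypothetical; in the study the cells come from the multilevel logistic regression and census data.

```python
# Multilevel-regression-and-poststratification sketch: model-based cell
# prevalences are reweighted by each cell's population count to give a
# county-level estimate.
cells = [            # (model-predicted prevalence, children in cell)
    (0.10, 4000),    # hypothetical demographic/area cells
    (0.25, 2500),
    (0.40, 1500),
]
county_prev = sum(p * n for p, n in cells) / sum(n for _, n in cells)
# (0.10*4000 + 0.25*2500 + 0.40*1500) / 8000 = 0.203125, i.e. ~20.3%
```

Weighting by actual cell counts rather than sample composition is what lets a national survey yield county-specific estimates.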
Cryer, Colin; Miller, Ted R; Lyons, Ronan A; Macpherson, Alison K; Pérez, Katherine; Petridou, Eleni Th; Dessypris, Nick; Davie, Gabrielle S; Gulliver, Pauline J; Lauritsen, Jens; Boufous, Soufiane; Lawrence, Bruce; de Graaf, Brandon; Steiner, Claudia A
2017-02-01
Governments wish to compare their performance in preventing serious injury. International comparisons based on hospital inpatient records are typically contaminated by variations in health services utilisation. To reduce these effects, a serious injury case definition has been proposed based on diagnoses with a high probability of inpatient admission (PrA). The aim of this paper was to identify diagnoses with estimated high PrA for selected developed countries. The study population was injured persons of all ages who attended emergency department (ED) for their injury in regions of Canada, Denmark, Greece, Spain and the USA. International Classification of Diseases (ICD)-9 or ICD-10 4-digit/character injury diagnosis-specific ED attendance and inpatient admission counts were provided, based on a common protocol. Diagnosis-specific and region-specific PrAs with 95% CIs were calculated. The results confirmed that femoral fractures have high PrA across all countries studied. Strong evidence for high PrA also exists for fracture of base of skull with cerebral laceration and contusion; intracranial haemorrhage; open fracture of radius, ulna, tibia and fibula; pneumohaemothorax and injury to the liver and spleen. Slightly weaker evidence exists for cerebellar or brain stem laceration; closed fracture of the tibia and fibula; open and closed fracture of the ankle; haemothorax and injury to the heart and lung. Using a large study size, we identified injury diagnoses with high estimated PrAs. These diagnoses can be used as the basis for more valid international comparisons of life-threatening injury, based on hospital discharge data, for countries with well-developed healthcare and data collection systems. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
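A diagnosis-specific probability of admission with a confidence interval can be computed from ED attendance and admission counts. The counts below are hypothetical, and the Wilson score interval is used here as a standard choice; the paper does not specify its CI method.

```python
import math

def pra_wilson(admitted, attendances, z=1.96):
    """Probability of admission (PrA) for one diagnosis, with a Wilson
    score 95% confidence interval."""
    p = admitted / attendances
    n = attendances
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return p, centre - half, centre + half

# hypothetical counts for femoral fracture in one region
p, lo, hi = pra_wilson(admitted=940, attendances=1000)
assert lo > 0.9   # a high-PrA diagnosis by any reasonable threshold
```

Diagnoses whose lower confidence bound stays high across all regions are the ones proposed for the serious-injury case definition.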
Morman, S.A.; Plumlee, G.S.; Smith, D.B.
2009-01-01
In vitro bioaccessibility tests (IVBA) are inexpensive, physiologically-based extraction tests designed to estimate the bioaccessibility of elements along ingestion exposure pathways. Published IVBA protocols call for the testing to be done on the Pb > Ni > As > Cr.
USDA-ARS?s Scientific Manuscript database
Biocontrol measures may enhance postharvest interventions, however; published research on process-based models for biocontrol of foodborne pathogens on produce is limited. The aim of this research was to develop cost model estimates for competitive exclusion process using Pseudomonas fluorescens and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-18
...] General Services Administration Acquisition Regulation; Submission for OMB Review; Zero Burden Information... regarding zero burden information collection reports. A notice was published in the Federal Register at 76... our estimate of the public burden of this collection of information is accurate and based on valid...
The cost of vision loss in Canada. 1. Methodology.
Gordon, Keith D; Cruess, Alan F; Bellan, Lorne; Mitchell, Scott; Pezzullo, M Lynne
2011-08-01
This paper outlines the methodology used to estimate the cost of vision loss in Canada; the results of this study are presented in a second paper. The cost of vision loss (VL) in Canada was estimated using a prevalence-based approach. This was done by estimating the number of people with VL in a base period (2007) and the costs associated with treating them. The cost estimates included direct health system expenditures on eye conditions that cause VL, as well as indirect financial costs such as productivity losses. Estimates were also made of the value of the loss of healthy life, measured in Disability-Adjusted Life Years (DALYs). To estimate the number of cases of VL in the population, epidemiological data on prevalence rates were applied to population data. The number of cases of VL was stratified by gender, age, ethnicity, severity, and cause. The following sources were used for estimating prevalence: population-based eye studies; Canadian surveys; Canadian journal articles and research studies; and international population-based eye studies. Direct health costs were obtained primarily from Health Canada and Canadian Institute for Health Information (CIHI) sources, while costs associated with productivity losses were based on employment information compiled by Statistics Canada and on the economic theory of productivity loss. Costs related to vision rehabilitation (VR) were obtained from Canadian VR organizations. This study shows that it is possible to estimate the costs of VL for a country in the absence of ongoing local epidemiological studies. Copyright © 2011 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
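A prevalence-based cost calculation of the kind described has three layers: case counts, financial costs, and the valued loss of healthy life. All inputs below are placeholders for illustration, not the study's results (which appear in the companion paper).

```python
# placeholder Canadian inputs, illustration only
population_2007 = 33_000_000
vl_prevalence = 0.025                     # hypothetical prevalence of vision loss
cases = vl_prevalence * population_2007   # 825,000 prevalent cases

direct_per_case = 4_000                   # health-system cost per case (CAD/yr)
productivity_per_case = 6_000             # indirect productivity loss per case
financial_cost = cases * (direct_per_case + productivity_per_case)

dalys_per_case, value_per_daly = 0.2, 50_000
burden_of_disease = cases * dalys_per_case * value_per_daly
```

Keeping the financial cost and the DALY-based burden as separate totals, as the study does, avoids double-counting welfare losses against cash expenditures.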
A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.
Yang, Wenlun; Fu, Minyue
2017-11-01
Clock synchronization is of vital importance in applications of wireless sensor networks (WSNs). This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization for WSNs. Because each local clock skew gradually drifts, synchronization accuracy declines over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a truly synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
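A generic proportional-integral clock servo shows why the integral term matters for skew: it is a sketch of the idea, not the paper's EBP, and the gains and skew value below are illustrative.

```python
# A PI servo disciplining a local clock against a reference: the
# proportional term removes the measured offset, the integral term learns
# the unknown rate (skew) error, so re-synchronization is needed less often.
kp, ki = 0.7, 0.3
rate_corr, integ = 0.0, 0.0
local, ref = 0.0, 0.0
true_skew = 1.0003                   # local oscillator runs 300 ppm fast
for _ in range(500):
    ref += 1.0                       # one reference-time tick
    local += true_skew + rate_corr   # corrected local tick
    offset = ref - local             # synchronization error
    integ += ki * offset
    rate_corr = kp * offset + integ  # PI law; integ -> -0.0003 at steady state
assert abs(ref - local) < 1e-6
```

With only a proportional term the offset would settle at a nonzero bias proportional to the skew; the integral state absorbs that bias.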
Do subitizing deficits in developmental dyscalculia involve pattern recognition weakness?
Ashkenazi, Sarit; Mark-Zigdon, Nitza; Henik, Avishai
2013-01-01
The abilities of children diagnosed with developmental dyscalculia (DD) were examined in two types of object enumeration: subitizing, and small estimation (5-9 dots). Subitizing is usually defined as a fast and accurate assessment of a number of small dots (range 1 to 4 dots), and estimation is an imprecise process to assess a large number of items (range 5 dots or more). Based on reaction time (RT) and accuracy analysis, our results indicated a deficit in the subitizing and small estimation range among DD participants in relation to controls. There are indications that subitizing is based on pattern recognition, thus presenting dots in a canonical shape in the estimation range should result in a subitizing-like pattern. In line with this theory, our control group presented a subitizing-like pattern in the small estimation range for canonically arranged dots, whereas the DD participants presented a deficit in the estimation of canonically arranged dots. The present finding indicates that pattern recognition difficulties may play a significant role in both subitizing and subitizing deficits among those with DD. © 2012 Blackwell Publishing Ltd.
Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf
2016-06-30
In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
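Single-step exponential averaging and its second-order cumulant expansion can be compared on synthetic Gaussian energy differences, for which the two are analytically identical; the energy distribution below is hypothetical, not the paper's PM6-DH2X data.

```python
import random, math

random.seed(1)
kB = 0.0019872                    # Boltzmann constant, kcal/(mol*K)
beta = 1.0 / (kB * 300)           # 1/kT at 300 K

# hypothetical Gaussian MM->QM energy differences (kcal/mol)
dE = [random.gauss(2.0, 0.8) for _ in range(100_000)]

# single-step exponential averaging (Zwanzig formula)
dG_exp = -1 / beta * math.log(sum(math.exp(-beta * e) for e in dE) / len(dE))

# second-order cumulant expansion: <dE> - beta*var(dE)/2
mean = sum(dE) / len(dE)
var = sum((e - mean) ** 2 for e in dE) / len(dE)
dG_cum = mean - beta * var / 2
assert abs(dG_exp - dG_cum) < 0.05   # coincide for Gaussian dE
```

For non-Gaussian energy distributions the two estimators diverge, and the exponential average converges slowly because it is dominated by rare low-energy samples, which is why the cumulant form converged better in the study.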
Acuff, Shelley N.; Neveu, Melissa L.; Syed, Mumtaz; Kaman, Austin D.; Fu, Yitong
2018-01-01
Purpose The use of PET/computed tomography (CT) to monitor hepatocellular carcinoma patients following yttrium-90 (90Y) radioembolization has increased. Respiratory motion causes liver movement, which can be corrected using gating techniques at the expense of added noise. This work examines the use of amplitude-based gating in 90Y-PET/CT and its potential impact on diagnostic integrity. Patients and methods Patients were imaged using PET/CT following 90Y radioembolization. A respiratory band was used to collect respiratory cycle data. Patient data were processed as both standard and motion-corrected images. Regions of interest were drawn and compared using three methods. Activity concentrations were calculated and converted into dose estimates using previously determined and published scaling factors. Diagnostic assessments were performed using a binary scale created from published 90Y-PET/CT image interpretation guidelines. Results Estimates of radiation dose were increased (P<0.05) when using amplitude-gating methods with 90Y-PET/CT imaging. Motion-corrected images show increased noise, but the diagnostic determination of success, using the Kao criteria, did not change between static and motion-corrected data. Conclusion Amplitude-gated PET/CT following 90Y radioembolization is feasible and may improve 90Y dose estimates while maintaining diagnostic assessment integrity. PMID:29351124
Galileo FOC Satellite Group Delay Estimation based on Raw Method and published IOV Metadata
NASA Astrophysics Data System (ADS)
Reckeweg, Florian; Schönemann, Erik; Springer, Tim; Enderle, Werner
2017-04-01
In December 2016, the European GNSS Agency (GSA) published the Galileo In-Orbit Validation (IOV) satellite metadata. These metadata include, among others, the so-called Galileo satellite group delays, which were measured in an absolute sense by the satellite manufacturer on ground for all three Galileo frequency bands E1, E5 and E6. Galileo is thus the first Global Navigation Satellite System (GNSS) for which absolute calibration values for satellite on-board group delays have been published. Because the group delays differ across the three frequency bands, the signals are not transmitted at exactly the same epoch. Up to now, due to the lack of absolute group delays, it has been common practice in GNSS analyses to estimate and apply the differences of these satellite group delays, commonly known as differential code biases (DCBs). However, this has the drawback that the determination of the "raw" clock and the absolute ionosphere is not possible. The use of absolute bias calibrations for satellites and receivers is a major step towards more realistic (in a physical sense) clock and atmosphere estimates. The Navigation Support Office at the European Space Operations Centre (ESOC) has been involved in the validation of the Galileo metadata from the beginning. For the work presented here, we use the absolute bias calibrations of the Galileo IOV satellites to estimate and validate the absolute receiver group delays of the ESOC GNSS network, and vice versa. The receiver group delays were calibrated in a dedicated campaign with an IFEN GNSS signal simulator at ESOC. Based on the calibrated network, and making use of the ionosphere constraints given by the IOV satellites, GNSS raw observations are processed to estimate satellite group delays for the operational Galileo Full Operational Capability (FOC) satellites.
In addition, "raw" satellite clock offsets are estimated, which are free of the ionosphere-free-combination bias inherent to all common satellite clock products generated with the standard ionosphere-free linear combination processing approach. In the raw observation processing method developed by the Navigation Support Office at ESOC, no differences or linear combinations of GNSS observations are formed, and ionosphere parameters and multi-signal group delay parameters can be jointly estimated by making use of all available code and phase observations on multiple frequencies.
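The relationship between absolute group delays and the traditionally estimated quantities can be sketched in a few lines. This is a hypothetical illustration (the group delay values are invented); it assumes E1 and E5 (AltBOC) centre frequencies of 1575.42 and 1191.795 MHz and shows both the DCB and the bias absorbed by the standard ionosphere-free combination.

```python
def dcb(gd_a, gd_b):
    """Differential code bias: the difference of two absolute group delays (ns)."""
    return gd_a - gd_b

def iono_free_bias(gd_1, gd_5, f1=1575.42e6, f5=1191.795e6):
    """Bias absorbed by the ionosphere-free (E1/E5) combination:
    B_IF = (f1^2*gd_1 - f5^2*gd_5) / (f1^2 - f5^2), in the same units
    as the input group delays."""
    return (f1**2 * gd_1 - f5**2 * gd_5) / (f1**2 - f5**2)

# Hypothetical absolute group delays (ns) for one satellite
gd_e1, gd_e5 = 11.3, 9.8
print(dcb(gd_e1, gd_e5), iono_free_bias(gd_e1, gd_e5))
```

With only the DCB available, the absolute level of the group delays (and hence the "raw" clock) remains undetermined; the absolute calibrations fix exactly this level.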
The cost-effectiveness of rosacea treatments.
Thomas, Kristen; Yelverton, Christopher B; Yentzer, Brad A; Balkrishnan, Rajesh; Fleischer, Alan B; Feldman, Steven R
2009-01-01
Topical and oral antibiotic/anti-inflammatory agents are mainstays of therapy for rosacea. However, the costs and efficacies of these therapies vary widely. Our objective was to determine the relative cost-effectiveness of common therapeutic regimens using published data. Average daily costs (ADC) were determined based on treatment frequency and estimated gram usage for facial application of topical regimens of metronidazole (0.75%, 1%), azelaic acid (15%, 20%), sodium sulfacetamide and sulfur 10%/5%, and oral regimens of tetracycline, doxycycline, and isotretinoin. The ADC was compared with published efficacy rates from clinical trials, with efforts to standardize outcome measures. Based on these efficacy rates, costs per success were calculated and combined with office visit costs to estimate the total cost for each treatment over a 15-week period. The medication cost per treatment success of topical regimens ranged from $60.90 ($205.40 total, including office visits) for metronidazole 1% gel once daily, to $152.25 ($296.75 total) for azelaic acid 20% cream twice daily. Tetracycline 250 mg/day was the least costly oral agent at $6.30 per treatment success, or $150.80 total. Based on our best assessments of retrospective data from the literature, metronidazole 1% gel, once daily, was considerably less costly than several other branded and generic alternatives.
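The cost-per-success arithmetic described above can be sketched directly. The daily cost, success rate, and office-visit cost below are illustrative placeholders chosen so that the output reproduces the paper's reported $60.90/$205.40 figures for metronidazole 1% gel; they are not the actual study inputs.

```python
def cost_per_success(avg_daily_cost, days, success_rate):
    """Medication cost over the course, divided by the fraction of
    patients achieving treatment success."""
    return avg_daily_cost * days / success_rate

# Hypothetical inputs: $0.29/day over a 15-week (105-day) course,
# 50% success rate, $144.50 of office-visit costs.
med = cost_per_success(0.29, 105, 0.50)   # medication cost per success
total = med + 144.50                      # total cost including office visits
print(round(med, 2), round(total, 2))
```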
Dosimetric variations due to interfraction organ deformation in cervical cancer brachytherapy.
Kobayashi, Kazuma; Murakami, Naoya; Wakita, Akihisa; Nakamura, Satoshi; Okamoto, Hiroyuki; Umezawa, Rei; Takahashi, Kana; Inaba, Koji; Igaki, Hiroshi; Ito, Yoshinori; Shigematsu, Naoyuki; Itami, Jun
2015-12-01
We quantitatively estimated dosimetric variations due to interfraction organ deformation in multi-fractionated high-dose-rate brachytherapy (HDRBT) for cervical cancer using a novel surface-based non-rigid deformable registration. As the number of consecutive HDRBT fractions increased, simple addition of dose-volume histogram parameters significantly overestimated the dose, compared with distribution-based dose addition. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Sammour, T; Lewis, M; Thomas, M L; Lawrence, M J; Hunter, A; Moore, J W
2017-01-01
Anastomotic leak can be a devastating complication, and early prediction is difficult. The aim of this study is to prospectively validate a simple anastomotic leak risk calculator and compare its predictive value with the estimate of the primary operating surgeon. Consecutive patients undergoing elective or emergency colon cancer surgery with a primary anastomosis over a 1-year period were prospectively included. A recently published anastomotic leak risk nomogram was converted to an online calculator ( www.anastomoticleak.com ). The calculator-derived risk of anastomotic leak and the risk estimated by the primary operating surgeon were recorded at the completion of surgery. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve analysis (AUROC) was performed for both risk estimates. A total of 105 patients were screened for inclusion during the study period, of whom 83 met the inclusion criteria. The overall anastomotic leak rate was 9.6%. The anastomotic leak calculator was highly predictive of anastomotic leak (AUROC 0.84, P = 0.002), whereas the surgeon estimate was not predictive (AUROC 0.40, P = 0.243). A simple anastomotic leak risk calculator is significantly better at predicting anastomotic leak than the estimate of the primary surgeon. Further external validation on a larger data set is required.
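The AUROC used to compare the calculator against the surgeon's estimate can be computed without any statistics library via the Mann-Whitney rank formulation. The risk scores and outcomes below are invented for illustration, not the study's data.

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case (leak) receives
    a higher risk score than a randomly chosen negative case, counting
    ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical calculator-derived risks (%) and 90-day leak outcomes (1 = leak)
risk = [22, 18, 15, 12, 9, 8, 6, 5, 4, 3]
leak = [1,  1,  0,  1,  0, 0, 0, 0, 0, 0]
print(auroc(risk, leak))  # -> 0.952 (20/21)
```

An AUROC of 0.5 corresponds to chance-level discrimination, which is why the surgeon estimate's 0.40 in the study indicates no predictive value.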
Prevalence of autosomal dominant polycystic kidney disease in the European Union.
Willey, Cynthia J; Blais, Jaime D; Hall, Anthony K; Krasa, Holly B; Makin, Andrew J; Czerwiec, Frank S
2017-08-01
Autosomal dominant polycystic kidney disease (ADPKD) is a leading cause of end-stage renal disease, but estimates of its prevalence vary by >10-fold. The objective of this study was to examine the public health impact of ADPKD in the European Union (EU) by estimating minimum prevalence (point prevalence of known cases) and screening prevalence (minimum prevalence plus cases expected after population-based screening). A review of the epidemiology literature from January 1980 to February 2015 identified population-based studies that met criteria for methodological quality. These examined large German and British populations, providing direct estimates of minimum prevalence and screening prevalence. In a second approach, patients from the 2012 European Renal Association‒European Dialysis and Transplant Association (ERA-EDTA) Registry and literature-based inflation factors that adjust for disease severity and screening yield were used to estimate prevalence across 19 EU countries (N = 407 million). Population-based studies yielded minimum prevalences of 2.41 and 3.89/10 000, respectively, and corresponding estimates of screening prevalences of 3.3 and 4.6/10 000. A close correspondence existed between estimates in countries where both direct and registry-derived methods were compared, which supports the validity of the registry-based approach. Using the registry-derived method, the minimum prevalence was 3.29/10 000 (95% confidence interval 3.27-3.30), and if ADPKD screening was implemented in all countries, the expected prevalence was 3.96/10 000 (3.94-3.98). ERA-EDTA-based prevalence estimates and application of a uniform definition of prevalence to population-based studies consistently indicate that the ADPKD point prevalence is <5/10 000, the threshold for rare disease in the EU. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA.
Tankeu, Aurel T; Bigna, Jean Joel R; Nansseu, Jobert Richie N; Aminde, Leopold Ndemnge; Danwang, Celestin; Temgoua, Mazou N; Noubiap, Jean Jacques N
2017-02-14
Congenital heart diseases (CHD) are common causes of cardiovascular morbidity and mortality among young children and adolescents living in Africa. Accurate epidemiological data are needed in order to evaluate and improve preventive strategies. This review aims to determine the prevalence of CHD and their main patterns in Africa. This systematic review and meta-analysis will include cross-sectional, case-control and cohort studies of populations residing inside African countries, which have reported the prevalence of CHD, confirmed by an echocardiographic examination and/or describing different patterns of these abnormalities in Africa. Relevant abstracts published without language restriction from 1 January 1986 to 31 December 2016 will be searched in PubMed, the Excerpta Medica Database and online African journals, as well as references of included articles and relevant reviews. Two review authors will independently screen, select studies, extract data and assess the risk of bias in each study. The study-specific estimates will be pooled through a random-effects meta-analysis model to obtain an overall summary estimate of the prevalence of CHD across studies. Clinical and statistical heterogeneity will be assessed, and we will pool studies judged to be clinically homogeneous. Statistical heterogeneity will be evaluated by the χ2 test on Cochran's Q statistic. Funnel-plot analysis and Egger's test will be used to detect publication bias. Results will be presented by geographic region (central, eastern, northern, southern and western Africa). The current study will be based on published data, and thus ethical approval is not required. This systematic review and meta-analysis is expected to serve as a basis for estimating and evaluating the burden of these abnormalities on the African continent. The final report of this study will be published in a peer-reviewed journal. PROSPERO CRD42016052880.
Published by the BMJ Publishing Group Limited.
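A random-effects pooling step of the kind planned here (DerSimonian-Laird, with Cochran's Q for heterogeneity) can be sketched as follows; the study-specific prevalences and variances are hypothetical placeholders.

```python
def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate (DerSimonian-Laird).
    Returns (pooled, tau2, Q), where Q is Cochran's heterogeneity
    statistic and tau2 the between-study variance."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / c)
    # Re-weight each study by the sum of within- and between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    return pooled, tau2, Q

# Hypothetical study-specific CHD prevalences per 1,000 and their variances
prev = [2.3, 8.2, 5.5, 3.1, 6.4]
var = [0.2, 0.9, 0.4, 0.3, 0.5]
pooled, tau2, Q = dersimonian_laird(prev, var)
print(round(pooled, 2), round(tau2, 2), round(Q, 1))
```

When tau2 is large, the random-effects weights become nearly equal across studies, pulling the pooled value toward the unweighted mean; with tau2 = 0 the estimator reduces to fixed-effect inverse-variance pooling.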
Beaulieu, Jeremy M; O'Meara, Brian C; Crane, Peter; Donoghue, Michael J
2015-09-01
Dating analyses based on molecular data imply that crown angiosperms existed in the Triassic, long before their undisputed appearance in the fossil record in the Early Cretaceous. Following a re-analysis of the age of angiosperms using updated sequences and fossil calibrations, we use a series of simulations to explore the possibility that the older age estimates are a consequence of (i) major shifts in the rate of sequence evolution near the base of the angiosperms and/or (ii) the representative taxon sampling strategy employed in such studies. We show that both of these factors do tend to yield substantially older age estimates. These analyses do not prove that younger age estimates based on the fossil record are correct, but they do suggest caution in accepting the older age estimates obtained using current relaxed-clock methods. Although we have focused here on the angiosperms, we suspect that these results will shed light on dating discrepancies in other major clades. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved.
Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš
2017-05-01
The purpose of the paper is to achieve a constrained estimation of process state variables using the anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on a state-space time delay model of the process. Moreover, the process model is identified not only as delayed but also as non-linear. This model is developed to describe a material flow process. The root locus technique combined with the magnitude optimum method is utilized to investigate the estimation process. The resulting dominant root locations serve as a measure of estimation performance: the higher the dominant (natural) frequency for roots in the leftmost position of the complex plane, the better the performance and robustness achieved. A model-based observer control methodology for material flow processes is also provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to strip temperature estimation in a hot strip finishing mill composed of seven stands. This application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan
2014-08-01
Wind chill equivalent temperatures (WCETs) were estimated by a modified version of Fiala's whole-body thermoregulation model of a clothed person. Facial convective heat exchange coefficients applied in the computations, concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by a fitted equation [formula not reproduced in this abstract]. Results indicate consistently lower estimated facial skin temperatures, and consequently higher WCETs, than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in estimating the probabilities of frostbite risk. Predicted weather combinations for probabilities of "practically no risk of frostbite for most people," for less than 5% risk at wind speeds above 40 km h⁻¹, were shown to occur at air temperatures above -10 °C, compared with the currently published air temperature of -15 °C. At air temperatures below -35 °C, the presently calculated weather combination of 40 km h⁻¹/-35 °C, at which the transition occurs for risk of incurring frostbite in less than 2 min, is less conservative than the published one: 60 km h⁻¹/-40 °C. The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures and frostbite risk probabilities over those currently practiced.
On experimental damage localization by SP2E: Application of H∞ estimation and oblique projections
NASA Astrophysics Data System (ADS)
Lenzen, Armin; Vollmering, Max
2018-05-01
In this article, experimental damage localization based on H∞ estimation and state projection estimation error (SP2E) is studied. Based on an introduced difference process, a state space representation is derived for advantageous numerical solvability. Because real structural excitations are presumed to be unknown, a general input is applied, which allows synchronization and normalization. Furthermore, state projections are introduced to enhance damage identification. While first experiments to verify the SP2E method have already been conducted and published, further laboratory results are analyzed here. SP2E is used to experimentally localize stiffness degradations and mass alterations, and the influence of projection techniques is analyzed. In summary, the SP2E method is able to localize structural alterations, as observed in the results of laboratory experiments.
Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst
2016-05-01
Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
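As a much-reduced illustration of the state-estimation task underlying wind speed reconstruction, the sketch below runs a scalar Kalman filter that treats the effective wind speed as a random-walk state observed through noise. It is a toy stand-in for the reduced-order observers benchmarked in the paper, with arbitrary noise parameters.

```python
import random

def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: random-walk state (process noise q)
    observed through a noisy channel (measurement noise r)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict (random-walk model)
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_wind = 8.0  # m/s, held constant for this toy example
zs = [true_wind + random.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
print(est[-1])
```

In an actual turbine, the "measurement" would itself come from rotor dynamics rather than a direct sensor, which is precisely what motivates the observer designs compared in the paper.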
Kim, Eunjoo; Tani, Kotaro; Kunishima, Naoaki; Kurihara, Osamu; Sakai, Kazuo; Akashi, Makoto
2016-11-01
Estimating the early internal doses to residents in the Fukushima Daiichi Nuclear Power Station accident is a difficult task because limited human/environmental measurement data are available. Hence, the feasibility of using atmospheric dispersion simulations created by the Worldwide version of the System for Prediction of Environmental Emergency Dose Information, 2nd Version (WSPEEDI-II) in the estimation was examined in the present study. This examination was done by comparing the internal doses evaluated based on the human measurements with those calculated using time-series air concentration maps (¹³¹I and ¹³⁷Cs) generated by WSPEEDI-II. The results showed that the latter doses were several times higher than the former doses. However, this discrepancy could be minimised by taking into account personal behaviour data that will be available soon. This article also presents the development of a prototype system for estimating the internal dose based on the simulations. © The Author 2015. Published by Oxford University Press. All rights reserved.
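Reconstructing internal dose from simulated air concentration maps amounts to a time integration of intake. A minimal sketch, assuming hypothetical hourly ¹³¹I concentrations, a unit breathing rate, a placeholder dose coefficient, and a crude indoor-reduction factor to stand in for the personal behaviour data mentioned above:

```python
def inhalation_dose(conc_series, dt_hours, breathing_rate, dose_coeff, occupancy):
    """Time-integrated inhalation dose: sum over steps of
    C(t) * B * dt * e, reduced for time spent indoors."""
    dose = 0.0
    for conc, frac_outdoors in zip(conc_series, occupancy):
        intake = conc * breathing_rate * dt_hours  # Bq inhaled this step
        # crude indoor-reduction factor (placeholder value 0.4)
        effective = frac_outdoors + 0.4 * (1.0 - frac_outdoors)
        dose += intake * effective * dose_coeff
    return dose

# Hypothetical hourly 131-I air concentrations (Bq/m3) from a dispersion map,
# a 1 m3/h breathing rate, and a placeholder dose coefficient (Sv/Bq).
conc = [0, 50, 200, 120, 30, 5]
occupancy = [1.0, 0.5, 0.0, 0.0, 0.5, 1.0]  # fraction of each hour outdoors
print(inhalation_dose(conc, 1.0, 1.0, 1.5e-8, occupancy))
```

The occupancy vector is exactly where the personal behaviour data would enter; setting it to all ones reproduces the pure outdoor-exposure upper bound, which is one plausible source of the overestimate described above.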
Amazon plant diversity revealed by a taxonomically verified species list.
Cardoso, Domingos; Särkinen, Tiina; Alexander, Sara; Amorim, André M; Bittrich, Volker; Celis, Marcela; Daly, Douglas C; Fiaschi, Pedro; Funk, Vicki A; Giacomin, Leandro L; Goldenberg, Renato; Heiden, Gustavo; Iganci, João; Kelloff, Carol L; Knapp, Sandra; Cavalcante de Lima, Haroldo; Machado, Anderson F P; Dos Santos, Rubens Manoel; Mello-Silva, Renato; Michelangeli, Fabián A; Mitchell, John; Moonlight, Peter; de Moraes, Pedro Luís Rodrigues; Mori, Scott A; Nunes, Teonildes Sacramento; Pennington, Terry D; Pirani, José Rubens; Prance, Ghillean T; de Queiroz, Luciano Paganucci; Rapini, Alessandro; Riina, Ricarda; Rincon, Carlos Alberto Vargas; Roque, Nádia; Shimizu, Gustavo; Sobral, Marcos; Stehmann, João Renato; Stevens, Warren D; Taylor, Charlotte M; Trovó, Marcelo; van den Berg, Cássio; van der Werff, Henk; Viana, Pedro Lage; Zartman, Charles E; Forzza, Rafaela Campostrini
2017-10-03
Recent debates on the number of plant species in the vast lowland rain forests of the Amazon have been based largely on model estimates, neglecting published checklists based on verified voucher data. Here we collate taxonomically verified checklists to present a list of seed plant species from lowland Amazon rain forests. Our list comprises 14,003 species, of which 6,727 are trees. These figures are similar to estimates derived from nonparametric ecological models, but they contrast strongly with predictions of much higher tree diversity derived from parametric models. Based on the known proportion of tree species in neotropical lowland rain forest communities as measured in complete plot censuses, and on overall estimates of seed plant diversity in Brazil and in the neotropics in general, it is more likely that tree diversity in the Amazon is closer to the lower estimates derived from nonparametric models. Much remains unknown about Amazonian plant diversity, but this taxonomically verified dataset provides a valid starting point for macroecological and evolutionary studies aimed at understanding the origin, evolution, and ecology of the exceptional biodiversity of Amazonian forests.
Sun, Zhijian; Zhang, Guoqing; Lu, Yu; Zhang, Weidong
2018-01-01
This paper studies the leader-follower formation control of underactuated surface vehicles with model uncertainties and environmental disturbances. A sliding mode control scheme based on parameter estimation and upper bound estimation is proposed to handle the unknown plant parameters and environmental disturbances. For each leader-follower formation system, the dynamic equations of position and attitude are analyzed using coordinate transformation with the aid of the backstepping technique. All variables are guaranteed to be uniformly ultimately bounded in the closed-loop system, which is proven via Lyapunov function synthesis. The main advantages of this approach are: first, parameter estimation based sliding mode control enhances the robustness of the closed-loop system in the presence of model uncertainties and environmental disturbances; second, a continuous function is developed to replace the signum function in the sliding mode design, which serves to reduce the chattering of the control system. Finally, numerical simulations are given to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
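The chattering-reduction idea of replacing the signum function by a continuous approximation is standard in sliding mode control and can be sketched directly. The abstract does not specify which continuous function the authors use; a boundary-layer saturation is shown here, and tanh(s/phi) is a common alternative.

```python
def sat(s, phi):
    """Continuous boundary-layer approximation of sign(s): linear inside
    a band of half-width phi around the sliding surface, saturated at
    +/-1 outside it."""
    return max(-1.0, min(1.0, s / phi))

def smc_switching_term(s, k, phi=0.05):
    """Sliding-mode reaching term -k*sign(s), with sign replaced by sat
    to avoid high-frequency switching (chattering) near s = 0."""
    return -k * sat(s, phi)

# Far from the surface the two laws coincide; near it, sat() is smooth.
print(smc_switching_term(1.0, 2.0), smc_switching_term(0.01, 2.0))
```

The price of the boundary layer is that the state converges to a neighbourhood of the surface rather than to the surface itself, which is consistent with the uniform ultimate boundedness claimed in the abstract.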
DOE Office of Scientific and Technical Information (OSTI.GOV)
2008-01-15
The Verde Analytic Modules permit the user to ingest openly available data feeds about phenomenology (storm tracks, wind, precipitation, earthquakes, wildfires, and similar natural and man-made power grid disruptions) and to forecast power outages, restoration times, customers outaged, and key facilities that will lose power. Damage areas are predicted using historic damage criteria of the affected area. The modules use a cellular automata approach to estimating the distribution circuits assigned to geo-located substations. Population estimates served within the service areas are located within 1 km grid cells and converted to customer counts through demographic estimation of the households and commercial firms within the population cells. Restoration times are estimated by agent-based simulation of restoration crews working according to utility-published prioritization calibrated by historic performance.
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancies. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²), not 12N/(π ∑ R²); of PCQM2 it is 4(8N - 1)/(π ∑ R²), not 28N/(π ∑ R²); and of PCQM3 it is 4(12N - 1)/(π ∑ R²), not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages, including the repulsion process.
Since in practice the spatial pattern of a plant association remains unknown before starting a vegetation survey, for field applications the use of PCQM3 along with the corrected estimator is recommended. For sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 may be applied instead. During application of PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' and not 'the summation of inverse squared distances' as erroneously published.
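The corrected estimators quoted above translate directly into code. The sketch below implements 4(4gN - 1)/(π ∑R²) for PCQM of order g, summing the squared distances before inverting, as the authors stress; the survey data are synthetic.

```python
import math

def pcqm_density(distances_by_point, g=1):
    """Corrected PCQM-g density estimator from the abstract:
    density = 4*(4*g*N - 1) / (pi * sum(R^2)),
    where N is the number of sample points and R is the distance to the
    g-th nearest plant in each of the four quadrants. Note that the
    squared distances are summed first and then inverted ('inverse
    summation of squared distances'), not the other way around."""
    N = len(distances_by_point)
    sum_r2 = sum(r * r for point in distances_by_point for r in point)
    return 4.0 * (4 * g * N - 1) / (math.pi * sum_r2)

# Hypothetical survey: 50 sample points, distance (m) to the nearest
# plant in each of the 4 quadrants; here all distances are set to 2.0 m.
pts = [[2.0, 2.0, 2.0, 2.0] for _ in range(50)]
print(pcqm_density(pts, g=1))  # plants per square metre
```

For g = 1, 2, 3 the numerator becomes 4(4N - 1), 4(8N - 1) and 4(12N - 1), matching the three corrected forms given in the abstract.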
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
Takada, Kenta; Sato, Tatsuhiko; Kumada, Hiroaki; Koketsu, Junichi; Takei, Hideyuki; Sakurai, Hideyuki; Sakae, Takeji
2018-01-01
The microdosimetric kinetic model (MKM) is widely used for estimating relative biological effectiveness (RBE)-weighted doses for various radiotherapies because it can determine the surviving fraction of irradiated cells based on only the lineal energy distribution, and it is independent of the radiation type and ion species. However, the applicability of the method to proton therapy has not yet been investigated thoroughly. In this study, we validated the RBE-weighted dose calculated by the MKM in tandem with the Monte Carlo code PHITS for proton therapy by considering the complete simulation geometry of the clinical proton beam line. The physical dose, lineal energy distribution, and RBE-weighted dose for a 155 MeV mono-energetic and spread-out Bragg peak (SOBP) beam of 60 mm width were evaluated. In estimating the physical dose, the calculated depth dose distribution by irradiating the mono-energetic beam using PHITS was consistent with the data measured by a diode detector. A maximum difference of 3.1% in the depth distribution was observed for the SOBP beam. In the RBE-weighted dose validation, the calculated lineal energy distributions generally agreed well with the published measurement data. The calculated and measured RBE-weighted doses were in excellent agreement, except at the Bragg peak region of the mono-energetic beam, where the calculation overestimated the measured data by ~15%. This research has provided a computational microdosimetric approach based on a combination of PHITS and MKM for typical clinical proton beams. The developed RBE-estimator function has potential application in the treatment planning system for various radiotherapies. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Annual estimates of water and solute export from 42 tributaries to the Yukon River
Frederick Zanden; Suzanne P. Anderson; Striegl, Robert G.
2012-01-01
We find that annual export of 11 major and trace solutes from the Yukon River can be accurately determined by summing 42 tributary contributions. These findings provide the first published estimates of the tributary-specific distribution of solutes within the Yukon River basin. First, we show that annual discharge of the Yukon River can be computed by summing calculated annual discharges from 42 tributaries. Annual discharge for the tributaries is calculated from the basin area and average annual precipitation over that area using a previously published regional regression equation. Based on tributary inputs, we estimate an average annual discharge for the Yukon River of 210 km³ yr⁻¹. This value is within 1% of the average measured annual discharge at the U.S. Geological Survey gaging station near the river terminus at Pilot Station, AK, for water years 2001 through 2005. Next, annual loads for 11 solutes are determined by combining annual discharge with point measurements of solute concentrations in tributary river water. Based on the sum of solutes in tributary water, we find that the Yukon River discharges approximately 33 million metric tons of dissolved solids each year at Pilot Station. Discharged solutes are dominated by the cations calcium and magnesium (5.65 × 10⁹ and 1.42 × 10⁹ kg yr⁻¹) and the anions bicarbonate and sulphate (17.3 × 10⁹ and 5.40 × 10⁹ kg yr⁻¹). These loads compare well with loads calculated independently at the three continuous gaging stations along the Yukon River. These findings show how annual solute yields vary throughout a major subarctic river basin and that accurate estimates of total river export can be determined from calculated tributary contributions.
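The bookkeeping behind the summation approach is simple: each tributary's annual load is its calculated discharge times a measured concentration, and basin-wide export is the sum over tributaries. A sketch with illustrative numbers (not the study's data), chosen only to land near the paper's basin-wide totals:

```python
# Each tributary: annual discharge (km^3/yr, e.g. from an area-precipitation
# regression) and a point measurement of total dissolved solids (mg/L).
# The three tributaries and their values below are hypothetical.
KM3_TO_LITRES = 1e12

tributaries = [
    ("trib_a", 60.0, 180.0),   # name, discharge km^3/yr, TDS mg/L
    ("trib_b", 90.0, 150.0),
    ("trib_c", 60.0, 160.0),
]

total_discharge_km3 = sum(q for _, q, _ in tributaries)
total_load_g = sum(q * KM3_TO_LITRES * c / 1000.0   # mg -> g
                   for _, q, c in tributaries)
total_load_mt = total_load_g / 1e12                 # g -> million metric tons

print(total_discharge_km3, round(total_load_mt, 2))  # 210.0 33.9
```

The unit conversions carry the weight here: one cubic kilometre is 10¹² litres, and one million metric tons is 10¹² grams.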
Accelerometer-based measures in physical activity surveillance: current practices and issues.
Pedišić, Željko; Bauman, Adrian
2015-02-01
Self-reports of physical activity (PA) have been the mainstay of measurement in most non-communicable disease (NCD) surveillance systems. Other measures are added to these to build a comprehensive PA surveillance system. Recently, some national NCD surveillance systems have started using accelerometers as a measure of PA. The purpose of this paper was specifically to appraise the suitability and role of accelerometers for population-level PA surveillance. A thorough literature search was conducted to examine aspects of the generalisability, reliability, validity, comprehensiveness and between-study comparability of accelerometer estimates, and to gauge the simplicity, cost-effectiveness, adaptability and sustainability of their use in NCD surveillance. Accelerometer data collected in PA surveillance systems may not provide estimates that are generalisable to the target population. Accelerometer-based estimates have adequate reliability for PA surveillance, but there are still several issues associated with their validity. Accelerometer-based prevalence estimates are largely dependent on the investigators' choice of intensity cut-off points. Maintaining standardised accelerometer data collections in long-term PA surveillance systems is difficult, which may cause discontinuity in time-trend data. The use of accelerometers does not necessarily produce useful between-study and international comparisons due to lack of standardisation of data collection and processing methods. To conclude, it appears that accelerometers still have limitations regarding generalisability, validity, comprehensiveness, simplicity, affordability, adaptability, between-study comparability and sustainability. Therefore, given the current evidence, it seems that the widespread adoption of accelerometers specifically for large-scale PA surveillance systems may be premature. Published by the BMJ Publishing Group Limited.
Anderson, G F; Han, K C; Miller, R H; Johns, M E
1997-01-01
OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613
Assessing the quality of life history information in publicly available databases.
Thorson, James T; Cope, Jason M; Patrick, Wesley S
2014-01-01
Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged mortality caused by fishing. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.
Reconciling divergent estimates of oil and gas methane emissions
Zavala-Araiza, Daniel; Lyon, David R.; Alvarez, Ramón A.; Davis, Kenneth J.; Harriss, Robert; Herndon, Scott C.; Karion, Anna; Kort, Eric Adam; Lamb, Brian K.; Lan, Xin; Marchese, Anthony J.; Pacala, Stephen W.; Robinson, Allen L.; Shepson, Paul B.; Sweeney, Colm; Talbot, Robert; Townsend-Small, Amy; Yacovitch, Tara I.; Zimmerle, Daniel J.; Hamburg, Steven P.
2015-01-01
Published estimates of methane emissions from atmospheric data (top-down approaches) exceed those from source-based inventories (bottom-up approaches), leading to conflicting claims about the climate implications of fuel switching from coal or petroleum to natural gas. Based on data from a coordinated campaign in the Barnett Shale oil and gas-producing region of Texas, we find that top-down and bottom-up estimates of both total and fossil methane emissions agree within statistical confidence intervals (relative differences are 10% for fossil methane and 0.1% for total methane). We reduced uncertainty in top-down estimates by using repeated mass balance measurements, as well as ethane as a fingerprint for source attribution. Similarly, our bottom-up estimate incorporates a more complete count of facilities than past inventories, which omitted a significant number of major sources, and more effectively accounts for the influence of large emission sources using a statistical estimator that integrates observations from multiple ground-based measurement datasets. Two percent of oil and gas facilities in the Barnett account for half of methane emissions at any given time, and high-emitting facilities appear to be spatiotemporally variable. Measured oil and gas methane emissions are 90% larger than estimates based on the US Environmental Protection Agency’s Greenhouse Gas Inventory and correspond to 1.5% of natural gas production. This rate of methane loss increases the 20-y climate impacts of natural gas consumed in the region by roughly 50%. PMID:26644584
A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.
Tipton, Elizabeth; Shuster, Jonathan
2017-10-15
Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
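For a single study, the LoA are simply the bias plus or minus 1.96 standard deviations of the paired differences. A minimal sketch with made-up differences (the paper's repeated-measures adjustments and robust-variance pooling go beyond this):

```python
import math

def limits_of_agreement(diffs):
    """Bland-Altman bias and 95% limits of agreement for paired differences."""
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical new-method-minus-gold-standard differences
diffs = [0.2, -0.1, 0.4, 0.0, 0.3, -0.2, 0.1, 0.5]
low, high = limits_of_agreement(diffs)
print(round(low, 2), round(high, 2))  # -0.33 0.63
```

A meta-analysis would pool the per-study bias and variance estimates with random-effects weights before recomputing the pooled LoA, which is why pooled LoA tend to be wider than any single study's.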
Theodoratou, Evropi; Zhang, Jian Shayne F.; Kolcic, Ivana; Davis, Andrew M.; Bhopal, Sunil; Nair, Harish; Chan, Kit Yee; Liu, Li; Johnson, Hope; Rudan, Igor; Campbell, Harry
2011-01-01
Background Pneumonia is the leading cause of child deaths globally. The aims of this study were to: a) estimate the number and global distribution of pneumonia deaths for children 1–59 months for 2008 for countries with low (<85%) or no coverage of death certification using single-cause regression models and b) compare these country estimates with recently published ones based on multi-cause regression models. Methods and Findings For 35 low child-mortality countries with <85% coverage of death certification, a regression model based on vital registration data of low child-mortality and >85% coverage of death certification countries was used. For 87 high child-mortality countries pneumonia death estimates were obtained by applying a regression model developed from published and unpublished verbal autopsy data from high child-mortality settings. The total number of 1–59 months pneumonia deaths for the year 2008 for these 122 countries was estimated to be 1.18 M (95% CI 0.77 M–1.80 M), which represented 23.27% (95% CI 17.15%–32.75%) of all 1–59 month child deaths. The country level estimation correlation coefficient between these two methods was 0.40. Interpretation Although the overall number of post-neonatal pneumonia deaths was similar irrespective of the method of estimation used, the country estimate correlation coefficient was low, and therefore country-specific estimates should be interpreted with caution. Pneumonia remains the leading cause of child deaths, and its burden is greatest in regions of poverty and high child mortality. Despite concerns about gender inequity linked with childhood mortality, we could not estimate sex-specific pneumonia mortality rates due to inadequate data. Life-saving interventions effective in preventing and treating pneumonia mortality exist, but few children in high pneumonia disease burden regions are able to access them.
Achieving the United Nations Millennium Development Goal 4 target of reducing child deaths by two-thirds by 2015 will require the scale-up of access to these effective pneumonia interventions. PMID:21966425
Ha, Jaehyeok; Kim, Soo-Geun; Paek, Domyung; Park, Jungsun
2011-03-01
Ischemic heart disease (IHD) is a major cause of death in Korea and is known to result from several occupational factors. This study attempted to estimate the current magnitude of IHD mortality due to occupational factors in Korea. After selecting occupational risk factors through a literature review, we calculated attributable fractions (AFs) from relative risks and exposure data for each factor. Relative risks were estimated using meta-analysis based on published research. Exposure data were collected from the 2006 Survey of Korean Working Conditions. Finally, we estimated 2006 occupation-related IHD mortality. For the factors considered, we estimated the following relative risks: noise 1.06, environmental tobacco smoke 1.19 (men) and 1.22 (women), shift work 1.12, and low job control 1.15 (men) and 1.08 (women). The combined AFs of these factors for IHD were estimated at 9.29% (0.3-18.51%) in men and 5.78% (-7.05-19.15%) in women. Based on these fractions, Korea's 2006 death toll from occupational IHD between the ages of 15 and 69 was calculated at 353 in men (total 3,804) and 72 in women (total 1,246). We estimated the occupational IHD mortality of Korea with updated data and more relevant evidence. Despite our efforts to obtain reliable estimates, many assumptions and limitations must still be overcome. Future research based on more precise designs and reliable evidence is required for more accurate estimates.
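Attributable-fraction arithmetic of this kind typically follows Levin's formula, AF = p(RR - 1) / (1 + p(RR - 1)), with independent factors combined as 1 - Π(1 - AFᵢ). The sketch below uses the paper's relative risks for men but assumed exposure prevalences, since the survey's actual prevalences are not given in the abstract:

```python
def attributable_fraction(prevalence, rr):
    """Levin's population attributable fraction: p(RR-1) / (1 + p(RR-1))."""
    excess = prevalence * (rr - 1.0)
    return excess / (1.0 + excess)

# (assumed exposure prevalence, published relative risk for men)
factors = [
    (0.20, 1.06),  # noise
    (0.30, 1.19),  # environmental tobacco smoke
    (0.15, 1.12),  # shift work
    (0.25, 1.15),  # low job control
]

unattributed = 1.0
for p, rr in factors:
    unattributed *= 1.0 - attributable_fraction(p, rr)
combined_af = 1.0 - unattributed
print(round(100 * combined_af, 1))  # combined AF in percent
```

Multiplying the combined AF by total IHD deaths in the exposed age range yields the occupational death toll, as done in the study.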
Pohl, Heiko; Pech, Oliver; Arash, Haris; Stolte, Manfred; Manner, Hendrik; May, Andrea; Kraywinkel, Klaus; Sonnenberg, Amnon; Ell, Christian
2016-02-01
Although it is well understood that the risk of oesophageal adenocarcinoma increases with Barrett length, transition risks for cancer associated with different Barrett lengths are unknown. We aimed to estimate annual cancer transition rates for patients with long-segment (≥3 cm), short-segment (≥1 to <3 cm) and ultra-short-segment (<1 cm) Barrett's oesophagus. We used three data sources to estimate the annual cancer transition rates for each Barrett length category: (1) the distribution of long, short and ultra-short Barrett's oesophagus among a large German cohort with newly diagnosed T1 oesophageal adenocarcinoma; (2) population-based German incidence of oesophageal adenocarcinoma; and (3) published estimates of the population prevalence of Barrett's oesophagus for each Barrett length category. Among 1017 patients with newly diagnosed T1 oesophageal adenocarcinoma, 573 (56%) had long-segment, 240 (24%) short-segment and 204 (20%) ultra-short-segment Barrett's oesophagus. The base-case estimates for the prevalence of Barrett's oesophagus among the general population were 1.5%, 5% and 14%, respectively. The annual cancer transition rates for patients with long, short and ultra-short Barrett's oesophagus were 0.22%, 0.03% and 0.01%, respectively. To detect one cancer, 450 patients with long-segment Barrett's oesophagus would need to undergo annual surveillance endoscopy; in short segment and ultra-short segment, the corresponding numbers of patients would be 3440 and 12,364. Similar results were obtained when applying US incidence data. The large number of patients who need to undergo endoscopic surveillance to detect one cancer raises questions about the value of surveillance endoscopy in patients with short segment or ultra-short segment of Barrett's oesophagus. Published by the BMJ Publishing Group Limited.
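The "number needed to surveil" is the reciprocal of the annual transition rate. Using the rounded rates quoted above, the counts come out slightly different from the paper's figures, which were presumably computed from unrounded estimates:

```python
def number_needed_to_surveil(annual_transition_rate):
    """Patients under annual endoscopy per incident cancer detected."""
    return 1.0 / annual_transition_rate

for segment, rate in [("long", 0.0022), ("short", 0.0003), ("ultra-short", 0.0001)]:
    print(segment, round(number_needed_to_surveil(rate)))
# long 455, short 3333, ultra-short 10000
```

The two-orders-of-magnitude spread between long and ultra-short segments is what drives the paper's conclusion about surveillance in short-segment disease.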
Export Earnings from the Overseas Student Industry: How Much?
ERIC Educational Resources Information Center
Birrell, Bob; Smith, T. Fred
2010-01-01
Education is regularly publicised as Australia's third-largest export behind coal and iron ore. Although it cannot be disputed that education is a major export, the published figures are inflated because of three broad factors. First, estimates of student expenditure on goods and services in Australia are based on students with different…
Richard A. Birdsey; William H. McWilliams
1986-01-01
The forest inventory and analysis unit of the southern forest experiment stations (Forest Survey) conducts periodic inventories at approximately 10-year intervals of the forest resources of the Midsouth States (fig. 1). This report contains a summary of forest acreage estimates made between 1950 and 1985. The statistics are based on published forest survey reports and...
Can Readability Formulas Be Used to Successfully Gauge Difficulty of Reading Materials?
ERIC Educational Resources Information Center
Begeny, John C.; Greene, Diana J.
2014-01-01
A grade level of reading material is commonly estimated using one or more readability formulas, which purport to measure text difficulty based on specified text characteristics. However, there is limited direction for teachers and publishers regarding which readability formulas (if any) are appropriate indicators of actual text difficulty. Because…
Improving the accuracy of the gradient method for determining soil carbon dioxide efflux
USDA-ARS?s Scientific Manuscript database
Continuous soil CO2 efflux (Fsoil) estimates can be obtained by the gradient method (GM), but the utility of the method is hindered by uncertainties in the application of published models for the diffusion coefficient (Ds). We compared two in-situ methods for determining Ds, one based on calibrating th...
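The gradient method itself rests on Fick's first law: efflux is the diffusion coefficient times the vertical CO2 concentration gradient. A minimal sketch with illustrative values; the manuscript's in-situ calibration of Ds, which is the part under comparison, is not reproduced here:

```python
def soil_co2_efflux(c_shallow, c_deep, z_shallow, z_deep, ds):
    """Fick's first law, F = Ds * dC/dz (gradient method).

    c_* : CO2 concentration (umol m^-3) at two depths
    z_* : depths (m, positive downward)
    ds  : effective diffusion coefficient (m^2 s^-1)
    Returns efflux in umol m^-2 s^-1; positive means upward, out of the soil.
    """
    gradient = (c_deep - c_shallow) / (z_deep - z_shallow)
    return ds * gradient

# Concentration increasing with depth drives an upward efflux
print(round(soil_co2_efflux(20000.0, 30000.0, 0.02, 0.10, 2.0e-6), 2))  # 0.25
```

Because Fsoil scales linearly with Ds, any error in the diffusion-coefficient model propagates directly into the efflux estimate, which is why the choice of Ds method matters.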
Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory
NASA Astrophysics Data System (ADS)
Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng
Financial institutions and supervisory institutions agree on the need to strengthen the measurement and management of operational risk. This paper builds a model of operational-risk losses based on the Peaks Over Threshold (POT) model, emphasizing a weighted least squares refinement of Hill's estimation method. It discusses the small-sample situation and fixes the sample threshold more objectively, based on media-published data on the operational-risk losses of primary banks from 1994 to 2007.
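For context, this is the classical Hill estimator that the paper's weighted-least-squares variant refines: given the k largest losses above a threshold, it averages the log-exceedance ratios to estimate the tail index. The sketch below shows only the standard estimator on simulated losses; the paper's refinement and threshold-selection procedure are not reproduced:

```python
import math
import random

def hill_estimator(losses, k):
    """Classical Hill estimator of the extreme-value (tail) index.

    Uses the k largest observations; the (k+1)-th largest acts as threshold.
    """
    x = sorted(losses, reverse=True)
    threshold = x[k]
    return sum(math.log(x[i] / threshold) for i in range(k)) / k

random.seed(42)
# Pareto-distributed 'losses' with shape alpha = 2, i.e. true tail index 0.5
losses = [random.paretovariate(2.0) for _ in range(5000)]
print(round(hill_estimator(losses, 500), 2))  # close to 0.5
```

The estimator's sensitivity to the choice of k is precisely the small-sample problem the paper's objective threshold-fixing procedure addresses.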
Fischer, H Felix; Rose, Matthias
2016-10-19
Recently, a growing number of Item-Response Theory (IRT) models has been published, which allow estimation of a common latent variable from data derived by different Patient Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum score conversion tables. It requires substantial proficiency in the field of psychometrics to fit such models using contemporary IRT software. We developed a web application ( http://www.common-metrics.org ), which allows estimation of latent variable scores more easily using IRT models calibrating different measures on instrument independent scales. Currently, the application allows estimation using six different IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly estimate latent trait estimates using expected a posteriori (EAP) for sum scores as well as for specific response patterns, Bayes modal (MAP), Weighted likelihood estimation (WLE) and Maximum likelihood (ML) methods and under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of the latent trait estimates over different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center of Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), PROMIS Anxiety and Depression Short Forms and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
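Of the estimators listed, EAP is the most self-contained to illustrate: the latent-trait estimate is the posterior mean of theta given the response pattern, computed by numerical integration over a quadrature grid. The sketch below assumes a 2PL IRT model with made-up item parameters, not those of any published calibration such as the application's:

```python
import math

def eap_estimate(responses, items, nodes=81):
    """EAP latent-trait estimate under a 2PL IRT model with a N(0,1) prior.

    responses: list of 0/1 item answers
    items:     list of (discrimination a, difficulty b) pairs
    """
    grid = [-4.0 + 8.0 * i / (nodes - 1) for i in range(nodes)]
    numerator = denominator = 0.0
    for theta in grid:
        weight = math.exp(-0.5 * theta ** 2)  # unnormalised N(0,1) prior
        for u, (a, b) in zip(responses, items):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            weight *= p if u else 1.0 - p
        numerator += theta * weight
        denominator += weight
    return numerator / denominator

items = [(1.2, -0.5), (0.9, 0.0), (1.5, 0.8)]  # hypothetical item parameters
print(round(eap_estimate([1, 1, 0], items), 2))
```

Because the item parameters place all instruments on a common scale, the same routine yields comparable trait estimates regardless of which questionnaire produced the responses, which is the point of the common-metrics approach.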
Estimation of Greenland's Ice Sheet Mass Balance Using ICESat and GRACE Data
NASA Astrophysics Data System (ADS)
Slobbe, D.; Ditmar, P.; Lindenbergh, R.
2007-12-01
Data of the GRACE gravity mission and the ICESat laser altimetry mission are used to create two independent estimates of Greenland's ice sheet mass balance over the full measurement period. For ICESat data, a processing strategy is developed using the elevation differences of geometrically overlapping footprints of both crossing and repeated tracks. The dataset is cleaned using quality flags defined by the GLAS science team. The cleaned dataset reveals some strong, spatially correlated signals that are shown to be related to physical phenomena. Different processing strategies are used to convert the observed temporal height differences to mass changes for 6 different drainage systems, further divided into a region above and below 2000 meter elevation. The results are compared with other altimetry based mass balance estimates. In general, the obtained results confirm trends discovered by others, but we also show that the choice of processing strategy strongly influences our results, especially for the areas below 2000 meter. Furthermore, GRACE based monthly variations of the Earth's gravity field as processed by CNES, CSR, GFZ and DEOS are used to estimate the mass balance change for North and South Greenland. It is shown that our results are comparable with recently published GRACE estimates (mascon solutions). On the other hand, the estimates based on GRACE data are only partly confirmed by the ICESat estimates. Possible explanations for the obvious differences will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Kong, Weidong; Brundage, Michael
Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT_1Y) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RT_LIFETIME) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT_1Y, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RT_LIFETIME was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RT_LIFETIME for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RT_LIFETIME for 5 selected sites differed widely. RT_LIFETIME in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario.
Austin, Peter C
2018-05-20
Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G-computation. All methods resulted in essentially unbiased estimation of the population dose-response function. However, GPS-based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
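The stabilized-weighting branch of the comparison can be sketched as follows, assuming (as in the simplest GPS implementation) a normal linear model for the exposure given covariates, fitted by least squares. The data and coefficients are simulated, not from the study:

```python
import math
import random

random.seed(0)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]               # confounder
expo = [0.8 * xi + random.gauss(0, 1) for xi in x]       # continuous exposure

# Least-squares fit of the normal model for exposure given x
mx = sum(x) / n
ma = sum(expo) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxa = sum((xi - mx) * (ai - ma) for xi, ai in zip(x, expo))
slope = sxa / sxx
intercept = ma - slope * mx
resid = [ai - (intercept + slope * xi) for xi, ai in zip(x, expo)]
sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))

def normal_pdf(v, mu, sd):
    return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# GPS = conditional density of exposure; stabilized weight = marginal / GPS
sigma_m = math.sqrt(sum((ai - ma) ** 2 for ai in expo) / (n - 1))
weights = [normal_pdf(ai, ma, sigma_m) / normal_pdf(ai, intercept + slope * xi, sigma)
           for xi, ai in zip(x, expo)]
print(round(sum(weights) / n, 1))  # stabilized weights average near 1
```

A weighted outcome regression of the binary outcome on the exposure would then estimate the dose-response function; the covariate balancing propensity score algorithm examined in the paper replaces the least-squares fit above.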
Handling of thermal paper: Implications for dermal exposure to bisphenol A and its alternatives
Bernier, Meghan R.; Vandenberg, Laura N.
2017-01-01
Bisphenol A (BPA) is an endocrine disrupting chemical used in a wide range of consumer products including photoactive dyes used in thermal paper. Recent studies have shown that dermal absorption of BPA can occur when handling these papers. Yet, regulatory agencies have largely dismissed thermal paper as a major source of BPA exposure. Exposure estimates provided by agencies such as the European Food Safety Authority (EFSA) are based on assumptions about how humans interact with this material, stating that ‘typical’ exposures for adults involve only one handling per day for short periods of time (<1 minute), with limited exposure surfaces (three fingertips). The objective of this study was to determine how individuals handle thermal paper in one common setting: a cafeteria providing short-order meals. We observed thermal paper handling in a college-aged population (n = 698 subjects) at the University of Massachusetts’ dining facility. We find that in this setting, individuals handle receipts for an average of 11.5 min, that >30% of individuals hold thermal paper with more than three fingertips, and >60% allow the paper to touch their palm. Only 11% of the participants we observed were consistent with the EFSA model for time of contact and dermal surface area. Mathematical modeling based on handling times we measured and previously published transfer coefficients, concentrations of BPA in paper, and absorption factors indicate the most conservative estimated intake from handling thermal paper in this population is 51.1 ng/kg/day, similar to EFSA’s estimates of 59 ng/kg/day from dermal exposures. Less conservative estimates, using published data on concentrations in thermal paper and transfer rates to skin, indicate that exposures are likely significantly higher. Based on our observational data, we propose that the current models for estimating dermal BPA exposures are not consistent with normal human behavior and should be reevaluated. PMID:28570582
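The intake model reduces to a product of handling time, a transfer rate into skin, an absorbed fraction, and body weight. In the sketch below only the 11.5 min/day handling time comes from the study; the transfer rate, absorbed fraction, and body weight are placeholder assumptions, not the published coefficients:

```python
def dermal_intake_ng_per_kg_day(handling_min_per_day, transfer_ng_per_min,
                                absorbed_fraction, body_weight_kg):
    """Daily dermal intake = time x transfer rate x fraction absorbed / body weight."""
    return (handling_min_per_day * transfer_ng_per_min
            * absorbed_fraction / body_weight_kg)

# 11.5 min/day observed handling; remaining parameters are assumptions
print(round(dermal_intake_ng_per_kg_day(11.5, 500.0, 0.6, 70.0), 1))  # 49.3
```

Because intake scales linearly with handling time, replacing a regulator's assumed <1 minute of contact with the observed 11.5 minutes raises the estimate by more than an order of magnitude on its own.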
Review: Prevalence and dynamics of Helicobacter pylori infection during childhood.
Zabala Torrres, Beatriz; Lucero, Yalda; Lagomarcino, Anne J; Orellana-Manzano, Andrea; George, Sergio; Torres, Juan P; O'Ryan, Miguel
2017-10-01
Long-term persistent Helicobacter pylori infection has been associated with ulceropeptic disease and gastric cancer. Although H. pylori is predominantly acquired early in life, a clear understanding of infection dynamics during childhood has been obfuscated by the diversity of populations evaluated, study designs, and methods used. Our aim was to update understanding of the true prevalence of H. pylori infection during childhood, based on a critical analysis of the literature published in the past 5 years. Comprehensive review and meta-analysis of original studies published from 2011 to 2016. A MEDLINE®/PubMed® search on May 1, 2016, using the terms pylori and children, and subsequent exclusion, based on abstract review using predefined criteria, resulted in 261 citations. An Embase® search with the same criteria added an additional 8 citations. In healthy children, meta-analysis estimated an overall seroprevalence rate of 33% (95% CI: 27%-38%). Seven healthy cohort studies using noninvasive direct detection methods showed infection prevalence estimates ranging from 20% to 50% in children ≤5 years and 38% to 79% in children >5 years. The probability of infection persistence after a first positive sample ranged from 49% to 95%. Model estimates of cross-sectional direct detection studies in asymptomatic children indicated a prevalence of 37% (95% CI: 30%-44%). Seroprevalence, but not direct detection rates, increased with age; both decreased with increasing income. The model estimate based on cross-sectional studies in symptomatic children was 39% (95% CI: 35%-43%). The prevalence of H. pylori infection varied widely in the studies included here; nevertheless, model estimates by detection type were similar, suggesting that overall, one-third of children worldwide are or have been infected. The few cohort and longitudinal studies available show variability, but most show infection rates over 30%. Rather surprisingly, overall infection prevalence in symptomatic children was only slightly higher, around 40%. Studies including only one positive stool sample should be interpreted with caution as spontaneous clearance can occur. © 2017 John Wiley & Sons Ltd.
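The pooled prevalence figures above come from meta-analytic models. A minimal sketch of one standard approach, DerSimonian-Laird random-effects pooling on the logit scale, is shown below; the (cases, sample size) pairs are hypothetical, not the review's studies.

```python
import math

# DerSimonian-Laird random-effects pooling of prevalence estimates on the
# logit scale. The (cases, sample size) pairs below are hypothetical.
studies = [(30, 100), (120, 400), (55, 150), (200, 520)]

def pooled_prevalence(studies):
    y = [math.log(x / (n - x)) for x, n in studies]          # logit prevalences
    v = [1.0 / x + 1.0 / (n - x) for x, n in studies]        # approximate variances
    w = [1.0 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)  # fixed-effect pool
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)            # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1.0 / (1.0 + math.exp(-y_re))                     # back to a proportion

overall = pooled_prevalence(studies)
```

The logit transform keeps the pooled value inside (0, 1), and tau² inflates each study's variance to reflect between-study heterogeneity of the kind the review reports.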
Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J
2017-07-01
Numerous approaches are used to estimate indirect productivity losses using various wage estimates applied to poor health in working-aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female wages were obtained from the UK Office for National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6% were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where the average wage overestimated wages by 15%, and for 40-year-olds, where it underestimated wages by 14%. Large differences in projecting productivity losses exist when using the average wage applied over a lifetime. Specifically, use of average wages overestimates productivity losses by between 8 and 15% for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14%. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
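The mechanics described above can be sketched in a few lines: fit a polynomial to age-banded wage data to get annual age-specific wages, then compare discounted losses under the human capital approach. The wage figures and band midpoints below are invented for illustration and are not ONS data.

```python
import numpy as np

# Illustrative 5-year age-banded wages (band midpoint -> annual wage).
band_midpoints = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60, 65])
band_wages = 1000.0 * np.array([15, 22, 28, 32, 34, 33, 31, 28, 24, 18])

coefs = np.polyfit(band_midpoints, band_wages, deg=3)  # polynomial interpolation
ages = np.arange(18, 66)                               # working ages 18-65
age_specific = np.polyval(coefs, ages)
average_wage = band_wages.mean()

def discounted_loss(wage_stream, rate=0.03):
    """Present value of lost wages under the human capital approach."""
    t = np.arange(len(wage_stream))
    return float(np.sum(wage_stream / (1.0 + rate) ** t))

# Productivity loss for ill health beginning at age 40 until retirement:
onset = 40 - 18
loss_age_specific = discounted_loss(age_specific[onset:])
loss_average = discounted_loss(np.full(len(ages) - onset, average_wage))
# With this hump-shaped profile, the flat average understates mid-career losses.
```

Because mid-career wages sit above the lifetime average and discounting weights the early (high-wage) years of the remaining stream most heavily, the flat-average calculation comes out lower, mirroring the direction of bias the paper reports for prime working years.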
Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J
2013-07-01
The age of an individual can be estimated by evaluating the pattern of dental development. A dataset for age estimation based on the dental maturity of a French-Canadian population was published over 35 years ago and has become the most widely accepted dataset. The applicability of this dataset has been tested on different population groups. To estimate the observed differences between chronological age (CA) and dental age (DA) when the French-Canadian dataset was used to estimate the age of different population groups. A systematic search of the literature for papers utilizing the French-Canadian dataset for age estimation was performed. Articles in all languages from the PubMed, Embase and Cochrane databases were electronically searched for the terms 'Demirjian' and 'Dental age' published between January 1973 and December 2011. A hand search of articles was also conducted. A total of 274 studies were identified, from which 34 studies were included for qualitative analysis and 12 studies were included for quantitative assessment and meta-analysis. When synthesizing the estimation results from different population groups, on average, the Demirjian dataset overestimated the age of females by 0.65 years (-0.10 years to +2.82 years) and males by 0.60 years (-0.23 years to +3.04 years). The French-Canadian dataset overestimates the age of subjects by more than six months, and hence this dataset should be used only with considerable caution when estimating the age of groups of subjects from any population. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
A joint sparse representation-based method for double-trial evoked potentials estimation.
Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing
2013-12-01
In this paper, we present a novel approach to the problem of estimating evoked potentials. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimuli of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potential estimation method, taking full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of evoked potentials and the randomness of a spontaneous electroencephalogram, the two consecutive observations of evoked potentials are considered as superpositions of the common component and the unique components; second, making use of their characteristics, the two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method in order to extract the common component of the double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method. © 2013 Elsevier Ltd. Published by Elsevier Ltd. All rights reserved.
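The superposition model at the heart of the method can be illustrated with a toy simulation: each trial is a common evoked component plus a trial-unique component plus noise, and both trials are stacked into one linear system. This sketch uses small known dictionaries so ordinary least squares suffices; the paper's method instead solves a joint sparse coding problem over larger dictionaries, so treat this only as an illustration of the common/unique decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 128, 16  # signal length, atoms per dictionary

# Smooth atoms for the (repeatable) evoked component, and a random dictionary
# standing in for trial-unique spontaneous background activity.
t = np.arange(n)
D_common = np.cos(np.outer(t, np.arange(k)) * np.pi / n)
D_unique = rng.standard_normal((n, k))

common = D_common @ rng.standard_normal(k)  # shared evoked component
y1 = common + 0.5 * (D_unique @ rng.standard_normal(k)) + 0.01 * rng.standard_normal(n)
y2 = common + 0.5 * (D_unique @ rng.standard_normal(k)) + 0.01 * rng.standard_normal(n)

# Stack both trials: [y1; y2] = [[Dc, Du, 0], [Dc, 0, Du]] @ [x_c; x_u1; x_u2]
Z = np.zeros((n, k))
A = np.block([[D_common, D_unique, Z], [D_common, Z, D_unique]])
coef, *_ = np.linalg.lstsq(A, np.concatenate([y1, y2]), rcond=None)
common_hat = D_common @ coef[:k]  # recovered common (evoked) component
```

Because the common coefficients must explain both trials simultaneously while each unique block explains only one, the stacked system separates the shared evoked signal from the trial-specific background.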
Adams, Elisabeth J; Ehrlich, Alice; Turner, Katherine M E; Shah, Kunj; Macleod, John; Goldenberg, Simon; Meray, Robin K; Pearce, Vikki; Horner, Patrick
2014-07-23
We aimed to explore patient pathways using a chlamydia/gonorrhoea point-of-care (POC) nucleic acid amplification test (NAAT), and estimate and compare the costs of the proposed POC pathways with the current pathways using standard laboratory-based NAAT testing. Workshops were conducted with healthcare professionals at four sexual health clinics representing diverse models of care in the UK. They mapped out current pathways that used chlamydia/gonorrhoea tests, and constructed new pathways using a POC NAAT. Healthcare professionals' time was assessed in each pathway. The proposed POC pathways were then priced using a model built in Microsoft Excel, and compared to previously published costs for pathways using standard NAAT-based testing in an off-site laboratory. Pathways using a POC NAAT for asymptomatic and symptomatic patients and chlamydia/gonorrhoea-only tests were shorter and less expensive than most of the current pathways. Notably, we estimate that POC testing as part of a sexual health screen for symptomatic patients, or as stand-alone chlamydia/gonorrhoea testing, could reduce costs per patient by as much as £16 or £6, respectively. In both cases, healthcare professionals' time would be reduced by approximately 10 min per patient. POC testing for chlamydia/gonorrhoea in a clinical setting may reduce costs and clinician time, and may lead to more appropriate and quicker care for patients. Further study is warranted on how to best implement POC testing in clinics, and on the broader clinical and cost implications of this technology. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
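Pathway costing of the kind used above reduces to simple arithmetic: staff time priced at a per-minute rate plus test and consumable costs. The figures below are invented placeholders, not the paper's inputs, and serve only to show the shape of the comparison.

```python
# Toy pathway costing: staff time x unit cost, plus test and other costs.
# All figures are illustrative assumptions, not the study's inputs.

def pathway_cost(staff_minutes, staff_rate_per_min, test_cost, other_costs):
    return staff_minutes * staff_rate_per_min + test_cost + other_costs

# Standard pathway: longer staff involvement, cheaper lab-based NAAT.
standard = pathway_cost(25, 0.80, 12.00, 5.00)
# POC pathway: ~10 fewer staff minutes, pricier point-of-care test.
poc = pathway_cost(15, 0.80, 17.00, 5.00)
saving_per_patient = standard - poc  # positive => the POC pathway is cheaper
```

With these placeholder numbers the dearer POC test is more than offset by the saved staff time, which is the trade-off the study quantifies with real clinic data.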
Grosse, Scott D; Chaugule, Shraddha S; Hay, Joel W
2015-01-01
Estimates of preference-weighted health outcomes or health state utilities are needed to assess improvements in health in terms of quality-adjusted life-years. Gains in quality-adjusted life-years are used to assess the cost–effectiveness of prophylactic use of clotting factor compared with on-demand treatment among people with hemophilia, a congenital bleeding disorder. Published estimates of health utilities for people with hemophilia vary, contributing to uncertainty in the estimates of cost–effectiveness of prophylaxis. Challenges in estimating utility weights for the purpose of evaluating hemophilia treatment include selection bias in observational data, difficulty in adjusting for predictors of health-related quality of life and lack of preference-based data comparing adults with lifetime or primary prophylaxis versus no prophylaxis living within the same country and healthcare system. PMID:25585817
In search of a corrected prescription drug elasticity estimate: a meta-regression approach.
Gemmill, Marin C; Costa-Font, Joan; McGuire, Alistair
2007-06-01
An understanding of the relationship between cost sharing and drug consumption depends on consistent and unbiased price elasticity estimates. However, there is wide heterogeneity among studies, which constrains the applicability of elasticity estimates for empirical purposes and policy simulation. This paper attempts to provide a corrected measure of the drug price elasticity by employing meta-regression analysis (MRA). The results indicate that the elasticity estimates are significantly different from zero, and the corrected elasticity is -0.209 when the results are made robust to heteroskedasticity and clustering of observations. Elasticity values are higher when the study was published in an economic journal, when the study employed a greater number of observations, and when the study used aggregate data. Elasticity estimates are lower when the institutional setting was a tax-based health insurance system.
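A meta-regression of this kind can be sketched as inverse-variance-weighted least squares of reported elasticities on study characteristics, with the intercept read as the "corrected" elasticity for a reference study. The elasticities, standard errors, and covariates below are invented for illustration, not the paper's dataset.

```python
import numpy as np

# Hypothetical study-level data: reported elasticities, their standard
# errors, and two binary study characteristics.
elasticities = np.array([-0.10, -0.25, -0.30, -0.15, -0.35, -0.20])
se = np.array([0.05, 0.04, 0.06, 0.05, 0.03, 0.04])
econ_journal = np.array([0, 1, 1, 0, 1, 0])    # published in an economics journal
aggregate_data = np.array([0, 0, 1, 0, 1, 1])  # used aggregate rather than micro data

X = np.column_stack([np.ones(6), econ_journal, aggregate_data])
sw = 1.0 / se  # square roots of inverse-variance weights
beta, *_ = np.linalg.lstsq(X * sw[:, None], elasticities * sw, rcond=None)
corrected = float(beta[0])  # intercept: elasticity for a reference study
```

The slope coefficients estimate how much each study trait shifts the reported elasticity, which is how the paper attributes part of the between-study heterogeneity to publication outlet, sample size, and data aggregation.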
Local Spatial Obesity Analysis and Estimation Using Online Social Network Sensors.
Sun, Qindong; Wang, Nan; Li, Shancang; Zhou, Hongyi
2018-03-15
Recently, online social networks (OSNs) have received considerable attention as a revolutionary platform offering massive social interaction that enables users to be more involved in their own healthcare. OSNs have also promoted increasing interest in the generation of analytical data models in health informatics. This paper aims at developing an obesity identification, analysis, and estimation model, in which each individual user is regarded as an online social network 'sensor' that can provide valuable health information. The OSN-based obesity analytic model requires each sensor node in an OSN to provide associated features, including dietary habits, physical activity, integral/incidental emotions, and self-consciousness. Based on detailed measurements of the correlation between obesity and the proposed features, the OSN obesity analytic model is able to estimate the obesity rate in certain urban areas, and the experimental results demonstrate a high estimation success rate. The measurements and estimation findings produced by the proposed obesity analytic model show that online social networks can be used to analyze local spatial obesity problems effectively. Copyright © 2018. Published by Elsevier Inc.
The cost of Alzheimer's disease in China and re-estimation of costs worldwide.
Jia, Jianping; Wei, Cuibai; Chen, Shuoqi; Li, Fangyu; Tang, Yi; Qin, Wei; Zhao, Lina; Jin, Hongmei; Xu, Hui; Wang, Fen; Zhou, Aihong; Zuo, Xiumei; Wu, Liyong; Han, Ying; Han, Yue; Huang, Liyuan; Wang, Qi; Li, Dan; Chu, Changbiao; Shi, Lu; Gong, Min; Du, Yifeng; Zhang, Jiewen; Zhang, Junjian; Zhou, Chunkui; Lv, Jihui; Lv, Yang; Xie, Haiqun; Ji, Yong; Li, Fang; Yu, Enyan; Luo, Benyan; Wang, Yanjiang; Yang, Shanshan; Qu, Qiumin; Guo, Qihao; Liang, Furu; Zhang, Jintao; Tan, Lan; Shen, Lu; Zhang, Kunnan; Zhang, Jinbiao; Peng, Dantao; Tang, Muni; Lv, Peiyuan; Fang, Boyan; Chu, Lan; Jia, Longfei; Gauthier, Serge
2018-04-01
The socioeconomic costs of Alzheimer's disease (AD) in China and its impact on global economic burden remain uncertain. We collected data from 3098 patients with AD in 81 representative centers across China and estimated AD costs for individual patient and total patients in China in 2015. Based on this data, we re-estimated the worldwide costs of AD. The annual socioeconomic cost per patient was US $19,144.36, and total costs were US $167.74 billion in 2015. The annual total costs are predicted to reach US $507.49 billion in 2030 and US $1.89 trillion in 2050. Based on our results, the global estimates of costs for dementia were US $957.56 billion in 2015, and will be US $2.54 trillion in 2030, and US $9.12 trillion in 2050, much more than the predictions by the World Alzheimer Report 2015. China bears a heavy burden of AD costs, which greatly change the estimates of AD cost worldwide. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
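The headline figures above imply a patient count that is easy to back out, and a simple compounding calculation shows how total costs grow toward the 2030 projection. The constant growth rate below is an assumption chosen only to illustrate compounding; it is not the study's projection method.

```python
per_patient_2015 = 19_144.36  # US$ annual cost per patient (study figure)
total_2015 = 167.74e9         # US$ total national cost in 2015 (study figure)

# Implied number of AD patients behind the two study figures:
implied_patients = total_2015 / per_patient_2015  # roughly 8.8 million

# Compounding the total forward under an assumed constant annual growth rate:
def project(total, annual_growth, years):
    return total * (1.0 + annual_growth) ** years

total_2030 = project(total_2015, 0.075, 15)  # illustrative growth rate only
```

Dividing the total by the per-patient cost recovers the cohort size consistent with the study's 2015 figures; any real projection would also model changing prevalence and per-patient costs rather than a single growth rate.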
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
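In the normal-distribution setting, the core of the optimisation is minimising an expected misclassification cost over the threshold. A grid-search sketch follows; the means, SDs, prevalence, and unit costs are illustrative, and the sketch ignores the sampling-uncertainty term that the paper's cost function also includes. With equal SDs and these particular costs, the minimum falls midway between the two means.

```python
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_cost(t, mu0, sd0, mu1, sd1, prevalence, cost_fp, cost_fn):
    """Expected decision cost when classifying marker > t as diseased."""
    p_fp = 1.0 - norm_cdf(t, mu0, sd0)  # non-diseased flagged positive
    p_fn = norm_cdf(t, mu1, sd1)        # diseased missed
    return (1.0 - prevalence) * p_fp * cost_fp + prevalence * p_fn * cost_fn

# Illustrative parameters: non-diseased ~ N(0, 1), diseased ~ N(2, 1),
# prevalence 20%, and a false negative 4x as costly as a false positive.
grid = [i / 100.0 for i in range(-200, 601)]
best_t = min(grid, key=lambda t: expected_cost(t, 0.0, 1.0, 2.0, 1.0, 0.20, 1.0, 4.0))
```

With these numbers, (1 − prevalence)·cost_fp equals prevalence·cost_fn, so the cost-minimising threshold sits where the two densities are equal, at the midpoint of the means.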
Ding, Yao; Thompson, John D; Kobrynski, Lisa; Ojodu, Jelili; Zarbalian, Guisou; Grosse, Scott D
2016-05-01
To evaluate the expected cost-effectiveness and net benefit of the recent implementation of newborn screening (NBS) for severe combined immunodeficiency (SCID) in Washington State. We constructed a decision analysis model to estimate the costs and benefits of NBS in an annual birth cohort of 86 600 infants based on projections of avoided infant deaths. Point estimates and ranges for input variables, including the birth prevalence of SCID, proportion detected asymptomatically without screening through family history, screening test characteristics, survival rates, and costs of screening, diagnosis, and treatment were derived from published estimates, expert opinion, and the Washington NBS program. We estimated treatment costs stratified by age of identification and SCID type (with or without adenosine deaminase deficiency). Economic benefit was estimated using values of $4.2 and $9.0 million per death averted. We performed sensitivity analyses to evaluate the influence of key variables on the incremental cost-effectiveness ratio (ICER) of net direct cost per life-year saved. Our model predicts an additional 1.19 newborn infants with SCID detected preclinically through screening, in addition to those who would have been detected early through family history, and 0.40 deaths averted annually. Our base-case model suggests an ICER of $35 311 per life-year saved, and a benefit-cost ratio of either 5.31 or 2.71. Sensitivity analyses found ICER values <$100 000 and positive net benefit for plausible assumptions on all variables. Our model suggests that NBS for SCID in Washington is likely to be cost-effective and to show positive net economic benefit. Published by Elsevier Inc.
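The two summary measures reported above, an ICER and a benefit-cost ratio, follow from simple ratios. The inputs below are hypothetical placeholders, not the Washington model's values; they only show how the two measures are assembled.

```python
# Hypothetical inputs for an ICER and benefit-cost calculation.
net_direct_cost = 500_000.0  # screening cost minus averted treatment costs, US$
life_years_saved = 30.0      # discounted life-years gained per birth cohort
deaths_averted = 0.40        # annual deaths averted (study's point estimate)
value_per_death = 4_200_000.0  # US$ per death averted (lower study value)

icer = net_direct_cost / life_years_saved          # US$ per life-year saved
monetized_benefit = deaths_averted * value_per_death
benefit_cost_ratio = monetized_benefit / net_direct_cost
```

A screening program is conventionally judged cost-effective when the ICER falls below a willingness-to-pay threshold, and to show positive net benefit when the benefit-cost ratio exceeds 1.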
Sonawane, A U; Shirva, V K; Pradhan, A S
2010-02-01
Skin entrance doses (SEDs) were estimated by carrying out measurements of air kerma from 101 X-ray machines installed in 45 major and selected hospitals in the country by using a silicon detector-based dose Test-O-Meter. A total of 1209 air kerma measurements of diagnostic projections for adults were analysed for seven types of common diagnostic examinations, viz. chest (AP, PA, LAT), lumbar spine (AP, LAT), thoracic spine (AP, LAT), abdomen (AP), pelvis (AP), hip joints (AP) and skull (PA, LAT) for different film-screen combinations. The values of estimated diagnostic reference levels (DRLs) (third quartile values of SEDs) were compared with guidance levels/DRLs of doses published by the IAEA-BSS-Safety Series No. 115, 1996; HPA (NRPB) (2000 and 2005), UK; CRCPD/CDRH (USA), European Commission and other national values. The values of DRLs obtained in this study are comparable with the values published by the IAEA-BSS-115 (1996); HPA (NRPB) (2000 and 2005) UK; EC and CRCPD/CDRH, USA, including values obtained in previous studies in India.
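As the abstract notes, a DRL is conventionally set at the third quartile of the observed dose distribution for each examination type. A one-line sketch with hypothetical skin entrance doses (mGy):

```python
import numpy as np

# Hypothetical skin entrance doses (mGy) for one examination across machines.
doses = np.array([0.12, 0.18, 0.25, 0.30, 0.22, 0.40, 0.15, 0.28])
drl = np.percentile(doses, 75)  # third-quartile value taken as the DRL
```

Machines or facilities whose typical doses exceed this quartile are the ones flagged for optimisation, which is the regulatory purpose of a DRL survey like this one.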
NASA Astrophysics Data System (ADS)
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys based on a rotating panel design have been carried out in many countries, including Indonesia. In Indonesia, the labor force survey is regularly conducted by Statistics Indonesia (Badan Pusat Statistik, BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed to estimate parameters only at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. This study aimed to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. It focused on applying and comparing the Rao-Yu model and a dynamic model for estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was similar. Both models produced similar estimates that improved on direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage diminished over time.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC method performs well. However, the Wan et al. method is best for estimating the standard deviation under the normal distribution. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
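The ABC idea can be shown with a tiny rejection sampler: propose (mean, SD) from a prior, simulate a sample, and keep proposals whose simulated quartiles are close to the reported ones. The priors, tolerance, sample size, and "reported" statistics below are all invented for the demonstration and are simpler than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # assumed sample size of the original study
reported_q1, reported_med, reported_q3 = 8.0, 10.0, 12.0

accepted_mu, accepted_sd = [], []
for _ in range(50_000):
    mu = rng.uniform(0.0, 20.0)  # flat priors over a plausible range
    sd = rng.uniform(0.1, 10.0)
    q1, med, q3 = np.percentile(rng.normal(mu, sd, n), [25, 50, 75])
    distance = abs(q1 - reported_q1) + abs(med - reported_med) + abs(q3 - reported_q3)
    if distance < 1.0:  # acceptance tolerance
        accepted_mu.append(mu)
        accepted_sd.append(sd)

est_mean = float(np.mean(accepted_mu))  # near the reported median for a normal
est_sd = float(np.mean(accepted_sd))    # near IQR / 1.35 for a normal sample
```

For a normal distribution the IQR is about 1.35 SD, so with an IQR of 4 the accepted SDs cluster near 3; the same machinery works for skewed generating distributions, where the simple formula-based methods struggle.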
Using Appendicitis to Improve Estimates of Childhood Medicaid Participation Rates.
Silber, Jeffrey H; Zeigler, Ashley E; Reiter, Joseph G; Hochman, Lauren L; Ludwig, Justin M; Wang, Wei; Calhoun, Shawna R; Pati, Susmita
2018-03-23
Administrative data are often used to estimate state Medicaid/Children's Health Insurance Program duration of enrollment and insurance continuity, but they are generally not used to estimate participation (the fraction of eligible children enrolled) because administrative data do not include reasons for disenrollment and cannot observe eligible never-enrolled children, causing estimates of the eligible unenrolled to be inaccurate. Analysts are therefore forced to either utilize survey information that is not generally linkable to administrative claims or rely on duration and continuity measures derived from administrative data and forgo estimating claims-based participation. We introduce appendectomy-based participation (ABP), which takes advantage of a natural experiment around statewide appendicitis admissions to estimate statewide participation rates from claims more accurately. We used the Medicaid Analytic eXtract (MAX) and the American Community Survey (ACS), both for 2008-2010 and covering 43 states, to calculate ABP, the continuity ratio, duration, and ACS-based participation. In the validation study, the median participation rate using ABP was 86% versus 87% for ACS-based participation estimates using logical edits and 84% without logical edits. The correlation between ABP and ACS estimates, with or without logical edits, was 0.86 (P < .0001). Using regression analysis, ABP alone was a significant predictor of ACS (P < .0001) with or without logical edits, and adding duration and/or the continuity ratio did not significantly improve the model. Using the ABP rate derived from administrative claims (MAX) is a valid method to estimate statewide public insurance participation rates in children. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
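One simplified reading of the ABP idea: appendicitis is an acute condition whose incidence does not depend on insurance status, so the share of a state's appendicitis admissions among income-eligible children that are Medicaid-paid approximates the participation rate. The counts below are hypothetical, and the actual MAX-based construction is more involved than this ratio.

```python
# Hypothetical statewide counts for one year (illustration only).
medicaid_paid_appendicitis = 430   # eligible children whose admission Medicaid paid
eligible_appendicitis_total = 500  # all appendicitis admissions among eligible children

abp_participation = medicaid_paid_appendicitis / eligible_appendicitis_total
```

Because hospitalization for appendicitis is essentially universal once it occurs, the denominator captures eligible children regardless of enrollment, which is exactly what ordinary claims data cannot do.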
Allemani, Claudia; Harewood, Rhea; Johnson, Christopher J; Carreira, Helena; Spika, Devon; Bonaventure, Audrey; Ward, Kevin; Weir, Hannah K; Coleman, Michel P
2017-12-15
Robust comparisons of population-based cancer survival estimates require tight adherence to the study protocol, standardized quality control, appropriate life tables of background mortality, and centralized analysis. The CONCORD program established worldwide surveillance of population-based cancer survival in 2015, analyzing individual data on 26 million patients (including 10 million US patients) diagnosed between 1995 and 2009 with 1 of 10 common malignancies. In this Cancer supplement, we analyzed data from 37 state cancer registries that participated in the second cycle of the CONCORD program (CONCORD-2), covering approximately 80% of the US population. Data quality checks were performed in 3 consecutive phases: protocol adherence, exclusions, and editorial checks. One-, 3-, and 5-year age-standardized net survival was estimated using the Pohar Perme estimator and state- and race-specific life tables of all-cause mortality for each year. The cohort approach was adopted for patients diagnosed between 2001 and 2003, and the complete approach for patients diagnosed between 2004 and 2009. Articles in this supplement report population coverage, data quality indicators, and age-standardized 5-year net survival by state, race, and stage at diagnosis. Examples of tables, bar charts, and funnel plots are provided in this article. Population-based cancer survival is a key measure of the overall effectiveness of services in providing equitable health care. The high quality of US cancer registry data, 80% population coverage, and use of an unbiased net survival estimator ensure that the survival trends reported in this supplement are robustly comparable by race and state. The results can be used by policymakers to identify and address inequities in cancer survival in each state and for the United States nationally. Cancer 2017;123:4982-93. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.
Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude
2015-07-01
Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently, computational druggability prediction models are tied to a single pocket estimation method despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, efficient on both (i) estimated pockets guided by ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino acid atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and thus efficient on apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one protein or a set of apo/holo proteins, using the different pocket estimation methods proposed by our web server or any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Variation in polyp size estimation among endoscopists and impact on surveillance intervals.
Chaptini, Louis; Chaaya, Adib; Depalma, Fedele; Hunter, Krystal; Peikin, Steven; Laine, Loren
2014-10-01
Accurate estimation of polyp size is important because it is used to determine the surveillance interval after polypectomy. To evaluate the variation and accuracy in polyp size estimation among endoscopists and the impact on surveillance intervals after polypectomy. Web-based survey. A total of 873 members of the American Society for Gastrointestinal Endoscopy. Participants watched video recordings of 4 polypectomies and were asked to estimate the polyp sizes. Proportion of participants with polyp size estimates within 20% of the correct measurement and the frequency of incorrect surveillance intervals based on inaccurate size estimates. Polyp size estimates were within 20% of the correct value for 1362 (48%) of 2812 estimates (range 39%-59% for the 4 polyps). Polyp size was overestimated by >20% in 889 estimates (32%, range 15%-49%) and underestimated by >20% in 561 (20%, range 4%-46%) estimates. Incorrect surveillance intervals because of overestimation or underestimation occurred in 272 (10%) of the 2812 estimates (range 5%-14%). Participants in a private practice setting overestimated the size of 3 or of all 4 polyps by >20% more often than participants in an academic setting (difference = 7%; 95% confidence interval, 1%-11%). Survey design with the use of video clips. Substantial overestimation and underestimation of polyp size occurs with visual estimation leading to incorrect surveillance intervals in 10% of cases. Our findings support routine use of measurement tools to improve polyp size estimates. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Wittenborn, John S.; Zhang, Xinzhi; Feagan, Charles W.; Crouse, Wesley L.; Shrestha, Sundar; Kemper, Alex R.; Hoerger, Thomas J.; Saaddine, Jinan B.
2017-01-01
Objective: To estimate the economic burden of vision loss and eye disorders in the United States population younger than 40 years in 2012. Design: Econometric and statistical analysis of survey, commercial claims, and census data. Participants: The United States population younger than 40 years in 2012. Methods: We categorized costs based on consensus guidelines. We estimated medical costs attributable to diagnosed eye-related disorders, undiagnosed vision loss, and medical vision aids using Medical Expenditure Panel Survey and MarketScan data. The prevalence of vision impairment and blindness were estimated using National Health and Nutrition Examination Survey data. We estimated costs from lost productivity using the Survey of Income and Program Participation. We estimated costs of informal care, low vision aids, special education, school screening, government spending, and transfer payments based on published estimates and federal budgets. We estimated quality-adjusted life years (QALYs) lost based on published utility values. Main Outcome Measures: Costs and QALYs lost in 2012. Results: The economic burden of vision loss and eye disorders among the United States population younger than 40 years was $27.5 billion in 2012 (95% confidence interval, $21.5–$37.2 billion), including $5.9 billion for children and $21.6 billion for adults 18 to 39 years of age. Direct costs were $14.5 billion, including $7.3 billion in medical costs for diagnosed disorders, $4.9 billion in refraction correction, $0.5 billion in medical costs for undiagnosed vision loss, and $1.8 billion in other direct costs. Indirect costs were $13 billion, primarily because of $12.2 billion in productivity losses. In addition, vision loss cost society 215 000 QALYs. Conclusions: We found a substantial burden resulting from vision loss and eye disorders in the United States population younger than 40 years, a population excluded from previous studies. Monetizing quality-of-life losses at $50 000 per QALY would add $10.8 billion in additional costs, indicating a total economic burden of $38.2 billion. Relative to previously reported estimates for the population 40 years of age and older, more than one third of the total cost of vision loss and eye disorders may be incurred by persons younger than 40 years. PMID:23631946
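The monetized quality-of-life figure follows directly from the QALY count and the willingness-to-pay threshold. A quick check of the arithmetic using the study's own numbers:

```python
qalys_lost = 215_000   # QALYs lost to vision loss (study figure)
value_per_qaly = 50_000  # US$ per QALY threshold used in the study

monetized_billion = qalys_lost * value_per_qaly / 1e9  # reported as $10.8 billion
total_burden_billion = 27.5 + monetized_billion        # reported as $38.2 billion
```

The exact products are 10.75 and 38.25 billion; the study's $10.8 and $38.2 billion reflect rounding of the underlying estimates.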
Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel
2010-01-30
In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humour using two different regression models: Additive Models (AM) and Support Vector Machines (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give a numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
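PMICALC itself is distributed as R code; as a rough Python illustration of the support-vector-regression idea it describes, the sketch below fits an SVR to synthetic vitreous-chemistry data. The concentrations, coefficients and two-analyte setup are invented for the example and are not the published calibration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic training data (illustrative only, not the PMICALC calibration set):
# vitreous [K+] and [Hx] rise roughly linearly with PMI; add measurement noise.
pmi_hours = rng.uniform(2, 70, 200)
k_conc = 5.0 + 0.17 * pmi_hours + rng.normal(0, 0.5, 200)   # [K+], mmol/L
hx_conc = 2.0 + 0.10 * pmi_hours + rng.normal(0, 0.8, 200)  # [Hx], arbitrary units
X = np.column_stack([k_conc, hx_conc])

# Support-vector regression: predict PMI from the two analytes
model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
model.fit(X, pmi_hours)

# Predict PMI for a new case from its vitreous chemistry
case = np.array([[10.1, 5.0]])   # [K+] = 10.1 mmol/L, [Hx] = 5.0
print(round(float(model.predict(case)[0]), 1))
```

A full reimplementation would also model [U] and the cause of death, and report confidence intervals, as the package does.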
Awad, Susanne F; Chemaitelly, Hiam; Abu-Raddad, Laith J
2018-01-01
To estimate the annual risk of HIV transmission (ϕ) within HIV sero-discordant couples in 23 countries in sub-Saharan Africa (SSA), by utilizing newly available national population-based data and accounting for factors known to potentially affect this estimation. We used a recently developed pair-based mathematical model that accounts for HIV-dynamics temporal variation, sexual risk-behavior heterogeneity, and antiretroviral therapy (ART) scale-up. Estimated country-specific ϕ (in the absence of ART) ranged between 4.2% (95% uncertainty interval (UI): 1.9%-6.3%) and 47.4% (95% UI: 37.2%-69.0%) per person-year (ppy), with a median of 12.4%. ϕ was strongly associated with HIV prevalence, with a Pearson correlation coefficient of 0.92, and was larger in high- versus low-HIV-prevalence countries. ϕ increased by 1.31% (95% confidence interval: 1.00%-1.55%) ppy for every 1% increase in HIV prevalence. ϕ estimates were similar to earlier estimates, and suggested considerable heterogeneity in HIV infectiousness across SSA. This heterogeneity may explain, partly, the differences in epidemic scales. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Excel-Based Tool for Pharmacokinetically Guided Dose Adjustment of Paclitaxel.
Kraff, Stefanie; Lindauer, Andreas; Joerger, Markus; Salamone, Salvatore J; Jaehde, Ulrich
2015-12-01
Neutropenia is a frequent and severe adverse event in patients receiving paclitaxel chemotherapy. The time above a paclitaxel threshold concentration of 0.05 μmol/L (Tc > 0.05 μmol/L) is a strong predictor for paclitaxel-associated neutropenia and has been proposed as a target pharmacokinetic (PK) parameter for paclitaxel therapeutic drug monitoring and dose adaptation. Up to now, individual Tc > 0.05 μmol/L values are estimated based on a published PK model of paclitaxel by using the software NONMEM. Because many clinicians are not familiar with the use of NONMEM, an Excel-based dosing tool was developed to allow calculation of paclitaxel Tc > 0.05 μmol/L and give clinicians an easy-to-use tool. Population PK parameters of paclitaxel were taken from a published PK model. An Alglib VBA code was implemented in Excel 2007 to compute differential equations for the paclitaxel PK model. Maximum a posteriori Bayesian estimates of the PK parameters were determined with the Excel Solver using individual drug concentrations. Concentrations from 250 patients were simulated receiving 1 cycle of paclitaxel chemotherapy. Predictions of paclitaxel Tc > 0.05 μmol/L as calculated by the Excel tool were compared with NONMEM, whereby maximum a posteriori Bayesian estimates were obtained using the POSTHOC function. There was a good concordance and comparable predictive performance between Excel and NONMEM regarding predicted paclitaxel plasma concentrations and Tc > 0.05 μmol/L values. Tc > 0.05 μmol/L had a maximum bias of 3% and an error on precision of <12%. The median relative deviation of the estimated Tc > 0.05 μmol/L values between both programs was 1%. The Excel-based tool can estimate the time above a paclitaxel threshold concentration of 0.05 μmol/L with acceptable accuracy and precision. The presented Excel tool allows reliable calculation of paclitaxel Tc > 0.05 μmol/L and thus allows target concentration intervention to improve the benefit-risk ratio of the drug. 
Its ease of use facilitates therapeutic drug monitoring in routine clinical practice.
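The tool's core step, maximum a posteriori (MAP) Bayesian estimation of individual PK parameters from sparse concentrations, can be sketched outside Excel as well. The toy below uses a deliberately simplified one-compartment bolus model rather than the published multi-compartment paclitaxel model; all priors, doses and observed concentrations are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical population priors and residual error (NOT the published model)
CL_pop, omega = 20.0, 0.3      # typical clearance (L/h); between-subject SD (log scale)
V, sigma = 50.0, 0.2           # volume (L); residual SD (log scale)
dose = 300.0                   # mg bolus, for simplicity

# Observed individual concentrations (mg/L) at sampling times (h)
t_obs = np.array([1.0, 6.0, 24.0])
c_obs = np.array([4.9, 3.6, 1.1])

def neg_log_posterior(log_cl):
    """MAP objective: residual misfit plus shrinkage toward the population prior."""
    cl = np.exp(log_cl)
    c_pred = (dose / V) * np.exp(-cl / V * t_obs)        # 1-compartment bolus
    resid = (np.log(c_obs) - np.log(c_pred)) ** 2 / sigma**2
    prior = (log_cl - np.log(CL_pop)) ** 2 / omega**2
    return 0.5 * (resid.sum() + prior)

res = minimize_scalar(neg_log_posterior, bounds=(np.log(1), np.log(100)),
                      method="bounded")
cl_map = np.exp(res.x)         # individual MAP clearance estimate
```

This is exactly the kind of objective the Excel Solver minimizes in the tool, just with the paclitaxel differential-equation model in place of the closed-form prediction.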
Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu
2014-12-30
Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in a close setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
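For reference, Lin's CCC for complete (uncensored) data is computed from exactly the quantities the abstract lists: the means, variances and covariance of the two assays. A minimal sketch follows; the paper's left-censoring and MICE machinery are out of scope here.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two assays
    (complete data; left-censoring handling is not shown)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                      # population (1/n) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical assays give CCC = 1; a constant shift between them lowers it
a = np.array([1.0, 2.0, 3.0, 4.0])
print(concordance_ccc(a, a))                   # 1.0
print(round(concordance_ccc(a, a + 1.0), 3))   # 0.714
```

Note how the location-shift term (mx - my)² penalizes systematic bias, which the ordinary correlation coefficient ignores.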
Abortion and mental health: quantitative synthesis and analysis of research published 1995-2009.
Coleman, Priscilla K
2011-09-01
Given the methodological limitations of recently published qualitative reviews of abortion and mental health, a quantitative synthesis was deemed necessary to represent more accurately the published literature and to provide clarity to clinicians. To measure the association between abortion and indicators of adverse mental health, with subgroup effects calculated based on comparison groups (no abortion, unintended pregnancy delivered, pregnancy delivered) and particular outcomes. A secondary objective was to calculate population-attributable risk (PAR) statistics for each outcome. After the application of methodologically based selection criteria and extraction rules to minimise bias, the sample comprised 22 studies, 36 measures of effect and 877 181 participants (163 831 experienced an abortion). Random effects pooled odds ratios were computed using adjusted odds ratios from the original studies and PAR statistics were derived from the pooled odds ratios. Women who had undergone an abortion experienced an 81% increased risk of mental health problems, and nearly 10% of the incidence of mental health problems was shown to be attributable to abortion. The strongest subgroup estimates of increased risk occurred when abortion was compared with term pregnancy and when the outcomes pertained to substance use and suicidal behaviour. This review offers the largest quantitative estimate of mental health risks associated with abortion available in the world literature. Calling into question the conclusions from traditional reviews, the results revealed a moderate to highly increased risk of mental health problems after abortion. Consistent with the tenets of evidence-based medicine, this information should inform the delivery of abortion services.
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
F. Thomas Ledig; J.L. Whitmore
1981-01-01
Caribbean pine is an important exotic being bred throughout the tropics, but published estimates are lacking for heritability of economically important traits and the genetic correlations between them. Based on a Puerto Rican trial of 16 open-pollinated parents of var. hondurensis selected in Belize, heritabilities for a number of characteristics...
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1983-01-01
The machine readable catalog is described. The machine version contains the same data as the published table, which includes a second file with the notes. The computerized data files are prepared at the Astronomical Data Center. Detected discrepancies and cluster identifications based on photometric estimators are included.
A synthetic phylogeny of freshwater crayfish: insights for conservation.
Owen, Christopher L; Bracken-Grissom, Heather; Stern, David; Crandall, Keith A
2015-02-19
Phylogenetic systematics is heading for a renaissance where we shift from considering our phylogenetic estimates as a static image in a published paper and taxonomies as a hardcopy checklist to treating both the phylogenetic estimate and dynamic taxonomies as metadata for further analyses. The Open Tree of Life project (opentreeoflife.org) is developing synthesis tools for harnessing the power of phylogenetic inference and robust taxonomy to develop a synthetic tree of life. We capitalize on this approach to estimate a synthesis tree for the freshwater crayfish. The crayfish make an exceptional group to demonstrate the utility of the synthesis approach, as there recently have been a number of phylogenetic studies on the crayfishes along with a robust underlying taxonomic framework. Importantly, the crayfish have also been extensively assessed by an IUCN Red List team and therefore have accurate and up-to-date area and conservation status data available for analysis within a phylogenetic context. Here, we develop a synthesis phylogeny for the world's freshwater crayfish and examine the phylogenetic distribution of threat. We also estimate a molecular phylogeny based on all available GenBank crayfish sequences and use this tree to estimate divergence times and test for divergence rate variation. Finally, we conduct EDGE and HEDGE analyses and identify a number of species of freshwater crayfish of highest priority in conservation efforts. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Plumb, John M.; Moffitt, Christine M.
2015-01-01
Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.
New Methodology for Estimating Fuel Economy by Vehicle Class
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling
2011-01-01
This work was conducted to support the Office of Highway Policy Information in developing a new methodology to generate annual estimates of average fuel efficiency and number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology developed under this study takes a two-step approach. First, the preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumption rates from the vehicle stock models and match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year's Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumption for different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year Highway Statistics. The results generated from this new approach provide a smoother time series for the fuel economies by vehicle class. It also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.
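The reconciliation step can be illustrated with a toy equality-constrained least-squares adjustment: nudge the initial per-mile fuel-consumption rates as little as possible so that class-level fuel use sums to the reported total. All rates and VMT figures below are invented, and the actual model's weighting and constraints are richer.

```python
import numpy as np

# Hypothetical initial fuel-consumption rates (gallons per mile) from the
# vehicle stock models, and VMT (miles) for three vehicle classes.
r0 = np.array([0.040, 0.055, 0.160])       # cars, light trucks, combination trucks
vmt = np.array([2.0e12, 1.1e12, 0.18e12])
fuel_total = 1.70e11                        # reported total gallons (Table MF-21 analogue)

# Minimize ||r - r0||^2 subject to vmt @ r = fuel_total. The paper uses a
# mathematical program; this equality-constrained quadratic special case has
# a closed-form solution via a Lagrange multiplier.
lam = 2 * (vmt @ r0 - fuel_total) / (vmt @ vmt)
r = r0 - lam * vmt / 2

mpg = 1 / r                                 # reconciled fuel economy by class
print(vmt @ r)                              # equals fuel_total (up to rounding)
```

The reconciled rates stay close to the stock-model estimates while the class totals exactly reproduce the published fuel consumption, which is what yields the smoother MPG time series.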
A critical review of the ESCAPE project for estimating long-term health effects of air pollution.
Lipfert, Frederick W
2017-02-01
The European Study of Cohorts for Air Pollution Effects (ESCAPE) is a 13-nation study of long-term health effects of air pollution based on subjects pooled from up to 22 cohorts that were intended for other purposes. Twenty-five papers have been published on associations of various health endpoints with long-term exposures to NOx, NO2, traffic indicators, PM10, PM2.5 and PM constituents including absorbance (elemental carbon). Seven additional ESCAPE papers found moderate correlations (R2=0.3-0.8) between measured air quality and the land-use regression estimates that were used; personal exposures were not considered. I found no project summaries or comparisons across papers; here I collate the 25 ESCAPE findings in the context of other recent European epidemiology studies. Because one ESCAPE cohort contributed about half of the subjects, I consider it and the other 18 cohorts separately to compare their contributions to the combined risk estimates. I emphasize PM2.5 and confirm the published hazard ratio of 1.14 (1.04-1.26) per 10μg/m3 for all-cause mortality. The ESCAPE papers found 16 statistically significant (p<0.05) risks among the 125 pollutant-endpoint combinations; 4 each for PM2.5 and PM10, 1 for PM absorbance, 5 for NO2, and 2 for traffic. No PM constituent was consistently significant. No significant associations were reported for cardiovascular mortality; low birth weight was significant for all pollutants except PM absorbance. Based on associations with PM2.5, I find large differences between all-cause death estimates and the sum of specific-cause death estimates. Scatterplots of PM2.5 mortality risks by cause show no consistency across the 18 cohorts, ostensibly because of the relatively few subjects. Overall, I find the ESCAPE project inconclusive and I question whether the efforts required to estimate exposures for small cohorts were worthwhile.
I suggest that detailed studies of the large cohort using historical exposures and additional cardiovascular risk factors might be productive. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed
2016-07-01
Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machines failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing faults detection based on bearing faults characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and fault indicator has been derived for fault severity measurement. The proposed bearing faults detection approach is assessed using simulated stator currents data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
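As a generic illustration of the MUSIC idea used here (not the paper's multi-dimensional variant), the sketch below forms a sample covariance from snapshots of a simulated current containing a fundamental and a weak "fault" component, projects steering vectors onto the noise subspace, and reads peaks off the pseudospectrum. All frequencies and amplitudes are invented.

```python
import numpy as np

fs = 1000.0                      # sampling frequency (Hz)
n = 1000
t = np.arange(n) / fs
rng = np.random.default_rng(1)

# Simulated stator-current-like signal: 50 Hz fundamental plus a weak
# fault-related component at 90 Hz (values illustrative only).
x = np.sin(2*np.pi*50*t) + 0.1*np.sin(2*np.pi*90*t) + 0.05*rng.normal(size=n)

# Sample covariance from overlapping snapshots of length m
m = 40
snaps = np.array([x[i:i+m] for i in range(n - m)])
R = snaps.T @ snaps / len(snaps)

# Noise subspace: two real sinusoids span a 4-dimensional signal subspace,
# so drop the 4 largest-eigenvalue eigenvectors (eigh sorts ascending).
w, v = np.linalg.eigh(R)
En = v[:, :-4]

freqs = np.linspace(1, 200, 400)
k = np.arange(m)
a = np.exp(-2j*np.pi*np.outer(freqs/fs, k))       # steering vectors
pseudo = 1.0 / np.linalg.norm(a @ En, axis=1)**2  # MUSIC pseudospectrum

print(freqs[np.argmax(pseudo)])  # strongest peak, near 50 or 90 Hz
```

Peaks appear where a steering vector is nearly orthogonal to the noise subspace, which is what gives MUSIC its resolution advantage over the periodogram for closely spaced fault sidebands.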
Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun
2018-02-01
Environmental risks of organic chemicals have been greatly determined by their persistence, bioaccumulation, and toxicity (PBT) and physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life or degradability is crucial for the persistence evaluation of chemicals. Quantitative structure activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and highly used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results and estimated data, as well as the two model-based results were compared, based on the water solubility, Kow, Koc, BCF and half-life. Environmental risk assessment of the selected compounds was achieved by combining experimental data and estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in measuring the physicochemical properties and degradation half-lives for water, soil, and sediment. However, the half-lives between the experimental and the estimated results were still not absolutely consistent. This suggests deficiencies of the prediction models in some ways, and the necessity to combine the experimental data and predicted results for the evaluation of environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.
Umscheid, Craig A; Mitchell, Matthew D; Doshi, Jalpa A; Agarwal, Rajender; Williams, Kendal; Brennan, Patrick J
2011-02-01
To estimate the proportion of healthcare-associated infections (HAIs) in US hospitals that are "reasonably preventable," along with their related mortality and costs. To estimate preventability of catheter-associated bloodstream infections (CABSIs), catheter-associated urinary tract infections (CAUTIs), surgical site infections (SSIs), and ventilator-associated pneumonia (VAP), we used a federally sponsored systematic review of interventions to reduce HAIs. Ranges of preventability included the lowest and highest risk reductions reported by US studies of "moderate" to "good" quality published in the last 10 years. We used the most recently published national data to determine the annual incidence of HAIs and associated mortality. To estimate incremental cost of HAIs, we performed a systematic review, which included costs from studies in general US patient populations. To calculate ranges for the annual number of preventable infections and deaths and annual costs, we multiplied our infection, mortality, and cost figures with our ranges of preventability for each HAI. As many as 65%-70% of cases of CABSI and CAUTI and 55% of cases of VAP and SSI may be preventable with current evidence-based strategies. CAUTI may be the most preventable HAI. CABSI has the highest number of preventable deaths, followed by VAP. CABSI also has the highest cost impact; costs due to preventable cases of VAP, CAUTI, and SSI are likely less. Our findings suggest that 100% prevention of HAIs may not be attainable with current evidence-based prevention strategies; however, comprehensive implementation of such strategies could prevent hundreds of thousands of HAIs and save tens of thousands of lives and billions of dollars.
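The burden arithmetic described, multiplying incidence, mortality and cost figures by a preventability range, reduces to a few lines. The numbers below are placeholders, not the paper's inputs.

```python
# Placeholder inputs (NOT the paper's figures): annual cases, attributable
# deaths, incremental cost per case (US$), and a preventability range per HAI.
hais = {
    "CABSI": {"cases": 92_000, "deaths": 23_000, "cost": 29_000, "prevent": (0.65, 0.70)},
    "VAP":   {"cases": 52_000, "deaths": 7_500,  "cost": 21_000, "prevent": (0.50, 0.55)},
}

for name, d in hais.items():
    lo, hi = d["prevent"]
    prevented_cases = (d["cases"] * lo, d["cases"] * hi)
    prevented_deaths = (d["deaths"] * lo, d["deaths"] * hi)
    avoided_cost = (prevented_cases[0] * d["cost"], prevented_cases[1] * d["cost"])
    print(name, prevented_cases, prevented_deaths, avoided_cost)
```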
Mogasale, Vittal; Maskery, Brian; Ochiai, R Leon; Lee, Jung Seok; Mogasale, Vijayalaxmi V; Ramani, Enusa; Kim, Young Eun; Park, Jin Kyung; Wierzba, Thomas F
2014-10-01
No access to safe water is an important risk factor for typhoid fever, yet risk-level heterogeneity is unaccounted for in previous global burden estimates. Since WHO has recommended risk-based use of typhoid polysaccharide vaccine, we revisited the burden of typhoid fever in low-income and middle-income countries (LMICs) after adjusting for water-related risk. We estimated the typhoid disease burden from studies done in LMICs based on blood-culture-confirmed incidence rates applied to the 2010 population, after correcting for operational issues related to surveillance, limitations of diagnostic tests, and water-related risk. We derived incidence estimates, correction factors, and mortality estimates from systematic literature reviews. We did scenario analyses for risk factors, diagnostic sensitivity, and case fatality rates, accounting for the uncertainty in these estimates and we compared them with previous disease burden estimates. The estimated number of typhoid fever cases in LMICs in 2010 after adjusting for water-related risk was 11·9 million (95% CI 9·9-14·7) cases with 129 000 (75 000-208 000) deaths. By comparison, the estimated risk-unadjusted burden was 20·6 million (17·5-24·2) cases and 223 000 (131 000-344 000) deaths. Scenario analyses indicated that the risk-factor adjustment and updated diagnostic test correction factor derived from systematic literature reviews were the drivers of differences between the current estimate and past estimates. The risk-adjusted typhoid fever burden estimate was more conservative than previous estimates. However, by distinguishing the risk differences, it will allow assessment of the effect at the population level and will facilitate cost-effectiveness calculations for risk-based vaccination strategies for future typhoid conjugate vaccine. Copyright © 2014 Mogasale et al. Open Access article distributed under the terms of CC BY-NC-SA. All rights reserved.
Using exposure prediction tools to link exposure and ...
A few different exposure prediction tools were evaluated for use in the new in vitro-based safety assessment paradigm using di-2-ethylhexyl phthalate (DEHP) and dibutyl phthalate (DnBP) as case compounds. Daily intake of each phthalate was estimated using both high-throughput (HT) prediction models such as the HT Stochastic Human Exposure and Dose Simulation model (SHEDS-HT) and the ExpoCast heuristic model and non-HT approaches based on chemical specific exposure estimations in the environment in conjunction with human exposure factors. Reverse dosimetry was performed using a published physiologically based pharmacokinetic (PBPK) model for phthalates and their metabolites to provide a comparison point. Daily intakes of DEHP and DnBP were estimated based on the urinary concentrations of their respective monoesters, mono-2-ethylhexyl phthalate (MEHP) and monobutyl phthalate (MnBP), reported in NHANES (2011–2012). The PBPK-reverse dosimetry estimated daily intakes at the 50th and 95th percentiles were 0.68 and 9.58 μg/kg/d and 0.089 and 0.68 μg/kg/d for DEHP and DnBP, respectively. For DEHP, the estimated median from PBPK-reverse dosimetry was about 3.6-fold higher than the ExpoCast estimate (0.68 and 0.18 μg/kg/d, respectively). For DnBP, the estimated median was similar to that predicted by ExpoCast (0.089 and 0.094 μg/kg/d, respectively). The SHEDS-HT prediction of DnBP intake from consumer product pathways alone was higher at 0.67 μg/kg/d. The PBPK-reve
Updated Global Burden of Cholera in Endemic Countries
Ali, Mohammad; Nelson, Allyson R.; Lopez, Anna Lena; Sack, David A.
2015-01-01
Background The global burden of cholera is largely unknown because the majority of cases are not reported. The low reporting can be attributed to limited capacity of epidemiological surveillance and laboratories, as well as social, political, and economic disincentives for reporting. We previously estimated 2.8 million cases and 91,000 deaths annually due to cholera in 51 endemic countries. A major limitation in our previous estimate was that the endemic and non-endemic countries were defined based on the countries’ reported cholera cases. We overcame the limitation with the use of a spatial modelling technique in defining endemic countries, and accordingly updated the estimates of the global burden of cholera. Methods/Principal Findings Countries were classified as cholera endemic, cholera non-endemic, or cholera-free based on whether a spatial regression model predicted an incidence rate over a certain threshold in at least three of five years (2008-2012). The at-risk populations were calculated for each country based on the percent of the country without sustainable access to improved sanitation facilities. Incidence rates from population-based published studies were used to calculate the estimated annual number of cases in endemic countries. The number of annual cholera deaths was calculated using the inverse-variance-weighted average of case-fatality rates (CFRs) from literature-based CFR estimates. We found that approximately 1.3 billion people are at risk for cholera in endemic countries. An estimated 2.86 million cholera cases (uncertainty range: 1.3m-4.0m) occur annually in endemic countries. Among these cases, there are an estimated 95,000 deaths (uncertainty range: 21,000-143,000). Conclusion/Significance The global burden of cholera remains high. Sub-Saharan Africa accounts for the majority of this burden. Our findings can inform programmatic decision-making for cholera control. PMID:26043000
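The two calculations named in the methods, an inverse-variance-weighted pooled CFR and cases as incidence times the at-risk population, can be sketched as follows. The study-level CFRs, their variances and the incidence rate are illustrative inventions, though the 1.3 billion at-risk figure comes from the abstract.

```python
import numpy as np

def inverse_variance_weighted_mean(estimates, variances):
    """Pool study-level estimates with weights 1/variance (fixed-effect style)."""
    w = 1.0 / np.asarray(variances, float)
    est = np.asarray(estimates, float)
    return float((w * est).sum() / w.sum())

# Hypothetical literature-based case-fatality rates (%) and their variances
cfrs = [1.0, 2.5, 4.0]
variances = [0.04, 0.25, 1.00]
pooled = inverse_variance_weighted_mean(cfrs, variances)   # 1.3

# Cases = incidence rate x at-risk population; deaths = cases x pooled CFR
at_risk = 1.3e9            # at-risk population, from the abstract
incidence = 2.2e-3         # illustrative rate, per person-year
cases = at_risk * incidence
deaths = cases * pooled / 100
```

The precise studies get the largest weights, so a single small, noisy CFR study cannot dominate the pooled death estimate.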
Zhang, Guomin; Sandanayake, Malindu; Setunge, Sujeeva; Li, Chunqing; Fang, Jun
2017-02-01
Emissions from equipment usage and transportation at the construction stage are classified as the direct emissions which include both greenhouse gas (GHG) and non-GHG emissions due to partial combustion of fuel. Unavailability of a reliable and complete inventory restricts an accurate emission evaluation on construction work. The study attempts to review emission factor standards readily available worldwide for estimating emissions from construction equipment. Emission factors published by United States Environmental Protection Agency (US EPA), Australian National Greenhouse Accounts (AUS NGA), Intergovernmental Panel on Climate Change (IPCC) and European Environmental Agency (EEA) are critically reviewed to identify their strengths and weaknesses. A selection process based on the availability and applicability is then developed to help identify the most suitable emission factor standards for estimating emissions from construction equipment in the Australian context. A case study indicates that a fuel based emission factor is more suitable for GHG emission estimation and a time based emission factor is more appropriate for estimation of non-GHG emissions. However, the selection of emission factor standards also depends on factors like the place of analysis (country of origin), data availability and the scope of analysis. Therefore, suitable modifications and assumptions should be incorporated in order to represent these factors. Copyright © 2016 Elsevier Ltd. All rights reserved.
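The case-study distinction, fuel-based factors for GHG versus time-based factors for non-GHG emissions, amounts to multiplying activity data by the matching factor. Both factors below are invented placeholders, not values from the reviewed standards.

```python
# Hypothetical emission factors for one piece of construction equipment
# (values illustrative only, not from US EPA / AUS NGA / IPCC / EEA):
fuel_based_ghg_ef = 2.7     # kg CO2-e per litre of diesel burned
time_based_nox_ef = 0.18    # kg NOx per hour of engine operation

fuel_used_l = 120.0         # litres consumed during the activity
hours_run = 8.0             # operating hours

ghg = fuel_used_l * fuel_based_ghg_ef    # GHG scales with fuel burned
nox = hours_run * time_based_nox_ef      # non-GHG tied to engine run time
print(ghg, nox)
```

This is why data availability drives the choice of standard: a fuel-based factor needs fuel records, while a time-based factor needs equipment-hour logs.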
Kalman-variant estimators for state of charge in lithium-sulfur batteries
NASA Astrophysics Data System (ADS)
Propp, Karsten; Auger, Daniel J.; Fotouhi, Abbas; Longo, Stefano; Knap, Vaclav
2017-03-01
Lithium-sulfur batteries are now commercially available, offering high specific energy density, low production costs and high safety. However, there is no commercially-available battery management system for them, and there are no published methods for determining state of charge in situ. This paper describes a study to address this gap. The properties and behaviours of lithium-sulfur are briefly introduced, and the applicability of 'standard' lithium-ion state-of-charge estimation methods is explored. Open-circuit voltage methods and 'Coulomb counting' are found to have a poor fit for lithium-sulfur, and model-based methods, particularly recursive Bayesian filters, are identified as showing strong promise. Three recursive Bayesian filters are implemented: an extended Kalman filter (EKF), an unscented Kalman filter (UKF) and a particle filter (PF). These estimators are tested through practical experimentation, considering both a pulse-discharge test and a test based on the New European Driving Cycle (NEDC). Experimentation is carried out at a constant temperature, mirroring the environment expected in the authors' target automotive application. It is shown that the estimators, which are based on a relatively simple equivalent-circuit-network model, can deliver useful results. Of the three estimators implemented, the unscented Kalman filter gives the most robust and accurate performance, with an acceptable computational effort.
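As an illustration of the model-based approach, here is a minimal scalar extended Kalman filter for state of charge built on a toy equivalent-circuit model (Coulomb-counting process model plus a voltage measurement update). The OCV curve, resistance and noise values are invented for the sketch; real lithium-sulfur OCV curves are far flatter, which is exactly the practical difficulty the paper addresses.

```python
CAPACITY = 3.4 * 3600.0   # cell capacity in coulombs (hypothetical)
R_INT = 0.05              # internal resistance, ohm (hypothetical)

def ocv(soc):
    """Toy open-circuit-voltage curve (linear; real Li-S curves are not)."""
    return 2.1 + 0.3 * soc

def ekf_step(soc, P, current, v_meas, dt, q=1e-7, r=1e-4):
    """One EKF iteration: Coulomb-counting predict, voltage-based update."""
    # predict
    soc_pred = soc - current * dt / CAPACITY
    P_pred = P + q
    # update (H is the OCV slope, i.e. the measurement Jacobian)
    H = 0.3
    K = P_pred * H / (H * P_pred * H + r)
    innovation = v_meas - (ocv(soc_pred) - current * R_INT)
    soc_new = soc_pred + K * innovation
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new

# demo: rest condition, true SoC 0.8, deliberately wrong initial guess 0.5
soc_est, P = 0.5, 0.1
for _ in range(50):
    soc_est, P = ekf_step(soc_est, P, 0.0, ocv(0.8), 1.0)
```

The UKF and particle filter replace the Jacobian step with sigma points or weighted samples, but the predict/update structure is the same.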
Kahn, James G; Muraguri, Nicholas; Harris, Brian; Lugada, Eric; Clasen, Thomas; Grabowsky, Mark; Mermin, Jonathan; Shariff, Shahnaaz
2012-01-01
Efficiently delivered interventions to reduce HIV, malaria, and diarrhea are essential to accelerating global health efforts. A 2008 community integrated prevention campaign in Western Province, Kenya, reached 47,000 individuals over 7 days, providing HIV testing and counseling, water filters, insecticide-treated bed nets, condoms, and for HIV-infected individuals cotrimoxazole prophylaxis and referral for ongoing care. We modeled the potential cost-effectiveness of a scaled-up integrated prevention campaign. We estimated averted deaths and disability-adjusted life years (DALYs) based on published data on baseline mortality and morbidity and on the protective effect of interventions, including antiretroviral therapy. We incorporate a previously estimated scaled-up campaign cost. We used published costs of medical care to estimate savings from averted illness (for all three diseases) and the added costs of initiating treatment earlier in the course of HIV disease. Per 1000 participants, projected reductions in cases of diarrhea, malaria, and HIV infection avert an estimated 16.3 deaths, 359 DALYs and $85,113 in medical care costs. Earlier care for HIV-infected persons adds an estimated 82 DALYs averted (to a total of 442), at a cost of $37,097 (reducing total averted costs to $48,015). Accounting for the estimated campaign cost of $32,000, the campaign saves an estimated $16,015 per 1000 participants. In multivariate sensitivity analyses, 83% of simulations result in net savings, and 93% in a cost per DALY averted of less than $20. A mass, rapidly implemented campaign for HIV testing, safe water, and malaria control appears economically attractive.
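The abstract's headline numbers per 1000 participants follow from simple arithmetic; the one-unit differences against the published figures come from rounding in the quoted inputs.

```python
averted_medical_costs = 85113   # $ saved across diarrhea, malaria, and HIV
earlier_art_cost = 37097        # $ added by earlier HIV treatment initiation
campaign_cost = 32000           # $ previously estimated campaign cost

net_averted_costs = averted_medical_costs - earlier_art_cost  # abstract: $48,015
net_savings = net_averted_costs - campaign_cost               # abstract: $16,015

dalys_averted = 359 + 82        # abstract reports a rounded total of 442
```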
Sprague, Lori A.; Gronberg, Jo Ann M.
2013-01-01
Anthropogenic inputs of nitrogen and phosphorus to each county in the conterminous United States and to the watersheds of 495 surface-water sites studied as part of the U.S. Geological Survey National Water-Quality Assessment Program were quantified for the years 1992, 1997, and 2002. Estimates of inputs of nitrogen and phosphorus from biological fixation by crops (for nitrogen only), human consumption, crop production for human consumption, animal production for human consumption, animal consumption, and crop production for animal consumption for each county are provided in a tabular dataset. These county-level estimates were allocated to the watersheds of the surface-water sites to estimate watershed-level inputs from the same sources; these estimates also are provided in a tabular dataset, together with calculated estimates of net import of food and net import of feed and previously published estimates of inputs from atmospheric deposition, fertilizer, and recoverable manure. The previously published inputs are provided for each watershed so that final estimates of total anthropogenic nutrient inputs could be calculated. Estimates of total anthropogenic inputs are presented together with previously published estimates of riverine loads of total nitrogen and total phosphorus for reference.
MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.
2000-01-01
Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. 
Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.
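The central-tendency step can be illustrated with a geometric mean, a common way of pooling guideline values that span orders of magnitude; the abstract does not restate the exact pooling rule, so treat this as an assumption.

```python
import math

def consensus_sec(sqg_values):
    """Consensus sediment effect concentration for one narrative category
    (threshold, midrange, or extreme) as the geometric mean of the
    published SQGs assigned to that category."""
    return math.exp(sum(math.log(v) for v in sqg_values) / len(sqg_values))

# hypothetical tPCB guideline values (ug/kg dry weight) in one category
tec = consensus_sec([22.0, 34.0, 60.0, 35.0])
```

The geometric mean keeps a single very high or very low guideline from dominating the consensus value.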
Lipp, Ilona; Murphy, Kevin; Caseras, Xavier; Wise, Richard G
2015-06-01
FMRI BOLD responses to changes in neural activity are influenced by the reactivity of the vasculature. By complementing a task-related BOLD acquisition with a vascular reactivity measure obtained through breath-holding or hypercapnia, this unwanted variance can be statistically reduced in the BOLD responses of interest. Recently, it has been suggested that vascular reactivity can also be estimated using a resting state scan. This study aimed to compare three breath-hold based analysis approaches (block design, sine-cosine regressor and CO2 regressor) and a resting state approach (CO2 regressor) to measure vascular reactivity. We tested BOLD variance explained by the model and repeatability of the measures. Fifteen healthy participants underwent a breath-hold task and a resting state scan with end-tidal CO2 being recorded during both. Vascular reactivity was defined as CO2-related BOLD percent signal change/mmHg change in CO2. Maps and regional vascular reactivity estimates showed high repeatability when the breath-hold task was used. Repeatability and variance explained by the CO2 trace regressor were lower for the resting state data based approach, which resulted in highly variable measures of vascular reactivity. We conclude that breath-hold based vascular reactivity estimations are more repeatable than resting-based estimates, and that there are limitations with replacing breath-hold scans by resting state scans for vascular reactivity assessment. Copyright © 2015. Published by Elsevier Inc.
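The reactivity measure itself (BOLD percent signal change per mmHg of CO2) is a regression slope. A minimal sketch with made-up data points:

```python
def cvr_slope(petco2_mmhg, bold_pct_change):
    """Vascular reactivity as the OLS slope of BOLD percent signal change
    on end-tidal CO2 (% per mmHg)."""
    n = len(petco2_mmhg)
    mx = sum(petco2_mmhg) / n
    my = sum(bold_pct_change) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(petco2_mmhg, bold_pct_change))
    sxx = sum((x - mx) ** 2 for x in petco2_mmhg)
    return sxy / sxx

# hypothetical breath-hold samples: end-tidal CO2 vs. BOLD % change
slope = cvr_slope([38.0, 42.0, 46.0, 50.0], [0.0, 1.0, 2.1, 3.0])
```

In the study the CO2 regressor enters a voxel-wise GLM, but the per-voxel reactivity estimate reduces to this kind of slope.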
Estimating risk reduction required to break even in a health promotion program.
Ozminkowski, Ronald J; Goetzel, Ron Z; Santoro, Jan; Saenz, Betty-Jo; Eley, Christine; Gorsky, Bob
2004-01-01
To illustrate a formula to estimate the amount of risk reduction required to break even on a corporate health promotion program. A case study design was implemented. Base year (2001) health risk and medical expenditure data from the company, along with published information on the relationships between employee demographics, health risks, and medical expenditures, were used to forecast demographics, risks, and expenditures for 2002 through 2011 and estimate the required amount of risk reduction. Motorola. 52,124 domestic employees. Demographics included age, gender, race, and job type. Health risks for 2001 were measured via health risk appraisal. Risks were noted as either high or low and related to exercise/eating habits, body weight, blood pressure, blood sugar levels, cholesterol levels, depression, stress, smoking/drinking habits, and seat belt use. Medical claims for 2001 were used to calculate medical expenditures per employee. Assuming a $282 per-employee program cost, Motorola employees would need to reduce their lifestyle-related health risks by 1.08% to 1.42% per year to break even on health promotion programming, depending upon the discount rate. Higher or lower program investments would change the risk reduction percentages. Employers can use information from published studies, along with their own data, to estimate the amount of risk reduction required to break even on their health promotion programs.
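A stripped-down version of such a break-even calculation is sketched below. All inputs (the excess cost per health risk, horizon, and discount rate) are hypothetical; the study's actual formula works from Motorola's own risk and expenditure data and published risk-cost relationships.

```python
def npv_savings(annual_risk_cut, excess_cost_per_risk=350.0,
                years=10, discount=0.03):
    """Present value of savings if each employee sheds `annual_risk_cut`
    health risks per year, each risk carrying an annual excess cost."""
    total, cumulative_cut = 0.0, 0.0
    for t in range(1, years + 1):
        cumulative_cut += annual_risk_cut
        total += cumulative_cut * excess_cost_per_risk / (1 + discount) ** t
    return total

def break_even_rate(program_cost_per_year=282.0, years=10, discount=0.03):
    """Bisect for the annual risk reduction whose savings offset the
    discounted stream of program costs."""
    cost = sum(program_cost_per_year / (1 + discount) ** t
               for t in range(1, years + 1))
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if npv_savings(mid, years=years, discount=discount) < cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

rate = break_even_rate()
```

As the abstract notes, the break-even rate moves with the discount rate and the assumed program cost.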
Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA
Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in 30% of cases only with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contour. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
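The fusion step, in its simplest form, is a voxel-wise vote over the propagated atlas labels. The sketch below uses plain majority voting; the paper's actual fusion rule may be more sophisticated (e.g. weighted by registration quality).

```python
from collections import Counter

def majority_vote(label_maps):
    """Fuse several propagated atlas segmentations into a consensus map.

    label_maps: list of equal-length label sequences, one per atlas.
    Returns the most frequent label at each voxel."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*label_maps)]

# three toy atlases voting over three voxels
fused = majority_vote([[1, 0, 2], [1, 1, 2], [0, 1, 2]])
```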
Moulard, Odile; Mehta, Jyotsna; Fryzek, Jon; Olivares, Robert; Iqbal, Usman; Mesa, Ruben A
2014-04-01
Primary myelofibrosis (PMF), essential thrombocythemia (ET), and polycythemia vera (PV) are BCR ABL-negative myeloproliferative neoplasms (MPN). Published epidemiology data are scarce, and multiple sources are needed to assess the disease burden. We assembled the most recent information available on the incidence and prevalence of myelofibrosis (MF), ET, and PV by conducting a structured and exhaustive literature review of the published peer-reviewed literature in EMBASE and by reviewing online documentation from disease registries and relevant health registries in European countries. The search was restricted to human studies written in English or French and published between January 1, 2000, and December 6, 2012. Eleven articles identified from EMBASE, three online hematology or oncology registries, and two Web-based databases or reports were used to summarize epidemiological estimates for MF, PV, and ET. The incidence rate of MF ranged from 0.1 per 100,000 per year to 1 per 100,000 per year. Among the various registries, the incidence of PV ranged from 0.4 per 100,000 per year to 2.8 per 100,000 per year, while the literature estimated the range of PV incidence to be 0.68 per 100,000 to 2.6 per 100,000 per year. The estimated incidence of ET was between 0.38 per 100,000 per year and 1.7 per 100,000 per year. While a few studies reported on the MPNs' prevalences, it is difficult to compare them as various types of prevalence were calculated (point prevalence vs. period prevalence) and standardization was made according to different populations (e.g., the world population and the European population). There is a wide variation in both prevalence and incidence estimates observed across European data sources. 
Carefully designed studies, with standardized definitions of MPNs and complete ascertainment of patients including both primary and secondary MFs, should be conducted so that estimates of the population expected to receive novel treatments for these neoplasms are better understood, to assist public health planning and to provide valuable information about the burden of illness to policy makers, funding agencies, resource planners, healthcare insurers, and pharmaceutical manufacturers. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Costello, Sadie; Picciotto, Sally; Rehkopf, David H; Eisen, Ellen A
2015-02-01
To examine gender and racial disparities in ischaemic heart disease (IHD) mortality related to metalworking fluid exposures and in the healthy worker survivor effect. A cohort of white and black men and women autoworkers in the USA was followed from 1941 to 1995 with quantitative exposure to respirable particulate matter from water-based metalworking fluids. Separate analyses used proportional hazards models and g-estimation. The HR for IHD among black men was 3.29 (95% CI 1.49 to 7.31) in the highest category of cumulative synthetic fluid exposure. The HR for IHD among white women exposed to soluble fluid reached 2.44 (95% CI 0.96 to 6.22). However, no increased risk was observed among white men until we corrected for the healthy worker survivor effect. Results from g-estimation indicate that if white male cases exposed to soluble or synthetic fluid had been unexposed to that fluid type, then 1.59 and 1.20 years of life would have been saved on average, respectively. We leveraged the strengths of two different analytic approaches to examine the IHD risks of metalworking fluids. All workers may have the same aetiological risk; however, black and female workers may experience more IHD from water-based metalworking fluid exposure because of a steeper exposure-response or weaker healthy worker survivor effect. Published by the BMJ Publishing Group Limited.
Stoklosa, Michal; Ross, Hana
2014-05-01
To compare two different methods for estimating the size of the illicit cigarette market with each other and to contrast the estimates obtained by these two methods with the results of an industry-commissioned study. We used two observational methods: collection of data from packs in smokers' personal possession, and collection of data from packs discarded on streets. The data were obtained in Warsaw, Poland in September 2011 and October 2011. We used tests of independence to compare the results based on the two methods, and to contrast those with the estimate from the industry-commissioned discarded pack collection conducted in September 2011. We found that the proportions of cigarette packs classified as not intended for the Polish market estimated by our two methods were not statistically different. These estimates were 14.6% (95% CI 10.8% to 19.4%) using the survey data (N=400) and 15.6% (95% CI 13.2% to 18.4%) using the discarded pack data (N=754). The industry estimate (22.9%) was higher by nearly a half compared with our estimates, and this difference is statistically significant. Our findings are consistent with previous evidence of the tobacco industry exaggerating the scope of illicit trade and with the general pattern of the industry manipulating evidence to mislead the debate on tobacco control policy in many countries. Collaboration between governments and the tobacco industry to estimate tobacco tax avoidance and evasion is likely to produce upward-biased estimates of illicit cigarette trade. If governments are presented with industry estimates, they should strictly require a disclosure of all methodological details and data used in generating these estimates, and should seek advice from independent experts. Published by the BMJ Publishing Group Limited.
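The comparisons reported above are standard two-proportion tests. A sketch using the abstract's figures; the industry study's sample size is not given, so 754 is used below purely as a placeholder.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for H0: equal proportions, using the pooled SE."""
    x1, x2 = round(p1 * n1), round(p2 * n2)
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z_methods = two_proportion_z(0.146, 400, 0.156, 754)   # survey vs discarded packs
z_industry = two_proportion_z(0.229, 754, 0.156, 754)  # industry vs discarded packs
```

|z| below 1.96 is consistent with the two field methods agreeing, while the industry comparison exceeds that threshold, matching the abstract's conclusions.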
Costs And Savings Associated With Community Water Fluoridation In The United States.
O'Connell, Joan; Rockell, Jennifer; Ouellet, Judith; Tomar, Scott L; Maas, William
2016-12-01
The most comprehensive study of US community water fluoridation program benefits and costs was published in 2001. This study provides updated estimates using an economic model that includes recent data on program costs, dental caries increments, and dental treatments. In 2013 more than 211 million people had access to fluoridated water through community water systems serving 1,000 or more people. Savings associated with dental caries averted in 2013 as a result of fluoridation were estimated to be $32.19 per capita for this population. Based on 2013 estimated costs ($324 million), net savings (savings minus costs) from fluoridation systems were estimated to be $6,469 million and the estimated return on investment, 20.0. While communities should assess their specific costs for continuing or implementing a fluoridation program, these updated findings indicate that program savings are likely to exceed costs. Project HOPE—The People-to-People Health Foundation, Inc.
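The headline figures reduce to per-capita arithmetic; the small discrepancies reflect the abstract's rounding of the population above 211 million.

```python
population_millions = 211.0      # people served by fluoridated systems, 2013
savings_per_capita = 32.19       # $ in averted caries treatment per person
program_cost_millions = 324.0    # estimated 2013 program cost

gross_savings_millions = population_millions * savings_per_capita
net_savings_millions = gross_savings_millions - program_cost_millions  # abstract: 6,469
roi = net_savings_millions / program_cost_millions                     # abstract: 20.0
```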
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Bayesian estimation of the discrete coefficient of determination.
Chen, Ting; Braga-Neto, Ulisses M
2016-12-01
The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
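For context, the CoD compares the error of the best constant predictor of the target with the error of the best predictor given the predictor variables. A plug-in (resubstitution) estimator, one of the classical nonparametric approaches the paper contrasts with its Bayesian estimators, can be sketched as:

```python
from collections import Counter, defaultdict

def discrete_cod(pairs):
    """Plug-in estimate of CoD = (e0 - e) / e0 from (x, y) samples,
    where e0 is the error of the best constant predictor of y and
    e is the error of the best predictor of y given x."""
    n = len(pairs)
    y_counts = Counter(y for _, y in pairs)
    e0 = 1.0 - max(y_counts.values()) / n
    by_x = defaultdict(Counter)
    for x, y in pairs:
        by_x[x][y] += 1
    e = sum(sum(c.values()) - max(c.values()) for c in by_x.values()) / n
    return (e0 - e) / e0 if e0 > 0 else 0.0
```

The Bayesian estimators in the paper instead average this quantity over a posterior on the joint distribution, which mitigates the optimism of resubstitution on small samples.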
McGowan, C.P.; Millspaugh, J.J.; Ryan, M.R.; Kruse, C.D.; Pavelka, G.
2009-01-01
Estimating reproductive success for birds with precocial young can be difficult because chicks leave nests soon after hatching and individuals or broods can be difficult to track. Researchers often turn to estimating survival during the prefledging period and, though effective, mark-recapture based approaches are not always feasible due to cost, time, and animal welfare concerns. Using a threatened population of Piping Plovers (Charadrius melodus) that breeds along the Missouri River, we present an approach for estimating chick survival during the prefledging period using long-term (1993-2005), count-based, age-class data. We used a modified catch-curve analysis, and data collected during three 5-day sampling periods near the middle of the breeding season. The approach has several ecological and statistical assumptions and our analyses were designed to minimize the probability of violating those assumptions. For example, limiting the sampling periods to only 5 days gave reasonable assurance that population size was stable during the sampling period. Annual daily survival estimates ranged from 0.825 (SD = 0.03) to 0.931 (0.02) depending on year and sampling period, with these estimates assuming constant survival during the prefledging period and no change in the age structure of the population. The average probability of survival to fledging ranged from 0.126 to 0.188. Our results are similar to other published estimates for this species in similar habitats. This method of estimating chick survival may be useful for a variety of precocial bird species when mark-recapture methods are not feasible and only count-based age class data are available. © 2009 Association of Field Ornithologists.
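Under the stated assumptions (stable population, constant survival), counts per daily age class decay geometrically, so daily survival can be read off a regression of log counts on age. A minimal sketch with invented counts and a hypothetical prefledging period:

```python
import math

def catch_curve_daily_survival(age_class_counts):
    """Daily survival from counts per one-day age class: the OLS slope
    of log(count) on age is log(daily survival)."""
    ages = range(len(age_class_counts))
    logs = [math.log(c) for c in age_class_counts]
    n = len(logs)
    mean_a = sum(ages) / n
    mean_l = sum(logs) / n
    slope = (sum((a - mean_a) * (l - mean_l) for a, l in zip(ages, logs))
             / sum((a - mean_a) ** 2 for a in ages))
    return math.exp(slope)

s_daily = catch_curve_daily_survival([100, 90, 81, 73, 66])  # hypothetical counts
p_fledge = s_daily ** 25   # survival over a hypothetical 25-day prefledging period
```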
Air pollution as a risk factor in health impact assessments of a travel mode shift towards cycling
Raza, Wasif; Forsberg, Bertil; Johansson, Christer; Sommar, Johan Nilsson
2018-01-01
Background: Promotion of active commuting provides substantial health and environmental benefits by influencing air pollution, physical activity, accidents, and noise. However, studies evaluating intervention and policies on a mode shift from motorized transport to cycling have estimated health impacts with varying validity and precision. Objective: To review and discuss the estimation of air pollution exposure and its impacts in health impact assessment studies of a shift in transport from cars to bicycles in order to guide future assessments. Methods: A systematic database search of PubMed was done primarily for articles published from January 2000 to May 2016 according to PRISMA guidelines. Results: We identified 18 studies of health impact assessment of change in transport mode. Most studies investigated future hypothetical scenarios of increased cycling. The impact on the general population was estimated using a comparative risk assessment approach in the majority of these studies, whereas some used previously published cost estimates. Air pollution exposure during cycling was estimated based on the ventilation rate, the pollutant concentration, and the trip duration. Most studies employed exposure-response functions from studies comparing background levels of fine particles between cities to estimate the health impacts of local traffic emissions. The effect of air pollution associated with increased cycling contributed small health benefits for the general population, and also only slightly increased risks associated with fine particle exposure among those who shifted to cycling. However, studies calculating health impacts based on exposure-response functions for ozone, black carbon or nitrogen oxides found larger effects attributed to changes in air pollution exposure. 
Conclusion: A large discrepancy between studies was observed due to different health impact assessment approaches, different assumptions for calculation of inhaled dose and different selection of dose-response functions. This kind of assessments would improve from more holistic approaches using more specific exposure-response functions. PMID:29400262
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press.
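The hybrid idea (a fast global pass refined by local search) can be sketched on a toy one-gene induction model. Everything below (the model, the search box, the step schedule) is invented for illustration and is not taken from the paper.

```python
import math
import random

def model(decay, production, t):
    """Toy mRNA induction: m(t) = (production/decay) * (1 - exp(-decay*t))."""
    return production / decay * (1.0 - math.exp(-decay * t))

def sse(params, times, data):
    k, p = params
    return sum((model(k, p, t) - d) ** 2 for t, d in zip(times, data))

def hybrid_fit(times, data, n_random=200, n_sweeps=100, seed=0):
    rng = random.Random(seed)
    # global phase: cheap random search over a broad box
    best = min(([rng.uniform(0.01, 5.0), rng.uniform(0.01, 10.0)]
                for _ in range(n_random)),
               key=lambda q: sse(q, times, data))
    # local phase: shrinking-step coordinate descent as refinement
    step = 0.1
    for _ in range(n_sweeps):
        improved = False
        for i in (0, 1):
            for d in (step, -step):
                cand = list(best)
                cand[i] += d
                if cand[i] > 0 and sse(cand, times, data) < sse(best, times, data):
                    best, improved = cand, True
        if not improved:
            step /= 2.0
    return best

times = [float(t) for t in range(11)]
data = [model(0.5, 2.0, t) for t in times]   # noiseless synthetic data
fit = hybrid_fit(times, data)
```

Real applications would replace both phases with stronger components (e.g. an evolutionary algorithm plus a gradient-based refiner), but the two-phase structure is the point.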
Spline-based procedures for dose-finding studies with active control
Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim
2015-01-01
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
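The target-dose definition (the smallest dose at which the dose-response curve reaches the active control's expected efficacy) can be illustrated with the linear-interpolation comparator mentioned above; the paper's own estimator replaces this with a cubic spline fit plus a bootstrap CI.

```python
def target_dose_linear(doses, means, control_mean):
    """Smallest dose whose linearly interpolated mean response reaches
    the active-control mean; None if the curves do not cross."""
    points = list(zip(doses, means))
    for (d0, m0), (d1, m1) in zip(points, points[1:]):
        if (m0 - control_mean) * (m1 - control_mean) <= 0 and m1 != m0:
            return d0 + (control_mean - m0) * (d1 - d0) / (m1 - m0)
    return None

# hypothetical dose-group means and active-control efficacy
d_star = target_dose_linear([0, 10, 20, 40], [0.0, 2.0, 5.0, 8.0], 4.0)
```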
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
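The core Green-Kubo estimate is the time integral of the flux autocorrelation function, scaled by an ensemble prefactor (e.g. V/(kB*T^2) for thermal conductivity). The sketch below shows that integral only; the paper's contribution, the on-the-fly stationarity and error estimates across replicas, is not reproduced here.

```python
def green_kubo(flux, dt, prefactor, max_lag):
    """Transport coefficient as prefactor times the trapezoidal-rule
    integral of the flux autocorrelation function up to max_lag."""
    n = len(flux)
    mean = sum(flux) / n
    f = [x - mean for x in flux]
    acf = [sum(f[i] * f[i + lag] for i in range(n - lag)) / (n - lag)
           for lag in range(max_lag + 1)]
    integral = dt * (acf[0] / 2.0 + sum(acf[1:-1]) + acf[-1] / 2.0)
    return prefactor * integral
```

Choosing `max_lag` is exactly where the error analysis matters: truncating too early biases the integral, while integrating too far accumulates noise.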
Estimated generic prices for novel treatments for drug-resistant tuberculosis.
Gotham, Dzintars; Fortunak, Joseph; Pozniak, Anton; Khoo, Saye; Cooke, Graham; Nytko, Frederick E; Hill, Andrew
2017-04-01
The estimated worldwide annual incidence of MDR-TB is 480 000, representing 5% of TB incidence, but 20% of mortality. Multiple drugs have recently been developed or repurposed for the treatment of MDR-TB. Currently, treatment for MDR-TB costs thousands of dollars per course. We aimed to estimate generic prices for novel TB drugs that would be achievable given large-scale competitive manufacture. Prices for linezolid, moxifloxacin and clofazimine were estimated based on per-kilogram prices of the active pharmaceutical ingredient (API). Other costs were added, including formulation, packaging and a profit margin. The projected costs for sutezolid were estimated to be equivalent to those for linezolid, based on chemical similarity. Generic prices for bedaquiline, delamanid and pretomanid were estimated by assessing routes of synthesis, per-kilogram costs of chemical reagents and per-step yields. Costing algorithms reflected variable regulatory requirements and efficiency of scale based on demand, and were validated by testing predictive ability against widely available TB medicines. Estimated generic prices were US$8-$17/month for bedaquiline, $5-$16/month for delamanid, $11-$34/month for pretomanid, $4-$9/month for linezolid, $4-$9/month for sutezolid, $4-$11/month for clofazimine and $4-$8/month for moxifloxacin. The estimated generic prices were 87%-94% lower than the current lowest available prices for bedaquiline, 95%-98% for delamanid and 94%-97% for linezolid. Estimated generic prices were $168-$395 per course for the STREAM trial modified Bangladesh regimens (current costs $734-$1799), $53-$276 for pretomanid-based three-drug regimens and $238-$507 for a delamanid-based four-drug regimen. Competitive large-scale generic manufacture could allow supplies of treatment for 5-10 times more MDR-TB cases within current procurement budgets. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy.
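The API-based cost build-up the authors describe (per-kilogram API price scaled by daily dose, plus formulation, packaging and a profit margin) can be sketched as below. All numbers are hypothetical placeholders, not the study's inputs, and the function name is invented for illustration.

```python
# Cost build-up: API cost per month of treatment, plus formulation,
# packaging, and a profit margin. All inputs are illustrative only.
def monthly_generic_price(api_cost_per_kg, mg_per_day,
                          formulation_per_month=1.0,
                          packaging_per_month=0.3,
                          margin=0.10, days=30):
    api = api_cost_per_kg * (mg_per_day / 1e6) * days  # kg of API per month
    base = api + formulation_per_month + packaging_per_month
    return base * (1 + margin)

# Hypothetical inputs: 600 mg/day of a drug whose API sells at $150/kg.
price = monthly_generic_price(150, 600)
print(f"estimated price: ${price:.2f}/month")
```

With these invented inputs the result falls in the single-digit dollars per month, the same order as the ranges reported above; the study's actual algorithms additionally model regulatory costs and economies of scale.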
NASA Astrophysics Data System (ADS)
Jones, Alan M.; Harrison, Roy M.
Emission factors for particle number in three size ranges (11-30; 30-100 and >100 nm) as well as for PM2.5, PM2.5-10 and PM10 mass have been estimated separately for heavy and light-duty vehicles in a heavily trafficked street canyon in London where traffic speeds vary considerably over short distances. Emissions of NOx were estimated from published emission factors, and emissions of other pollutants estimated from their ratio to NOx in the roadside concentration after subtraction of the simultaneously measured urban background. The estimated emission factors are compared with other published data. Despite many differences in the design and implementation of the various studies, the results for particulate matter are broadly similar. Estimates of particle number emissions in this study for light-duty vehicles are very close to other published data, whilst those for heavy-duty vehicles are lower than in the more comparable studies. It is suggested that a contributory factor may be the introduction of diesel particle oxidation traps on some of the bus fleet in London. Estimates of emission factors for particle mass (PM2.5 and PM2.5-10) are within the range of other published data, and total mass emissions estimated from the ratio of concentration to NOx are tolerably close to those estimated using emission factors from the National Atmospheric Emissions Inventory (NAEI). However, the method leads to an estimate of carbon monoxide emissions 3-6 times larger than that derived using the NAEI factors.
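The ratio-to-NOx scaling described above (pollutant increment over background, divided by the NOx increment, times a published NOx emission factor) reduces to a one-line calculation. The sketch below uses invented numbers purely to show the arithmetic; units carry through from the inputs.

```python
# Emission factor for pollutant X inferred from its roadside increment
# relative to NOx, scaled by a published NOx emission factor.
def emission_factor(roadside_x, background_x,
                    roadside_nox, background_nox, ef_nox):
    dx = roadside_x - background_x        # increment of pollutant X
    dnox = roadside_nox - background_nox  # simultaneous NOx increment
    return ef_nox * dx / dnox

# Hypothetical numbers: particle number (# cm^-3) and NOx (ug m^-3)
# increments, with an assumed NOx factor of 1.5 g per vehicle-km.
ef_pn = emission_factor(8.0e4, 2.0e4, 120.0, 40.0, 1.5)
print(ef_pn)
```

The resulting value inherits the particle-number-to-NOx unit ratio, which is why the original study's factors are reported per vehicle-kilometre once the NOx factor carries that denominator.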
2014-01-01
Background There are many potential causes of sudden and severe headache (thunderclap headache), the most important of which is aneurysmal subarachnoid haemorrhage. Published academic reviews report a wide range of causes. We sought to create a definitive list of causes, other than aneurysmal subarachnoid haemorrhage, using a systematic review. Methods Systematic Review of EMBASE and MEDLINE databases using pre-defined search criteria up to September 2009. We extracted data from any original research paper or case report describing a case of someone presenting with a sudden and severe headache, and summarized the published causes. Results Our search identified over 21,000 titles, of which 1224 articles were scrutinized in full. 213 articles described 2345 people with sudden and severe headache, and we identified 6 English language academic review articles. A total of 119 causes were identified, of which 46 (38%) were not mentioned in published academic review articles. Using capture-recapture analysis, we estimate that our search was 98% complete. There is only one population-based estimate of the incidence of sudden and severe headache at 43 cases per 100,000. In cohort studies, the most common causes identified were primary headaches or headaches of uncertain cause. Vasoconstriction syndromes are commonly mentioned in case reports or case series. The most common cause not mentioned in academic reviews was pneumocephalus. 70 non-English language articles were identified but these did not contain additional causes. Conclusions There are over 100 different published causes of sudden and severe headache, other than aneurysmal subarachnoid haemorrhage. We have now made a definitive list of causes for future reference which we intend to maintain. 
There is a need for an up-to-date, population-based description of the causes of sudden and severe headache, as the modern epidemiology of thunderclap headache may require updating in the light of research on cerebral vasoconstriction syndromes. PMID:25123846
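The capture-recapture completeness estimate mentioned in the Results can be illustrated with the two-source Chapman estimator. The counts below are hypothetical (the paper does not report its source-by-source overlap here), so this is a sketch of the technique rather than a reproduction of the 98% figure.

```python
def chapman_estimate(n1, n2, m):
    """Chapman two-source capture-recapture estimate of the total number
    of items: n1, n2 = items found by each source, m = found by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: causes found via case reports vs. review articles.
n1, n2, m = 119, 73, 72
total = chapman_estimate(n1, n2, m)
completeness = (n1 + n2 - m) / total
print(f"estimated total: {total:.0f}; search completeness: {completeness:.0%}")
```

The completeness of a search is then the number of distinct causes actually found divided by the estimated total in existence.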
Henk, Henry J; Li, Xiaoyan; Becker, Laura K; Xu, Hairong; Gong, Qi; Deeter, Robert G; Barron, Richard L
2015-01-01
To examine the impact of research design on results in two published comparative effectiveness studies. Guidelines for comparative effectiveness research have recommended incorporating disease process in study design. Based on the recommendations, we develop a checklist of considerations and apply the checklist in review of two published studies on comparative effectiveness of colony-stimulating factors. Both studies used similar administrative claims data, but different methods, which resulted in directionally different estimates. Major design differences between the two studies include: whether the timing of intervention in disease process was identified and whether study cohort and outcome assessment period were defined based on this temporal relationship. Disease process and timing of intervention should be incorporated into the design of comparative effectiveness studies.
The ranking of scientists based on scientific publications assessment.
Zerem, Enver
2017-11-01
It is generally accepted that the scientific impact factor (Web of Science) and the total number of citations of the articles published in a journal are the most relevant parameters of the journal's significance. However, the significance of scientists is much more complicated to establish, and the value of their scientific production cannot be directly reflected by the importance of the journals in which their articles are published. Evaluating the significance of scientists' accomplishments involves more complicated metrics than just their publication records. Based on long-term academic experience, the author proposes objective criteria to estimate the scientific merit of an individual's publication record. This metric can serve as a pragmatic tool and the nidus for discussion within the readership of this journal. Copyright © 2017 Elsevier Inc. All rights reserved.
PACIC Instrument: disentangling dimensions using published validation models.
Iglesias, K; Burnand, B; Peytremann-Bridevaux, I
2014-06-01
To better understand the structure of the Patient Assessment of Chronic Illness Care (PACIC) instrument. More specifically to test all published validation models, using one single data set and appropriate statistical tools. Validation study using data from cross-sectional survey. A population-based sample of non-institutionalized adults with diabetes residing in Switzerland (canton of Vaud). French version of the 20-items PACIC instrument (5-point response scale). We conducted validation analyses using confirmatory factor analysis (CFA). The original five-dimension model and other published models were tested with three types of CFA: based on (i) a Pearson estimator of variance-covariance matrix, (ii) a polychoric correlation matrix and (iii) a likelihood estimation with a multinomial distribution for the manifest variables. All models were assessed using loadings and goodness-of-fit measures. The analytical sample included 406 patients. Mean age was 64.4 years and 59% were men. Median of item responses varied between 1 and 4 (range 1-5), and range of missing values was between 5.7 and 12.3%. Strong floor and ceiling effects were present. Even though loadings of the tested models were relatively high, the only model showing acceptable fit was the 11-item single-dimension model. PACIC was associated with the expected variables of the field. Our results showed that the model considering 11 items in a single dimension exhibited the best fit for our data. A single score, in complement to the consideration of single-item results, might be used instead of the five dimensions usually described. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Knowledge base about earthquakes as a tool to minimize strong events consequences
NASA Astrophysics Data System (ADS)
Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Alexander; Kijko, Andrzej
2017-04-01
The paper describes the structure and content of a knowledge base on the physical and socio-economic consequences of damaging earthquakes, which may be used to calibrate near real-time loss assessment systems based on simulation models for shaking intensity, damage to buildings and casualty estimates. Such calibration compensates for some of the factors that reduce the reliability of expected damage and loss assessments in "emergency" mode. The knowledge base contains descriptions of the consequences of past earthquakes in the area under study, together with the distribution of the built environment and population at the time of each event. Computer simulation of the events recorded in the knowledge base allows determination of sets of regional calibration coefficients, including ratings of seismological surveys, peculiarities of shaking intensity attenuation, and changes in building stock and population distribution, in order to minimize the error of loss estimates for damaging earthquakes in "emergency" mode. References 1. Larionov, V., Frolova, N.: Peculiarities of seismic vulnerability estimations. In: Natural Hazards in Russia, volume 6: Natural Risks Assessment and Management, Publishing House "Kruk", Moscow, 120-131, 2003. 2. Frolova, N., Larionov, V., Bonnin, J.: Data Bases Used in Worldwide Systems for Earthquake Loss Estimation in Emergency Mode: Wenchuan Earthquake. In Proc. TIEMS2010 Conference, Beijing, China, 2010. 3. Frolova, N. I., Larionov, V. I., Bonnin, J., Sushchev, S. P., Ugarov, A. N., Kozlov, M. A.: Loss Caused by Earthquakes: Rapid Estimates. Natural Hazards, Journal of the International Society for the Prevention and Mitigation of Natural Hazards, vol. 84, ISSN 0921-030, DOI 10.1007/s11069-016-2653
Relative Risks for Lethal Prostate Cancer Based on Complete Family History of Prostate Cancer Death.
Albright, Frederick S; Stephenson, Robert A; Agarwal, Neeraj; Cannon-Albright, Lisa A
2017-01-01
There are few published familial relative risks (RR) for lethal prostate cancer. This study estimates RRs for lethal prostate cancer based on comprehensive family history data, with the goal of improving identification of those men at highest risk of dying from prostate cancer. We used a population-based genealogical resource linked to a statewide electronic SEER cancer registry and death certificates to estimate relative risks (RR) for death from prostate cancer based upon family history. Over 600,000 male probands were analyzed, representing a variety of family history constellations of lethal prostate cancer. RR estimates were based on the ratio of the observed to the expected number of lethal prostate cancer cases using internal rates. RRs for lethal prostate cancer based on the number of affected first-degree relatives (FDR) ranged from 2.49 (95% CI: 2.27, 2.73) for exactly 1 FDR to 5.30 (2.13, 10.93) for ≥3 affected FDRs. In the absence of affected FDRs, increased risk was also significant for increasing numbers of affected second- or third-degree relatives. Equivalent risks were observed for similar maternal and paternal family history. This study provides population-based estimates of lethal prostate cancer risk based on lethal prostate cancer family history. Many family history constellations associated with two to greater than five times increased risk for lethal prostate cancer were identified. These lethal prostate cancer risk estimates hold potential for use in identification, screening, early diagnosis, and treatment of men at high risk for death from prostate cancer. Prostate 77:41-48, 2017. © 2016 Wiley Periodicals, Inc.
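The observed-to-expected ratio underlying these RR estimates, with an approximate confidence interval under a Poisson assumption for the observed count, can be sketched as follows; the counts are hypothetical, not the study's data.

```python
import math

def rr_with_ci(observed, expected, z=1.96):
    """Relative risk as observed/expected, with a log-normal approximate
    95% CI assuming the observed count is Poisson-distributed."""
    rr = observed / expected
    se_log = 1 / math.sqrt(observed)  # SE of log(RR), expected treated as fixed
    lo = rr * math.exp(-z * se_log)
    hi = rr * math.exp(z * se_log)
    return rr, lo, hi

# Hypothetical counts: 50 observed lethal cases vs. 20.1 expected
# from internal cohort rates.
rr, lo, hi = rr_with_ci(50, 20.1)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The "expected" count comes from applying internal cohort rates to the person-years of the exposed group, which is why it need not be an integer.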
Syal, Kirtimaan; Srinivasan, Anand; Banerjee, Dibyajyoti
2013-01-01
To study the potential of commonly used aminoglycoside antibiotics to form non-creatinine chromogen with alkaline picrate reagent. We studied the non-creatinine chromogen formation of various concentrations of streptomycin, amikacin, kanamycin, netilmicin, gentamicin and tobramycin added to known creatinine concentrations by Jaffe reaction-based creatinine estimation. Only streptomycin above therapeutic concentrations of 10 mg/mL interfered in the Jaffe reaction and acted as a non-creatinine chromogen. Therapeutic doses of the aminoglycosides do not form non-creatinine chromogens. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Geologic Assessment of Coal in the Colorado Plateau: Arizona, Colorado, New Mexico, and Utah
Kirschbaum, Mark A.; Roberts, Laura N.R.; Biewick, Laura
2000-01-01
This CD-ROM set contains a geologic assessment of coal deposits of the Colorado Plateau region and new resource estimates for selected assessment units within the Colorado Plateau. Original resource estimates (in-place resources before production) for the 12 priority assessment units of the Colorado Plateau exceed one half trillion short tons of coal in beds greater than 1 ft thick and under less than 6,000 ft of overburden. The coal is high quality and low sulfur, and a portion of these resources will provide future energy production for the Nation. Disc 1, in Portable Document Format, contains results of the assessment in summary and (or) technical reports for 12 priority coal assessment units in the Colorado Plateau and also contains an ArcView Data Publisher project, which is an interactive geographic information system of digital data collected during the assessment. Disc 2 contains stratigraphic data bases for seven of the priority coal assessment areas within the Colorado Plateau region and an ArcView project identical to the ArcView Data Publisher project on disc 1 except that it retains some of the functionality that is disabled in the ArcView Data Publisher program.
Lu, Chunling; Black, Maureen M; Richter, Linda M
2016-12-01
A 2007 study published in The Lancet estimated that approximately 219 million children aged younger than 5 years were exposed to stunting or extreme poverty in 2004. We updated the 2004 estimates with the use of improved data and methods and generated estimates for 2010. We used country-level prevalence of stunting in children younger than 5 years based on the 2006 Growth Standards proposed by WHO and poverty ratios from the World Bank to estimate children who were either stunted or lived in extreme poverty for 141 low-income and middle-income countries in 2004 and 2010. To avoid counting the same children twice, we excluded children jointly exposed to stunting and extreme poverty from children living in extreme poverty. To examine the robustness of estimates, we also used moderate poverty measures. The 2007 study underestimated children at risk of poor development. The estimated number of children exposed to the two risk factors in low-income and middle-income countries decreased from 279·1 million (95% CI 250·4 million-307·4 million) in 2004 to 249·4 million (209·3 million-292·6 million) in 2010; prevalence of children at risk fell from 51% (95% CI 46-56) to 43% (36-51). The decline occurred in all income groups and regions, with south Asia experiencing the largest drop. Sub-Saharan Africa had the highest prevalence in both years. These findings were robust to variations in poverty measures. Progress has been made in reducing the number of children exposed to stunting or poverty between 2004 and 2010, but this is still not enough. Scaling up of effective interventions targeting the most vulnerable children is urgently needed. National Institutes of Health, Bill & Melinda Gates Foundation, Hilton Foundation, and WHO. Copyright © 2016 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND license.
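The double-counting correction described above is simple inclusion-exclusion: children exposed to both risks are removed from the poverty count before summing. A sketch with invented country-level numbers:

```python
def children_at_risk(stunted, extreme_poor, both):
    """Children exposed to stunting or extreme poverty, counting children
    exposed to both risks only once (inclusion-exclusion)."""
    return stunted + (extreme_poor - both)

# Hypothetical country-level counts, in millions.
n = children_at_risk(stunted=10.2, extreme_poor=6.5, both=3.1)
print(n)
```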
LIBRA is a fully-automatic breast density estimation software solution based on a published algorithm that works on either raw (i.e., “FOR PROCESSING”) or vendor post-processed (i.e., “FOR PRESENTATION”) digital mammography images. LIBRA has been applied to over 30,000 screening exams and is being increasingly utilized in larger studies.
Height growth and site index curves for Douglas-fir on dry sites in the Willamette National Forest.
Joseph E Means; Mary E. Helm
1985-01-01
Equations and curves are presented for estimating height and site index of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) on hot, dry sites in the Willamette National Forest in western Oregon. The equations are based on the dissected stems of 27 trees. The curves differ from those previously published for Douglas-fir. Instructions are presented...
Uncertainty Propagation and the Fano-Based Information Theoretic Method: A Radar Example
2015-02-01
Hogg, "Phase transitions and the search problem", Artificial Intelligence (Elsevier), vol. 81, 1996, pp. 1-15. [39] R... dispersion of the mean mutual information of the estimate is low enough to support the use of the linear approximation.
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
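The multi-scale LoG search can be sketched with scale-normalized Laplacian-of-Gaussian filtering: the scale whose (negated) response is strongest at a blob approximates the blob's size, and the response peak gives its location. This toy 2-D example (the paper works on 3-D CT data, and its seed-point search and pruning steps are omitted) uses SciPy's gaussian_laplace:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic image: a bright Gaussian "nodule" of scale ~4 px at (30, 40).
yy, xx = np.mgrid[0:64, 0:64]
image = np.exp(-((yy - 30) ** 2 + (xx - 40) ** 2) / (2 * 4.0 ** 2))

best = None
for sigma in [2.0, 3.0, 4.0, 5.0, 6.0]:
    # Scale-normalized LoG: bright blobs give strong negative LoG, so negate.
    response = -sigma ** 2 * gaussian_laplace(image, sigma)
    idx = np.unravel_index(np.argmax(response), response.shape)
    if best is None or response[idx] > best[0]:
        best = (response[idx], idx, sigma)

score, (y, x), scale = best
print(f"blob at ({y}, {x}), scale sigma = {scale}")
```

For a Gaussian blob of standard deviation t, the scale-normalized response at the center peaks at sigma = t, which is what makes the winning scale a size estimate.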
An Estimate of Avian Mortality at Communication Towers in the United States and Canada
Longcore, Travis; Rich, Catherine; Mineau, Pierre; MacDonald, Beau; Bert, Daniel G.; Sullivan, Lauren M.; Mutrie, Erin; Gauthreaux, Sidney A.; Avery, Michael L.; Crawford, Robert L.; Manville, Albert M.; Travis, Emilie R.; Drake, David
2012-01-01
Avian mortality at communication towers in the continental United States and Canada is an issue of pressing conservation concern. Previous estimates of this mortality have been based on limited data and have not included Canada. We compiled a database of communication towers in the continental United States and Canada and estimated avian mortality by tower with a regression relating avian mortality to tower height. This equation was derived from 38 tower studies for which mortality data were available and corrected for sampling effort, search efficiency, and scavenging where appropriate. Although most studies document mortality at guyed towers with steady-burning lights, we accounted for lower mortality at towers without guy wires or steady-burning lights by adjusting estimates based on published studies. The resulting estimate of mortality at towers is 6.8 million birds per year in the United States and Canada. Bootstrapped subsampling indicated that the regression was robust to the choice of studies included and a comparison of multiple regression models showed that incorporating sampling, scavenging, and search efficiency adjustments improved model fit. Estimating total avian mortality is only a first step in developing an assessment of the biological significance of mortality at communication towers for individual species or groups of species. Nevertheless, our estimate can be used to evaluate this source of mortality, develop subsequent per-species mortality estimates, and motivate policy action. PMID:22558082
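The height-based extrapolation described above (fit a regression of corrected per-tower mortality on tower height, then apply it to a tower inventory) can be sketched as follows. The study's actual regression form and data are not reproduced; the numbers and the log-linear form are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-study data: tower height (m) and corrected annual
# mortality from carcass searches (sampling/scavenging-adjusted).
heights = np.array([120, 150, 180, 250, 300, 420, 520, 610], float)
mortality = np.array([80, 140, 210, 600, 950, 2900, 5200, 8800], float)

# Fit log(mortality) = a + b*height, then predict for a tower inventory.
b, a = np.polyfit(heights, np.log(mortality), 1)
inventory = np.array([90, 110, 160, 200, 260, 480], float)
predicted = np.exp(a + b * inventory)
total = predicted.sum()
print(f"estimated annual mortality across inventory: {total:.0f} birds")
```

In the full analysis, per-tower predictions would further be scaled down for towers lacking guy wires or steady-burning lights before summing to a national total.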
What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries
Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B
2011-01-01
Background: The 'lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality (the 'current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with 'gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated 'lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual 'current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
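The correction has a compact life-table form: the 'current probability' method accumulates incidence against the probability of being alive, which counts multiple primaries, while the corrected method accumulates it against the probability of being alive and cancer-free. A toy cohort with invented rates:

```python
# Toy life table: age bands with annual all-cause mortality and cancer
# incidence (first-plus-subsequent primaries) per person-year. Invented rates.
bands = [  # (years in band, annual mortality, annual cancer incidence)
    (20, 0.001, 0.0002),
    (20, 0.002, 0.002),
    (20, 0.010, 0.010),
    (20, 0.060, 0.025),
]

alive = 1.0          # probability alive
alive_free = 1.0     # probability alive and cancer-free
mean_cancers = 0.0   # 'current probability' method: mean number of cancers
risk = 0.0           # corrected method: probability of a first cancer

for years, mort, inc in bands:
    for _ in range(years):
        mean_cancers += alive * inc       # counts every primary
        risk += alive_free * inc          # counts only the first
        alive *= (1 - mort)
        alive_free *= (1 - mort) * (1 - inc)

print(f"mean cancers per lifetime: {mean_cancers:.3f}")
print(f"corrected lifetime risk:   {risk:.3f}")
```

The corrected figure is always the smaller of the two, matching the direction of the correction reported in the Results.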
Estimating the Cost-Effectiveness of One-Time Screening and Treatment for Hepatitis C in Korea
Kim, Do Young; Han, Kwang-Hyub; Jun, Byungyool; Kim, Tae Hyun; Park, Sohee; Ward, Thomas; Webster, Samantha; McEwan, Phil
2017-01-01
Background and Aims This study aims to investigate the cost-effectiveness of a one-time hepatitis C virus (HCV) screening and treatment program in South Korea where hepatitis B virus (HBV) prevails, in people aged 40–70, compared to current practice (no screening). Methods A published Markov model was used in conjunction with a screening and treatment decision tree to model patient cohorts, aged 40–49, 50–59 and 60–69 years, distributed across chronic hepatitis C (CHC) and compensated cirrhosis (CC) health states (82.5% and 17.5%, respectively). Based on a published seroepidemiology study, HCV prevalence was estimated at 0.60%, 0.80% and 1.53%, respectively. An estimated 71.7% of the population was screened. Post-diagnosis, 39.4% of patients were treated with a newly available all-oral direct-acting antiviral (DAA) regimen over 5 years. Published rates of sustained virologic response, disease management costs, transition rates and utilities were utilised. Results Screening resulted in the identification of 43,635 previously undiagnosed patients across all cohorts. One-time HCV screening and treatment was estimated to be cost-effective across all cohorts; predicted incremental cost-effectiveness ratios ranged from $5,714 to $8,889 per quality-adjusted life year gained. Incremental costs associated with screening, treatment and disease management ranged from $156.47 to $181.85 million USD; lifetime costs-offsets associated with the avoidance of end stage liver disease complications ranged from $51.47 to $57.48 million USD. Conclusions One-time HCV screening and treatment in South Korean people aged 40–70 is likely to be highly cost-effective compared to the current practice of no screening. PMID:28060834
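The headline ICERs reduce to incremental cost over incremental QALYs between the screening and no-screening arms of the Markov model. A sketch with hypothetical per-person discounted totals (not the study's outputs):

```python
def icer(cost_screen, qaly_screen, cost_none, qaly_none):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_screen - cost_none) / (qaly_screen - qaly_none)

# Hypothetical discounted lifetime totals per cohort member (USD, QALYs).
value = icer(cost_screen=1930.0, qaly_screen=14.213,
             cost_none=1650.0, qaly_none=14.178)
print(f"ICER = ${value:,.0f}/QALY")
```

An intervention is then judged cost-effective when this ratio falls below the chosen willingness-to-pay threshold.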
Defense Communications Agency Cost and Planning Factors Manual. Revised
1983-03-01
the Time-Phased Fiscal Year Funding Schedule. Using estimated leadtimes required for each identifiable milestone, estimate the funding to be incurred...for each fiscal year, making sure to back off the time required for the conceptual phase, the procurement phase, and the training and operational...39-1 (To be published later) 40. FISCAL -YEAR TIME PHASING OF COST ESTIMATE ........... 40-1 (To be published later) 41. DISCOUNTING
Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David
2012-12-01
The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia, using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified, corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.
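As a rough cross-check on a spatially explicit estimate like the one above, a conventional buffered-grid calculation divides the number of identified individuals by the camera polygon buffered by half the mean maximum distance moved (MMDM). All inputs below are hypothetical; the paper's SECR likelihood is not reproduced.

```python
import math

# Buffered-polygon density: effective area = trap polygon area plus a
# strip of width w around its perimeter (plus the corner circle).
n_individuals = 17
trap_area_km2 = 150.0   # hypothetical camera polygon area
perimeter_km = 50.0     # hypothetical polygon perimeter
half_mmdm_km = 1.2      # hypothetical buffer width (half MMDM)

effective = (trap_area_km2
             + perimeter_km * half_mmdm_km
             + math.pi * half_mmdm_km ** 2)
density_per_100km2 = 100 * n_individuals / effective
print(f"{density_per_100km2:.2f} individuals / 100 km^2")
```

Buffered-grid estimates are known to be sensitive to the buffer choice, which is one motivation for the SECR framework the authors actually use.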
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
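The fixed-effect and random-effects estimators under discussion differ only in their weights: inverse within-study variance versus inverse of variance plus an estimated between-study component τ². A sketch with invented study effects, using the DerSimonian-Laird moment estimator for τ² (one of the criticized random-effects variants):

```python
import math

# Per-study effect estimates and variances (hypothetical log risk ratios).
effects = [0.05, 0.60, -0.20, 0.80, 0.25]
variances = [0.02, 0.05, 0.03, 0.08, 0.04]

# Inverse-variance fixed-effect pooled estimate.
w = [1 / v for v in variances]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# DerSimonian-Laird random-effects: estimate tau^2 from Cochran's Q.
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
k = len(effects)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)
w_re = [1 / (v + tau2) for v in variances]
random_ = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)

se_fixed = math.sqrt(1 / sum(w))
se_random = math.sqrt(1 / sum(w_re))
print(f"fixed: {fixed:.3f} (SE {se_fixed:.3f}); "
      f"random: {random_:.3f} (SE {se_random:.3f})")
```

Under heterogeneity (τ² > 0) the random-effects weights flatten toward equality and the pooled standard error grows, which is exactly the error-estimation behavior the authors argue is unsatisfactory.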
Bytnerowicz, A; Johnson, R F; Zhang, L; Jenerette, G D; Fenn, M E; Schilling, S L; Gonzalez-Fernandez, I
2015-08-01
The empirical inferential method (EIM) allows for spatially and temporally dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductance of NH4⁺ and NO3⁻; stomatal conductance of NH3, NO, NO2 and HNO3; and satellite-derived LAI. Estimated deposition is based on data collected during 2002-2006 in the San Bernardino Mountains (SBM) of southern California. Approximately 2/3 of dry N deposition was to plant surfaces and 1/3 occurred as stomatal uptake. Summer-season N deposition ranged from <3 kg ha⁻¹ in the eastern SBM to ∼60 kg ha⁻¹ in the western SBM near the Los Angeles Basin, and compared well with the throughfall and big-leaf micrometeorological inferential methods. Extrapolating summertime N deposition estimates to annual values showed large areas of the SBM exceeding critical loads for nutrient N in chaparral and mixed conifer forests. Published by Elsevier Ltd.
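The inferential calculation itself is a flux product: gas concentration times the sum of surface and stomatal conductances, integrated over the season. The sketch below uses invented values for a single gas; the EIM combines several gases and ions per grid cell.

```python
# Inferential dry-deposition estimate: flux = concentration x conductance,
# split into surface deposition and stomatal uptake.
def n_deposition(conc_ug_m3, g_surface, g_stomatal, seconds):
    """Seasonal N deposition (ug m^-2) from a mean gas concentration
    (ug m^-3) and surface/stomatal conductances (m s^-1)."""
    flux = conc_ug_m3 * (g_surface + g_stomatal)  # ug m^-2 s^-1
    return flux * seconds

# Hypothetical HNO3 values for a ~120-day summer season.
total = n_deposition(2.5, 0.004, 0.002, 120 * 86400)
surface_share = 0.004 / (0.004 + 0.002)
print(f"{total / 1e5:.1f} kg ha^-1; surface share {surface_share:.0%}")
```

The division by 1e5 converts µg m⁻² to kg ha⁻¹ (1 µg m⁻² = 10⁻⁵ kg ha⁻¹), and the conductance split directly yields the surface-versus-stomatal partition reported in the abstract.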
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeter, Jenny; Barbose, Galen; Bird, Lori
2014-03-12
More than half of U.S. states have renewable portfolio standards (RPS) in place and have collectively deployed approximately 46,000 MW of new renewable energy capacity through year-end 2012. Most of these policies have five or more years of implementation experience, enabling an assessment of their costs and benefits. Understanding RPS benefits and costs is essential for policymakers evaluating existing RPS policies, assessing the need for modifications, and considering new policies. A key aspect of this study is the comprehensive review of existing RPS cost and benefit estimates, in addition to an examination of the variety of methods used to calculate such estimates. Based on available data and estimates reported by utilities and regulators, this study summarizes RPS costs to date. The study considers how those costs may evolve going forward, given scheduled increases in RPS targets and cost containment mechanisms incorporated into existing policies. The report also summarizes RPS benefits estimates, based on published studies for individual states, and discusses key methodological considerations.

Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nisbet, A.F.; Woodman, R.F.M.
A database of soil-to-plant transfer factors for radiocesium and radiostrontium has been compiled for arable crops from published and unpublished sources. The database is more extensive than previous compilations of data published by the International Union of Radioecologists, containing new information for Scandinavia and Greece in particular. It also contains ancillary data on important soil characteristics. The database is sub-divided into 28 soil-crop combinations, covering four soil types and seven crop groups. Statistical analyses showed that transfer factors for radiocesium could not generally be predicted as a function of climatic region, type of experiment, age of contamination, or soil characteristics. However, significant relationships accounting for more than 30% of the variability in transfer factor were identified between transfer factors for radiostrontium and soil pH/organic matter status for a few soil-crop combinations. Best estimate transfer factors for radiocesium and radiostrontium were calculated for 28 soil-crop combinations, based on their geometric means: only the edible parts were considered. To predict the likely value of future individual transfer factors, 95% confidence intervals were also derived. A comparison of best estimate transfer factors derived in this study with recommended values published by the International Union of Radioecologists in 1989 and 1992 was made for comparable soil-crop groupings. While there were no significant differences between the best estimate values derived in this study and the 1992 data, radiological assessments that still use 1989 data may be unnecessarily cautious.
Bröder, Arndt; Malejka, Simone
2017-07-01
The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.
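Base-rate (bias) manipulations of this kind are usually analysed with equal-variance signal detection theory, separating sensitivity from response bias. A minimal sketch with hypothetical hit and false-alarm rates (not the study's data):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance signal-detection d' (sensitivity) and c (response bias)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical data: similar sensitivity, but a conservative criterion
# (as might arise under a low implied target base rate) versus a liberal one.
d_cons, c_cons = dprime_and_criterion(0.70, 0.20)
d_lib, c_lib = dprime_and_criterion(0.90, 0.45)
```

A positive criterion indicates a conservative bias (fewer "old" responses), a negative one a liberal bias; base-rate manipulations are expected to shift c while leaving d' comparatively stable.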
NASA Astrophysics Data System (ADS)
Pan, Yun; Zhang, Chong; Gong, Huili; Yeh, Pat J.-F.; Shen, Yanjun; Guo, Ying; Huang, Zhiyong; Li, Xiaojuan
2017-04-01
Regional evapotranspiration (ET) can be enhanced by human activities such as irrigation or reservoir impoundment. Here the potential of using Gravity Recovery and Climate Experiment (GRACE) terrestrial water storage data in water budget calculations to detect human-induced ET change is investigated over the Haihe River basin of China. Comparison between GRACE-based monthly ET estimate (2005-2012) and Global Land Data Assimilation System (GLDAS)-modeled ET indicates that human-induced ET due to intensive groundwater irrigation from March to May can only be detected by GRACE. GRACE-based ET (521.7±21.1 mm/yr), considerably higher than GLDAS ET (461.7±29.8 mm/yr), agrees well with existing estimates found in the literature and indicates that human activities contribute to a 12% increase in ET. The double-peak seasonal pattern of ET (in May and August) as reported in published studies is well reproduced by GRACE-based ET estimate. This study highlights the unique capability of GRACE in detecting anthropogenic signals over regions with large groundwater consumption.
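The water-budget calculation behind the GRACE-based estimate reduces to ET = P - R - dS, where dS is the storage change observed by GRACE. A sketch with illustrative numbers (not the Haihe basin values):

```python
def water_budget_et(precip_mm, runoff_mm, storage_change_mm):
    """Basin-scale ET (mm per period) from the water budget ET = P - R - dS.

    storage_change_mm is the change in terrestrial water storage over the
    period, as would be derived from GRACE anomalies; a negative value
    (depletion, e.g. groundwater pumped for irrigation) raises ET above
    what precipitation minus runoff alone would allow.
    """
    return precip_mm - runoff_mm - storage_change_mm

# Illustrative annual numbers: 500 mm precipitation, 30 mm runoff,
# and 50 mm of storage depletion (dS = -50) sustained by irrigation.
et = water_budget_et(500.0, 30.0, -50.0)  # -> 520.0 mm
```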
Summary of Aquifer Test Data for Arkansas - 1940-2006
Pugh, Aaron L.
2008-01-01
As demands on Arkansas's ground water continue to increase, decision-makers need all available information to ensure the sustainability of this important natural resource. From 1940 through 2006, the U.S. Geological Survey conducted over 300 aquifer tests in Arkansas. Most of these data have never been published. This report presents the results from 206 of these aquifer tests from 21 different hydrogeologic units spread across 51 Arkansas counties. Ten of the hydrogeologic units are within the Atlantic Plain of Arkansas and consist mostly of unconsolidated and semi-consolidated deposits. The remaining 11 units are within the Interior Highlands and consist mainly of consolidated rock. Descriptive statistics are reported for each hydrogeologic unit with two or more tests, including the mean, minimum, median, maximum and standard deviation values for specific capacity, transmissivity, hydraulic conductivity, and storage coefficient. Hydraulic conductivity values for the major water-bearing hydrogeologic units are estimated because few conductivity values are recorded in the original records. Nearly all estimated hydraulic conductivity values agree with published hydraulic conductivity values based on the hydrogeologic unit material types. Similarly, because few specific capacity values were available in the original aquifer test records, specific capacity values are estimated for individual wells.
STELLAR ENCOUNTER RATE IN GALACTIC GLOBULAR CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahramian, Arash; Heinke, Craig O.; Sivakoff, Gregory R.
2013-04-01
The high stellar densities in the cores of globular clusters cause significant stellar interactions. These stellar interactions can produce close binary mass-transferring systems involving compact objects and their progeny, such as X-ray binaries and radio millisecond pulsars. Comparing the numbers of these systems and interaction rates in different clusters drives our understanding of how cluster parameters affect the production of close binaries. In this paper we estimate stellar encounter rates (Γ) for 124 Galactic globular clusters based on observational data as opposed to the methods previously employed, which assumed 'King-model' profiles for all clusters. By deprojecting cluster surface brightness profiles to estimate luminosity density profiles, we treat 'King-model' and 'core-collapsed' clusters in the same way. In addition, we use Monte Carlo simulations to investigate the effects of uncertainties in various observational parameters (distance, reddening, surface brightness) on Γ, producing the first catalog of globular cluster stellar encounter rates with estimated errors. Comparing our results with published observations of likely products of stellar interactions (numbers of X-ray binaries, numbers of radio millisecond pulsars, and γ-ray luminosity) we find both clear correlations and some differences with published results.
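The encounter rate Γ is proportional to the volume integral of ρ²/σ (density squared over velocity dispersion). A sketch of the shell-by-shell numerical integration, using an illustrative analytic density profile rather than a deprojected surface-brightness profile:

```python
import math

def encounter_rate(rho, sigma, radii):
    """Gamma proportional to the volume integral of rho(r)^2 / sigma(r),
    approximated with thin spherical shells on the radial grid `radii`."""
    gamma = 0.0
    for r0, r1 in zip(radii, radii[1:]):
        r_mid = 0.5 * (r0 + r1)
        shell_volume = 4.0 * math.pi * r_mid ** 2 * (r1 - r0)
        gamma += rho(r_mid) ** 2 / sigma(r_mid) * shell_volume
    return gamma

# Illustrative profiles: a core-halo density and a flat velocity dispersion.
rho = lambda r: 1.0 / (1.0 + (r / 0.5) ** 2)  # arbitrary units, core radius 0.5
sigma = lambda r: 10.0
radii = [i * 0.01 for i in range(501)]        # integrate out to r = 5

g1 = encounter_rate(rho, sigma, radii)
g2 = encounter_rate(lambda r: 2.0 * rho(r), sigma, radii)  # doubled density
```

Because Γ scales with ρ², doubling the central density quadruples the rate, which is why uncertainties in the density profile dominate the error budget.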
Dalley, C; Basarir, H; Wright, J G; Fernando, M; Pearson, D; Ward, S E; Thokula, P; Krishnankutty, A; Wilson, G; Dalton, A; Talley, P; Barnett, D; Hughes, D; Porter, N R; Reilly, J T; Snowden, J A
2015-04-01
Specialist Integrated Haematological Malignancy Diagnostic Services (SIHMDS) were introduced as a standard of care within the UK National Health Service to reduce diagnostic error and improve clinical outcomes. Two broad models of service delivery have become established: 'co-located' services operating from a single site and 'networked' services, with geographically separated laboratories linked by common management and information systems. Detailed systematic cost analysis has never been published on any established SIHMDS model. We used Activity Based Costing (ABC) to construct a cost model for our regional 'networked' SIHMDS covering a two-million population based on activity in 2011. Overall estimated annual running costs were £1 056 260 per annum (£733 400 excluding consultant costs), with individual running costs for the diagnosis, staging, disease monitoring and end of treatment assessment components of £723 138, £55 302, £184 152 and £94 134 per annum, respectively. The cost distribution by department was 28.5% for haematology, 29.5% for histopathology and 42% for genetics laboratories. Costs of the diagnostic pathways varied considerably; the pathways for myelodysplastic syndromes and lymphoma were the most expensive, and those for essential thrombocythaemia and polycythaemia vera the least. ABC analysis enables estimation of the running costs of a SIHMDS model comprising 'networked' laboratories. Similar cost analyses for other SIHMDS models covering varying populations are warranted to optimise quality and cost-effectiveness in delivery of modern haemato-oncology diagnostic services in the UK as well as internationally. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Papaioannou, A.; Thompson, M. F.; Pasquale, M. K.; Adachi, J. D.
2016-01-01
Summary The RisedronatE and ALendronate (REAL) study provided a unique opportunity to conduct cost-effectiveness analyses based on effectiveness data from real-world clinical practice. Using a published osteoporosis model, the researchers found risedronate to be cost-effective compared to generic or brand alendronate for the treatment of Canadian postmenopausal osteoporosis in patients aged 65 years or older. Introduction The REAL study provides robust data on the real-world performance of risedronate and alendronate. The study used these data to assess the cost-effectiveness of brand risedronate versus generic or brand alendronate for treatment of Canadian postmenopausal osteoporosis patients aged 65 years or older. Methods A previously published osteoporosis model was populated with Canadian cost and epidemiological data, and the estimated fracture risk was validated. Effectiveness data were derived from REAL and utility data from published sources. The incremental cost per quality-adjusted life-year (QALY) gained was estimated from a Canadian public payer perspective, and comprehensive sensitivity analyses were conducted. Results The base case analysis found fewer fractures and more QALYs in the risedronate cohort, providing an incremental cost per QALY gained of $3,877 for risedronate compared to generic alendronate. The results were most sensitive to treatment duration and effectiveness. Conclusions The REAL study provided a unique opportunity to conduct cost-effectiveness analyses based on effectiveness data taken from real-world clinical practice. The analysis supports the cost-effectiveness of risedronate compared to generic or brand alendronate and the use of risedronate for the treatment of osteoporotic Canadian women aged 65 years or older with a BMD T-score ≤−2.5. PMID:18008100
Revised reference values for selenium intake.
Kipp, A P; Strohm, D; Brigelius-Flohé, R; Schomburg, L; Bechthold, A; Leschik-Bonnet, E; Heseker, H
2015-10-01
The German, Austrian and Swiss nutrition societies are the joint editors of the 'reference values for nutrient intake'. They have revised the reference values for the intake of selenium and published them in February 2015. The saturation of selenoprotein P (SePP) in plasma is used as the criterion for deriving reference values for selenium intake in adults. For persons from selenium-deficient regions (China), SePP saturation was achieved with a daily intake of 49 μg of selenium. When using the reference body weights the D-A-CH reference values are based upon, the resulting estimated value for selenium intake is 70 μg/day for men and 60 μg/day for women. The estimated value for selenium intake for children and adolescents is extrapolated from the estimated value for adults in relation to body weight. For infants aged 0 to under 4 months, the estimated value of 10 μg/day was derived on the basis of selenium intake via breast milk. For infants aged 4 to under 12 months, this estimated value was adjusted for the difference in body weight, giving an estimated value of 15 μg/day. For lactating women, a higher reference value of 75 μg/day is indicated, compared with non-lactating women, due to the release of selenium with breast milk. The additional selenium requirement for pregnant women is negligible, so no increased reference value is indicated. Copyright © 2015 The Authors. Published by Elsevier GmbH. All rights reserved.
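The body-weight extrapolation described above reduces to proportional scaling of the adult value. A sketch assuming simple proportionality and a 70 kg adult reference weight; the exact D-A-CH reference weights and extrapolation rule may differ:

```python
def extrapolated_intake(adult_value_ug, adult_weight_kg, target_weight_kg):
    """Scale an adult reference intake (ug/day) to another body weight by
    simple proportionality. The assumed linear scaling and the 70 kg adult
    reference weight used below are illustrative assumptions."""
    return adult_value_ug * target_weight_kg / adult_weight_kg

# Scaling the 70 ug/day adult male value to a 15 kg infant reproduces the
# 15 ug/day estimate quoted in the abstract:
infant_value = extrapolated_intake(70.0, 70.0, 15.0)
```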
Quantification of residual dose estimation error on log file-based patient dose calculation.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2016-05-01
The log file-based patient dose estimation includes a residual dose estimation error caused by leaf miscalibration, which cannot be reflected in the estimated dose. The purpose of this study is to determine this residual dose estimation error. Modified log files for seven head-and-neck and prostate volumetric modulated arc therapy (VMAT) plans simulating leaf miscalibration were generated by shifting both leaf banks (systematic leaf gap errors: ±2.0, ±1.0, and ±0.5 mm in opposite directions and systematic leaf shifts: ±1.0 mm in the same direction) using MATLAB-based (MathWorks, Natick, MA) in-house software. The generated modified and non-modified log files were imported back into the treatment planning system and recalculated. Subsequently, the generalized equivalent uniform dose (gEUD) was quantified for the planning target volume (PTV) and organs at risk. For MLC leaves calibrated within ±0.5 mm, the residual dose estimation errors, obtained from the slope of the linear regression of gEUD change between non-modified and modified log-file doses per unit leaf-gap error, were 1.32±0.27% and 0.82±0.17 Gy for the PTV and spinal cord, respectively, in head-and-neck plans, and 1.22±0.36%, 0.95±0.14 Gy, and 0.45±0.08 Gy for the PTV, rectum, and bladder, respectively, in prostate plans. In this work, we determined the residual dose estimation errors for VMAT delivery using log file-based patient dose calculation according to the MLC calibration accuracy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
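The gEUD used as the endpoint above is the generalized mean of the dose distribution, (Σ vᵢ·dᵢᵃ)^(1/a), where a is a tissue-specific parameter. A minimal sketch (dose and volume values are illustrative):

```python
def geud(doses_gy, volumes, a):
    """Generalized equivalent uniform dose of a dose distribution given as
    parallel lists of bin doses (Gy) and their relative volumes.

    a < 0 emphasizes cold spots (targets); a >> 1 emphasizes hot spots
    (serial organs at risk); a = 1 reduces to the mean dose.
    """
    v_total = sum(volumes)
    return sum((v / v_total) * d ** a
               for d, v in zip(doses_gy, volumes)) ** (1.0 / a)

# Sanity checks: a uniform 60 Gy distribution has gEUD = 60 Gy for any a,
# and a = 1 gives the mean dose.
uniform = geud([60.0, 60.0, 60.0], [1.0, 1.0, 1.0], a=-10.0)
mean_like = geud([50.0, 60.0, 70.0], [1.0, 1.0, 1.0], a=1.0)
```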
Buried transuranic wastes at ORNL: Review of past estimates and reconciliation with current data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trabalka, J.R.
1997-09-01
Inventories of buried (generally meaning disposed of) transuranic (TRU) wastes at Oak Ridge National Laboratory (ORNL) have been estimated for site remediation and waste management planning over a period of about two decades. Estimates were required because of inadequate waste characterization and incomplete disposal records. For a variety of reasons, including changing definitions of TRU wastes, differing objectives for the estimates, and poor historical data, the published results have sometimes been in conflict. The purpose of this review was (1) to attempt to explain both the rationale for and differences among the various estimates, and (2) to update the estimates based on more recent information obtained from waste characterization and from evaluations of ORNL waste databases and historical records. The latter included information obtained from an expert panel's review and reconciliation of inconsistencies in data identified during preparation of the ORNL input for the third revision of the Baseline Inventory Report for the Waste Isolation Pilot Plant. The results summarize current understanding of the relationship between past estimates of buried TRU wastes and provide the most up-to-date information on recorded burials thereafter. The limitations of available information on the latter and thus the need for improved waste characterization are highlighted.
Kork, F; Balzer, F; Krannich, A; Bernardi, M H; Eltzschig, H K; Jankowski, J; Spies, C
2017-03-01
Acute kidney injury (AKI) is diagnosed by a 50% increase in creatinine. For patients without a baseline creatinine measurement, guidelines suggest estimating baseline creatinine by back-calculation. The aim of this study was to evaluate different glomerular filtration rate (GFR) equations and different GFR assumptions for back-calculating baseline creatinine, as well as the effect on the diagnosis of AKI. The Modification of Diet in Renal Disease, the Chronic Kidney Disease Epidemiology (CKD-EPI) and the Mayo quadratic (MQ) equations were evaluated to estimate baseline creatinine, each under the assumption of either a fixed GFR of 75 mL/min/1.73 m² or an age-adjusted GFR. Estimated baseline creatinine, diagnoses and severity stages of AKI based on estimated baseline creatinine were compared to measured baseline creatinine and the corresponding diagnoses and severity stages of AKI. The data of 34 690 surgical patients were analysed. Estimating baseline creatinine overestimated baseline creatinine. Diagnosing AKI based on estimated baseline creatinine had only substantial agreement with AKI diagnoses based on measured baseline creatinine [Cohen's κ ranging from 0.66 (95% CI 0.65-0.68) to 0.77 (95% CI 0.76-0.79)] and overestimated AKI prevalence with fair sensitivity [ranging from 74.3% (95% CI 72.3-76.2) to 90.1% (95% CI 88.6-92.1)]. Staging AKI severity based on estimated baseline creatinine had moderate agreement with AKI severity based on measured baseline creatinine [Cohen's κ ranging from 0.43 (95% CI 0.42-0.44) to 0.53 (95% CI 0.51-0.55)]. Diagnosing AKI and staging AKI severity on the basis of estimated baseline creatinine in surgical patients is not feasible. Patients at risk for post-operative AKI should have a pre-operative creatinine measurement to adequately assess post-operative AKI. © 2016 Scandinavian Physiological Society. Published by John Wiley & Sons Ltd.
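Back-calculation inverts a GFR equation to solve for serum creatinine under an assumed GFR. A sketch using the IDMS-traceable MDRD equation (GFR = 175 × Scr⁻¹·¹⁵⁴ × age⁻⁰·²⁰³ × 0.742 if female × 1.212 if black) and the fixed-GFR assumption evaluated in the study; the example patient characteristics are hypothetical:

```python
def back_calculated_creatinine(assumed_gfr, age, female=False, black=False):
    """Invert the IDMS-traceable MDRD equation to estimate a baseline serum
    creatinine (mg/dL) from an assumed GFR (mL/min/1.73 m^2), e.g. the
    fixed GFR of 75 evaluated in the study."""
    factor = 175.0 * age ** -0.203
    if female:
        factor *= 0.742
    if black:
        factor *= 1.212
    return (factor / assumed_gfr) ** (1.0 / 1.154)

# Hypothetical 60-year-old man, fixed GFR of 75: roughly 1.0 mg/dL.
scr_75 = back_calculated_creatinine(75.0, age=60)
# A higher assumed GFR implies a lower back-calculated baseline creatinine,
# which in turn inflates apparent creatinine increases and AKI prevalence.
scr_90 = back_calculated_creatinine(90.0, age=60)
```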
Bresnahan, Brian W; Rundell, Sean D; Dagadakis, Marissa C; Sullivan, Sean D; Jarvik, Jeffrey G; Nguyen, Hiep; Friedly, Janna L
2013-08-01
To systematically appraise published comparative effectiveness evidence (clinical and economic) of epidural steroid injections (ESI) for lumbar spinal stenosis and to estimate Medicare reimbursement amounts for ESI procedures. TYPE: Systematic review. PubMed, Embase, and CINAHL were searched through August 2012 for key words that pertain to low back pain, spinal stenosis or sciatica, and epidural steroid injection. We used institutional and Medicare reimbursement amounts for our cost estimation. Articles published in English that assessed ESIs for adults with lumbar spinal stenosis versus a comparison intervention were included. Our search identified 146 unique articles, and 138 were excluded due to noncomparative study design, not having a study population with lumbar spinal stenosis, not having an appropriate outcome, or not being in English. We fully summarized 6 randomized controlled trials and 2 large observational studies. Randomized controlled trial articles were reviewed, and the study population, sample size, treatment groups, ESI dosage, ESI approaches, concomitant interventions, outcomes, and follow-up time were reported. Descriptive resource use estimates for ESIs were calculated with use of data from our institution during 2010 and Medicare-based reimbursement amounts. ESIs or anesthetic injections alone resulted in better short-term improvement in walking distance compared with control injections. However, there were no longer-term differences. No differences between ESIs versus anesthetic in self-reported improvement in pain were reported. Transforaminal approaches had better improvement in pain scores (≤4 months) compared with interlaminar injections. Two observational studies indicated increased rates of lumbar ESI in Medicare beneficiaries. 
Our sample included 279 patients who received at least 1 ESI during 2010, with an estimated mean total outpatient reimbursement for one ESI procedure "event" to be $637, based on 2010 Medicare reimbursement amounts ($505 technical and $132 professional payments). This systematic review of ESI for treating lumbar spinal stenosis found a limited amount of data that suggest that ESI is effective in some patients for improving select short-term outcomes, but results differed depending on study design, outcome measures used, and comparison groups evaluated. Overall, there are relatively few comparative clinical or economic studies for ESI procedures for lumbar spinal stenosis in adults, which indicated a need for additional evidence. Copyright © 2013. Published by Elsevier Inc.
Choo, Wan Yuen; Hairi, Noran Naqiah; Sooryanarayana, Rajini; Yunus, Raudah Mohd; Hairi, Farizah Mohd; Ismail, Norliana; Kandiben, Shathanapriya; Mohd Ali, Zainudin; Ahmad, Sharifah Nor; Abdul Razak, Inayah; Othman, Sajaratulnisah; Tan, Maw Pin; Mydin, Fadzilah Hanum Mohd; Peramalah, Devi; Brownell, Patricia; Bulgiba, Awang
2016-05-25
Despite being now recognised as a global health concern, there is still an inadequate amount of research into elder mistreatment, especially in low and middle-income regions. The purpose of this paper is to report on the design and methodology of a population-based cohort study on elder mistreatment among the older Malaysian population. The study aims at gathering data and evidence to estimate the prevalence and incidence of elder mistreatment, identify its individual, familial and social determinants, and quantify its health consequences. This is a community-based prospective cohort study using randomly selected households from the national census. A multistage sampling method was employed to obtain a total of 2496 older adults living in the rural Kuala Pilah district. The study is divided into two phases: cross-sectional study (baseline), and a longitudinal follow-up study at the third and fifth years. Elder mistreatment was measured using instrument derived from the previous literature and modified Conflict Tactic Scales. Outcomes of elder mistreatment include mortality, physical function, mental health, quality of life and health utilisation. Logistic regression models are used to examine the relationship between risk factors and abuse estimates. Cox proportional hazard regression will be used to estimate risk of mortality associated with abuse. Associated annual rate of hospitalisation and health visit frequency, and reporting of abuse, will be estimated using Poisson regression. The study has been approved by the Medical Ethics Committee of the University of Malaya Medical Center (MEC Ref 902.2) and the Malaysian National Medical Research Register (NMRR-12-1444-11726). Written consent was obtained from all respondents prior to baseline assessment and subsequent follow-up. 
Findings will be disseminated to local stakeholders via forums with community leaders, and health and social welfare departments, and published in appropriate scientific journals and presented at conferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
ClonEvol: clonal ordering and visualization in cancer sequencing.
Dang, H X; White, B S; Foltz, S M; Miller, C A; Luo, J; Fields, R C; Maher, C A
2017-12-01
Reconstruction of clonal evolution is critical for understanding tumor progression and implementing personalized therapies. This is often done by clustering somatic variants based on their cellular prevalence estimated via bulk tumor sequencing of multiple samples. The clusters, consisting of the clonal marker variants, are then ordered based on their estimated cellular prevalence to reconstruct clonal evolution trees, a process referred to as 'clonal ordering'. However, cellular prevalence estimates are confounded by statistical variability and errors in sequencing/data analysis, which inhibits accurate reconstruction of the clonal evolution. This problem is further complicated by intra- and inter-tumor heterogeneity. Furthermore, the field lacks a comprehensive visualization tool to facilitate the interpretation of complex clonal relationships. To address these challenges, we developed ClonEvol, a unified software tool for clonal ordering, visualization, and interpretation. ClonEvol uses a bootstrap resampling technique to estimate the cellular fraction of the clones and probabilistically models the clonal ordering constraints to account for statistical variability. The bootstrapping allows identification of the sample founding- and sub-clones, thus enabling interpretation of clonal seeding. ClonEvol automates the generation of multiple widely used visualizations for reconstructing and interpreting clonal evolution. ClonEvol outperformed three state-of-the-art tools (LICHeE, Canopy and PhyloWGS) for clonal evolution inference, showing more robust error tolerance and producing more accurate trees in a simulation. Building upon multiple recent publications that utilized ClonEvol to study metastasis and drug resistance in solid cancers, here we show that ClonEvol rediscovered relapsed subclones in two published acute myeloid leukemia patients.
Furthermore, we demonstrated that through noninvasive monitoring ClonEvol recapitulated the emerging subclones throughout metastatic progression observed in the tumors of a published breast cancer patient. ClonEvol has broad applicability for longitudinal monitoring of clonal populations in tumor biopsies, or noninvasively, to guide precision medicine. ClonEvol is written in R and is available at https://github.com/ChrisMaherLab/ClonEvol. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
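The bootstrap step can be illustrated as a percentile bootstrap on a cluster's variant allele fractions (VAFs). This is a stand-in sketch, not ClonEvol's actual implementation, and the VAF values are hypothetical:

```python
import random

def bootstrap_mean_ci(values, n_boot=2000, alpha=0.05, seed=7):
    """Percentile-bootstrap confidence interval for the mean of a cluster's
    VAFs (for diploid heterozygous loci, cellular fraction ~ 2 x mean VAF).
    Resamples the observations with replacement n_boot times."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(sum(rng.choice(values) for _ in range(n)) / n
                   for _ in range(n_boot))
    return (means[int(n_boot * alpha / 2)],
            means[int(n_boot * (1 - alpha / 2)) - 1])

# Hypothetical VAFs of the marker variants in one cluster:
vafs = [0.42, 0.45, 0.40, 0.48, 0.44, 0.43, 0.46, 0.41]
lo, hi = bootstrap_mean_ci(vafs)
```

Comparing such intervals across clusters (rather than point estimates) is what lets clonal-ordering constraints tolerate statistical variability in the prevalence estimates.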
Brysbaert, Marc; Keuleers, Emmanuel; New, Boris
2011-01-01
In this Perspective Article we assess the usefulness of Google's new word frequencies for word recognition research (lexical decision and word naming). We find that, despite the massive corpus on which the Google estimates are based (131 billion words from books published in the United States alone), the Google American English frequencies explain 11% less of the variance in the lexical decision times from the English Lexicon Project (Balota et al., 2007) than the SUBTLEX-US word frequencies, based on a corpus of 51 million words from film and television subtitles. Further analyses indicate that word frequencies derived from recent books (published after 2000) are better predictors of word processing times than frequencies based on the full corpus, and that word frequencies based on fiction books predict word processing times better than word frequencies based on the full corpus. The most predictive word frequencies from Google still do not explain more of the variance in word recognition times of undergraduate students and old adults than the subtitle-based word frequencies. PMID:21713191
Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial
NASA Astrophysics Data System (ADS)
Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.
2011-03-01
Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. First, by means of an ellipse segmentation algorithm and a 2D/3D feature-based registration, we exploit the known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration, which serves to recover a refined and accurate estimate of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a ±10° and ±10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.
CORKSCREW 2013 CORK study of children's realistic estimation of weight.
Skrobo, Darko; Kelleher, Gemma
2015-01-01
In a resuscitation situation involving a child (age 1-15 years) it is crucial to obtain a weight as most interventions and management depend on it. The APLS formula, '2×(age+4)', is taught via the APLS course and is widely used in Irish hospitals. As the prevalence of obesity is increasing the accuracy of the formula has been questioned and a newer formula has been suggested, the Luscombe and Owens (LO) formula, '(3×age)+7'. To gather data on the weights and ages of the Cork paediatric population (ages 1-15 years) attending services at the Cork University Hospital (CUH), and to identify which of the two age-based weight estimation formulae has the best diagnostic accuracy. CUH, Ireland's only level one trauma centre. Retrospective data collection from charts in the Emergency Department, Paediatric Assessment Unit and the Paediatric wards of CUH. 3155 children aged 1-15 years were included in the study. There were 1344 girls and 1811 boys. The formula weight='2×(age+4)' underestimated children's weights by a mean of 20.3% (95% CI 19.7% to 20.9%) for the ages of 1-15 years. The LO formula weight='(3×age)+7' showed a mean underestimation of 4.0% (95% CI 3.3% to 4.6%) for the same age range. The LO formula has been validated in several studies and proven to be a superior age-based weight estimation formula in many western emergency departments. This study shows that the LO formula leads to less underestimation of weights in Irish children than the APLS formula. It is a simple, safe and more accurate age-based estimation formula that can be used over a large age range (1-15 years). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
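The two age-based estimation formulas compared above are simple enough to sketch directly (ages in years, weights in kg):

```python
def apls_weight(age_years):
    """APLS formula: weight (kg) = 2 * (age + 4)."""
    return 2 * (age_years + 4)

def luscombe_owens_weight(age_years):
    """Luscombe-Owens formula: weight (kg) = (3 * age) + 7."""
    return 3 * age_years + 7

# The two formulas agree at age 1 (10 kg); from age 2 upward the LO
# formula gives heavier estimates, consistent with the smaller
# underestimation reported above.
for age in (1, 5, 10, 15):
    print(age, apls_weight(age), luscombe_owens_weight(age))
```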
Data-based fault-tolerant control for affine nonlinear systems with actuator faults.
Xie, Chun-Hua; Yang, Guang-Hong
2016-09-01
This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to automatically compensate for the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Cost-effectiveness of human papillomavirus vaccination in the United States.
Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E
2008-02-01
We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
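The cost-per-QALY arithmetic underlying such models reduces to an incremental cost-effectiveness ratio. A minimal sketch; the per-vaccinee inputs below are hypothetical illustrations, not the study's parameter values:

```python
def cost_per_qaly(intervention_cost, treatment_costs_averted, qalys_gained):
    """Incremental cost-effectiveness ratio: net cost divided by QALYs gained."""
    return (intervention_cost - treatment_costs_averted) / qalys_gained

# Hypothetical per-vaccinee values, for illustration only
icer = cost_per_qaly(intervention_cost=360.0,
                     treatment_costs_averted=155.0,
                     qalys_gained=0.03)
print(f"${icer:,.0f} per QALY gained")  # -> $6,833 with these illustrative inputs
```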
Korenromp, Eline L; Mahiané, Guy; Rowley, Jane; Nagelkerke, Nico; Abu-Raddad, Laith; Ndowa, Francis; El-Kettani, Amina; El-Rhilani, Houssine; Mayaud, Philippe; Chico, R Matthew; Pretorius, Carel; Hecht, Kendall; Wi, Teodora
2017-12-01
To develop a tool for estimating national trends in adult prevalence of sexually transmitted infections in low- and middle-income countries, using standardised, routinely collected programme indicator data. The Spectrum-STI model fits time trends in the prevalence of active syphilis through logistic regression on prevalence data from antenatal clinic-based surveys, routine antenatal screening and general population surveys where available, weighting data by their national coverage and representativeness. Gonorrhoea prevalence was fitted as a moving average on population surveys (from the country, neighbouring countries and historic regional estimates), with trends informed additionally by urethral discharge case reports, where these were considered to have reasonably stable completeness. Prevalence data were adjusted for diagnostic test performance, high-risk populations not sampled, urban/rural and male/female prevalence ratios, using WHO's assumptions from latest global and regional-level estimations. Uncertainty intervals were obtained by bootstrap resampling. Estimated syphilis prevalence (in men and women) declined from 1.9% (95% CI 1.1% to 3.4%) in 2000 to 1.5% (1.3% to 1.8%) in 2016 in Zimbabwe, and from 1.5% (0.76% to 1.9%) to 0.55% (0.30% to 0.93%) in Morocco. At these time points, gonorrhoea estimates for women aged 15-49 years were 2.5% (95% CI 1.1% to 4.6%) and 3.8% (1.8% to 6.7%) in Zimbabwe; and 0.6% (0.3% to 1.1%) and 0.36% (0.1% to 1.0%) in Morocco, with male gonorrhoea prevalence 14% lower than female prevalence. This epidemiological framework facilitates data review, validation and strategic analysis, prioritisation of data collection needs and surveillance strengthening by national experts. We estimated ongoing syphilis declines in both Zimbabwe and Morocco. For gonorrhoea, time trends were less certain, lacking recent population-based surveys.
A new approach to estimating trends in chlamydia incidence.
Ali, Hammad; Cameron, Ewan; Drovandi, Christopher C; McCaw, James M; Guy, Rebecca J; Middleton, Melanie; El-Hayek, Carol; Hocking, Jane S; Kaldor, John M; Donovan, Basil; Wilson, David P
2015-11-01
Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method for estimating at a population level the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex using routine surveillance data. A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted for priors on the time-independent parameters; the shapes of these beta parameters were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour small changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures, that is, prevalence and incidence as reported by other studies. Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ∼120% over 12 years. Nationally, an estimated 356 000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponded to a chlamydia annual incidence estimate of 1.54% in 2013, increased from 0.81% in 2001 (∼90% increase). We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent and trends in chlamydia incidence.
Ekwueme, Donatus U; Allaire, Benjamin T; Parish, William J; Thomas, Cheryll C; Poehler, Diana; Guy, Gery P; Aldridge, Arnie P; Lahoti, Sejal R; Fairley, Temeika L; Trogdon, Justin G
2017-09-01
This study estimated the percentage of breast cancer cases, total number of incident cases, and total annual medical care costs attributable to alcohol consumption among insured younger women (aged 18-44 years) by type of insurance and stage at diagnosis. The study used the 2012-2013 National Survey on Drug Use and Health, cancer incidence data from two national registry programs, and published relative risk measures to estimate the: (1) alcohol-attributable fraction of breast cancer cases among younger women by insurance type; (2) total number of breast cancer incident cases attributable to alcohol consumption by stage at diagnosis and insurance type among younger women; and (3) total annual medical care costs of treating breast cancer incident cases attributable to alcohol consumption among younger women. Analyses were conducted in 2016; costs were expressed in 2014 U.S. dollars. Among younger women enrolled in Medicaid, private insurance, and both groups, 8.7% (95% CI=7.4%, 10.0%), 13.8% (95% CI=13.3%, 14.4%), and 12.3% (95% CI=11.4%, 13.1%) of all breast cancer cases, respectively, were attributable to alcohol consumption. Localized stage was the largest proportion of estimated attributable incident cases. The estimated total number of breast cancer incident alcohol-attributable cases was 1,636 (95% CI=1,570, 1,703) and accounted for estimated total annual medical care costs of $148.4 million (95% CI=$140.6 million, $156.1 million). Alcohol-attributable breast cancer has estimated medical care costs of nearly $150 million per year. The current findings could be used to support evidence-based interventions to reduce alcohol consumption in younger women. Published by Elsevier Inc.
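The attributable-fraction step in this kind of analysis typically uses Levin's formula over exposure categories. A minimal sketch; the prevalences, relative risks, case counts and costs below are hypothetical placeholders, not the study's inputs:

```python
def attributable_fraction(prevalences, relative_risks):
    """Levin's population attributable fraction over exposure levels i:
    AF = sum(p_i * (RR_i - 1)) / (1 + sum(p_i * (RR_i - 1)))."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalences, relative_risks))
    return excess / (1.0 + excess)

# Hypothetical drinking categories: (prevalence, relative risk)
af = attributable_fraction([0.30, 0.10], [1.2, 1.6])
incident_cases = 13_000       # hypothetical annual cases in the insured group
cost_per_case = 91_000.0      # hypothetical mean treatment cost, USD
attributable_cases = af * incident_cases
attributable_cost = attributable_cases * cost_per_case
```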
Sardar, Tridip; Rana, Sourav; Bhattacharya, Sabyasachi; Al-Khaled, Kamel; Chattopadhyay, Joydev
2015-05-01
In the present investigation, three mathematical models of a common single-strain mosquito-transmitted disease are considered. The first is based on ordinary differential equations, and the other two models are based on fractional order differential equations. The proposed models are validated using published monthly dengue incidence data from two provinces of Venezuela during the period 1999-2002. We estimate several parameters of these models, such as the order of the fractional derivatives (in the case of the two fractional order systems), the biting rate of the mosquito, two probabilities of infection, and the mosquito recruitment and mortality rates, from the data. The basic reproduction number, R0, for the ODE system is estimated using the data. For the two fractional order systems, an upper bound for R0 is derived and its value is obtained using the published data. The force of infection and the effective reproduction number, R(t), for the three models are estimated using the data. Sensitivity analysis of the mosquito memory parameter with some important responses is worked out. We use the Akaike Information Criterion (AIC) to identify the best model among the three proposed models. It is observed that the model with memory in both the host and the vector population provides a better agreement with the epidemic data. Finally, we provide a control strategy for the vector-borne disease, dengue, using the memory of the host and the vector. Copyright © 2015 Elsevier Inc. All rights reserved.
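The AIC comparison used to pick among the three models reduces to a small computation. A sketch; the log-likelihoods and parameter counts below are hypothetical placeholders, not fitted values from the paper:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: (log-likelihood, number of estimated parameters)
models = {
    "ODE":                 (-410.2, 7),
    "fractional (vector)": (-402.9, 8),
    "fractional (both)":   (-398.1, 9),  # memory in host and vector
}
scores = {name: aic(ll, k) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)  # with these inputs: "fractional (both)"
```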
Forsum, Elisabet; Henriksson, Pontus; Löf, Marie
2014-01-01
The ability to assess body composition during pregnancy is often important. Estimating body density (DB) and using the two-component model (2CM) to calculate total body fat (TBF) is one option. However, this approach has been insufficiently evaluated during pregnancy. We evaluated the 2CM, and estimated fat-free mass (FFM) density and variability, in 17 healthy women before pregnancy, in gestational weeks 14 and 32, and 2 weeks postpartum, based on DB (underwater weighing), total body water (deuterium dilution) and body weight, assessed on these four occasions. TBF, calculated using the 2CM and published FFM density (TBF2CM), was compared to reference estimates obtained using the three-component model (TBF3CM). TBF2CM minus TBF3CM (mean ± 2SD) was −1.63 ± 5.67 (p = 0.031), −1.39 ± 7.75 (p = 0.16), −0.38 ± 4.44 (p = 0.49) and −1.39 ± 5.22 (p = 0.043) % before pregnancy, in gestational weeks 14 and 32, and 2 weeks postpartum, respectively. The effect of pregnancy on the variability of FFM density was larger in gestational week 14 than in gestational week 32. The 2CM, based on DB and published FFM density, assessed body composition as accurately in gestational week 32 as in non-pregnant adults. Corresponding values in gestational week 14 were slightly less accurate than those obtained before pregnancy.
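The two-component model referred to above solves a density-balance equation for the fat fraction. A sketch using conventional non-pregnant reference densities for fat and fat-free mass; note the study's point is precisely that FFM density shifts during pregnancy, so these defaults are an assumption:

```python
def fat_fraction(body_density, d_fat=0.9007, d_ffm=1.100):
    """Two-component model: body volume is the sum of fat and fat-free
    volumes, so 1/DB = f/d_fat + (1 - f)/d_ffm; solve for fat fraction f.
    Densities in g/mL; defaults are conventional reference values."""
    return (1.0 / body_density - 1.0 / d_ffm) / (1.0 / d_fat - 1.0 / d_ffm)

tbf_percent = 100.0 * fat_fraction(1.040)  # illustrative DB of 1.040 g/mL
```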
Gubbels, Sophie; Nielsen, Kenn Schultz; Sandegaard, Jakob; Mølbak, Kåre; Nielsen, Jens
2016-11-01
The Danish National Patient Registry (DNPR) contains clinical and administrative data on all patients treated in Danish hospitals. The data model used for reporting is based on standardized coding of contacts rather than courses of admissions and ambulatory care. To reconstruct a coherent picture of courses of admission and ambulatory care, we designed an algorithm with 28 rules that manages transfers between departments, between hospitals and inconsistencies in the data, e.g., missing time stamps, overlaps and gaps. We used data from patients admitted between 1 January 2010 and 31 December 2014. After application of the DNPR algorithm, we estimated an average of 1,149,616 courses of admission per year or 205 hospitalizations per 1000 inhabitants per year. The median length of stay decreased from 1.58 days in 2010 to 1.29 days in 2014. The number of transfers between departments within a hospital increased from 111,576 to 176,134 while the number of transfers between hospitals decreased from 68,522 to 61,203. We standardized a 28-rule algorithm to relate registrations in the DNPR to each other in a coherent way. With the algorithm, we estimated 1.15 million courses of admissions per year, which probably reflects a more accurate estimate than the estimates that have been published previously. Courses of admission became shorter between 2010 and 2014 and outpatient contacts longer. These figures are compatible with a cost-conscious secondary healthcare system undertaking specialized treatment within a hospital and limiting referral to advanced services at other hospitals. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
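At its core, relating contacts to courses of admission is an interval-merging problem. A heavily simplified sketch: the real DNPR algorithm's 28 rules also handle missing time stamps, coding inconsistencies and hospital-level transfers, and the 4-hour bridging gap here is an assumption for illustration only:

```python
from datetime import datetime, timedelta

def merge_contacts(contacts, max_gap=timedelta(hours=4)):
    """Merge (start, end) hospital contacts into courses of admission,
    bridging transfers or gaps of at most max_gap."""
    courses = []
    for start, end in sorted(contacts):
        if courses and start - courses[-1][1] <= max_gap:
            # Transfer or short gap: extend the current course
            courses[-1][1] = max(courses[-1][1], end)
        else:
            courses.append([start, end])
    return [tuple(c) for c in courses]

courses = merge_contacts([
    (datetime(2014, 1, 1, 8), datetime(2014, 1, 1, 12)),
    (datetime(2014, 1, 1, 13), datetime(2014, 1, 2, 9)),   # transfer, 1 h gap
    (datetime(2014, 3, 5, 10), datetime(2014, 3, 5, 15)),  # separate admission
])  # -> two courses of admission
```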
Florence, Curtis S; Zhou, Chao; Luo, Feijun; Xu, Likang
2016-10-01
It is important to understand the magnitude and distribution of the economic burden of prescription opioid overdose, abuse, and dependence to inform clinical practice, research, and other decision makers. Decision makers choosing approaches to address this epidemic need cost information to evaluate the cost effectiveness of their choices. To estimate the economic burden of prescription opioid overdose, abuse, and dependence from a societal perspective. Incidence of fatal prescription opioid overdose from the National Vital Statistics System, prevalence of abuse and dependence from the National Survey of Drug Use and Health. Fatal data are for the US population; nonfatal data are a nationally representative sample of the US civilian noninstitutionalized population ages 12 and older. Cost data are from various sources including health care claims data from the Truven Health MarketScan Research Databases, and cost of fatal cases from the WISQARS (Web-based Injury Statistics Query and Reporting System) cost module. Criminal justice costs were derived from the Justice Expenditure and Employment Extracts published by the Department of Justice. Estimates of lost productivity were based on a previously published study. Calendar year 2013. Monetized burden of fatal overdose and abuse and dependence of prescription opioids. The total economic burden is estimated to be $78.5 billion. Over one third of this amount is due to increased health care and substance abuse treatment costs ($28.9 billion). Approximately one quarter of the cost is borne by the public sector in health care, substance abuse treatment, and criminal justice costs. These estimates can assist decision makers in understanding the magnitude of adverse health outcomes associated with prescription opioid use such as overdose, abuse, and dependence.
Progressive multifocal leukoencephalopathy after fingolimod treatment.
Berger, Joseph R; Cree, Bruce A; Greenberg, Benjamin; Hemmer, Bernhard; Ward, Brian J; Dong, Victor M; Merschhemke, Martin
2018-05-15
We describe the characteristics of the 15 patients with fingolimod-associated progressive multifocal leukoencephalopathy (PML) identified from the Novartis safety database and provide risk estimates for the disorder. The Novartis safety database was searched for PML cases with a data lock point of August 31, 2017. PML classification was based on previously published criteria. The risk and incidence were estimated using the 15 patients with confirmed PML and the overall population of patients treated with fingolimod. As of August 31, 2017, 15 fingolimod-treated patients had developed PML in the absence of natalizumab treatment in the preceding 6 months. Eleven (73%) were women and the mean age was 53 years (median: 53 years). Fourteen of the 15 patients were treated with fingolimod for >2 years. Two patients had confounding medical conditions. Two patients had natalizumab treatment. This included one patient whose last dose of natalizumab was 3 years and 9 months before the diagnosis of PML. The second patient was receiving fingolimod for 4 years and 6 months, which was discontinued to start natalizumab and was diagnosed with PML 3 months after starting natalizumab. Absolute lymphocyte counts were available for 14 of the 15 patients and none exhibited a sustained grade 4 lymphopenia (≤200 cells/μL). The risk of PML with fingolimod in the absence of prior natalizumab treatment is low. The estimated risk was 0.069 per 1,000 patients (95% confidence interval: 0.039-0.114), and the estimated incidence rate was 3.12 per 100,000 patient-years (95% confidence interval: 1.75-5.15). Neither clinical manifestations nor radiographic features suggested any unique features of fingolimod-associated PML. Copyright © 2018 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology.
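The reported risk and incidence rate are simple ratios of cases to exposure. The denominators below are back-calculated from the reported figures, not stated in the abstract, so treat them as illustrative assumptions:

```python
def risk_per_1000(cases, patients):
    """Cumulative risk expressed per 1,000 treated patients."""
    return 1000.0 * cases / patients

def incidence_per_100k_person_years(cases, person_years):
    """Incidence rate expressed per 100,000 patient-years."""
    return 100_000.0 * cases / person_years

# Assumed denominators, back-calculated from the published rates
risk = risk_per_1000(15, 217_000)                     # ~0.069 per 1,000
rate = incidence_per_100k_person_years(15, 481_000)   # ~3.12 per 100,000 PY
```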
Segal, Leonie; Guy, Sophie; Leach, Matthew; Groves, Aaron; Turnbull, Catherine; Furber, Gareth
2018-06-01
High-quality mental health services for infants, children, adolescents, and their families can improve outcomes for children exposed to early trauma. We sought to estimate the workforce needed to deliver tertiary-level community mental health care to all infants, children, adolescents, and their families in need using a generalisable model, applied to South Australia (SA). Workforce estimates were determined using a workforce planning model. Clinical need was established using data from the Longitudinal Study of Australian Children and the Young Minds Matter survey. Care requirements were derived by workshopping clinical pathways with multiprofessional panels, testing derived estimates through an online survey of clinicians. Prevalence of tertiary-level need, defined by severity and exposure to childhood adversities, was estimated at 5-8% across infancy and childhood, and 16% in mid-adolescence. The derived care pathway entailed reception, triage, and follow-up (mean 3 h per patient), core clinical management (mean 27 h per patient per year), psychiatric oversight (mean 4 h per patient per year), specialised clinical role (mean 12 h per patient per year), and socioeconomic support (mean 12 h per patient per year). The modelled clinical full-time equivalent was 947 people and budget was AU$126 million, more than five times the current service level. Our novel needs-based workforce model produced actionable estimates of the community workforce needed to address tertiary-level mental health needs in infants, children, adolescents, and their families in SA. A considerable expansion in the skilled workforce is needed to support young people facing current distress and associated family-based adversities. Because mental illness is implicated in so many burgeoning social ills, addressing this shortfall could have wide-ranging benefits. National Health and Medical Research Council (Australia), Department of Health SA. Copyright © 2018 The Authors. 
Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND 4.0 license. All rights reserved.
An Analysis of the Published Mineral Resource Estimates of the Haji-Gak Iron Deposit, Afghanistan
Sutphin, D.M.; Renaud, K.M.; Drew, L.J.
2011-01-01
The Haji-Gak iron deposit of eastern Bamyan Province, eastern Afghanistan, was studied extensively and resource calculations were made in the 1960s by Afghan and Russian geologists. Recalculation of the resource estimates verifies the original estimates for categories A (in-place resources known in detail), B (in-place resources known in moderate detail), and C1 (in-place resources estimated on sparse data), totaling 110.8 Mt, or about 6% of the resources, as being supportable for the methods used in the 1960s. C2 (based on a loose exploration grid with little data) resources are based on one ore grade from one drill hole, and P2 (prognosis) resources are based on field observations, field measurements, and an ore grade derived from averaging grades from three better-sampled ore bodies. C2 and P2 resources are 1,659.1 Mt, or about 94% of the total resources in the deposit. The vast P2 resources have not been drilled or sampled to confirm their extent or quality. The purpose of this article is to independently evaluate the resources of the Haji-Gak iron deposit by using the available geologic and mineral resource information including geologic maps and cross sections, sampling data, and the analog-estimating techniques of the 1960s to determine the size and tenor of the deposit. © 2011 International Association for Mathematical Geology (outside the USA).
Rahman, Md Shafiur; Rahman, Md Mizanur; Gilmour, Stuart; Swe, Khin Thet; Krull Abe, Sarah; Shibuya, Kenji
2018-01-01
Many countries are implementing health system reforms to achieve universal health coverage (UHC) by 2030. To understand the progress towards UHC in Bangladesh, we estimated trends in indicators of the health service and of financial risk protection. We also estimated the probability of Bangladesh's achieving of UHC targets of 80% essential health-service coverage and 100% financial risk protection by 2030. We estimated the coverage of UHC indicators-13 prevention indicators and four treatment indicators-from 19 nationally representative population-based household surveys done in Bangladesh from Jan 1, 1991, to Dec 31, 2014. We used a Bayesian regression model to estimate the trend and to predict the coverage of UHC indicators along with the probabilities of achieving UHC targets of 80% coverage of health services and 100% coverage of financial risk protection from catastrophic and impoverishing health payments by 2030. We used the concentration index and relative index of inequality to assess wealth-based inequality in UHC indicators. If the current trends remain unchanged, we estimated that coverage of childhood vaccinations, improved water, oral rehydration treatment, satisfaction with family planning, and non-use of tobacco will achieve the 80% target by 2030. However, coverage of four antenatal care visits, facility-based delivery, skilled birth attendance, postnatal checkups, care seeking for pneumonia, exclusive breastfeeding, non-overweight, and adequate sanitation were not projected to achieve the target. Quintile-specific projections showed wide wealth-based inequality in access to antenatal care, postnatal care, delivery care, adequate sanitation, and care seeking for pneumonia, and this inequality was projected to continue for all indicators. The incidence of catastrophic health expenditure and impoverishment were projected to increase from 17% and 4%, respectively, in 2015, to 20% and 9%, respectively, by 2030. 
Inequality analysis suggested that the wealthiest households would disproportionately face more financial catastrophe than the most disadvantaged households. Despite progress, Bangladesh will not achieve the 2030 UHC targets unless the country scales up interventions related to maternal and child health services, and reforms health financing systems to avoid high dependency on out-of-pocket payments. The introduction of a national health insurance system, increased public funding for health care, and expansion of community-based clinics in rural areas could help to move the country towards UHC. Japan Ministry of Health, Labour, and Welfare. Copyright © 2018 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND 4.0 license. All rights reserved.
Parsons, Christine E; Crane, Catherine; Parsons, Liam J; Fjorback, Lone Overby; Kuyken, Willem
2017-08-01
Mindfulness-Based Cognitive Therapy (MBCT) and Mindfulness-Based Stress Reduction (MBSR) emphasize the importance of mindfulness practice at home as an integral part of the program. However, the extent to which participants complete their assigned practice is not yet clear, nor is it clear whether this practice is associated with positive outcomes. For this systematic review and meta-analysis, searches were performed using Scopus and PubMed for studies published through to the end of 2015, reporting on formal home practice of mindfulness by MBSR or MBCT participants. Across 43 studies (N = 1427), the pooled estimate for participants' home practice was 64% of the assigned amount, equating to about 30 minutes per day, six days per week [95% CI 60-69%]. There was substantial heterogeneity associated with this estimate. Across 28 studies (N = 898), there was a small but significant association between participants' self-reported home practice and intervention outcomes (r = 0.26, 95% CI 0.19-0.34). MBSR and MBCT participants report completing substantial formal mindfulness practice at home over the eight-week intervention, albeit less than assigned amounts. There is a small but significant association between the extent of formal practice and positive intervention outcomes for a wide range of participants. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Anderson, R W G; Hutchinson, T P
2009-03-01
The motivation for this paper is the high rate of inappropriate child restraint selection in cars that is apparent in published surveys of child restraint use and how the public health messages promoting child restraints might respond. Advice has increasingly been given solely according to the child's weight, while many parents do not know the weight of their children. A common objection to promoting restraint use based on the age of the child is the imprecision of such advice, given the variation in the size of children, but the magnitude of the misclassification such advice would produce has never been estimated. This paper presents a method for estimating the misclassification of children by weight, when advice is posed in terms of age, and applies it to detailed child growth data published by the Centers for Disease Control and Prevention. In Australia, guidelines instructing all parents to promote their children from an infant restraint to a forward-facing child seat at 6 months, and then to a belt-positioning booster at 4 years, would mean that 5% of all children under the age of 6 years would be using a restraint not suited to their weight. Coordination of age-based advice and the weight ranges chosen for the Australian Standard on child restraints could reduce this level of misclassification to less than 1%. The general method developed may also be applied to other aspects of restraint design that are more directly relevant to good restraint fit.
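Under a normal approximation to the weight-for-age distribution, the misclassification fraction at a given age is the probability mass falling outside the recommended restraint's weight range. A sketch with hypothetical growth parameters; the paper itself uses CDC growth-curve data rather than a single normal fit:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def misclassified_fraction(mu, sigma, lo, hi):
    """Fraction of children whose weight falls outside the [lo, hi] kg
    range of the restraint recommended for their age."""
    return 1.0 - (normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma))

# Hypothetical 4-year-olds: mean 16.5 kg, SD 2.2 kg; booster rated 14-26 kg
frac = misclassified_fraction(16.5, 2.2, 14.0, 26.0)  # ~0.13
```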
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The purpose of this report is to: (a) review the extensive published and unpublished literature on the geochemistry, hydrology and geology of Lake Magadi, Kenya, and its associated hot springs; (b) based on this review and on field visits, estimate the temperature in the geothermal reservoir beneath the lake; and (c) from this, develop a plan to determine the potential for the development of geothermal electric power at Lake Magadi.
United States pulpwood receipts : softwood and hardwood, roundwood and residues, 1950-1989
C. Denise Ingrain; Irene Durbak; Peter Ince
1993-01-01
This report shows pulpwood receipts at pulp mills in the United States for the period 1950-1989. It is a compilation of published and estimated data based on information from various sources, including the American Pulpwood Association, American Paper Institute, U.S. Bureau of the Census, and the USDA Forest Service. Trends are shown in the use of hardwoods compared to...
ANDROMEDA DWARFS IN LIGHT OF MODIFIED NEWTONIAN DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGaugh, Stacy; Milgrom, Mordehai
We compare the recently published velocity dispersions for 17 Andromeda dwarf spheroidals with estimates of the modified Newtonian dynamics predictions, based on the luminosities of these dwarfs, with reasonable stellar mass-to-light values and no dark matter. We find that the two are consistent within the uncertainties. We further predict the velocity dispersions of another 10 dwarfs for which only photometric data are currently available.
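For an isolated system deep in the low-acceleration regime, MOND predicts a velocity dispersion that depends only on the stellar mass and the acceleration constant a0. A sketch of that estimate; the luminosity and mass-to-light ratio below are hypothetical, and dwarfs dominated by M31's external field require a different (quasi-Newtonian) formula:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # MOND acceleration scale, m s^-2
M_SUN = 1.989e30   # solar mass, kg

def mond_sigma_isolated(mass_kg):
    """Deep-MOND line-of-sight dispersion estimate for an isolated system:
    sigma = ((4/81) * G * M * a0) ** (1/4), in m/s."""
    return ((4.0 / 81.0) * G * mass_kg * A0) ** 0.25

# Hypothetical dwarf: L = 1e6 L_sun with stellar M/L = 1 in solar units
sigma_kms = mond_sigma_isolated(1e6 * M_SUN) / 1000.0  # ~5 km/s
```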
A Descriptive Epidemiology of Screen-Based Media Use in Youth: A Review and Critique
ERIC Educational Resources Information Center
Marshall, Simon J.; Gorely, Trish; Biddle, Stuart J. H.
2006-01-01
The purpose of this systematic review was to (i) estimate the prevalence and dose of television (TV) viewing, video game playing and computer use, and (ii) assess age-related and (iii) secular trends in TV viewing among youth (≤18 yr). Ninety studies published in English language journals between 1949 and 2004 were included,…
MTPA control of mechanical sensorless IPMSM based on adaptive nonlinear control.
Najjar-Khodabakhsh, Abbas; Soltani, Jafar
2016-03-01
In this paper, an adaptive nonlinear control scheme has been proposed for implementing a maximum torque per ampere (MTPA) control strategy for an interior permanent magnet synchronous motor (IPMSM) drive. This control scheme is developed in the rotor d-q axis reference frame using the adaptive input-output state feedback linearization (AIOFL) method. The drive system control stability is supported by Lyapunov theory. The motor inductances are online estimated by an estimation law obtained by AIOFL. The estimation errors of these parameters are proved to be asymptotically converged to zero. Based on minimizing the motor current amplitude, the MTPA control strategy is performed by using the nonlinear optimization technique while considering the online reference torque. The motor reference torque is generated by a conventional rotor speed PI controller. By performing the MTPA control strategy, the generated online motor d-q reference currents were used in the AIOFL controller to obtain the SV-PWM reference voltages and the online estimation of the motor d-q inductances. In addition, the stator resistance is online estimated using a conventional PI controller. Moreover, the rotor position is detected using the online estimation of the stator flux and online estimation of the motor q-axis inductance. Simulation and experimental results prove the effectiveness and the capability of the proposed control method.
Cost-effectiveness analysis of a hospital electronic medication management system.
Westbrook, Johanna I; Gospodarevskaya, Elena; Li, Ling; Richardson, Katrina L; Roffe, David; Heywood, Maureen; Day, Richard O; Graves, Nicholas
2015-07-01
To conduct a cost-effectiveness analysis of a hospital electronic medication management system (eMMS). We compared costs and benefits of paper-based prescribing with a commercial eMMS (CSC MedChart) on one cardiology ward in a major 326-bed teaching hospital, assuming a 15-year time horizon and a health system perspective. The eMMS implementation and operating costs were obtained from the study site. We used data on eMMS effectiveness in reducing potential adverse drug events (ADEs), and potential ADEs intercepted, based on review of 1 202 patient charts before (n = 801) and after (n = 401) eMMS. These were combined with published estimates of actual ADEs and their costs. The rate of potential ADEs following eMMS fell from 0.17 per admission to 0.05, a reduction of 71%. The annualized eMMS implementation, maintenance, and operating costs for the cardiology ward were A$61 741 (US$55 296). The estimated reduction in ADEs post eMMS was approximately 80 actual ADEs per year. The reduced costs associated with these ADEs were more than sufficient to offset the costs of the eMMS. Estimated savings resulting from eMMS implementation were A$63-66 (US$56-59) per admission (A$97 740-$102 000 per annum for this ward). Sensitivity analyses demonstrated that results were robust when both eMMS effectiveness and costs of actual ADEs were varied substantially. The eMMS within this setting was more effective and less expensive than paper-based prescribing. Comparison with the few previous full economic evaluations available suggests a marked improvement in the cost-effectiveness of eMMS, largely driven by increased effectiveness of contemporary eMMS in reducing medication errors. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
How has our knowledge of dinosaur diversity through geologic time changed through research history?
Tennant, Jonathan P; Chiarenza, Alfio Alessandro; Baron, Matthew
2018-01-01
Assessments of dinosaur macroevolution at any given time can be biased by the historical publication record. Recent studies have analysed patterns in dinosaur diversity that are based on secular variations in the numbers of published taxa. Many of these have employed a range of approaches that account for changes in the shape of the taxonomic abundance curve, which are largely dependent on databases compiled from the primary published literature. However, how these 'corrected' diversity patterns are influenced by the history of publication remains largely unknown. Here, we investigate the influence of publication history between 1991 and 2015 on our understanding of dinosaur evolution using raw diversity estimates and shareholder quorum subsampling for the three major subgroups: Ornithischia, Sauropodomorpha, and Theropoda. We find that, while sampling generally improves through time, there remain periods and regions in dinosaur evolutionary history where diversity estimates are highly volatile (e.g. the latest Jurassic of Europe, the mid-Cretaceous of North America, and the Late Cretaceous of South America). Our results show that historical changes in database compilation can often substantially influence our interpretations of dinosaur diversity. 'Global' estimates of diversity based on the fossil record are often also based on incomplete, and distinct regional signals, each subject to their own sampling history. Changes in the record of taxon abundance distribution, either through discovery of new taxa or addition of existing taxa to improve sampling evenness, are important in improving the reliability of our interpretations of dinosaur diversity. Furthermore, the number of occurrences and newly identified dinosaurs is still rapidly increasing through time, suggesting that it is entirely possible for much of what we know about dinosaurs at the present to change within the next 20 years.
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.
2005-01-01
This report reviews some of the literature on the fracture strength, fracture toughness, and crack growth properties of chemical-vapor-deposited ZnSe. The literature was reviewed to determine if the existing data on ZnSe is adequate to design windows for the Flow Enclosure Accommodating Novel Investigations in Combustion of Solids (FEANICS) project. Unfortunately, most of the published reports do not give all of the necessary design parameters despite having measured the data to do so. Further, the original data is not available. The data tabulated herein was determined by digitizing plots in original reprints of the publications. Based on the published data, an estimate of the slow-crack-growth parameters for small cracks in 100 percent humidity was made. For 100 percent humidity, the slow-crack-growth parameters n and A for small crack (or single crystal) failure were estimated. Weibull moduli estimated from bending of beams and circular plates ranged from 4 to 9, while fracture strengths ranged from 29 MPa in water to 72 MPa in dry nitrogen. Fracture toughness measurements yielded ranges, with the lower values representing failure from small flaws within grains and the larger values representing macroscopic cracks. Much of the data analyzed exhibited significant scatter, and the standard deviations were very large.
Schmier, Jordana; Ogden, Kristine; Nickman, Nancy; Halpern, Michael T; Cifaldi, Mary; Ganguli, Arijit; Bao, Yanjun; Garg, Vishvas
2017-08-01
Many hospital-based infusion centers treat patients with rheumatoid arthritis (RA) with intravenous biologic agents, yet may have a limited understanding of the overall costs of infusion in this setting. The purposes of this study were to conduct a microcosting analysis from a hospital perspective and to develop a model using an activity-based costing approach for estimating costs associated with the provision of hospital-based infusion services (preparation, administration, and follow-up) in the United States for maintenance treatment of moderate to severe RA. A spreadsheet-based model was developed. Inputs included hourly wages, time spent providing care, supply/overhead costs, laboratory testing, infusion center size, and practice pattern information. Base-case values were derived from data from surveys, published studies, standard cost sources, and expert opinion. Costs are presented in year-2017 US dollars. The base case modeled a hospital infusion center serving patients with RA treated with abatacept, tocilizumab, infliximab, or rituximab. Estimated overall costs of infusions per patient per year were $36,663 (rituximab), $36,821 (tocilizumab), $44,973 (infliximab), and $46,532 (abatacept). Of all therapies, the biologic agents represented the greatest share of overall costs, ranging from 87% to 91% of overall costs per year. Excluding infusion drug costs, labor accounted for 53% to 57% of infusion costs. Biologic agents represented the highest single cost associated with RA infusion care; however, personnel, supplies, and overhead costs also contributed substantially to overall costs (8%-16%). This model may provide a helpful and adaptable framework for use by hospitals in informing decision making about services offered and their associated financial implications. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Muciño-Ortega, Emilio; Mould-Quevedo, Joaquín Federico; Farkouh, Raymond; Strutton, David
2011-01-01
Vaccination is an effective intervention for reducing child morbidity and mortality associated with pneumococcus. The availability of new anti-pneumococcal vaccines makes it necessary to evaluate their potential impact on public health and the costs related to their implementation. The aim of this study was to estimate the cost-effectiveness and cost-utility of immunization strategies based on the pneumococcal conjugate vaccines (PCVs) currently available in Mexico, from a third-payer perspective. A decision tree model was developed to assess both the economic and the health impact of anti-pneumococcal vaccination in children <2 years (lifetime time horizon, discount rate: 5% annual). Comparators were: no vaccination (reference) and strategies based on the 7-, 10- and 13-valent PCVs. Effectiveness measures were: child deaths avoided, life-years gained (LYG) and quality-adjusted life-years (QALYs) gained. Effectiveness, utility, local epidemiology and costs of treating pneumococcal diseases were extracted from published sources. Univariate sensitivity analyses were performed. Immunization dominates no vaccination: the strategy based on the 13-valent vaccine prevented 16,205 deaths, gained 331,230 life-years and 332,006 QALYs, and saved US$1,307 per child vaccinated. Strategies based on the 7- and 10-valent PCVs prevented 13,806 and 5,589 deaths, gained 282,193 and 114,251 life-years and 282,969 and 114,972 QALYs, and saved US$1,084 and US$731 per child vaccinated, respectively. These results were robust to variations in herd immunity and to lower immunogenicity of the 10-valent vaccine. In Mexico, immunization strategies based on the 7-, 10- and 13-valent PCVs would be cost-saving interventions; however, the health outcomes and savings of the strategy based on the 13-valent vaccine are greater than those estimated for the 7- and 10-valent PCVs. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, incorporating inequality constraints poses some risk to estimation accuracy, because the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
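The tuning idea can be sketched in a few lines. The code below is a simplified stand-in, not the authors' algorithm: a scalar random-walk Kalman filter whose estimate is projected onto an inequality constraint (x ≥ 0) and blended with the unconstrained estimate using a heuristic confidence weight derived from the normalized innovation, echoing the residual-agreement measure described above.

```python
# Sketch: blend constrained and unconstrained Kalman estimates by confidence.
import math

def constrained_kf(zs, q=0.01, r=0.25, x0=1.0, p0=1.0):
    x, P = x0, p0
    out = []
    for z in zs:
        # Predict (random-walk model) and update.
        P += q
        S = P + r                      # innovation variance
        nu = z - x                     # measurement residual
        K = P / S
        x_u = x + K * nu               # unconstrained estimate
        P = (1 - K) * P
        # Confidence: high when the normalized innovation squared is near
        # its theoretical mean (1 for a scalar measurement).
        nis = nu * nu / S
        w = math.exp(-abs(nis - 1.0))  # heuristic weight in (0, 1]
        x_c = max(x_u, 0.0)            # projection onto the constraint x >= 0
        x = w * x_u + (1 - w) * x_c    # follow the optimal filter when confident
        out.append(x)
    return out

est = constrained_kf([0.9, 0.4, -0.3, -0.8, 0.2])
print(est)
```

When residuals behave as theory predicts, the weight stays near 1 and the filter is effectively unconstrained; when they disagree, the estimate is pulled toward the constrained (projected) value.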
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.
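For contrast with MLE-BD, the conventional estimator's bottleneck can be sketched directly: the Luria-Delbrück mutant-count distribution is evaluated with the Ma-Sandri-Sarkar recursion, whose quadratic cost in the largest mutant count is what slows likelihood computation. The grid search and the example counts below are illustrative.

```python
# Conventional Luria-Delbrück MLE via the Ma-Sandri-Sarkar recursion.
import math

def ld_pmf(m, n_max):
    """P(# mutants = n) for n = 0..n_max under the Luria-Delbrück model."""
    p = [math.exp(-m)]
    for n in range(1, n_max + 1):
        # O(n) work per term -> O(n_max^2) overall, the cited bottleneck.
        p.append((m / n) * sum(p[j] / (n - j + 1) for j in range(n)))
    return p

def mle_m(counts, grid=None):
    """Grid-search maximum likelihood estimate of m from mutant counts."""
    grid = grid or [0.1 * k for k in range(1, 101)]
    n_max = max(counts)
    best_m, best_ll = None, -math.inf
    for m in grid:
        pmf = ld_pmf(m, n_max)
        ll = sum(math.log(pmf[c]) for c in counts)
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m

m_hat = mle_m([0, 1, 0, 3, 12, 0, 2, 5, 0, 1])
print(f"estimated m = {m_hat:.1f}")
```

Because each likelihood evaluation requires the full recursion up to the largest observed count, experiments with very large mutant counts become expensive, which is the case MLE-BD is designed to handle.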
Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi
2015-01-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639
Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data.
Dalponte, Michele; Coomes, David A
2016-10-01
Forests are a major component of the global carbon cycle, and accurate estimation of forest carbon stocks and fluxes is important in the context of anthropogenic global change. Airborne laser scanning (ALS) data sets are increasingly recognized as outstanding data sources for high-fidelity mapping of carbon stocks at regional scales. We develop a tree-centric approach to carbon mapping, based on identifying individual tree crowns (ITCs) and species from airborne remote sensing data, from which individual tree carbon stocks are calculated. We identify ITCs from the laser scanning point cloud using a region-growing algorithm and identify species from airborne hyperspectral data by machine learning. For each detected tree, we predict stem diameter from its height and crown-width estimate. From that point on, we use well-established approaches developed for field-based inventories: above-ground biomasses of trees are estimated using published allometries and summed within plots to estimate carbon density. We show this approach is highly reliable: tests in the Italian Alps demonstrated a close relationship between field- and ALS-based estimates of carbon stocks (r² = 0.98). Small trees are invisible from the air, and a correction factor is required to accommodate this effect. An advantage of the tree-centric approach over existing area-based methods is that it can produce maps at any scale and is fundamentally based on field-based inventory methods, making it intuitive and transparent. Airborne laser scanning, hyperspectral sensing and computational power are all advancing rapidly, making it increasingly feasible to use ITC approaches for effective mapping of forest carbon density, including within wider carbon mapping programs such as REDD+.
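The tree-centric pipeline above can be sketched end-to-end: each detected crown gets a stem diameter predicted from height and crown width, a published-style allometry converts diameter and height to above-ground biomass (AGB), and plot totals are summed and converted to carbon. The diameter coefficients below are placeholders, not the paper's fitted relationships; the AGB equation follows the widely used Chave-style pantropical form, taken here as an assumption.

```python
# Tree-centric carbon sketch: crown -> diameter -> biomass -> plot carbon.
import math

def stem_diameter_cm(height_m, crown_width_m, a=0.8, b=1.1, c=0.6):
    """Hypothetical D ~ a * H^b * CW^c relationship (illustrative only)."""
    return a * height_m**b * crown_width_m**c

def agb_kg(d_cm, height_m, wood_density=0.5):
    """Chave-style allometry: AGB = 0.0673 * (rho * D^2 * H)^0.976."""
    return 0.0673 * (wood_density * d_cm**2 * height_m) ** 0.976

def plot_carbon_kg(trees, carbon_fraction=0.47):
    """Sum tree AGB within a plot and convert biomass to carbon."""
    return carbon_fraction * sum(
        agb_kg(stem_diameter_cm(h, cw), h) for h, cw in trees
    )

# (height m, crown width m) pairs for one hypothetical plot
print(f"{plot_carbon_kg([(18, 4), (25, 6), (9, 2)]):.0f} kg C")
```

A correction for small, undetected trees (as the paper notes) would be applied to the plot total before mapping.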
[Cost-benefit analysis of primary prevention programs for mental health at the workplace in Japan].
Yoshimura, Kensuke; Kawakami, Norito; Tsusumi, Akizumi; Inoue, Akiomi; Kobayashi, Yuka; Takeuchi, Ayano; Fukuda, Takashi
2013-01-01
To determine the cost-benefits of primary prevention programs for mental health at the workplace, we conducted a meta-analysis of published studies in Japan. We searched the literature, published as of 16 November 2011, using the PubMed database and relevant key words. The inclusion criteria were: conducted in the workplace in Japan; primary prevention focus; quasi-experimental studies or controlled trials; and outcomes including absenteeism or presenteeism. Four studies were identified: one participatory work environment improvement, one individual-oriented stress management, and two supervisor education programs. Costs and benefits in yen were estimated for each program, based on the description of the programs in the literature, and additional information from the authors. The benefits were estimated based on each program's effect on work performance (measured using the WHO Health and Work Performance Questionnaire in all studies), as well as sick leave days, if available. The estimated relative increase in work performance (%) in the intervention group compared to the control group was converted into labor cost using the average bonus (18% of the total annual salary) awarded to employees in Japan as a base. Sensitivity analyses were conducted using different models of the time-trend of intervention effects and the 95% confidence limits of the relative increase in work performance. For the participatory work environment improvement program, the cost was estimated as 7,660 yen per employee, and the benefit was 15,200-22,800 yen per employee. For the individual-oriented stress management program, the cost was 9,708 yen per employee, and the benefit was 15,200-22,920 yen per employee. For the supervisor education programs, the costs and benefits were respectively 5,209 and 4,400-6,600 yen per employee in one study, and 2,949 and zero yen per employee in the other study. The 95% confidence intervals were wide for all these studies.
For the point estimates based on these cases, the participatory work environment improvement program and the individual-oriented stress management program showed better cost-benefits. For the supervisor education programs, the costs were almost equal to or greater than the benefits. The results of the present study suggest these primary prevention programs for mental health at the workplace are economically advantageous to employers. Because the 95% confidence intervals were wide, further research is needed to clarify if these interventions yield statistically significant cost-benefits.
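The benefit conversion used across these estimates (a program's relative work-performance gain monetized against the 18% bonus share of average annual salary, then set against per-employee program cost) can be sketched as follows. The salary and effect-size inputs are illustrative, not the studies' values.

```python
# Cost-benefit sketch: monetize a work-performance gain via the bonus base.

def net_benefit_per_employee(cost_yen, rel_perf_gain, annual_salary_yen,
                             bonus_share=0.18):
    """Benefit = performance gain applied to the bonus portion of salary."""
    benefit = annual_salary_yen * bonus_share * rel_perf_gain
    return benefit - cost_yen, benefit

net, benefit = net_benefit_per_employee(
    cost_yen=7660, rel_perf_gain=0.02, annual_salary_yen=5_000_000)
print(f"benefit = {benefit:.0f} yen, net = {net:.0f} yen per employee")
```

With these assumed inputs, a 2% performance gain against a 5,000,000-yen salary yields an 18,000-yen benefit, so the program cost is recouped with a positive net.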
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
Paris, Andrew; Kozma, Chris M.; Chow, Wing; Patel, Anisha M.; Mody, Samir H.; Kim, Myoung S.
2013-01-01
Background Few studies have estimated the economic effect of using an opioid that is associated with lower rates of gastrointestinal (GI) adverse events (AEs) than another opioid for postsurgical pain. Objective To estimate the number of postsurgical GI events and incremental hospital costs, including potential savings, associated with lower GI AE rates, for tapentadol immediate release (IR) versus oxycodone IR, using a literature-based calculator. Methods An electronic spreadsheet-based cost calculator was developed to estimate the total number of GI AEs (ie, nausea, vomiting, or constipation) and incremental costs to a hospital when using tapentadol IR 100 mg versus oxycodone IR 15 mg, in a hypothetical cohort of 1500 hospitalized patients requiring short-acting opioids for postsurgical pain. Data inputs were chosen from recently published, well-designed studies, including GI AE rates from a previously published phase 3 clinical trial of postsurgical patients who received these 2 opioids; GI event-related incremental length of stay from a large US hospital database; drug costs using wholesale acquisition costs in 2011 US dollars; and average hospitalization cost from the 2009 Healthcare Cost and Utilization Project database. The base case assumed that 5% (chosen as a conservative estimate) of patients admitted to the hospital would shift from oxycodone IR to tapentadol IR. Results In this hypothetical cohort of 1500 hospitalized patients, replacing 5% of oxycodone IR 15-mg use with tapentadol IR 100-mg use predicted reductions in the total number of GI events from 1095 to 1085, and in the total cost of GI AEs from $2,978,400 to $2,949,840. This cost reduction translates to a net savings of $22,922 after factoring in drug cost. For individual GI events, the net savings were $26,491 for nausea; $12,212 for vomiting; and $7,187 for constipation.
Conclusion Using tapentadol IR in place of a traditional μ-opioid shows the potential for reduced GI events and subsequent cost-savings in the postsurgical hospital setting. In the absence of sufficient real-world data, this literature-based cost calculator may assist hospital Pharmacy & Therapeutics committees in their evaluation of the costs of opioid-related GI events. PMID:24991383
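The calculator's core arithmetic can be sketched generically: shift a share of the cohort to the alternative opioid, apply each drug's GI adverse-event rate and a per-event cost, and net out the drug-cost difference. All numeric inputs below are placeholders, not the published rates or costs.

```python
# Generic GI adverse-event cost-offset sketch (illustrative inputs only).

def gi_savings(cohort, shift_share, rate_old, rate_new,
               cost_per_event, drug_cost_delta_per_patient):
    shifted = cohort * shift_share
    events_avoided = shifted * (rate_old - rate_new)
    gross_savings = events_avoided * cost_per_event
    net_savings = gross_savings - shifted * drug_cost_delta_per_patient
    return events_avoided, net_savings

events, net = gi_savings(cohort=1500, shift_share=0.05,
                         rate_old=0.73, rate_new=0.60,
                         cost_per_event=2700,
                         drug_cost_delta_per_patient=75)
print(f"{events:.1f} GI events avoided, net savings ${net:,.0f}")
```

The structure mirrors the published model: savings scale with the rate difference and per-event cost, and the incremental drug cost of the substituted opioid is subtracted before reporting net savings.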
Neuro-estimator based GMC control of a batch reactive distillation.
Prakash, K J Jithin; Patle, Dipesh S; Jana, Amiya K
2011-07-01
In this paper, an artificial neural network (ANN)-based nonlinear control algorithm is proposed for a simulated batch reactive distillation (RD) column. In the homogeneously catalyzed reactive process, an esterification reaction takes place for the production of ethyl acetate. The fundamental model has been derived incorporating the reaction term in the model structure of the nonreactive distillation process. The process operation is simulated at the startup phase under total reflux conditions. The open-loop process dynamics is also addressed running the batch process at the production phase under partial reflux conditions. In this study, a neuro-estimator based generic model controller (GMC), which consists of an ANN-based state predictor and the GMC law, has been synthesized. Finally, this proposed control law has been tested on the representative batch reactive distillation comparing with a gain-scheduled proportional integral (GSPI) controller and with its ideal performance (ideal GMC). Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
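The GMC law at the heart of the controller can be sketched on a toy first-order process (the paper applies it to a batch reactive distillation model, with the states supplied by an ANN estimator rather than measured): choose u so the closed loop follows dy/dt = K1(ysp − y) + K2∫(ysp − y)dt, inverting a process model dy/dt = f(y) + g(y)·u. The process model below is an illustrative stand-in.

```python
# Generic model control (GMC) sketch on an assumed first-order process.

def simulate_gmc(ysp=1.0, k1=2.0, k2=0.5, dt=0.01, t_end=20.0):
    y, integ = 0.0, 0.0
    f = lambda y: -0.5 * y          # assumed process drift
    g = lambda y: 1.0               # assumed input gain
    for _ in range(int(t_end / dt)):
        e = ysp - y
        integ += e * dt
        u = (k1 * e + k2 * integ - f(y)) / g(y)   # GMC control law
        y += (f(y) + g(y) * u) * dt               # Euler step of the process
        yield y

y_final = list(simulate_gmc())[-1]
print(f"y(20) = {y_final:.3f}")
```

With the model inverted exactly, the closed loop reduces to the designer-chosen second-order response set by K1 and K2, which is the property that makes GMC attractive for the nonlinear distillation model.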
Fluence-based and microdosimetric event-based methods for radiation protection in space
NASA Technical Reports Server (NTRS)
Curtis, Stanley B.; Meinhold, C. B. (Principal Investigator)
2002-01-01
The National Council on Radiation Protection and Measurements (NCRP) has recently published a report (Report No. 137) that discusses various aspects of the concepts used in radiation protection and the difficulties in measuring the radiation environment in spacecraft for the estimation of radiation risk to space travelers. Two novel dosimetric methodologies, fluence-based and microdosimetric event-based methods, are discussed and evaluated, along with the more conventional quality factor/LET method. It was concluded that, for the present, there is no compelling reason to switch to a new methodology, although because of certain drawbacks in the conventional method now in use, these alternative methodologies should be kept in mind. As new data become available and dosimetric techniques become more refined, the question should be revisited; in the future, significant improvement might be realized. In addition, such concepts as equivalent dose and organ dose equivalent are discussed and various problems regarding the measurement/estimation of these quantities are presented.
Lopez, Anna Lena; You, Young Ae; Kim, Young Eun; Sah, Binod; Maskery, Brian; Clemens, John
2012-01-01
Objective To estimate the global burden of cholera using population-based incidence data and reports. Methods Countries with a recent history of cholera were classified as endemic or non-endemic, depending on whether they had reported cholera cases in at least three of the five most recent years. The percentages of the population in each country that lacked access to improved sanitation were used to compute the populations at risk for cholera, and incidence rates from published studies were applied to groups of countries to estimate the annual number of cholera cases in endemic countries. The estimates of cholera cases in non-endemic countries were based on the average numbers of cases reported from 2000 to 2008. Literature-based estimates of cholera case-fatality rates (CFRs) were used to compute the variance-weighted average cholera CFRs for estimating the number of cholera deaths. Findings About 1.4 billion people are at risk for cholera in endemic countries. An estimated 2.8 million cholera cases occur annually in such countries (uncertainty range: 1.4–4.3) and an estimated 87 000 cholera cases occur in non-endemic countries. The incidence is estimated to be greatest in children less than 5 years of age. Every year about 91 000 people (uncertainty range: 28 000 to 142 000) die of cholera in endemic countries and 2500 people die of the disease in non-endemic countries. Conclusion The global burden of cholera, as determined through a systematic review with clearly stated assumptions, is high. The findings of this study provide a contemporary basis for planning public health interventions to control cholera. PMID:22461716
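The estimation chain described above (at-risk population from sanitation coverage, incidence applied to the at-risk population, and deaths from a variance-weighted average case-fatality rate) can be sketched as follows, with illustrative rather than country-specific inputs.

```python
# Cholera burden sketch: at-risk population x incidence, deaths via weighted CFR.

def variance_weighted_cfr(cfrs_and_vars):
    """Inverse-variance weighted average of case-fatality rate estimates."""
    wsum = sum(1 / v for _, v in cfrs_and_vars)
    return sum(cfr / v for cfr, v in cfrs_and_vars) / wsum

def annual_burden(population, frac_no_sanitation, incidence_per_1000, cfr):
    at_risk = population * frac_no_sanitation
    cases = at_risk * incidence_per_1000 / 1000
    return cases, cases * cfr

# Two hypothetical published CFR estimates with their variances.
cfr = variance_weighted_cfr([(0.012, 1e-5), (0.035, 4e-5)])
cases, deaths = annual_burden(50_000_000, 0.4, 2.0, cfr)
print(f"cases = {cases:,.0f}, deaths = {deaths:,.0f}, CFR = {cfr:.3f}")
```

Inverse-variance weighting gives more precise CFR studies more influence, which is why it is preferred to a simple mean when pooling literature estimates.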
2013-01-01
Background Excess accumulation of visceral fat is a prominent risk factor for cardiovascular and metabolic morbidity. While computed tomography (CT) is the gold standard to measure visceral adiposity, this is often not possible for large studies, so valid but less expensive and less intrusive proxy measures of visceral fat, such as dual-energy X-ray absorptiometry (DXA), are required. Study aims were to a) identify a valid DXA-based measure of visceral adipose tissue (VAT), b) estimate VAT heritability and c) assess visceral fat association with morbidity in relation to body fat distribution. Methods A validation sample of 54 females measured for detailed body fat composition, assessed using CT, DXA and anthropometry, was used to evaluate previously published predictive models of CT-measured visceral fat. Based upon a validated model, we realised an out-of-sample estimate of abdominal VAT area for a study sample of 3457 female volunteer twins and estimated VAT area heritability using a classical twin study design. Regression and residuals analyses were used to assess the relationship between adiposity and morbidity. Results Published models applied to the validation sample explained >80% of the variance in CT-measured visceral fat. While CT visceral fat was best estimated using a linear regression for waist circumference, CT body cavity area and total abdominal fat (R2 = 0.91), anthropometric measures alone predicted VAT almost equally well (CT body cavity area and waist circumference, R2 = 0.86). Narrow sense VAT area heritability for the study sample was estimated to be 58% (95% CI: 51-66%) with a shared familial component of 24% (17-30%). VAT area is strongly associated with type 2 diabetes (T2D), hypertension (HT), subclinical atherosclerosis and liver function tests. In particular, VAT area is associated with T2D, HT and liver function (alanine transaminase) independent of DXA total abdominal fat and body mass index (BMI).
Conclusions DXA and anthropometric measures can be utilised to derive estimates of visceral fat as a reliable alternative to CT. Visceral fat is heritable and appears to mediate the association between body adiposity and morbidity. This observation is consistent with hypotheses that suggest excess visceral adiposity is causally related to cardiovascular and metabolic disease. PMID:23552273
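The validation step (regressing CT-measured VAT on waist circumference, body cavity area and total abdominal fat, then reporting R²) can be sketched with ordinary least squares on synthetic data. The coefficients, ranges, and noise level are assumptions standing in for the 54-subject validation sample, not the study's fitted model.

```python
# OLS sketch of the VAT prediction model validation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 54
waist = rng.uniform(60, 110, n)      # waist circumference, cm
cavity = rng.uniform(200, 500, n)    # body cavity area, cm^2
fat = rng.uniform(5, 30, n)          # total abdominal fat, kg
# Synthetic "CT ground truth" VAT area with assumed coefficients and noise.
vat = -150 + 1.5 * waist + 0.4 * cavity + 2.0 * fat + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), waist, cavity, fat])  # intercept + predictors
beta, *_ = np.linalg.lstsq(X, vat, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((vat - pred) ** 2) / np.sum((vat - vat.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

In the study, a model of this form fitted on the validation sample was then applied out-of-sample to estimate VAT area for the 3457 twins.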
NASA Astrophysics Data System (ADS)
Lim, Sungwoo; Prabhu, Vibha Levin; Anand, Mahesh; Taylor, Lawrence A.
2018-05-01
The authors regret that, because of an oversight, the published manuscript contained the following errors: (i) the estimated energy consumption for laser sintering was ten times larger than the real value, as a result of an incorrect unit conversion from J/mm² × thickness (μm) to kWh/m³; (ii) an inappropriate comparison with Benaroya (2010), as the estimate of energy consumption in Benaroya (2010) was based on a conventional furnace and NOT microwave heating. The revised text pertaining to paragraph 2 of Section 2.2.1, the last paragraph of Section 3.3 and Table 1 are provided below.
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1980-01-01
A computer-accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, the lower state energy, and the quantum number assignment. The catalogue was constructed using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.
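The catalogue's fit-and-predict principle can be sketched for the simplest molecular model, a rigid linear rotor with lines at f = 2B(J+1): least-squares fit B to measured lines, then propagate the parameter variance to a predicted line, just as the catalogue derives prediction errors from the fit covariances. The line list (loosely CO-like) and the measurement uncertainty are illustrative.

```python
# Least-squares fit of a rotational constant and error propagation to a
# predicted line, mimicking the catalogue's construction for a rigid rotor.
import math

# (J_lower, measured frequency in MHz) - illustrative, roughly CO-like lines.
lines = [(0, 115271.2), (1, 230538.0), (2, 345796.0)]

# Least squares for f = 2B(J+1): a single-parameter linear model f = B * x.
x = [2 * (J + 1) for J, _ in lines]
f = [freq for _, freq in lines]
sigma = 0.1  # assumed measurement uncertainty, MHz
Sxx = sum(xi * xi for xi in x)
B = sum(xi * fi for xi, fi in zip(x, f)) / Sxx   # fitted rotational constant
var_B = sigma**2 / Sxx                           # parameter variance

# Predict the J = 3 -> 4 line and its 1-sigma error from the fit covariance.
x_pred = 2 * 4
f_pred = B * x_pred
err_pred = x_pred * math.sqrt(var_B)
print(f"predicted f(J=3->4) = {f_pred:.1f} MHz +/- {err_pred:.3f} MHz")
```

Real catalogue entries come from multi-parameter Hamiltonians (centrifugal distortion, hyperfine structure), so the scalar variance here generalizes to a full covariance matrix, but the error-propagation logic is the same.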
Ammonia emission inventory for the state of Wyoming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchstetter, Thomas W.; Maser, Colette R.; Brown, Nancy J.
2003-12-17
Ammonia (NH{sub 3}) is the only significant gaseous base in the atmosphere and it has a variety of impacts as an atmospheric pollutant, including the formation of secondary aerosol particles: ammonium sulfate and ammonium nitrate. NH{sub 3} preferentially forms ammonium sulfate; consequently ammonium nitrate aerosol formation may be limited by the availability of NH{sub 3}. Understanding the impact of emissions of oxides of sulfur and nitrogen on visibility, therefore, requires accurately determined ammonia emission inventories for use in air quality models, upon which regulatory and policy decisions increasingly depend. This report presents an emission inventory of NH{sub 3} for the state of Wyoming. The inventory is temporally and spatially resolved at the monthly and county level, and comprises emissions from individual sources in ten categories: livestock, fertilizer, domestic animals, wild animals, wildfires, soil, industry, mobile sources, humans, and publicly owned treatment works. The Wyoming NH{sub 3} inventory was developed using the Carnegie Mellon University (CMU) Ammonia Model as a framework. Current Wyoming-specific activity data and emissions factors obtained from state agencies and published literature were assessed and used as inputs to the CMU Ammonia Model. Biogenic emissions from soils comprise about three-quarters of the Wyoming NH{sub 3} inventory, though emission factors for soils are highly uncertain. Published emission factors are scarce and based on limited measurements. In Wyoming, agricultural land, rangeland, and forests comprise 96% of the land area and essentially all of the estimated emissions from soils. Future research on emission rates of NH{sub 3} for these land categories may lead to a substantial change in the magnitude of soil emissions, a different inventory composition, and reduced uncertainty in the inventory.
While many NH{sub 3} inventories include annual emissions, air quality modeling studies require finer temporal resolution. Published studies indicate higher emission rates from soils and animal wastes at higher temperatures, and temporal variation in fertilizer application. A recent inverse modeling study indicates temporal variation in regional NH{sub 3} emissions. Monthly allocation factors were derived to estimate monthly emissions from soils, livestock and wild animal waste based on annual emission estimates. Monthly resolution of NH{sub 3} emissions from fertilizers is based on fertilizer sales to farmers. Statewide NH{sub 3} emissions are highest in the late spring and early summer months.
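The monthly-allocation step described above amounts to splitting an annual total by a set of twelve factors that sum to one. A hedged sketch, with a made-up profile peaking in late spring and early summer (the factors below are not Wyoming's derived values):

```python
def monthly_emissions(annual_total: float, factors: list[float]) -> list[float]:
    """Split an annual emission estimate into months using allocation factors.

    The 12 factors must sum to 1; in practice they might be derived from
    monthly temperature (soils, animal waste) or fertilizer sales.
    """
    assert abs(sum(factors) - 1.0) < 1e-9, "allocation factors must sum to 1"
    return [annual_total * f for f in factors]

# Hypothetical profile with a late-spring / early-summer peak (Jan..Dec):
factors = [0.04, 0.04, 0.06, 0.09, 0.13, 0.15,
           0.14, 0.11, 0.08, 0.07, 0.05, 0.04]
monthly = monthly_emissions(1000.0, factors)  # e.g. 1000 t NH3 per year
```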
Chen, Wei-Yu; Liao, Chung-Min
2012-11-01
The purpose of this study was to link toxicokinetics/toxicodynamics (TK/TD) and bioavailability-based metal uptake kinetics to assess arsenic (As) uptake and bioaccumulation in three common farmed species of tilapia (Oreochromis mossambicus), milkfish (Chanos chanos), and freshwater clam (Corbicula fluminea). We developed a mechanistic framework by linking damage assessment model (DAM) and bioavailability-based Michaelis-Menten model for describing TK/TD and As uptake mechanisms. The proposed model was verified with published acute toxicity data. The estimated TK/TD parameters were used to simulate the relationship between bioavailable As uptake and susceptibility probability. The As toxicity was also evaluated based on a constructed elimination-recovery scheme. Absorption rate constants were estimated to be 0.025, 0.016, and 0.175 mL g(-1) h(-1) and As uptake rate constant estimates were 22.875, 63.125, and 788.318 ng g(-1) h(-1) for tilapia, milkfish, and freshwater clam, respectively. Here we showed that a potential trade-off between capacities of As elimination and damage recovery was found among three farmed species. Moreover, the susceptibility probability can also be estimated by the elimination-recovery relations. This study suggested that bioavailability-based uptake kinetics and TK/TD-based DAM could be integrated for assessing metal uptake and toxicity in aquatic organisms. This study is useful to quantitatively assess the complex environmental behavior of metal uptake and implicate to risk assessment of metals in aquaculture systems.
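The bioavailability-based Michaelis-Menten term at the core of the uptake model can be sketched in a few lines; the parameter values here are illustrative, not the fitted constants reported for the three species.

```python
def uptake_rate(conc: float, v_max: float, k_m: float) -> float:
    """Michaelis-Menten uptake: the rate saturates at v_max as the
    bioavailable metal concentration grows; k_m is the half-saturation
    concentration (illustrative units)."""
    return v_max * conc / (k_m + conc)

# Half of v_max is reached exactly when conc == k_m:
print(uptake_rate(5.0, v_max=10.0, k_m=5.0))  # 5.0
```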
Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.
2013-01-01
This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151
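The down-weighting of implausible sensor sources described above has a textbook parametric analogue: precision-weighted fusion of redundant Gaussian estimates, where a high-variance (implausible) cue contributes little to the fused state. This is only an analogue; nMMF itself uses distributed probabilistic population codes, not parametric Gaussians.

```python
def fuse_gaussians(estimates: list[tuple[float, float]]) -> tuple[float, float]:
    """Precision-weighted fusion of redundant 1-D Gaussian (mean, variance)
    estimates of the same body-state variable."""
    precisions = [1.0 / var for _, var in estimates]
    mean = sum(mu * p for (mu, _), p in zip(estimates, precisions)) / sum(precisions)
    var = 1.0 / sum(precisions)
    return mean, var

# Two agreeing cues plus one noisy, conflicting cue; the conflicting cue's
# high variance gives it almost no influence on the fused estimate:
mean, var = fuse_gaussians([(0.0, 1.0), (0.2, 1.0), (5.0, 100.0)])
```

The fused variance is always smaller than any single cue's variance, which is the sense in which additional module interactions improve overall estimation accuracy.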
Estimating the surface area of birds: using the homing pigeon (Columba livia) as a model.
Perez, Cristina R; Moye, John K; Pritsos, Chris A
2014-05-08
Estimation of the surface area of the avian body is valuable for thermoregulation and metabolism studies as well as for assessing exposure to oil and other surface-active organic pollutants from a spill. The use of frozen carcasses for surface area estimation precludes adjusting the posture of the bird. The surface area of six live homing pigeons in the fully extended flight position was estimated using a noninvasive method. An equation was derived to estimate the total surface area of a pigeon based on its body weight. A pigeon's surface area in the fully extended flight position is approximately 4 times larger than its surface area in the perching position. The surface area of a bird depends on its physical position; the fully extended flight position therefore exhibits the bird's maximum area and should be considered its true surface area. © 2014. Published by The Company of Biologists Ltd | Biology Open.
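Weight-based surface area equations for animals are typically Meeh-type power laws, S = k · W^b. The abstract does not give the derived pigeon equation, so the constant and exponent below are generic illustrative values, not the study's; only the reported 4x flight-to-perching ratio comes from the source.

```python
def surface_area_cm2(mass_g: float, k: float = 10.0, exponent: float = 2 / 3) -> float:
    """Meeh-type allometric surface area estimate: S = k * W**exponent.

    k and exponent here are generic placeholder values, not the
    pigeon-specific constants derived in the study.
    """
    return k * mass_g ** exponent

# The paper's ~4x flight-vs-perching ratio corresponds to scaling k by 4:
perching = surface_area_cm2(400.0)            # hypothetical 400 g pigeon
flight = surface_area_cm2(400.0, k=40.0)
```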
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
McNeil, D E; Brown, M; Ching, A; DeBaun, M R
2001-10-01
We undertook a cost-benefit analysis of screening for Wilms tumor and hepatoblastoma in children with Beckwith-Wiedemann syndrome (BWS), a known cancer predisposition syndrome. The purpose of this analysis was twofold: first, to assess whether screening in children with BWS has the potential to be cost-effective; second, if screening appears to be cost-effective, to determine which parameters would be most important to assess if a screening trial were initiated. We used data from the BWS registry at the National Cancer Institute, the National Wilms Tumor Study (NWTS), and large published series to model events for two hypothetical cohorts of 1,000 infants born with BWS. One hypothetical cohort was screened for cancer until a predetermined age, representing the base case. The other cohort was unscreened. For our base case, we assumed: (a) sonography examinations three times yearly (triannually) from birth until 7 years of age; (b) screening would result in one stage shift downward at diagnosis for Wilms tumor and hepatoblastoma; (c) 100% sensitivity and 95% specificity for detecting clinical stage I Wilms tumor and hepatoblastoma; (d) a 3% discount rate; (e) a false positive result cost of $402. We estimated mortality rates based on published Wilms tumor and hepatoblastoma stage specific survival. Using the base case, screening a child with BWS from birth until 4 years of age results in a cost per life year saved of $9,642 while continuing until 7 years of age results in a cost per life-year saved of $14,740. When variables such as cost of screening examination, discount rate, and effectiveness of screening were varied based on high and low estimates, the incremental cost per life-year saved for screening up until age four remained comparable to acceptable population based cancer screening ranges (< $50,000 per life year saved). 
Under our model's assumptions, abdominal sonography examinations in children with BWS represent a reasonable strategy for a cancer screening program. A cancer screening trial is warranted to determine if, when, and how often children with BWS should be screened and to determine cost-effectiveness in clinical practice. Published 2001 Wiley-Liss, Inc.
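The cost-per-life-year-saved metric used above divides program cost by discounted life-years, at the base case's 3% discount rate. A minimal sketch with illustrative inputs (these are not the registry-derived figures behind the $9,642 and $14,740 estimates):

```python
def cost_per_life_year(total_cost: float, life_years_by_year: list[float],
                       discount_rate: float = 0.03) -> float:
    """Cost per discounted life-year saved, discounting each year's
    life-years back to the present at the given rate (3% in the base case)."""
    discounted = sum(ly / (1 + discount_rate) ** t
                     for t, ly in enumerate(life_years_by_year))
    return total_cost / discounted

# e.g. a hypothetical program costing $100,000 that saves one life-year
# per year for 10 years:
ratio = cost_per_life_year(100_000.0, [1.0] * 10)
```

Discounting raises the ratio relative to the undiscounted case, which is why screening to age 7 costs more per life-year than stopping at age 4.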
Comprehensive European dietary exposure model (CEDEM) for food additives.
Tennant, David R
2016-05-01
European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
Time cost of child rearing and its effect on women's uptake of free health checkups in Japan.
Anezaki, Hisataka; Hashimoto, Hideki
2018-05-01
Women of child-rearing age have the lowest uptake rates for health checkups in several developed countries. The time cost incurred by conflicting child-rearing roles may contribute to this gap in access to health checkups. We estimated the time cost of child rearing empirically, and analyzed its potential impact on uptake of free health checkups based on a sample of 1606 women with a spouse/partner from the dataset of a population-based survey conducted in the greater Tokyo metropolitan area in 2010. We used a selection model to estimate the counterfactual wage of non-working mothers, and estimated the number of children using a simultaneous equation model to account for the endogeneity between job participation and child rearing. The time cost of child rearing was obtained based on the estimated effects of women's wages and number of children on job participation. We estimated the time cost to mothers of rearing a child aged 0-3 years as 16.9 USD per hour, and the cost for a child aged 4-5 years as 15.0 USD per hour. Based on this estimation, the predicted uptake rate of women who did not have a child was 61.7%, while the predicted uptake rates for women with a child aged 0-3 and 4-5 were 54.2% and 58.6%, respectively. These results suggest that, although Japanese central/local governments provide free health checkup services, this policy does not fully compensate for the time cost of child rearing. It is strongly recommended that policies should be developed to address the time cost of child rearing, with the aim of closing the gender gap and securing universal access to preventive healthcare services in Japan. Copyright © 2018. Published by Elsevier Ltd.
Friesen, Melissa C; Bassig, Bryan A; Vermeulen, Roel; Shu, Xiao-Ou; Purdue, Mark P; Stewart, Patricia A; Xiang, Yong-Bing; Chow, Wong-Ho; Ji, Bu-Tian; Yang, Gong; Linet, Martha S; Hu, Wei; Gao, Yu-Tang; Zheng, Wei; Rothman, Nathaniel; Lan, Qing
2017-01-01
To provide insight into the contributions of exposure measurements to job exposure matrices (JEMs), we examined the robustness of an association between occupational benzene exposure and non-Hodgkin lymphoma (NHL) to varying exposure assessment methods. NHL risk was examined in a prospective population-based cohort of 73087 women in Shanghai. A mixed-effects model that combined a benzene JEM with >60000 short-term, area benzene inspection measurements was used to derive two sets of measurement-based benzene estimates: 'job/industry-specific' estimates (our presumed best approach) were derived from the model's fixed effects (year, JEM intensity rating) and random effects (occupation, industry); 'calibrated JEM' estimates were derived using only the fixed effects. 'Uncalibrated JEM' (using the ordinal JEM ratings) and exposure duration estimates were also calculated. Cumulative exposure for each subject was calculated for each approach based on varying exposure definitions defined using the JEM's probability ratings. We examined the agreement between the cumulative metrics and evaluated changes in the benzene-NHL associations. For our primary exposure definition, the job/industry-specific estimates were moderately to highly correlated with all other approaches (Pearson correlation 0.61-0.89; Spearman correlation > 0.99). All these metrics resulted in statistically significant exposure-response associations for NHL, with negligible gain in model fit from using measurement-based estimates. Using more sensitive or specific exposure definitions resulted in elevated but non-significant associations. The robust associations observed here with varying benzene assessment methods provide support for a benzene-NHL association. While incorporating exposure measurements did not improve model fit, the measurements allowed us to derive quantitative exposure-response curves. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
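Whatever assessment method supplies the intensity estimates, cumulative exposure for a subject reduces to intensity times years held, summed over the job history. A sketch with an invented toy JEM (the occupations, periods, and intensity values are illustrative, not from the Shanghai cohort):

```python
def cumulative_exposure(job_history: list[tuple[str, str, float]],
                        jem: dict[tuple[str, str], float]) -> float:
    """Cumulative benzene exposure: sum over jobs of JEM intensity x years.

    jem maps (occupation, calendar period) -> exposure intensity; every key
    and value below is invented for illustration.
    """
    return sum(jem[(occ, period)] * years for occ, period, years in job_history)

jem = {("painter", "1980s"): 2.0, ("clerk", "1990s"): 0.1}
history = [("painter", "1980s", 5.0), ("clerk", "1990s", 10.0)]
total = cumulative_exposure(history, jem)  # 2.0*5 + 0.1*10
```

Swapping calibrated model predictions for the ordinal JEM ratings changes only the intensity values, not this accumulation step, which is why the metrics correlate so strongly.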
The development of open access journal publishing from 1993 to 2009.
Laakso, Mikael; Welling, Patrik; Bukvova, Helena; Nyman, Linus; Björk, Bo-Christer; Hedlund, Turid
2011-01-01
Open Access (OA) is a model for publishing scholarly peer reviewed journals, made possible by the Internet. The full text of OA journals and articles can be freely read, as the publishing is funded through means other than subscriptions. Empirical research concerning the quantitative development of OA publishing has so far consisted of scattered individual studies providing brief snapshots, using varying methods and data sources. This study adopts a systematic method for studying the development of OA journals from their beginnings in the early 1990s until 2009. Because no comprehensive index of OA articles exists, systematic manual data collection from journal web sites was conducted based on journal-level data extracted from the Directory of Open Access Journals (DOAJ). Due to the high number of journals registered in the DOAJ, almost 5000 at the time of the study, stratified random sampling was used. A separate sample of verified early pioneer OA journals was also studied. The results show a very rapid growth of OA publishing during the period 1993-2009. During the last year an estimated 191 000 articles were published in 4769 journals. Since the year 2000, the average annual growth rate has been 18% for the number of journals and 30% for the number of articles. This can be contrasted to the reported 3.5% yearly volume increase in journal publishing in general. In 2009 the share of articles in OA journals, of all peer reviewed journal articles, reached 7.7%. Overall, the results document a rapid growth in OA journal publishing over the last fifteen years. Based on the sampling results and qualitative data a division into three distinct periods is suggested: The Pioneering years (1993-1999), the Innovation years (2000-2004), and the Consolidation years (2005-2009).
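The reported average annual growth rate is the geometric (compound) rate implied by the start and end counts. A quick sketch; the year-2000 journal count below is back-computed from the 2009 figure and the 18% rate, purely for illustration:

```python
def annual_growth_rate(start: float, end: float, years: int) -> float:
    """Average annual (compound) growth rate implied by start/end counts:
    solves start * (1 + r)**years == end for r."""
    return (end / start) ** (1 / years) - 1

# 4769 journals in 2009; ~1075 in 2000 is the illustrative back-computed
# starting count consistent with the reported 18%/yr over 9 years.
rate = annual_growth_rate(1075, 4769, 9)
print(f"{rate:.1%}")  # ~18%
```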
Estimating the functional dimensionality of neural representations.
Ahlheim, Christiane; Love, Bradley C
2018-06-07
Recent advances in multivariate fMRI analysis stress the importance of information inherent to voxel patterns. Key to interpreting these patterns is estimating the underlying dimensionality of neural representations. Dimensions may correspond to psychological dimensions, such as length and orientation, or involve other coding schemes. Unfortunately, the noise structure of fMRI data inflates dimensionality estimates and thus makes it difficult to assess the true underlying dimensionality of a pattern. To address this challenge, we developed a novel approach to identify brain regions that carry reliable task-modulated signal and to derive an estimate of the signal's functional dimensionality. We combined singular value decomposition with cross-validation to find the best low-dimensional projection of a pattern of voxel responses at a single-subject level. Goodness of the low-dimensional reconstruction is measured as Pearson correlation with a test set, which allows testing for significance of the low-dimensional reconstruction across participants. Using hierarchical Bayesian modeling, we derive the best estimate and associated uncertainty of underlying dimensionality across participants. We validated our method on simulated data of varying underlying dimensionality, showing that recovered dimensionalities closely match the true dimensionalities. We then applied our method to three published fMRI data sets, all involving processing of visual stimuli. The results highlight three possible applications of estimating the functional dimensionality of neural data. Firstly, it can aid evaluation of model-based analyses by revealing which areas express reliable, task-modulated signal that could be missed by specific models. Secondly, it can reveal functional differences across brain regions. Thirdly, knowing the functional dimensionality allows assessing task-related differences in the complexity of neural patterns. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
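The SVD-plus-cross-validation core of the approach can be sketched as follows: reconstruct the training pattern at each rank and keep the rank whose reconstruction correlates best with held-out data. This omits the paper's hierarchical Bayesian step across participants and uses synthetic patterns in place of voxel responses.

```python
import numpy as np

def best_dimensionality(train, test, max_k=None):
    """Cross-validated SVD sketch: score each rank-k reconstruction of the
    training pattern by its Pearson correlation with a held-out test set,
    and return the best rank plus all scores."""
    u, s, vt = np.linalg.svd(train, full_matrices=False)
    max_k = max_k or len(s)
    scores = []
    for k in range(1, max_k + 1):
        recon = (u[:, :k] * s[:k]) @ vt[:k]   # rank-k reconstruction
        scores.append(np.corrcoef(recon.ravel(), test.ravel())[0, 1])
    return int(np.argmax(scores)) + 1, scores

# Synthetic "voxel patterns" with true dimensionality 3 plus noise:
rng = np.random.default_rng(0)
signal = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 30))
train = signal + 0.5 * rng.normal(size=signal.shape)
test = signal + 0.5 * rng.normal(size=signal.shape)
k_hat, scores = best_dimensionality(train, test, max_k=10)
```

Because the noise in `train` does not reproduce in `test`, correlation stops improving once the rank exceeds the signal's true dimensionality, which is what keeps the estimate from being inflated.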
Direct costs of unintended pregnancy in the Russian federation.
Lowin, Julia; Jarrett, James; Dimova, Maria; Ignateva, Victoria; Omelyanovsky, Vitaly; Filonenko, Anna
2015-02-01
In 2010, almost every third pregnancy in Russia was terminated, indicating that unintended pregnancy (UP) is a public health problem. The aim of this study was to estimate the direct cost of UP to the healthcare system in Russia and the proportion attributable to using unreliable contraception. A cost model was built, adopting a generic payer perspective with a 1-year time horizon. The analysis cohort was defined as women of childbearing age between 18 and 44 years actively seeking to avoid pregnancy. Model inputs were derived from published sources or government statistics with a 2012 cost base. To estimate the number of UPs attributable to unreliable methods, the model combined annual typical use failure rates and age-adjusted utilization for each contraceptive method. Published survey data was used to adjust the total cost of UP by the number of UPs that were mistimed rather than unwanted. Scenario analysis considered alternate allocation of methods to the reliable and unreliable categories and estimate of the burden of UP in the target sub-group of women aged 18-29 years. The model estimated 1,646,799 UPs in the analysis cohort (women aged 18-44 years) with an associated annual cost of US$783 million. The model estimated 1,019,371 UPs in the target group of 18-29 years, of which 88 % were attributable to unreliable contraception. The total cost of UPs in the target group was estimated at approximately US$498 million, of which US$441 million could be considered attributable to the use of unreliable methods. The cost of UP attributable to use of unreliable contraception in Russia is substantial. Policies encouraging use of reliable contraceptive methods could reduce the burden of UP.
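The central calculation of the model combines users of each contraceptive method with that method's typical-use failure rate. A hedged sketch; the methods, user counts, and rates below are invented for illustration and are not the model's age-adjusted Russian inputs:

```python
def expected_ups(users_by_method: dict[str, float],
                 typical_failure: dict[str, float]) -> float:
    """Expected unintended pregnancies per year: for each method, users
    multiplied by the annual typical-use failure rate, summed over methods."""
    return sum(n * typical_failure[m] for m, n in users_by_method.items())

# Hypothetical inputs:
users = {"pill": 1_000_000, "condom": 2_000_000, "withdrawal": 500_000}
rates = {"pill": 0.09, "condom": 0.18, "withdrawal": 0.22}
total_ups = expected_ups(users, rates)
```

Multiplying the result by a per-pregnancy cost, and adjusting for mistimed versus unwanted pregnancies, yields the burden figures reported above.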
Estimating reliable paediatric reference intervals in clinical chemistry and haematology.
Ridefelt, Peter; Hellberg, Dan; Aldrimer, Mattias; Gustafsson, Jan
2014-01-01
Very few high-quality studies on paediatric reference intervals for general clinical chemistry and haematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The present review summarises current reference interval studies for common clinical chemistry and haematology analyses. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Environmental Assessment: T-10 Hush House Tinker Air Force Base, Oklahoma
2008-07-01
...require modification of Tinker AFB’s current permits. PUBLIC COMMENTS: A Notice of Availability for public review of the Draft EA was published in the
The Economic Impact of Domestic Military Installations on Regional Economies.
1979-12-01
to implement the National Environmental Protection Act. The research examined the theoretical basis for impact determination especially economic base...installation on a regional economw. Such impacts ore reuirtd to be estimated to implement the National Environmental Protection Act. The research examined the...Published in the Second Proliminarw Draft Environmental Impact Statement Part I Fort Ord CREF 21]. E. ORGANIZATION OF THE STUDY The background of interest in
[Study of the effect of thiosemicarbazones against Toxoplasma gondii].
Gomes, Marco Antônio G B; Carreira, Gabriela M; Souza, Daniela P V; Nogueira, Paulo Marcos R; de Melo, Edésio J T; Maria, Edmilson J
2013-04-01
Toxoplasmosis is a neglected disease, estimated to affect one-third of the world's population. Research in medicinal chemistry has for some years been pursuing the development of new drugs against toxoplasmosis, because current treatments cause serious side effects in the patient. The use of thiosemicarbazones as an alternative option for the treatment of various diseases has been reported in recent years, owing to their anticancer, antimalarial, antitrypanosomal, antibacterial, and antitoxoplasmosis activities, among others; the last of these is the subject of this study, which is based upon biological analyses and tests of the response of Toxoplasma gondii in the presence of thiosemicarbazones. Copyright © 2013 Académie des sciences. Published by Elsevier SAS. All rights reserved.
A Quantitative Description of Suicide Inhibition of Dichloroacetic Acid in Rats and Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keys, Deborah A.; Schultz, Irv R.; Mahle, Deirdre A.
Dichloroacetic acid (DCA), a minor metabolite of trichloroethylene (TCE) and water disinfection byproduct, remains an important risk assessment issue because of its carcinogenic potency. DCA has been shown to inhibit its own metabolism by irreversibly inactivating glutathione transferase zeta (GSTzeta). To better predict internal dosimetry of DCA, a physiologically based pharmacokinetic (PBPK) model of DCA was developed. Suicide inhibition was described dynamically by varying the rate of maximal GSTzeta-mediated metabolism of DCA (Vmax) over time. Resynthesis (zero-order) and degradation (first-order) of metabolic activity were described. Published iv pharmacokinetic studies in naive rats were used to estimate an initial Vmax value, with Km set to an in vitro determined value. Degradation and resynthesis rates were set to estimated values from a published immunoreactive GSTzeta protein time course. The first-order inhibition rate, kd, was estimated from this same time course. A secondary, linear non-GSTzeta-mediated metabolic pathway is proposed to fit DCA time courses following treatment with DCA in drinking water. The PBPK model predictions were validated by comparing predicted DCA concentrations to measured concentrations in published studies of rats pretreated with DCA following iv exposure to 0.05 to 20 mg/kg DCA. The same model structure was parameterized to simulate DCA time courses following iv exposure in naive and pretreated mice. Blood and liver concentrations during and postexposure to DCA in drinking water were predicted. Comparisons of PBPK model predictions to measured values were favorable, lending support for the further development of this model for application to DCA or TCE human health risk assessment.
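The suicide-inhibition dynamics described above (zero-order resynthesis, first-order degradation, and inactivation proportional to metabolic flux) can be sketched with a simple Euler integration. The equation form and all parameter values here are an illustrative reading of the abstract, not the fitted PBPK parameters.

```python
def simulate_vmax(c: float, vmax0: float, k_syn: float, k_deg: float,
                  kd: float, km: float, t_end: float, dt: float = 0.01) -> float:
    """Euler integration of GSTzeta activity under suicide inhibition:

        dVmax/dt = k_syn - k_deg*Vmax - kd * (Vmax*C / (Km + C))

    where the last term ties inactivation to the Michaelis-Menten metabolic
    flux. All parameters are illustrative, not the published estimates.
    """
    vmax, t = vmax0, 0.0
    while t < t_end:
        flux = vmax * c / (km + c)
        vmax += dt * (k_syn - k_deg * vmax - kd * flux)
        t += dt
    return vmax

# Without substrate, activity holds at its baseline k_syn/k_deg; with
# substrate present, inactivation drives it well below baseline:
baseline = simulate_vmax(c=0.0, vmax0=1.0, k_syn=0.01, k_deg=0.01,
                         kd=0.5, km=5.0, t_end=50.0)
inhibited = simulate_vmax(c=10.0, vmax0=1.0, k_syn=0.01, k_deg=0.01,
                          kd=0.5, km=5.0, t_end=50.0)
```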
Accurate age estimation in small-scale societies
Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Migliano, Andrea Bamberg; Thomas, Mark G.
2017-01-01
Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
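The core constraint the sampler exploits is that drawn ages must respect the known youngest-to-oldest ranking. A toy rejection sampler illustrates the idea on three individuals; the paper's actual method is a Gibbs sampling MCMC, which scales to hundreds of individuals where rejection would not.

```python
import random

def sample_ages(age_ranges, n_draws=10_000, seed=1):
    """Toy rejection sampler: draw one age per individual uniformly within
    that individual's plausible range, keeping only draws consistent with
    the youngest-to-oldest ranking (age_ranges is given in rank order)."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        draw = [rng.uniform(lo, hi) for lo, hi in age_ranges]
        if all(a <= b for a, b in zip(draw, draw[1:])):  # respects ranking
            kept.append(draw)
    return kept

# Three individuals ranked youngest -> oldest with overlapping age ranges:
samples = sample_ages([(18, 25), (20, 30), (24, 40)])
posterior_mean_middle = sum(s[1] for s in samples) / len(samples)
```

The kept draws approximate a joint posterior over ages, from which per-individual distributions (and downstream quantities such as age-specific fertility) can be summarised.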
Chloramine demand estimation using surrogate chemical and microbiological parameters.
Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose
2017-07-01
A model is developed to enable estimation of chloramine demand in full-scale drinking water supplies based on chemical and microbiological factors that affect the chloramine decay rate, via nonlinear regression analysis. The model is based on the organic character (specific ultraviolet absorbance (SUVA)) of the water samples and a laboratory measure of the microbiological (Fm) decay of chloramine. The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical test analysis between the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both the fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia was discussed. It was found that with the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool in order to manage chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
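A two-pathway (fast plus slow) first-order decay can be written as a biexponential, with demand being the initial dose minus the residual. The functional form is an illustrative reading of the fast/slow description above, and the rate constants are invented, not the fitted Australian values.

```python
import math

def chloramine_residual(c0: float, t: float, k_fast: float, k_slow: float,
                        frac_fast: float) -> float:
    """Residual chloramine under two first-order decay pathways:

        C(t) = C0 * (f * exp(-k_fast*t) + (1 - f) * exp(-k_slow*t))

    where f is the fraction of the initial demand decaying via the fast
    pathway. All parameter values used by callers are illustrative.
    """
    return c0 * (frac_fast * math.exp(-k_fast * t)
                 + (1 - frac_fast) * math.exp(-k_slow * t))

# Chloramine demand after 24 h for a hypothetical 2 mg/L dose:
residual = chloramine_residual(2.0, t=24.0, k_fast=0.1, k_slow=0.005, frac_fast=0.3)
demand = 2.0 - residual
```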
Madenijian, C.P.; David, S.R.; Krabbenhoft, D.P.
2012-01-01
Based on a laboratory experiment, we estimated the net trophic transfer efficiency of methylmercury to lake trout Salvelinus namaycush from its prey to be equal to 76.6 %. Under the assumption that gross trophic transfer efficiency of methylmercury to lake trout from its prey was equal to 80 %, we estimated that the rate at which lake trout eliminated methylmercury was 0.000244 day−1. Our laboratory estimate of methylmercury elimination rate was 5.5 times lower than the value predicted by a published regression equation developed from estimates of methylmercury elimination rates for fish available from the literature. Thus, our results, in conjunction with other recent findings, suggested that methylmercury elimination rates for fish have been overestimated in previous studies. In addition, based on our laboratory experiment, we estimated that the net trophic transfer efficiency of inorganic mercury to lake trout from its prey was 63.5 %. The lower net trophic transfer efficiency for inorganic mercury compared with that for methylmercury was partly attributable to the greater elimination rate for inorganic mercury. We also found that the efficiency with which lake trout retained either methylmercury or inorganic mercury from their food did not appear to be significantly affected by the degree of their swimming activity.
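As a quick check on the reported elimination rate, first-order kinetics give the corresponding half-life directly from the study's own constant:

```python
import math

k_elim = 0.000244  # lake trout methylmercury elimination rate, per day (from the study)
half_life_days = math.log(2) / k_elim  # ~2841 days, i.e. roughly 7.8 years
```

The very long half-life implied here is consistent with the authors' point that earlier studies, whose regression predicts a rate 5.5 times higher, likely overestimated elimination.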
Accurate age estimation in small-scale societies.
Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G
2017-08-01
Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire.
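The core idea of combining a youngest-to-oldest ranking with per-individual age ranges can be illustrated with a naive rejection-sampling analogue (my own minimal sketch, not the authors' Gibbs sampler, and with hypothetical age ranges):

```python
import numpy as np

def sample_ages(ranges, n_draws=10_000, seed=1):
    """Draw age vectors uniformly within each individual's range, keeping
    only draws consistent with the known youngest-to-oldest ordering.
    `ranges` is ordered youngest to oldest: [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo = np.array([r[0] for r in ranges], dtype=float)
    hi = np.array([r[1] for r in ranges], dtype=float)
    draws = rng.uniform(lo, hi, size=(n_draws, len(ranges)))
    # a draw is rank-consistent when ages are non-decreasing along the row
    ok = np.all(np.diff(draws, axis=1) >= 0, axis=1)
    return draws[ok]

# three individuals with overlapping, hypothetical age ranges
posterior = sample_ages([(18, 25), (20, 30), (28, 40)])
mid_mean = posterior[:, 1].mean()  # posterior mean age of the middle individual
```

The retained draws approximate a joint posterior over ages; a Gibbs sampler achieves the same target far more efficiently for many individuals and multiple partial rankings.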
MoisturEC: A New R Program for Moisture Content Estimation from Electrical Conductivity Data.
Terry, Neil; Day-Lewis, Frederick D; Werkema, Dale; Lane, John W
2018-03-06
Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data-analysis tools are needed to "translate" geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user-friendly tools are required to fully capitalize on the potential of geophysical information for soil-moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two- and three-dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
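The Archie-based translation from electrical conductivity to moisture can be sketched as a simple inversion. This is a simplified form that ignores surface conduction, and the exponent and conductivity values are illustrative, not MoisturEC defaults:

```python
def moisture_from_ec(sigma_bulk, sigma_water, porosity, m=1.3, n=2.0):
    """Invert Archie's law for volumetric moisture content.
    sigma_bulk, sigma_water in S/m; m (cementation) and n (saturation)
    are empirical Archie exponents."""
    saturation = (sigma_bulk / (sigma_water * porosity ** m)) ** (1.0 / n)
    # volumetric moisture = porosity * water saturation (capped at full saturation)
    return porosity * min(saturation, 1.0)

theta = moisture_from_ec(sigma_bulk=0.01, sigma_water=0.1, porosity=0.35)
```

Applied cell by cell to an EC image, this yields the moisture image that the program then conditions on direct point measurements.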
Yang, Y-M; Lee, J; Kim, Y-I; Cho, B-H; Park, S-B
2014-08-01
This study aimed to determine the viability of using axial cervical vertebrae (ACV) as biological indicators of skeletal maturation and to build models that estimate ossification level with improved explanatory power over models based only on chronological age. The study population comprised 74 female and 47 male patients with available hand-wrist radiographs and cone-beam computed tomography images. Generalized Procrustes analysis was used to analyze the shape, size, and form of the ACV regions of interest. The variabilities of these factors were analyzed by principal component analysis. Skeletal maturation was then estimated using a multiple regression model. Separate models were developed for male and female participants. For the female estimation model, the adjusted R² explained 84.8% of the variability of the Sempé maturation level (SML), representing a 7.9% increase in SML explanatory power over that using chronological age alone (76.9%). For the male estimation model, the adjusted R² was over 90%, representing a 1.7% increase relative to the reference model. The simplest possible ACV morphometric information provided a statistically significant explanation of the portion of skeletal-maturation variability not dependent on chronological age. These results verify that ACV is a strong biological indicator of ossification status. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Kusano, Maggie; Caldwell, Curtis B
2014-07-01
A primary goal of nuclear medicine facility design is to keep public and worker radiation doses As Low As Reasonably Achievable (ALARA). To estimate dose and shielding requirements, one needs to know both the dose equivalent rate constants for soft tissue and barrier transmission factors (TFs) for all radionuclides of interest. Dose equivalent rate constants are most commonly calculated using published air kerma or exposure rate constants, while transmission factors are most commonly calculated using published tenth-value layers (TVLs). Values can be calculated more accurately using the radionuclide's photon emission spectrum and the physical properties of lead, concrete, and/or tissue at these energies. These calculations may be non-trivial due to the polyenergetic nature of the radionuclides used in nuclear medicine. In this paper, the effects of dose equivalent rate constant and transmission factor on nuclear medicine dose and shielding calculations are investigated, and new values based on up-to-date nuclear data and thresholds specific to nuclear medicine are proposed. To facilitate practical use, transmission curves were fitted to the three-parameter Archer equation. Finally, the results of this work were applied to the design of a sample nuclear medicine facility and compared to doses calculated using common methods to investigate the effects of these values on dose estimates and shielding decisions. Dose equivalent rate constants generally agreed well with those derived from the literature with the exception of those from NCRP 124. Depending on the situation, Archer fit TFs could be significantly more accurate than TVL-based TFs. These results were reflected in the sample shielding problem, with unshielded dose estimates agreeing well, with the exception of those based on NCRP 124, and Archer fit TFs providing a more accurate alternative to TVL TFs and a simpler alternative to full spectral-based calculations. The data provided by this paper should assist in improving the accuracy and tractability of dose and shielding calculations for nuclear medicine facility design.
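The three-parameter Archer equation referenced above has the standard broad-beam form B(x) = [(1 + β/α)·e^(αγx) − β/α]^(−1/γ). A minimal sketch follows; the α, β, γ values here are hypothetical, not the paper's fitted constants for any radionuclide:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Three-parameter Archer broad-beam transmission factor for a shield of
    thickness x (x in the same length units as 1/alpha and 1/beta)."""
    r = beta / alpha
    return ((1 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# sanity checks: zero thickness transmits fully; thicker shields transmit less
tf0 = archer_transmission(0.0, alpha=1.5, beta=0.4, gamma=0.8)
tf1 = archer_transmission(1.0, alpha=1.5, beta=0.4, gamma=0.8)
```

Once α, β, γ are fitted for a radionuclide and barrier material, the function replaces TVL lookups in shielding calculations.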
Comparison of Natural Gas Storage Estimates from the EIA and AGA
1997-01-01
The Energy Information Administration (EIA) has been publishing monthly storage information for years. In order to address the need for more timely information, in 1994 the American Gas Association (AGA) began publishing weekly storage levels. Both the EIA and the AGA series provide estimates of the total working gas in storage, but use significantly different methodologies.
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates
Kostich, Mitchell S.; Flick, Robert W.; Batt, Angela L.; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.
2017-01-01
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that these analytes warrant more detailed characterization.
Hogan, Thomas J
2012-05-01
The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and draw conclusions from their collective findings. The literature was searched for economic evaluations of influenza vaccination by injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and the sensitivity of outcomes to changes in parameter values was explored using a decision model. Areas in which economic evaluations could be improved were noted. Eight of nine evaluations with credible economic outcomes were favourable to vaccination, a statistically significant result compared with the proportion of 50% that would be expected if vaccination and no vaccination were economically equivalent. Evaluations shared a basic structure, but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given these misgivings, the methods of estimating the influenza reduction ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods and parameter estimates themselves warrant further research to confirm that they are accurate, reliable and appropriate for economic evaluation purposes.
Hargrove, John W; van Schalkwyk, Cari; Humphrey, Jean H; Mutasa, Kuda; Ntozini, Robert; Owen, Sherry Michele; Masciotra, Silvina; Parekh, Bharat S; Duong, Yen T; Dobbs, Trudy; Kilmarx, Peter H; Gonese, Elizabeth
2017-09-01
Laboratory assays that identify recent HIV infections are important for assessing impacts of interventions aimed at reducing HIV incidence. Kinetics of HIV humoral responses can vary with inherent assay properties, and between HIV subtypes, populations, and physiological states. They are important in determining mean duration of recent infection (MDRI) for antibody-based assays for detecting recent HIV infections. We determined MDRIs for multi-subtype peptide representing subtypes B, E and D (BED)-capture enzyme immunoassay, limiting antigen (LAg), and Bio-Rad Avidity Incidence (BRAI) assays for 101 seroconverting postpartum women, recruited in Harare from 1997 to 2000 during the Zimbabwe Vitamin A for Mothers and Babies trial, comparing them against published MDRIs estimated from seroconverting cases in the general population. We also compared MDRIs for women who seroconverted either during the first 9 months, or at later stages, postpartum. At cutoffs (C) of 0.8 for BED, 1.5 for LAg, and 40% for BRAI, estimated MDRIs for postpartum mothers were 192, 104, and 144 days, 33%, 32%, and 52% lower than published estimates of 287, 152 and 298 days, respectively, for clade C samples from general populations. Point estimates of MDRI values were 7%-19% shorter for women who seroconverted in the first 9 months postpartum than for those seroconverting later. MDRI values for three HIV incidence biomarkers are longer in the general population than among postpartum women, particularly those who recently gave birth, consistent with heightened immunological activation soon after birth. Our results provide a caution that MDRI may vary significantly between subjects in different physiological states.
de Hoyos-Alonso, M C; Bonis, J; Tapias-Merino, E; Castell, M V; Otero, A
2016-01-01
The progressive rise in dementia prevalence increases the need for rapid methods that complement population-based prevalence studies. To estimate the prevalence of dementia in the population aged 65 and older based on use of cholinesterase inhibitors and memantine. Descriptive study of use and prescription of cholinesterase inhibitors and/or memantine in 2011 according to 2 databases: Farm@drid (pharmacy billing records for the Region of Madrid) and BIFAP (database for pharmacoepidemiology research in primary care, with diagnosis and prescription records). We tested the comparability of drug use results from each database using the chi-square test and prevalence ratios. The prevalence of dementia in Madrid was estimated based on the dose per 100 inhabitants/day, adjusting the result for data obtained from BIFAP on combination treatment in the general population (0.37%) and the percentage of dementia patients undergoing treatment (41.13%). Cholinesterase inhibitors and memantine were taken by 2.08% and 0.72%, respectively, of Madrid residents aged 65 and older. Both databases displayed similar results for use of these drugs. The estimated prevalence of dementia in individuals aged 65 and older is 5.91% (95% CI, 5.85-5.95) (52 287 people), and it is higher in women (7.16%) than in men (4.00%). The estimated prevalence of dementia is similar to that found in population-based studies. Analysing consumption of specific dementia drugs can be a reliable and inexpensive means of updating prevalence data periodically and helping rationalise healthcare resources. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Barraj, Leila; Murphy, Mary; Tran, Nga; Petersen, Barbara
2016-08-01
Identity, stability, purity, intended use levels and foods of use, technical effects, and probable intake are among the key components of an assessment supporting GRAS determinations. The specifications of identity of a food substance are an important component of the safety assessment, as changes in the physical and chemical properties of a food substance can influence its technical effect in food and its nutritional or toxicological properties. Estimating exposure is a key step in the safety evaluation of a food substance. Intake assessment in a GRAS determination is necessarily comprehensive, based on cumulative exposure, i.e. proposed new uses plus background dietary exposure. Intake estimates for safety assurance in a GRAS determination also represent conservative overestimates of chronic exposure, as they are based on the 2-day average daily intake and the upper-percentile (90th) intake among consumers. In contrast, in a nutrient assessment, where realistic intake estimates are of interest, usual intake estimates are relied upon. Intake estimates for GRAS determinations are also more conservative than estimates of dietary exposure by the EPA (FIFRA), where mean per capita intakes are used to assess chronic exposure. Overall, for safety assurance, intake assessments in GRAS determinations are comprehensively cumulative and typically conservative overestimates of exposure. Copyright © 2016. Published by Elsevier Inc.
Historical review of and current progress in coal-resource estimation in the United States.
Wood, G.H.
1981-01-01
Nine estimates of US coal resources have been published in the past 71 years. Although many details of these estimates differ markedly, those for 1913, 1922, and 1974 are surprisingly similar. Some differences are due to increased data; others reflect changes in terminology, definitions, criteria, guidelines, and methodologies. Thus many early estimates are not particularly useful in modern resource assessments. Preliminary definitions that are being prepared in 1980 by the US Geological Survey are compared with those published in 1976 and currently in use. Anticipated results of the new definitions are: 1) to lessen existing confusion about estimation procedures; 2) to make such procedures easier and more precise; 3) to promote use of a commonly accepted terminology accompanied by standardized definitions, criteria, guidelines, and methodologies for estimating resources.
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
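A residual-bootstrap CI for a relative level change in a segmented regression can be sketched as follows. This is my own minimal illustration on simulated data: it omits the autocorrelation correction the study applies, and all names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# simulated monthly rate with a level drop at the intervention (month 24)
months = np.arange(48)
step = (months >= 24).astype(float)
y = 50 + 0.2 * months - 8 * step + rng.normal(0, 1.5, months.size)
X = np.column_stack([np.ones_like(months), months, step])

def fit(y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def relative_level_change(b):
    # level change divided by the counterfactual level at the intervention month
    return b[2] / (b[0] + b[1] * 24)

b = fit(y)
est = relative_level_change(b)

# residual bootstrap: resample residuals onto the fitted series and refit
resid = y - X @ b
boot = np.array([relative_level_change(
            fit(X @ b + rng.choice(resid, resid.size, replace=True)))
        for _ in range(1000)])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

The percentile interval (ci_low, ci_high) is the bootstrap CI around the relative change; the delta method would instead propagate the covariance of the fitted coefficients analytically.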
GONe: Software for estimating effective population size in species with generational overlap
Coombs, J.A.; Letcher, B.H.; Nislow, K.H.
2012-01-01
GONe is a user-friendly, Windows-based program for estimating effective size (Ne) in populations with overlapping generations. It uses the Jorde-Ryman modification to the temporal method to account for age structure in populations. This method requires estimates of age-specific survival and birth rate and allele frequencies measured in two or more consecutive cohorts. Allele frequencies are acquired by reading in genotypic data from files formatted for either GENEPOP or TEMPOFS. For each interval between consecutive cohorts, Ne is estimated at each locus and over all loci. Furthermore, Ne estimates are output for three different genetic drift estimators (Fs, Fc and Fk). Confidence intervals are derived from a chi-square distribution with degrees of freedom equal to the number of independent alleles. GONe has been validated over a wide range of Ne values, and for scenarios where survival and birth rates differ between sexes, sex ratios are unequal and reproductive variances differ. GONe is freely available for download at. © 2011 Blackwell Publishing Ltd.
The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation
McCollister, Kathryn E.; French, Michael T.; Fang, Hai
2010-01-01
Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than ten years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. PMID:20071107
Rappazzo, Kristen M; Lobdell, Danelle T; Messer, Lynne C; Poole, Charles; Daniels, Julie L
2017-02-01
Estimating gestational age is usually based on date of last menstrual period (LMP) or clinical estimation (CE); both approaches introduce potential bias. Differences in methods of estimation may lead to misclassification and inconsistencies in risk estimates, particularly if exposure assignment is also gestation-dependent. This paper examines a 'what-if' scenario in which alternative methods are used and attempts to elucidate how method choice affects observed results. We constructed two 20-week gestational age cohorts of pregnancies between 2000 and 2005 (New Jersey, Pennsylvania, Ohio, USA) using live birth certificates: one defined preterm birth (PTB) status using CE and one using LMP. Within these, we estimated risk for 4 categories of preterm birth (PTBs per 10⁶ pregnancies) and risk differences (RD (95% CIs)) associated with exposure to particulate matter (PM2.5). More births were classified preterm using LMP (16%) compared with CE (8%). RD divergences increased between cohorts as exposure period approached delivery. Among births between 28 and 31 weeks, week 7 PM2.5 exposure conveyed RDs of 44 (21 to 67) for CE and 50 (18 to 82) for LMP populations, while week 24 exposure conveyed RDs of 33 (11 to 56) and -20 (-50 to 10), respectively. Different results from analyses restricted to births with both CE and LMP are most likely due to differences in dating methods rather than selection issues. Results are sensitive to choice of gestational age estimation, though degree of sensitivity can vary by exposure timing. When both outcome and exposure depend on estimate of gestational age, awareness of nuances in the method used for estimation is critical. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Gebbink, Wouter A; Berger, Urs; Cousins, Ian T
2015-01-01
Contributions of direct and indirect (via precursors) pathways of human exposure to perfluorooctane sulfonic acid (PFOS) isomers and perfluoroalkyl carboxylic acids (PFCAs) are estimated using a Scenario-Based Risk Assessment (SceBRA) modelling approach. Monitoring data published since 2008 (including samples from 2007) are used. The estimated daily exposures (resulting from both direct and precursor intake) for the general adult population are highest for PFOS and perfluorooctanoic acid (PFOA), followed by perfluorohexanoic acid (PFHxA) and perfluorodecanoic acid (PFDA), while lower daily exposures are estimated for perfluorobutanoic acid (PFBA) and perfluorododecanoic acid (PFDoDA). The precursor contributions to the individual perfluoroalkyl acid (PFAA) daily exposures are estimated to be 11-33% for PFOS, 0.1-2.5% for PFBA, 3.7-34% for PFHxA, 13-64% for PFOA, 5.2-66% for PFDA, and 0.7-25% for PFDoDA (ranges represent estimated precursor contributions in a low- and high-exposure scenario). For PFOS, direct intake via diet is the major exposure pathway regardless of exposure scenario. For PFCAs, the dominant exposure pathway is dependent on perfluoroalkyl chain length and exposure scenario. Modelled PFOS and PFOA concentrations in human serum using the estimated intakes from an intermediate-exposure scenario are in agreement with measured concentrations in different populations. The isomer pattern of PFOS resulting from total intakes (direct and via precursors) is estimated to be enriched with linear PFOS (84%) relative to technical PFOS (70% linear). This finding appears to be contradictory to the observed enrichment of branched PFOS isomers in recent human serum monitoring studies and suggests that either external exposure is not fully understood (e.g. there are unknown precursors, missing or poorly quantified exposure pathways) and/or that there is an incomplete understanding of the isomer-specific human pharmacokinetic processes of PFOS, its precursors and intermediates. Copyright © 2014. Published by Elsevier Ltd.
Roshanov, Pavel S; Walsh, Michael; Devereaux, P J; MacNeil, S Danielle; Lam, Ngan N; Hildebrand, Ainslie M; Acedillo, Rey R; Mrkobrada, Marko; Chow, Clara K; Lee, Vincent W; Thabane, Lehana; Garg, Amit X
2017-01-09
The Revised Cardiac Risk Index (RCRI) is a popular classification system to estimate patients' risk of postoperative cardiac complications based on preoperative risk factors. Renal impairment, defined as serum creatinine >2.0 mg/dL (177 µmol/L), is a component of the RCRI. The estimated glomerular filtration rate has become accepted as a more accurate indicator of renal function. We will externally validate the RCRI in a modern cohort of patients undergoing non-cardiac surgery and update its renal component. The Vascular Events in Non-cardiac Surgery Patients Cohort Evaluation (VISION) study is an international prospective cohort study. In this prespecified secondary analysis of VISION, we will test the risk estimation performance of the RCRI in ∼34 000 participants who underwent elective non-cardiac surgery between 2007 and 2013 from 29 hospitals in 15 countries. Using data from the first 20 000 eligible participants (the derivation set), we will derive an optimal threshold for dichotomising preoperative renal function quantified using the Chronic Kidney Disease Epidemiology Collaboration (CKD-Epi) glomerular filtration rate estimating equation in a manner that preserves the original structure of the RCRI. We will also develop a continuous risk estimating equation integrating age and CKD-Epi with existing RCRI risk factors. In the remaining (approximately) 14 000 participants, we will compare the risk estimation for cardiac complications of the original RCRI to this modified version. Cardiac complications will include 30-day non-fatal myocardial infarction, non-fatal cardiac arrest and death due to cardiac causes. We have examined an early sample to estimate the number of events and the distribution of predictors and missing data, but have not seen the validation data at the time of writing. The research ethics board at each site approved the VISION protocol prior to recruitment. We will publish our results and make our models available online at http://www.perioperativerisk.com. ClinicalTrials.gov NCT00512109. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Facente, Shelley N; Grebe, Eduard; Burk, Katie; Morris, Meghan D; Murphy, Edward L; Mirzazadeh, Ali; Smith, Aaron A; Sanchez, Melissa A; Evans, Jennifer L; Nishimura, Amy; Raymond, Henry F
2018-01-01
Initiated in 2016, End Hep C SF is a comprehensive initiative to eliminate hepatitis C (HCV) infection in San Francisco. The introduction of direct-acting antivirals to treat and cure HCV provides an opportunity for elimination. To properly measure progress, an estimate of baseline HCV prevalence, and of the number of people in various subpopulations with active HCV infection, is required to target and measure the impact of interventions. Our analysis was designed to incorporate multiple relevant data sources and estimate HCV burden for the San Francisco population as a whole, including specific key populations at higher risk of infection. Our estimates are based on triangulation of data found in case registries, medical records, observational studies, and published literature from 2010 through 2017. We examined subpopulations based on sex, age and/or HCV risk group. When multiple sources of data were available for subpopulation estimates, we calculated a weighted average using inverse variance weighting. Credible ranges (CRs) were derived from 95% confidence intervals of population size and prevalence estimates. We estimate that 21,758 residents of San Francisco are HCV seropositive (CR: 10,274-42,067), representing an overall seroprevalence of 2.5% (CR: 1.2%-4.9%). Of these, 16,408 are estimated to be viremic (CR: 6,505-37,407), though this estimate includes treated cases; up to 12,257 of these (CR: 2,354-33,256) are people who are untreated and infectious. People who injected drugs in the last year represent 67.9% of viremic HCV infections. We estimated approximately 7,400 (51%) more HCV seropositive cases than are included in San Francisco's HCV surveillance case registry. Our estimate provides a useful baseline against which the impact of End Hep C SF can be measured.
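The inverse-variance weighting used to combine multiple subpopulation estimates can be sketched in a few lines (the input numbers here are hypothetical, not the study's data):

```python
def inverse_variance_average(estimates, variances):
    """Combine independent estimates of the same quantity, weighting each
    by the inverse of its variance; returns the pooled estimate and its
    (smaller) pooled variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# two hypothetical prevalence estimates for one subpopulation
p, var = inverse_variance_average([0.025, 0.031], [0.0001, 0.0004])
```

The more precise estimate (smaller variance) dominates the pooled value, and the pooled variance is always smaller than either input variance.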
Population entropies estimates of proteins
NASA Astrophysics Data System (ADS)
Low, Wai Yee
2017-05-01
The Shannon entropy equation provides a way to estimate variability of amino acid sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus, correction for alignment size bias is needed. In the current work, an R based package named EntropyCorrect that enables estimation of population entropy is presented, and an empirical study of how well this new algorithm performs on simulated datasets with various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
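The subsampling-and-regression idea can be sketched as follows. This is my own minimal implementation, not the EntropyCorrect package: subsample entropies are regressed on 1/n and the intercept is taken as the large-sample (population) entropy:

```python
import math
import random

def shannon_entropy(column):
    """Shannon entropy (bits) of one alignment column."""
    n = len(column)
    counts = {}
    for aa in column:
        counts[aa] = counts.get(aa, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def extrapolated_entropy(column, sizes=(20, 40, 80, 160), reps=200, seed=0):
    """Regress mean subsample entropy on 1/n and return the intercept,
    i.e. the extrapolation to infinite sample size (1/n -> 0)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for n in sizes:
        mean_h = sum(shannon_entropy(rng.sample(column, n))
                     for _ in range(reps)) / reps
        xs.append(1.0 / n)
        ys.append(mean_h)
    # ordinary least squares for y = a + b*x; return the intercept a
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

column = list("ACDE" * 100)           # 400 residues, uniform over 4 amino acids
h_hat = extrapolated_entropy(column)  # should approach log2(4) = 2 bits
```

Because small-sample entropy is biased downward roughly in proportion to 1/n, the intercept corrects for alignment-size bias when comparing columns with different sequence counts.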
Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G
2015-01-01
To determine the utility of equations that use 6-minute walk test (6MWT) results to estimate peak oxygen uptake (V̇o2) and peak work rate in patients with chronic obstructive pulmonary disease (COPD) in a clinical setting. This study included a systematic review to identify published equations estimating peak V̇o2 and peak work rate in watts in COPD patients, and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak V̇o2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Peak V̇o2 and peak work rate were estimated from 6MWT distance using the published equations. The error associated with using estimated peak V̇o2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak V̇o2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak V̇o2 and peak work rate and estimates from equations that use 6MWT distance, and use of the estimated values to prescribe aerobic exercise intensity would result in large error. Equations estimating peak V̇o2 and peak work rate from 6MWT distance are therefore of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.
Nair, Harish; Brooks, W Abdullah; Katz, Mark; Roca, Anna; Berkley, James A; Madhi, Shabir A; Simmerman, James Mark; Gordon, Aubree; Sato, Masatoki; Howie, Stephen; Krishnan, Anand; Ope, Maurice; Lindblade, Kim A; Carosone-Link, Phyllis; Lucero, Marilla; Ochieng, Walter; Kamimoto, Laurie; Dueger, Erica; Bhat, Niranjan; Vong, Sirenda; Theodoratou, Evropi; Chittaganpitch, Malinee; Chimah, Osaretin; Balmaseda, Angel; Buchy, Philippe; Harris, Eva; Evans, Valerie; Katayose, Masahiko; Gaur, Bharti; O'Callaghan-Gordo, Cristina; Goswami, Doli; Arvelo, Wences; Venter, Marietjie; Briese, Thomas; Tokarz, Rafal; Widdowson, Marc-Alain; Mounts, Anthony W; Breiman, Robert F; Feikin, Daniel R; Klugman, Keith P; Olsen, Sonja J; Gessner, Bradford D; Wright, Peter F; Rudan, Igor; Broor, Shobha; Simões, Eric A F; Campbell, Harry
2011-12-03
The global burden of disease attributable to seasonal influenza virus in children is unknown. We aimed to estimate the global incidence of and mortality from lower respiratory infections associated with influenza in children younger than 5 years. We estimated the incidence of influenza episodes, influenza-associated acute lower respiratory infections (ALRI), and influenza-associated severe ALRI in children younger than 5 years, stratified by age, with data from a systematic review of studies published between Jan 1, 1995, and Oct 31, 2010, and 16 unpublished population-based studies. We applied these incidence estimates to global population estimates for 2008 to calculate estimates for that year. We estimated possible bounds for influenza-associated ALRI mortality by combining incidence estimates with case fatality ratios from hospital-based reports and identifying studies with population-based data for influenza seasonality and monthly ALRI mortality. We identified 43 suitable studies, with data for around 8 million children. We estimated that, in 2008, 90 million (95% CI 49-162 million) new cases of influenza (data from nine studies), 20 million (13-32 million) cases of influenza-associated ALRI (13% of all cases of paediatric ALRI; data from six studies), and 1 million (1-2 million) cases of influenza-associated severe ALRI (7% of cases of all severe paediatric ALRI; data from 39 studies) occurred worldwide in children younger than 5 years. We estimated there were 28,000-111,500 deaths in children younger than 5 years attributable to influenza-associated ALRI in 2008, with 99% of these deaths occurring in developing countries. Incidence and mortality varied substantially from year to year in any one setting. Influenza is a common pathogen identified in children with ALRI and results in a substantial burden on health services worldwide. Sufficient data to precisely estimate the role of influenza in childhood mortality from ALRI are not available. 
WHO; Bill & Melinda Gates Foundation. Copyright © 2011 Elsevier Ltd. All rights reserved.
The Cost of Penicillin Allergy Evaluation.
Blumenthal, Kimberly G; Li, Yu; Banerji, Aleena; Yun, Brian J; Long, Aidan A; Walensky, Rochelle P
2017-09-22
Unverified penicillin allergy leads to adverse downstream clinical and economic sequelae. Penicillin allergy evaluation can be used to identify true, IgE-mediated allergy. To estimate the cost of penicillin allergy evaluation using time-driven activity-based costing (TDABC). We implemented TDABC throughout the care pathway for 30 outpatients presenting for penicillin allergy evaluation. The base-case evaluation included penicillin skin testing and a 1-step amoxicillin drug challenge, performed by an allergist. We varied assumptions about the provider type, clinical setting, procedure type, and personnel timing. The base-case penicillin allergy evaluation costs $220 in 2016 US dollars: $98 for personnel, $119 for consumables, and $3 for space. In sensitivity analyses, lower cost estimates were achieved when only a drug challenge was performed (ie, no skin test, $84) and a nurse practitioner provider was used ($170). Adjusting for the probability of anaphylaxis did not result in a changed estimate ($220); although other analyses led to modest changes in the TDABC estimate ($214-$246), higher estimates were identified with changing to a low-demand practice setting ($268), a 50% increase in personnel times ($269), and including clinician documentation time ($288). In least- and most-costly scenario analyses, the lowest TDABC estimate was $40 and the highest was $537. Using TDABC, penicillin allergy evaluation costs $220; even with varied assumptions adjusting for operational challenges, clinical setting, and expanded testing, penicillin allergy evaluation still costs only about $540. This modest investment may be offset for patients treated with costly alternative antibiotics that also may result in adverse consequences. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
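The TDABC arithmetic is simple: each process step contributes (minutes used) × (capacity cost rate per minute), and consumables and space are added on top. The step times and rates below are hypothetical, chosen only so the category totals echo the $98 + $119 + $3 breakdown above.

```python
# Time-driven activity-based costing sketch. Step times and per-minute
# rates are hypothetical illustrations, not the study's measured values.

def tdabc_cost(steps, consumables, space):
    """steps: list of (minutes, cost_rate_per_minute) pairs."""
    personnel = sum(minutes * rate for minutes, rate in steps)
    return personnel + consumables + space

steps = [(15, 2.0),   # allergist consult
         (40, 1.2),   # skin testing and observation
         (20, 1.0)]   # drug challenge monitoring
total = tdabc_cost(steps, consumables=119, space=3)
```

Because personnel cost is a product of time and rate, the sensitivity analyses above (longer times, cheaper provider types) map directly onto these two factors.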
Jung, Sungwoon; Kim, Jounghwa; Kim, Jeongsoo; Hong, Dahee; Park, Dongjoo
2017-04-01
The objective of this study is to estimate vehicle kilometers traveled (VKT) and on-road emissions in urban areas using traffic volume data. We produced two VKT estimates: one based on registered vehicles and the other based on observed traffic volumes. VKT for registered vehicles was 2.11 times greater than VKT from traffic volumes because the two estimation methods differ; to compare the two values, we therefore defined an "inner VKT" representing travel occurring within the urban area. We focused on freight modes because they emit a large share of air pollutants. The analysis showed that middle and large trucks registered in other regions travel into the target city to carry freight, since the city contains many industrial and logistics areas. Freight is transferred through harbors, large logistics centers, or intermediate locations before being moved to its final destination, and during this process most freight is carried by middle and large trucks and trailers rather than small trucks. These trucks from other areas therefore flow into the city in greater numbers than locally registered vehicles, and most emissions from diesel trucks are overestimated when compared with VKT based on traffic volumes in the target city. These findings show that VKT should be estimated from traffic volumes and travel speeds on road links in order to estimate diesel truck emissions accurately, and they support assessment of the effect of on-road emissions on urban air quality in Korea. Copyright © 2016. Published by Elsevier B.V.
Ford, Michael J; Murdoch, Andrew; Hughes, Michael
2015-03-01
We used parentage analysis based on microsatellite genotypes to measure rates of homing and straying of Chinook salmon (Oncorhynchus tshawytscha) among five major spawning tributaries within the Wenatchee River, Washington. On the basis of analysis of 2248 natural-origin and 11594 hatchery-origin fish, we estimated that the rate of homing to natal tributaries by natural-origin fish ranged from 0% to 99% depending on the tributary. Hatchery-origin fish released in one of the five tributaries homed to that tributary at a far lower rate than the natural-origin fish (71% compared to 96%). For hatchery-released fish, stray rates based on parentage analysis were consistent with rates estimated using physical tag recoveries. Stray rates among major spawning tributaries were generally higher than stray rates of tagged fish to areas outside of the Wenatchee River watershed. Within the Wenatchee watershed, rates of straying by natural-origin fish were significantly affected by spawning tributary and by parental origin: progeny of naturally spawning hatchery-produced fish strayed at significantly higher rates than progeny whose parents were themselves of natural origin. Notably, none of the 170 offspring that were products of mating by two natural-origin fish strayed from their natal tributary. Indirect estimates of gene flow based on FST statistics were correlated with but higher than the estimates from the parentage data. Tributary-specific estimates of effective population size were also correlated with the number of spawners in each tributary. Published [2015]. This article is a U.S. Government work and is in the public domain in the USA.
Cost-effectiveness of different strategies to manage patients with sciatica.
Fitzsimmons, Deborah; Phillips, Ceri J; Bennett, Hayley; Jones, Mari; Williams, Nefyn; Lewis, Ruth; Sutton, Alex; Matar, Hosam E; Din, Nafees; Burton, Kim; Nafees, Sadia; Hendry, Maggie; Rickard, Ian; Wilkinson, Claire
2014-07-01
The aim of this paper is to estimate the relative cost-effectiveness of treatment regimens for managing patients with sciatica. A deterministic model structure was constructed based on information from the findings from a systematic review of clinical effectiveness and cost-effectiveness, published sources of unit costs, and expert opinion. The assumption was that patients presenting with sciatica would be managed through one of 3 pathways (primary care, stepped approach, immediate referral to surgery). Results were expressed as incremental cost per patient with symptoms successfully resolved. Analysis also included incremental cost per utility gained over a 12-month period. One-way sensitivity analyses were used to address uncertainty. The model demonstrated that none of the strategies resulted in 100% success. For initial treatments, the most successful regimen in the first pathway was nonopioids, with a probability of success of 0.613. In the second pathway, the most successful strategy was nonopioids, followed by biological agents, followed by epidural/nerve block and disk surgery, with a probability of success of 0.996. Pathway 3 (immediate surgery) was not cost-effective. Sensitivity analyses identified that the use of the highest cost estimates results in a similar overall picture. While the estimates of cost per quality-adjusted life year are higher, the economic model demonstrated that stepped approaches based on initial treatment with nonopioids are likely to represent the most cost-effective regimens for the treatment of sciatica. However, development of alternative economic modelling approaches is required. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
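The combined probability of success for a stepped pathway, where patients who fail one treatment proceed to the next, follows from the product of the failure probabilities. Only the 0.613 first-step value comes from the abstract; the remaining step probabilities below are hypothetical.

```python
# Overall success of a stepped care pathway: a patient succeeds unless
# every step fails, so P(success) = 1 - prod(1 - p_i). Step
# probabilities after the first are hypothetical illustrations.

def stepped_success(step_probs):
    failure = 1.0
    for p in step_probs:
        failure *= 1.0 - p
    return 1.0 - failure

overall = stepped_success([0.613, 0.5, 0.6, 0.95])
```

This multiplicative structure explains why a stepped pathway can reach a combined success probability near 0.996 even though no single treatment comes close to that figure.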
Huang, Jidong; Zheng, Rong; Chaloupka, Frank J; Fong, Geoffrey T; Jiang, Yuan
2015-07-01
Few studies have examined the impact of tobacco tax and price policies in China. In addition, very little is known about the differential responses to tax and price increases based on socioeconomic status in China. To estimate the conditional cigarette consumption price elasticity among adult urban smokers in China and to examine the differential responses to cigarette price increases among groups with different income and/or educational levels. Multivariate analyses employing the generalized estimating equations (GEE) method were conducted using the first three waves of the International Tobacco Control (ITC) China Survey. Analyses of subsamples defined by education and income were conducted. Conditional cigarette demand price elasticity ranges from -0.12 to -0.14. No differential responses to cigarette price increase were found across education levels. The price elasticity estimates do not differ between high-income smokers and medium-income smokers. Cigarette consumption among low-income smokers did not decrease after a price increase, at least among those who continued to smoke. Relative to other low-income and middle-income countries, cigarette consumption among Chinese adult smokers is not very sensitive to changes in cigarette prices. The total impact of a cigarette price increase would be larger if its impact on smoking initiation and cessation, as well as price-reducing behaviours such as brand switching and trading down, were taken into account. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Fong, Youyi; Halloran, M Elizabeth; Park, Jin Kyung; Marks, Florian; Clemens, John D; Chao, Dennis L
2018-02-20
Oral cholera vaccine (OCV) is a feasible tool to prevent or mitigate cholera outbreaks. A better understanding of the vaccine's efficacy among different age groups and how rapidly its protection wanes could help guide vaccination policy. To estimate the level and duration of OCV efficacy, we re-analyzed data from a previously published cluster-randomized, double-blind, placebo controlled trial with five years of follow-up. We used a Cox proportional hazards model and modeled the potentially time-dependent effect of age categories on both vaccine efficacy and risk of infection in the placebo group. In addition, we investigated the impact of an outbreak period on model estimation. Vaccine efficacy was 38% (95% CI: -2%,62%) for those vaccinated from ages 1 to under 5 years old, 85% (95% CI: 67%,93%) for those 5 to under 15 years, and 69% (95% CI: 49%,81%) for those vaccinated at ages 15 years and older. Among adult vaccinees, efficacy did not appear to wane during the trial, but there was insufficient data to assess the waning of efficacy among child vaccinees. Through this re-analysis we were able to detect a statistically significant difference in OCV efficacy when the vaccine was administered to children under 5 years old vs. children 5 years and older. The estimated efficacies are more similar to the previously published analysis based on the first two years of follow-up than the analysis based on all five years. ClinicalTrials.gov identifier NCT00289224.
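In a Cox proportional hazards analysis of this kind, vaccine efficacy is one minus the hazard ratio, VE = 1 - HR = 1 - exp(β). A minimal sketch follows; the hazard ratios are back-calculated from the efficacies reported above, for illustration only.

```python
from math import exp, log

# Vaccine efficacy from a proportional hazards model: VE = 1 - HR,
# where HR = exp(beta) for the vaccine-assignment coefficient.
# Hazard ratios below are back-calculated from reported efficacies.

def efficacy_from_hr(hazard_ratio):
    return 1.0 - hazard_ratio

def efficacy_from_coef(beta):
    return 1.0 - exp(beta)

ve_young = efficacy_from_hr(0.62)         # ages 1 to <5: 38%
ve_child = efficacy_from_hr(0.15)         # ages 5 to <15: 85%
ve_adult = efficacy_from_coef(log(0.31))  # ages 15 and older: 69%
```

Allowing β to vary with follow-up time is what lets such a model test whether efficacy wanes over the five-year trial.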
Growth of saprotrophic fungi and bacteria in soil.
Rousk, Johannes; Bååth, Erland
2011-10-01
Bacterial and fungal growth rate measurements are sensitive variables to detect changes in environmental conditions. However, while considerable progress has been made in methods to assess the species composition and biomass of fungi and bacteria, information about growth rates remains surprisingly rudimentary. We review the recent history of approaches to assess bacterial and fungal growth rates, leading up to current methods, especially focusing on leucine/thymidine incorporation to estimate bacterial growth and acetate incorporation into ergosterol to estimate fungal growth. We present the underlying assumptions for these methods, compare estimates of turnover times for fungi and bacteria based on them, and discuss issues, including for example elusive conversion factors. We review what the application of fungal and bacterial growth rate methods has revealed regarding the influence of the environmental factors of temperature, moisture (including drying/rewetting), pH, as well as the influence of substrate additions, the presence of plants and toxins. We highlight experiments exploring the competitive and facilitative interaction between bacteria and fungi enabled using growth rate methods. Finally, we predict that growth methods will be an important complement to molecular approaches to elucidate fungal and bacterial ecology, and we identify methodological concerns and how they should be addressed. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
A cost-effectiveness evaluation of hospital discharge counseling by pharmacists.
Chinthammit, Chanadda; Armstrong, Edward P; Warholak, Terri L
2012-04-01
This study estimated the cost-effectiveness of pharmacist discharge counseling on medication-related morbidity in both the high-risk elderly and general US population. A cost-effectiveness decision analytic model was developed using a health care system perspective based on published clinical trials. Costs included direct medical costs, and the effectiveness unit was patients discharged without suffering a subsequent adverse drug event. A systematic review of published studies was conducted to estimate variable probabilities in the cost-effectiveness model. To test the robustness of the results, a second-order probabilistic sensitivity analysis (Monte Carlo simulation) was used to run 10 000 cases through the model, sampling across all distributions simultaneously. Pharmacist counseling at hospital discharge provided a small, but statistically significant, clinical improvement at a similar overall cost. Pharmacist counseling was cost saving in approximately 48% of scenarios, and in the remaining scenarios only a low willingness-to-pay threshold was needed for counseling to be cost-effective. In addition, discharge counseling was more cost-effective in the high-risk elderly population compared to the general population. This cost-effectiveness analysis suggests that discharge counseling by pharmacists is quite cost-effective and estimated to be cost saving in approximately 48% of cases. High-risk elderly patients appear to especially benefit from these pharmacist services.
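A second-order probabilistic sensitivity analysis samples each uncertain input from a distribution, reruns the model, and counts how often the intervention is cost saving. The sketch below follows that pattern with entirely hypothetical distributions and costs; it does not reproduce the study's model.

```python
import random

# Monte Carlo probabilistic sensitivity analysis sketch. All
# distributions, probabilities, and costs are hypothetical.

def psa_fraction_cost_saving(n_runs, seed=2012):
    rng = random.Random(seed)
    cost_saving = 0
    for _ in range(n_runs):
        p_ade_usual = rng.betavariate(30, 70)  # ADE probability, usual care
        p_ade_couns = rng.betavariate(25, 75)  # ADE probability, with counseling
        cost_per_ade = rng.gauss(3000, 500)    # $ per adverse drug event
        counseling_cost = 50                   # fixed $ per patient counseled
        net_cost = counseling_cost - (p_ade_usual - p_ade_couns) * cost_per_ade
        if net_cost < 0:
            cost_saving += 1
    return cost_saving / n_runs

share = psa_fraction_cost_saving(10_000)
```

Reporting the fraction of simulations that are cost saving, rather than a single point estimate, is what makes the analysis "second-order": parameter uncertainty is propagated through the whole model.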
Cost-benefit analysis simulation of a hospital-based violence intervention program.
Purtle, Jonathan; Rich, Linda J; Bloom, Sandra L; Rich, John A; Corbin, Theodore J
2015-02-01
Violent injury is a major cause of disability, premature mortality, and health disparities worldwide. Hospital-based violence intervention programs (HVIPs) show promise in preventing violent injury. Little is known, however, about how the impact of HVIPs may translate into monetary figures. To conduct a cost-benefit analysis simulation to estimate the savings an HVIP might produce in healthcare, criminal justice, and lost productivity costs over 5 years in a hypothetical population of 180 violently injured patients, 90 of whom received HVIP intervention and 90 of whom did not. Primary data from 2012, analyzed in 2013, on annual HVIP costs/number of clients served and secondary data sources were used to estimate the cost, number, and type of violent reinjury incidents (fatal/nonfatal, resulting in hospitalization/not resulting in hospitalization) and violent perpetration incidents (aggravated assault/homicide) that this population might experience over 5 years. Four different models were constructed and three different estimates of HVIP effect size (20%, 25%, and 30%) were used to calculate a range of estimates for HVIP net savings and cost-benefit ratios from different payer perspectives. All benefits were discounted at 5% to adjust for their net present value. Estimates of HVIP cost savings at the base effect estimate of 25% ranged from $82,765 (narrowest model) to $4,055,873 (broadest model). HVIPs are likely to produce cost savings. This study provides a systematic framework for the economic evaluation of HVIPs and estimates of HVIP cost savings and cost-benefit ratios that may be useful in informing public policy decisions. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
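Discounting future benefits to net present value at 5%, as the study does, can be sketched as follows; the annual savings stream is hypothetical.

```python
# Net present value of a stream of annual benefits, discounted at 5%
# per year. The savings figures are hypothetical, for illustration.

def net_present_value(annual_benefits, rate=0.05):
    """annual_benefits[0] accrues at the end of year 1, and so on."""
    return sum(b / (1 + rate) ** t
               for t, b in enumerate(annual_benefits, start=1))

npv = net_present_value([100_000] * 5)  # $100k of savings per year, 5 years
```

Dividing an NPV of savings by the program's cost gives the cost-benefit ratios reported from the different payer perspectives.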
Estimation of Energy Expenditure for Wheelchair Users Using a Physical Activity Monitoring System.
Hiremath, Shivayogi V; Intille, Stephen S; Kelleher, Annmarie; Cooper, Rory A; Ding, Dan
2016-07-01
To develop and evaluate energy expenditure (EE) estimation models for a physical activity monitoring system (PAMS) in manual wheelchair users with spinal cord injury (SCI). Cross-sectional study. University-based laboratory environment, a semistructured environment at the National Veterans Wheelchair Games, and the participants' home environments. Volunteer sample of manual wheelchair users with SCI (N=45). Participants were asked to perform 10 physical activities (PAs) of various intensities from a list. The PAMS consists of a gyroscope-based wheel rotation monitor (G-WRM) and an accelerometer device worn on the upper arm or on the wrist. Criterion EE using a portable metabolic cart and raw sensor data from PAMS were collected during each of these activities. Estimated EE using custom models for manual wheelchair users based on either the G-WRM and arm accelerometer (PAMS-Arm) or the G-WRM and wrist accelerometer (PAMS-Wrist). EE estimation performance for the PAMS-Arm (average error ± SD: -9.82%±37.03%) and PAMS-Wrist (-5.65%±32.61%) on the validation dataset indicated that both PAMS-Arm and PAMS-Wrist were able to estimate EE for a range of PAs with <10% error. Moderate to high intraclass correlation coefficients (ICCs) indicated that the EE estimated by PAMS-Arm (ICC3,1=.82, P<.05) and PAMS-Wrist (ICC3,1=.89, P<.05) are consistent with the criterion EE. Availability of PA monitors can assist wheelchair users to track PA levels, leading toward a healthier lifestyle. The new models we developed can estimate PA levels in manual wheelchair users with SCI in laboratory and community settings. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Balentine, Courtney J; Vanness, David J; Schneider, David F
2018-01-01
We evaluated whether diagnostic thyroidectomy for indeterminate thyroid nodules would be more cost-effective than genetic testing after including the costs of long-term surveillance. We used a Markov decision model to estimate the cost-effectiveness of thyroid lobectomy versus genetic testing (Afirma®) for evaluation of indeterminate (Bethesda 3-4) thyroid nodules. The base case was a 40-year-old woman with a 1-cm indeterminate nodule. Probabilities and estimates of utilities were obtained from the literature. Cost estimates were based on Medicare reimbursements with a 3% discount rate for costs and quality-adjusted life-years. During a 5-year period after the diagnosis of indeterminate thyroid nodules, lobectomy was less costly and more effective than Afirma® (lobectomy: $6,100; 4.50 quality-adjusted life-years vs Afirma®: $9,400; 4.47 quality-adjusted life-years). Only in 253 of 10,000 simulations (2.5%) did Afirma® show a net benefit at a cost-effectiveness threshold of $100,000 per quality-adjusted life-year. There was only a 0.3% probability of Afirma® being cost saving and a 14.9% probability of improving quality-adjusted life-years. Our base case estimate suggests that diagnostic lobectomy dominates genetic testing as a strategy for ruling out malignancy of indeterminate thyroid nodules. These results, however, were highly sensitive to estimates of utilities after lobectomy and living under surveillance after Afirma®. Published by Elsevier Inc.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
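PERMANOVA's pseudo-F statistic can be computed directly from a matrix of pairwise distances using Anderson's distance-based sums of squares. The sketch below omits the permutation step that yields a p-value; the toy distance matrix is hypothetical.

```python
# Pseudo-F from pairwise distances (Anderson's PERMANOVA formulation):
# SS_total = sum of squared distances / N; SS_within sums each group's
# squared distances divided by its group size. Sketch only.

def pseudo_f(dist, groups):
    """dist: symmetric N x N matrix (list of lists); groups: label per sample."""
    n = len(groups)
    labels = sorted(set(groups))
    a = len(labels)
    ss_total = sum(dist[i][j] ** 2
                   for i in range(n) for j in range(i + 1, n)) / n
    ss_within = 0.0
    for g in labels:
        idx = [i for i, lab in enumerate(groups) if lab == g]
        ss_within += sum(dist[i][j] ** 2
                         for i in idx for j in idx if i < j) / len(idx)
    ss_among = ss_total - ss_within
    return (ss_among / (a - 1)) / (ss_within / (n - a))

# Two tight, well-separated groups of two samples each:
d = [[0.0, 0.1, 1.0, 1.0],
     [0.1, 0.0, 1.0, 1.0],
     [1.0, 1.0, 0.0, 0.1],
     [1.0, 1.0, 0.1, 0.0]]
f_stat = pseudo_f(d, ["A", "A", "B", "B"])
```

A full analysis compares the observed statistic against its distribution under random permutations of the group labels, which is the step the power simulations above repeat many times.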
REVIEW OF DRAFT REVISED BLUE BOOK ON ESTIMATING ...
In 1994, EPA published a report, referred to as the “Blue Book,” which lays out EPA’s current methodology for quantitatively estimating radiogenic cancer risks. A follow-on report made minor adjustments to the previous estimates and presented a partial analysis of the uncertainties in the numerical estimates. In 2006, the National Research Council of the National Academy of Sciences released a report on the health risks from exposure to low levels of ionizing radiation. Cosponsored by the EPA and several other Federal agencies, Health Risks from Exposure to Low Levels of Ionizing Radiation BEIR VII Phase 2 (BEIR VII) primarily addresses cancer and genetic risks from low doses of low-LET radiation. In the draft White Paper: Modifying EPA Radiation Risk Models Based on BEIR VII (White Paper), ORIA proposed changes in EPA’s methodology for estimating radiogenic cancers, based on the contents of BEIR VII and some ancillary information. For the most part, it proposed to adopt the models and methodology recommended in BEIR VII; however, certain modifications and expansions are considered to be desirable or necessary for EPA’s purposes. EPA sought advice from the Agency’s Science Advisory Board on the application of BEIR VII and on issues relating to these modifications and expansions in the Advisory on EPA’s Draft White Paper: Modifying EPA Radiation Risk Models Based on BEIR VII (record # 83044). The SAB issued its Advisory on Jan. 31, 2008 (EPA-SAB-08-
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
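The parametric and non-parametric approaches compared in the study can be sketched as follows, assuming a central 95% reference interval: mean ± 1.96 SD for the parametric case, and the 2.5th/97.5th percentiles for the non-parametric case.

```python
import statistics

# Two common estimators of a central 95% reference interval.
# Sketch only; real guidelines also address outliers and sample size.

def parametric_ri(values):
    """Gaussian assumption: mean +/- 1.96 SD."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return (m - 1.96 * s, m + 1.96 * s)

def nonparametric_ri(values):
    """Distribution-free: 2.5th and 97.5th percentiles."""
    cuts = statistics.quantiles(values, n=40)  # cut points every 2.5%
    return (cuts[0], cuts[-1])
```

The study's advice follows from this contrast: the parametric form is most precise when the Gaussian assumption holds (or the data can be transformed to make it hold), while the percentile form makes no distributional assumption but needs larger samples for stable tail estimates.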
Estimating the burden of foodborne diseases in Japan
Kumagai, Yuko; Gilmour, Stuart; Ota, Erika; Momose, Yoshika; Onishi, Toshiro; Bilano, Ver Luanni Feliciano; Kasuga, Fumiko; Sekizaki, Tsutomu
2015-01-01
Objective: To assess the burden posed by foodborne diseases in Japan using methods developed by the World Health Organization's Foodborne Disease Burden Epidemiology Reference Group (FERG). Methods: Expert consultation and statistics on food poisoning during 2011 were used to identify three common causes of foodborne disease in Japan: Campylobacter and Salmonella species and enterohaemorrhagic Escherichia coli (EHEC). We conducted systematic reviews of English and Japanese literature on the complications caused by these pathogens, by searching Embase, the Japan medical society abstract database and Medline. We estimated the annual incidence of acute gastroenteritis from reported surveillance data, based on estimated probabilities that an affected person would visit a physician and have gastroenteritis confirmed. We then calculated disability-adjusted life-years (DALYs) lost in 2011, using the incidence estimates along with disability weights derived from published studies. Findings: In 2011, foodborne disease caused by Campylobacter species, Salmonella species and EHEC led to an estimated loss of 6099, 3145 and 463 DALYs in Japan, respectively. These estimated burdens are based on the pyramid reconstruction method; are largely due to morbidity rather than mortality; and are much higher than those indicated by routine surveillance data. Conclusion: Routine surveillance data may indicate foodborne disease burdens that are much lower than the true values. Most of the burden posed by foodborne disease in Japan comes from secondary complications. The tools developed by FERG appear useful in estimating disease burdens and setting priorities in the field of food safety. PMID:26478611
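The DALY arithmetic used in burden estimates of this kind combines years of life lost with years lived with disability; all input numbers below are hypothetical, not the study's values.

```python
# DALYs = YLL + YLD. YLL = deaths x remaining life expectancy at death;
# incidence-based YLD = cases x disability weight x mean duration (years).
# All inputs are hypothetical illustrations.

def dalys(deaths, life_years_lost_per_death, cases,
          disability_weight, duration_years):
    yll = deaths * life_years_lost_per_death
    yld = cases * disability_weight * duration_years
    return yll + yld

burden = dalys(deaths=10, life_years_lost_per_death=40,
               cases=300_000, disability_weight=0.06, duration_years=0.3)
```

Because case counts dwarf deaths for these pathogens, the YLD term dominates, matching the finding that the burden is largely due to morbidity rather than mortality.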
Total Land Water Storage Change over 2003 - 2013 Estimated from a Global Mass Budget Approach
NASA Technical Reports Server (NTRS)
Dieng, H. B.; Champollion, N.; Cazenave, A.; Wada, Y.; Schrama, E.; Meyssignac, B.
2015-01-01
We estimate the total land water storage (LWS) change between 2003 and 2013 using a global water mass budget approach. To do so, we compare the ocean mass change (estimated from GRACE space gravimetry on the one hand, and from the satellite altimetry-based global mean sea level corrected for steric effects on the other hand) to the sum of the main water mass components of the climate system: glaciers, Greenland and Antarctica ice sheets, atmospheric water and LWS (the latter being the unknown quantity to be estimated). For glaciers and ice sheets, we use published estimates of ice mass trends based on various types of observations covering different time spans between 2003 and 2013. From the mass budget equation, we derive a net LWS trend over the study period. The mean trend amounts to +0.30 +/- 0.18 mm/yr in sea level equivalent. This corresponds to a net LWS decrease of 108 +/- 64 cu km/yr over the 2003-2013 decade. We also estimate the rate of change in LWS and find no significant acceleration over the study period. The computed mean global LWS trend over the study period is shown to be explained mainly by direct anthropogenic effects on land hydrology, i.e. the net effect of groundwater depletion and impoundment of water in man-made reservoirs, and to a lesser extent the effect of naturally-forced land hydrology variability. Our results compare well with independent estimates of human-induced changes in global land hydrology.
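The budget closure described above is a simple residual calculation, and the unit conversion between sea level equivalent and water volume uses the usual approximation that about 360 km³ of water corresponds to 1 mm of global mean sea level. The component trends below are placeholders, not the paper's inputs.

```python
# Water-mass budget sketch: land water storage (LWS) is the residual after
# subtracting the known mass components from the observed ocean mass change.
# All trends are in mm/yr of sea level equivalent (SLE); component values
# are illustrative placeholders, not the paper's inputs.

MM_SLE_TO_KM3 = 360.0  # ~360 km^3 of water raises global mean sea level 1 mm

def lws_trend_sle(ocean_mass, glaciers, greenland, antarctica, atmosphere):
    # Ocean mass gain not explained by ice or atmosphere must come from land.
    return ocean_mass - (glaciers + greenland + antarctica + atmosphere)

residual = lws_trend_sle(2.0, 0.7, 0.8, 0.2, 0.0)  # hypothetical components

# With the paper's quoted LWS trend of +0.30 mm/yr SLE, the equivalent
# volumetric loss of land water is about 108 km^3/yr:
volume_loss = 0.30 * MM_SLE_TO_KM3
```

The conversion reproduces the correspondence quoted in the abstract (+0.30 mm/yr SLE versus a 108 km³/yr land water decrease).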
Wong, Karen; Delaney, Geoff P; Barton, Michael B
2015-08-01
There is variation in radiotherapy fractionation practice; however, there is no evidence-based benchmark for appropriate activity. An evidence-based model was constructed to estimate the optimal number of fractions for the first course of radiotherapy for breast cancer to aid in services planning and performance benchmarking. The published breast cancer radiotherapy utilisation model was adapted. Evidence-based number of fractions was added to each radiotherapy indication. The overall optimal number of fractions was calculated based on the frequency of specific clinical conditions where radiotherapy is indicated and the recommended number of fractions for each condition. Sensitivity analysis was performed to assess the impact of uncertainties on the model. For the entire Australian breast cancer patient population, the estimated optimal number of fractions per patient was 16.8, 14.6, 13.7 and 0.8 for ductal carcinoma in situ, early, advanced and metastatic breast cancer respectively. Overall, the optimal number of fractions per patient was 14.4 (range 14.4-18.7). These results allow comparison with actual practices, and workload prediction to aid in services planning. The model can be easily adapted to other countries by inserting population-specific epidemiological data, and to future changes in cancer incidence, stage distribution and fractionation recommendations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
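The overall figure is a frequency-weighted average over indications. The stage mix below is hypothetical (the model's actual epidemiological inputs are not given in the abstract), so the result is not expected to reproduce the published 14.4.

```python
# Sketch of the utilisation-model arithmetic: the optimal fractions per
# patient is the indication-frequency-weighted mean of the evidence-based
# fractionation for each indication. The stage mix here is hypothetical.

def optimal_fractions(indications):
    """indications: list of (population_proportion, recommended_fractions)."""
    total_prop = sum(p for p, _ in indications)
    return sum(p * f for p, f in indications) / total_prop

# Hypothetical mix: 10% DCIS, 50% early, 30% advanced, 10% metastatic,
# paired with the per-stage fraction numbers quoted in the abstract.
mix = [(0.10, 16.8), (0.50, 14.6), (0.30, 13.7), (0.10, 0.8)]
overall = optimal_fractions(mix)
```

Swapping in a country's actual stage distribution is exactly the adaptation step the authors describe.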
Estimation of hydrolysis rate constants for carbamates ...
Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA’s Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for about 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in the public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa-based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R-squared and root-mean-square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft-based fragment model performs best, followed by the pKa-based and QSPR models.
Gronberg, JoAnn M.; Arnold, Terri L.
2017-03-24
County-level estimates of nitrogen and phosphorus inputs from animal manure for the conterminous United States were calculated from animal population inventories in the 2007 and 2012 Census of Agriculture, using previously published methods. These estimates of non-point nitrogen and phosphorus inputs from animal manure were compiled in support of the U.S. Geological Survey’s National Water-Quality Assessment Project of the National Water Quality Program and are needed to support national-scale investigations of stream and groundwater quality. The estimates published in this report are methodologically comparable with older estimates, allowing changes in nitrogen and phosphorus inputs from manure to be tracked over time.
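The method amounts to multiplying census animal counts by per-head nutrient excretion coefficients and summing per county. The coefficients below are made-up placeholders, not the USGS published values.

```python
# Sketch of a county-level manure nutrient estimate: animal inventories
# from the Census of Agriculture times per-head annual excretion
# coefficients. Coefficients are hypothetical, not the published values.

N_COEF_KG_PER_HEAD = {"beef_cattle": 60.0, "hogs": 10.0, "broilers": 0.5}

def county_nitrogen_kg(inventory):
    """inventory: dict of animal type -> head count for one county."""
    return sum(N_COEF_KG_PER_HEAD[a] * n for a, n in inventory.items())

kg_n = county_nitrogen_kg({"beef_cattle": 1_000, "hogs": 5_000, "broilers": 100_000})
```

Running the same calculation on the 2007 and 2012 inventories is what makes the two census years directly comparable.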
NASA Astrophysics Data System (ADS)
Kuchynka, Petr; Folkner, William M.; Konopliv, Alex S.; Parker, Timothy J.; Park, Ryan S.; Le Maistre, Sebastien; Dehant, Veronique
2014-02-01
The Opportunity Mars Exploration Rover remained stationary between January and May 2012 in order to conserve solar energy for running its survival heaters during martian winter. While stationary, extra Doppler tracking was performed in order to allow an improved estimate of the martian precession rate. In this study, we determine Mars rotation by combining the new Opportunity tracking data with historic tracking data from the Viking and Pathfinder landers and tracking data from Mars orbiters (Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter). The estimated rotation parameters are stable in cross-validation tests and compare well with previously published values. In particular, the Mars precession rate is estimated to be -7606.1 ± 3.5 mas/yr. A representation of Mars rotation as a series expansion based on the determined rotation parameters is provided.
Sieve estimation in a Markov illness-death process under dual censoring.
Boruvka, Audrey; Cook, Richard J
2016-04-01
Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Wilmoth, Daniel R
2015-12-01
The prescription drug user fee program provides additional resources to the U.S. Food and Drug Administration at the expense of regulated firms. Those resources accelerate the review of new drugs. Faster approvals allow firms to realize profits sooner, and the program is supported politically by industry. However, published estimates of the value to firms of reduced regulatory delay vary dramatically. It is shown here that this variation is driven largely by differences in methods that correspond to differences in implicit assumptions about the effects of reduced delay. Theoretical modeling is used to derive an equation describing the relationship between estimates generated using different methods. The method likely to yield the most accurate results is identified. A reconciliation of published estimates yields a value to a firm for a one-year reduction in regulatory delay at the time of approval of about $60 million for a typical drug. Published 2015. This article is a U.S. Government work and is in the public domain in the U.S.A.
State energy price and expenditure report 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-06-01
The State Energy Price and Expenditure Report (SEPER) presents energy price and expenditure estimates individually for the 50 States and the District of Columbia and in aggregate for the United States. The price and expenditure estimates developed in the State Energy Price and Expenditure Data System (SEPEDS) are provided by energy source and economic sector and are published for the years 1970 through 1994. Consumption estimates used to calculate expenditures and the documentation for those estimates are taken from the State Energy Data Report 1994, Consumption Estimates (SEDR), published in October 1996. Expenditures are calculated by multiplying the price estimates by the consumption estimates, which are adjusted to remove process fuel; intermediate petroleum products; and other consumption that has no direct fuel costs, i.e., hydroelectric, geothermal, wind, solar, and photovoltaic energy sources. Documentation is included describing the development of price estimates, data sources, and calculation methods. 316 tabs.
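The expenditure rule stated above (price times consumption, with no-cost consumption removed first) can be sketched directly. The figures are illustrative, not actual SEPER data.

```python
# SEPER-style expenditure sketch: expenditure = price * adjusted consumption,
# where consumption with no direct fuel cost (e.g. hydroelectric, process
# fuel) is removed before pricing. Numbers are illustrative only.

def expenditure(price_per_unit, consumption, no_cost_consumption=0.0):
    billable = consumption - no_cost_consumption
    return price_per_unit * billable

# e.g. $2.50/million Btu on 100 trillion Btu, 10 of which carry no fuel cost
spend = expenditure(2.50, 100.0, 10.0)
```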
Machtans, Craig S.; Thogmartin, Wayne E.
2014-01-01
The publication of a U.S. estimate of bird–window collisions by Loss et al. is an example of the somewhat contentious approach of using extrapolations to obtain large-scale estimates from small-scale studies. We review the approach by Loss et al. and other authors who have published papers on human-induced avian mortality and describe the drawbacks and advantages to publishing what could be considered imperfect science. The main drawback is the inherent and somewhat unquantifiable bias of using small-scale studies to scale up to a national estimate. The direct benefits include development of new methodologies for creating the estimates, an explicit treatment of known biases with acknowledged uncertainty in the final estimate, and the novel results. Other overarching benefits are that these types of papers are catalysts for improving all aspects of the science of estimates and for policies that must respond to the new information.
Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.
Pink, Joshua; Parker, Ben; Petrou, Stavros
2016-09-01
A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters. 
Studies should consistently report not only all simplifying assumptions made but also the estimated impact of these assumptions on the cost-effectiveness results.
NASA Astrophysics Data System (ADS)
Helge Østerås, Bjørn; Skaane, Per; Gullien, Randi; Catrine Trægde Martinsen, Anne
2018-02-01
The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™). Secondarily, to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. Mean DBT/DM AGD ratio was 1.24. For fatty breasts: mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts: mean AGD was 1.73 and 1.79 mGy, for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. Mean AGD error between estimates based on measurements (air kerma and HVL) versus DICOM header data was 3.8%, but for one mammography unit as high as 7.9%. Mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.
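Note that the quoted mean DBT/DM ratio (1.24) need not equal the ratio of the mean doses (2.10/1.74 ≈ 1.21): a per-acquisition mean of ratios weights each exam equally. The paired doses below are synthetic, not the study's data, and only illustrate that distinction.

```python
# Mean of per-exam ratios vs ratio of the means: two different summaries
# of the same paired dose data. Synthetic (DM, DBT) doses in mGy.

def mean_of_ratios(pairs):
    return sum(dbt / dm for dm, dbt in pairs) / len(pairs)

def ratio_of_means(pairs):
    return sum(dbt for _, dbt in pairs) / sum(dm for dm, _ in pairs)

pairs = [(1.0, 1.4), (2.0, 2.2), (3.0, 3.3)]  # hypothetical exams
```

With this synthetic set the mean of ratios is 1.20 while the ratio of means is 1.15; the larger-dose exams pull the latter down.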
O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.
2016-01-01
The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of their modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT) -based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = −2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. 
CT-based reconstructions provide a means of objectively estimating mass and body segment properties of extinct species using whole articulated skeletons. PMID:26788418
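Applying the reported coefficients assumes the conventional log10-log10 regression form for allometric models, log10(mass) = a + b·log10(hull volume). The abstract does not state the units behind the fit, so the sketch is structural only and no physical prediction should be read from it.

```python
import math

# Sketch of a convex-hull mass regression, assuming the conventional
# log10-log10 form: log10(mass) = a + b * log10(hull volume).
# Units of the published fit are not given here, so values are illustrative.

A, B = -2.31, 0.90  # coefficients reported in the abstract (r^2 = 0.97)

def predicted_mass(hull_volume):
    return 10 ** (A + B * math.log10(hull_volume))
```

Because b < 1, predicted mass grows slightly less than proportionally with hull volume under this fit.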
Millerón, M; López de Heredia, U; Lorenzo, Z; Alonso, J; Dounavi, A; Gil, L; Nanos, N
2013-03-01
Spatial discordance between primary and effective dispersal in plant populations indicates that postdispersal processes erase the seed rain signal in recruitment patterns. Five different models were used to test the spatial concordance of the primary and effective dispersal patterns in a European beech (Fagus sylvatica) population from central Spain. An ecological method was based on classical inverse modelling (SSS), using the number of seed/seedlings as input data. Genetic models were based on direct kernel fitting of mother-to-offspring distances estimated by a parentage analysis or were spatially explicit models based on the genotype frequencies of offspring (competing sources model and Moran-Clark's Model). A fully integrated mixed model was based on inverse modelling, but used the number of genotypes as input data (gene shadow model). The potential sources of error and limitations of each seed dispersal estimation method are discussed. The mean dispersal distances for seeds and saplings estimated with these five methods were higher than those obtained by previous estimations for European beech forests. All the methods show strong discordance between primary and effective dispersal kernel parameters, and for dispersal directionality. While seed rain was released mostly under the canopy, saplings were established far from mother trees. This discordant pattern may be the result of the action of secondary dispersal by animals or density-dependent effects; that is, the Janzen-Connell effect. © 2013 Blackwell Publishing Ltd.
[Cost-effectiveness analysis and diet quality index applied to the WHO Global Strategy].
Machado, Flávia Mori Sarti; Simões, Arlete Naresse
2008-02-01
To test the use of cost-effectiveness analysis as a decision-making tool in meal production for incorporating the recommendations published in the World Health Organization's Global Strategy. Five alternative options for a breakfast menu were assessed prior to their adoption in a food service at a university in the state of Sao Paulo, Southeastern Brazil, in 2006. Costs of the different options were based on market prices of food items (direct cost). Health benefits were estimated based on an adaptation of the Diet Quality Index (DQI). Cost-effectiveness ratios were estimated by dividing benefits by costs, and incremental cost-effectiveness ratios were estimated as the cost differential per unit of additional benefit. The meal choice was based on health benefit units associated with direct production cost, as well as on incremental effectiveness per unit of differential cost. The analysis showed the simplest option with the addition of a fruit (DQI = 64 / cost = R$ 1.58) to be the best alternative. Higher effectiveness was seen in the options with a fruit portion (DQI1=64 / DQI3=58 / DQI5=72) compared to the others (DQI2=48 / DQI4=58). The estimation of cost-effectiveness ratios allowed identification of the best breakfast option based on cost-effectiveness analysis and the Diet Quality Index. These instruments offer ease of application and objective evaluation, which are key to bringing public or private institutions in line with the Global Strategy directives.
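The two ratios defined above are simple divisions. Option 1's DQI and cost come from the abstract; the comparison option's cost is hypothetical, since the other options' costs are not reported.

```python
# Cost-effectiveness arithmetic from the abstract: benefit (DQI points)
# divided by direct cost, and the incremental ratio as cost difference per
# additional DQI point between two menu options.

def ce_ratio(dqi, cost):
    return dqi / cost

def icer(cost_a, dqi_a, cost_b, dqi_b):
    # extra cost per additional DQI point when moving from option a to b
    return (cost_b - cost_a) / (dqi_b - dqi_a)

r1 = ce_ratio(64, 1.58)           # option 1: DQI 64 at R$ 1.58 (from abstract)
extra = icer(1.58, 64, 2.00, 72)  # option 5 cost of R$ 2.00 is hypothetical
```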
Torgerson, Paul R; Devleesschauwer, Brecht; Praet, Nicolas; Speybroeck, Niko; Willingham, Arve Lee; Kasuga, Fumiko; Rokni, Mohammad B; Zhou, Xiao-Nong; Fèvre, Eric M; Sripa, Banchob; Gargouri, Neyla; Fürst, Thomas; Budke, Christine M; Carabin, Hélène; Kirk, Martyn D; Angulo, Frederick J; Havelaar, Arie; de Silva, Nilanthi
2015-12-01
Foodborne diseases are globally important, resulting in considerable morbidity and mortality. Parasitic diseases often result in high burdens of disease in low and middle income countries and are frequently transmitted to humans via contaminated food. This study presents the first estimates of the global and regional human disease burden of 10 helminth diseases and toxoplasmosis that may be attributed to contaminated food. Data were abstracted from 16 systematic reviews or similar studies published between 2010 and 2015; from 5 disease databases accessed in 2015; and from 79 reports, 73 of which have been published since 2000, 4 published between 1995 and 2000 and 2 published in 1986 and 1981. These included reports from national surveillance systems, journal articles, and national estimates of foodborne diseases. These data were used to estimate the number of infections, sequelae, deaths, and Disability Adjusted Life Years (DALYs), by age and region for 2010. These parasitic diseases resulted in 48.4 million cases (95% Uncertainty intervals [UI] of 43.4-79.0 million) and 59,724 (95% UI 48,017-83,616) deaths annually resulting in 8.78 million (95% UI 7.62-12.51 million) DALYs. We estimated that 48% (95% UI 38%-56%) of cases of these parasitic diseases were foodborne, resulting in 76% (95% UI 65%-81%) of the DALYs attributable to these diseases. Overall, foodborne parasitic disease, excluding enteric protozoa, caused an estimated 23.2 million (95% UI 18.2-38.1 million) cases and 45,927 (95% UI 34,763-59,933) deaths annually resulting in an estimated 6.64 million (95% UI 5.61-8.41 million) DALYs. Foodborne Ascaris infection (12.3 million cases, 95% UI 8.29-22.0 million) and foodborne toxoplasmosis (10.3 million cases, 95% UI 7.40-14.9 million) were the most common foodborne parasitic diseases. 
Human cysticercosis with 2.78 million DALYs (95% UI 2.14-3.61 million), foodborne trematodosis with 2.02 million DALYs (95% UI 1.65-2.48 million) and foodborne toxoplasmosis with 825,000 DALYs (95% UI 561,000-1.26 million) resulted in the highest burdens in terms of DALYs, mainly due to years lived with disability. Foodborne enteric protozoa, reported elsewhere, resulted in an additional 67.2 million illnesses or 492,000 DALYs. Major limitations of our study include often-substantial data gaps that had to be filled by imputation; the resulting estimates carry the uncertainties that surround such models. Due to resource limitations it was also not possible to consider all potentially foodborne parasites (for example Trypanosoma cruzi). Parasites are frequently transmitted to humans through contaminated food. These estimates represent an important step forward in understanding the impact of foodborne diseases globally and regionally. The disease burden due to most foodborne parasites is highly focal and results in significant morbidity and mortality among vulnerable populations.
Estimating ice-affected streamflow by extended Kalman filtering
Holtschlag, D.J.; Grewal, M.S.
1998-01-01
An extended Kalman filter was developed to automate the real-time estimation of ice-affected streamflow on the basis of routine measurements of stream stage and air temperature and on the relation between stage and streamflow during open-water (ice-free) conditions. The filter accommodates three dynamic modes of ice effects: sudden formation/ablation, stable ice conditions, and eventual elimination. The utility of the filter was evaluated by applying it to historical data from two long-term streamflow-gauging stations, St. John River at Dickey, Maine and Platte River at North Bend, Nebr. Results indicate that the filter was stable and that parameters converged for both stations, producing streamflow estimates that are highly correlated with published values. For the Maine station, logarithms of estimated streamflows are within 8% of the logarithms of published values 87.2% of the time during periods of ice effects and within 15% 96.6% of the time. Similarly, for the Nebraska station, logarithms of estimated streamflows are within 8% of the logarithms of published values 90.7% of the time and within 15% 97.7% of the time. In addition, the correlation between temporal updates and published streamflows on days of direct measurements at the Maine station was 0.777 and 0.998 for ice-affected and open-water periods, respectively; for the Nebraska station, corresponding correlations were 0.864 and 0.997.
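The published filter is an extended Kalman filter with three dynamic ice-effect modes; the stripped-down scalar linear sketch below is not the authors' implementation and only illustrates the predict/update cycle on a hypothetical state, the log-correction applied to the open-water rating during ice periods. All numbers are invented.

```python
# Minimal scalar Kalman-filter sketch: track a slowly varying log-correction
# x = log(ice-affected Q / open-water rating Q), updated whenever a direct
# streamflow measurement gives an observed log-ratio z. Process variance q
# lets the correction drift between measurements; r is measurement noise.

def kalman_step(x, p, z, q=0.01, r=0.05):
    p = p + q                # predict: uncertainty grows between visits
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # update toward the measured log-ratio
    p = (1 - k) * p
    return x, p

x, p = 0.0, 1.0              # start at "no ice effect", high uncertainty
for z in [-0.30, -0.25, -0.28]:   # hypothetical log reductions at freeze-up
    x, p = kalman_step(x, p, z)
```

Between measurements, the predicted correction is simply carried forward with inflated variance, which is what lets the real filter produce continuous ice-affected streamflow estimates from stage and air temperature alone.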
Assessing the Impact of Maneuver Training on NPS Pollution and Water Quality
2008-12-01
erosion. The Universal Soil Loss Equation (USLE), published in ARS Special Report 22-66 (1961), was based upon six contributing factors: A = R...the publication of USDA Agricultural Handbook 282 (Wischmeier and Smith 1965), the USLE has become the most widely used soil erosion model, and...Batholic 2001, Jones et al., 1996). In 1987, the USLE was revised to improve the soil loss estimation by incorporating additional research and
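The snippet cuts the factor list off at "A = R...". In its standard published form (Wischmeier and Smith), the USLE predicts average annual soil loss A as the product of six factors: rainfall erosivity R, soil erodibility K, slope length L and steepness S, cover-management C, and support practice P. The factor values below are illustrative only, not from this report.

```python
# Standard USLE form: A = R * K * L * S * C * P (average annual soil loss).
# Factor values below are illustrative placeholders.

def usle_soil_loss(R, K, L, S, C, P):
    return R * K * L * S * C * P

A = usle_soil_loss(R=170, K=0.3, L=1.2, S=1.1, C=0.1, P=0.5)
```

The multiplicative structure is why maneuver training matters in such models: disturbing vegetation raises the C factor, scaling predicted soil loss proportionally.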
The Paradox of German Foreign and Security Policy: With Respect to National Energy Security
2009-05-21
analysis from an historical perspective and first published post-materialist theory. It builds on Maslow's hierarchy of needs and seeks to explain how...policy. Being strategically prepared when market forces fail to balance contradictory interests becomes a necessity for many countries. Based on Germany
Aquatic concentrations of chemical analytes compared to ...
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes is warranted.
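Screening comparisons of this kind reduce to a hazard quotient: measured concentration divided by effect concentration, with HQ > 1 flagging an exceedance and HQ > 0.1 flagging the "within 1/10th of EC" analytes the abstract highlights. The concentrations and ECs below are hypothetical.

```python
# Hazard-quotient sketch for screening-level comparison: HQ = measured / EC.
# All concentrations and effect concentrations below are hypothetical.

def hazard_quotients(measured, ec):
    return {a: measured[a] / ec[a] for a in measured if a in ec}

measured = {"copper": 12.0, "atrazine": 0.4, "ibuprofen": 0.02}  # ug/L
ec       = {"copper":  9.0, "atrazine": 3.0, "ibuprofen": 1.0}   # ug/L

hq = hazard_quotients(measured, ec)
flagged = [a for a, q in hq.items() if q > 0.1]  # within 1/10th of EC
```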
Estimates of air emissions from asphalt storage tanks and truck loading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trumbore, D.C.
1999-12-31
Title V of the 1990 Clean Air Act requires the accurate estimation of emissions from all US manufacturing processes, and places the burden of proof for that estimate on the process owner. This paper is published as a tool to assist in the estimation of air emission from hot asphalt storage tanks and asphalt truck loading operations. Data are presented on asphalt vapor pressure, vapor molecular weight, and the emission split between volatile organic compounds and particulate emissions that can be used with AP-42 calculation techniques to estimate air emissions from asphalt storage tanks and truck loading operations. Since current AP-42 techniques are not valid in asphalt tanks with active fume removal, a different technique for estimation of air emissions in those tanks, based on direct measurement of vapor space combustible gas content, is proposed. Likewise, since AP-42 does not address carbon monoxide or hydrogen sulfide emissions that are known to be present in asphalt operations, this paper proposes techniques for estimation of those emissions. Finally, data are presented on the effectiveness of fiber bed filters in reducing air emissions in asphalt operations.
Current Term Enrollment Estimates: Spring 2014
ERIC Educational Resources Information Center
National Student Clearinghouse, 2014
2014-01-01
Current Term Enrollment Estimates, published every December and May by the National Student Clearinghouse Research Center, include national enrollment estimates by institutional sector, state, enrollment intensity, age group, and gender. Enrollment estimates are adjusted for Clearinghouse data coverage rates by institutional sector, state, and…
Current Term Enrollment Estimates: Fall 2014
ERIC Educational Resources Information Center
National Student Clearinghouse, 2014
2014-01-01
Current Term Enrollment Estimates, published every December and May by the National Student Clearinghouse Research Center (NSCRC), include national enrollment estimates by institutional sector, state, enrollment intensity, age group, and gender. Enrollment estimates are adjusted for Clearinghouse data coverage rates by institutional sector, state,…
NASA Astrophysics Data System (ADS)
DeMets, C.; Merkuryev, S. A.
2015-12-01
We estimate Nubia-Somalia rotations at ~1-Myr intervals for the past 20 Myr from newly available, high-resolution reconstructions of the Southwest Indian Ridge and reconstructions of the Red Sea and Gulf of Aden. The former rotations are based on many more data, extend farther back in time, and offer higher temporal resolution than previously available. Nubia-Somalia plate motion has remained remarkably steady since 5.2 Ma. For example, at the northern end of the East Africa rift, our Nubia-Somalia plate motion estimates at six different times between 0.78 Ma and 5.2 Ma agree to within 3% with the rift-normal component of motion that is extrapolated from the recently estimated Saria et al. (2014) GPS angular velocity. Over the past 10.6 Myr, the Nubia-Somalia rotations predict 42±4 km of rift-normal extension across the northern segment of the Main Ethiopian Rift. This agrees with approximate minimum and maximum estimates of 40 km and 53 km for post-10.6-Myr extension from seismological surveys of this narrow part of the plate boundary and is also close to 55-km and 48±3 km estimates from published and our own reconstructions of the Nubia-Arabia and Somalia-Arabia seafloor-spreading histories for the Red Sea and Gulf of Aden. Our new rotations exclude, at a high confidence level, two previously published estimates of Nubia-Somalia motion based on inversions of Chron 5n.2 along the Southwest Indian Ridge, which predict rift-normal extensions of 13±14 km and 129±16 km across the Main Ethiopian Rift since 11 Ma. Constraints on Nubia-Somalia motion before ~15 Ma are weaker due to sparse coverage of pre-15-Myr magnetic reversals along the Nubia-Antarctic plate boundary, but appear to require motion before 15 Ma. Nubia-Somalia rotations that we estimate from a probabilistic analysis of geometric and age constraints from the Red Sea and Gulf of Aden are consistent with those determined from Southwest Indian Ridge data, particularly for the past 11 Myr.
Nubia-Somalia rotations determined from the Red Sea/Gulf of Aden rotations and Southwest Indian Ridge rotations independently predict that motion during its oldest phase was highly oblique to the rift and a factor-of-two or more faster than at present, although large uncertainties remain in the rotation estimates for times before ~15 Ma.
Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.
Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie
2018-05-01
Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 reliable equations for estimation of LBM in daily clinical practice. The development and validation groups both included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, based on mid-arm muscle circumference (MAMC) or handgrip strength (HGS) and also incorporating sex, height, and weight, were developed and validated in CKD patients with dual-energy x-ray absorptiometry as the gold-standard reference method. The new equations were found to exhibit only small biases when compared with dual-energy x-ray absorptiometry, with median differences of 0.94 and 0.46 kg observed in the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and in the percentages of estimates that were within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when it was applied to groups of patients divided by the median measured LBM, the median ratio of extracellular to total body water, and the stages of CKD. LBM estimated from MAMC or HGS was found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
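The validation metrics named in the abstract (bias as the median difference, precision as the interquartile range of differences, accuracy as the percentage of estimates within 20% of the measured value) can be sketched generically; the LBM values below are hypothetical, not the study's data:

```python
import statistics

def validation_metrics(estimated, measured):
    """Bias = median of (estimate - measurement); precision = IQR of differences;
    accuracy (P20) = % of estimates within 20% of the measured value."""
    diffs = sorted(e - m for e, m in zip(estimated, measured))
    q = statistics.quantiles(diffs, n=4)          # quartiles of the differences
    p20 = 100.0 * sum(abs(e - m) <= 0.2 * m
                      for e, m in zip(estimated, measured)) / len(measured)
    return {"bias": statistics.median(diffs), "iqr": q[2] - q[0], "p20": p20}

# Hypothetical LBM values in kg: DXA-measured vs. equation-estimated
measured = [40.0, 45.0, 50.0, 55.0, 60.0]
estimated = [41.0, 44.5, 51.0, 54.0, 61.0]
m = validation_metrics(estimated, measured)
```

Small bias and IQR with a high P20 is the pattern the abstract reports for both equations.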
Jacobs, Philip; Lier, Douglas; Gooch, Katherine; Buesch, Katharina; Lorimer, Michelle; Mitchell, Ian
2013-01-01
BACKGROUND: Approximately one in 10 hospitalized patients will acquire a nosocomial infection (NI) after admission to hospital, of which 71% are due to respiratory viruses, including the respiratory syncytial virus (RSV). NIs are concerning and lead to prolonged hospitalizations. The economics of NIs are typically described in generalized terms and specific cost data are lacking. OBJECTIVE: To develop an evidence-based model for predicting the risk and cost of nosocomial RSV infection in pediatric settings. METHODS: A model was developed, from a Canadian perspective, to capture all costs related to an RSV infection hospitalization, including the risk and cost of an NI, diagnostic testing and infection control. All data inputs were derived from published literature. Deterministic sensitivity analyses were performed to evaluate the uncertainty associated with the estimates and to explore the impact of changes to key variables. A probabilistic sensitivity analysis was performed to estimate a confidence interval for the overall cost estimate. RESULTS: The estimated cost of nosocomial RSV infection adds approximately 30.5% to the hospitalization costs for the treatment of community-acquired severe RSV infection. The net benefits of the prevention activities were estimated to be equivalent to 9% of the total RSV-related costs. Changes in the estimated hospital infection transmission rates did not have a significant impact on the base-case estimate. CONCLUSIONS: The risk and cost of nosocomial RSV infection contributes to the overall burden of RSV. The present model, which was developed to estimate this burden, can be adapted to other countries with different disease epidemiology, costs and hospital infection transmission rates. PMID:24421788
Ren, Junjie; Zhang, Shimin
2013-01-01
Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
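The moment-balance logic above can be sketched in a few lines: divide the moment of a Wenchuan-size event by the moment accumulation rate. Using the standard Hanks-Kanamori magnitude-moment relation as a stand-in for the paper's geodetically constrained moment (so the result lands somewhat below the published 3900 ± 400 yr, which rests on the authors' own seismogenic model):

```python
def seismic_moment(mw):
    """Hanks-Kanamori relation: seismic moment M0 in N*m from moment magnitude."""
    return 10 ** (1.5 * mw + 9.05)

moment_rate = 2.7e17               # N*m/yr, accumulation rate from the GPS/InSAR model
m0_wenchuan = seismic_moment(7.9)  # ~7.9e20 N*m for the 2008 Mw 7.9 event
recurrence_yr = m0_wenchuan / moment_rate   # ~3000 yr at this standard scaling
```

The order-of-magnitude agreement illustrates why the recurrence interval is so sensitive to the assumed moment of the characteristic event.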
Adamski, Alys; Bertolli, Jeanne; Castañeda-Orjuela, Carlos; Devine, Owen J; Johansson, Michael A; Duarte, Maritza Adegnis Gonzalez; Farr, Sherry L; Tinker, Sarah C; Reyes, Marcela Maria Mercado; Tong, Van T; Garcia, Oscar Eduardo Pacheco; Valencia, Diana; Ortiz, Diego Alberto Cuellar; Honein, Margaret A; Jamieson, Denise J; Martínez, Martha Lucía Ospina; Gilboa, Suzanne M
2018-06-01
Colombia experienced a Zika virus (ZIKV) outbreak in 2015-2016. To assist with planning for medical and supportive services for infants affected by prenatal ZIKV infection, we used a model to estimate the number of pregnant women infected with ZIKV and the number of infants with congenital microcephaly from August 2015 to August 2017. We used nationally reported cases of symptomatic ZIKV disease among pregnant women and information from the literature on the percent of asymptomatic infections to estimate the number of pregnant women with ZIKV infection occurring August 2015-December 2016. We then estimated the number of infants with congenital microcephaly expected to occur August 2015-August 2017. To compare to the observed counts of infants with congenital microcephaly due to all causes reported through the national birth defects surveillance system, the model was time limited to produce estimates for February-November 2016. We estimated 1140-2160 (interquartile range [IQR]) infants with congenital microcephaly in Colombia, during August 2015-August 2017, whereas 340-540 infants with congenital microcephaly would be expected in the absence of ZIKV. Based on the time limited version of the model, for February-November 2016, we estimated 650-1410 infants with congenital microcephaly in Colombia. The 95% uncertainty interval for the latter estimate encompasses the 476 infants with congenital microcephaly reported during that approximate time frame based on national birth defects surveillance. Based on modeled estimates, ZIKV infection during pregnancy in Colombia could lead to 3-4 times as many infants with congenital microcephaly in 2015-2017 as would have been expected in the absence of the ZIKV outbreak. This publication was made possible through support provided by the Bureau for Global Health, U.S. Agency for International Development under the terms of an Interagency Agreement with Centers for Disease Control and Prevention. Published by Elsevier Ltd.
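The core adjustment in the model above is simple to state: reported symptomatic cases are scaled up by the symptomatic fraction to estimate total infections, and a per-pregnancy risk then yields expected congenital microcephaly counts. A deterministic sketch with hypothetical inputs (the actual model propagated uncertainty over these parameters):

```python
def estimate_microcephaly(reported_symptomatic, frac_symptomatic, risk_microcephaly):
    """Scale reported symptomatic ZIKV cases among pregnant women to total
    infections, then apply a per-infected-pregnancy microcephaly risk."""
    total_infected = reported_symptomatic / frac_symptomatic
    expected_cases = total_infected * risk_microcephaly
    return total_infected, expected_cases

# Hypothetical: 18,000 reported symptomatic cases, 20% of infections
# symptomatic, 3% microcephaly risk per infected pregnancy
infected, cases = estimate_microcephaly(18_000, 0.20, 0.03)
```

Sampling each input from a distribution rather than fixing it is what produces the interquartile ranges and uncertainty intervals quoted in the abstract.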
Dong, Xing; Zhang, Kevin; Ren, Yuan; Wilson, Reda; O'Neil, Mary Elizabeth
2016-01-01
Studying population-based cancer survival by leveraging the high-quality cancer incidence data collected by the Centers for Disease Control and Prevention's National Program of Cancer Registries (NPCR) can offer valuable insight into the cancer burden and impact in the United States. We describe the development and validation of a SAS macro tool that calculates population-based cancer site-specific relative survival estimates comparable to those obtained through SEER*Stat. The NPCR relative survival analysis SAS tool (NPCR SAS tool) was developed based on the relative survival method and SAS macros developed by Paul Dickman. NPCR cancer incidence data from 25 states submitted in November 2012 were used, specifically cases diagnosed from 2003 to 2010 with follow-up through 2010. Decennial and annual complete life tables published by the National Center for Health Statistics (NCHS) for 2000 through 2009 were used. To assess comparability between the 2 tools, 5-year relative survival rates were calculated for 25 cancer sites by sex, race, and age group using the NPCR SAS tool and the National Cancer Institute's SEER*Stat 8.1.5 software. A module to create data files for SEER*Stat was also developed for the NPCR SAS tool. Comparison of the results produced by both the NPCR SAS tool and SEER*Stat showed comparable and reliable relative survival estimates for NPCR data. For a majority of the sites, the net differences between the NPCR SAS tool and SEER*Stat-produced relative survival estimates ranged from -0.1% to 0.1%. The estimated standard errors were highly comparable between the 2 tools as well. The NPCR SAS tool will allow researchers to accurately estimate cancer 5-year relative survival estimates that are comparable to those produced by SEER*Stat for NPCR data. Comparison of output from the NPCR SAS tool and SEER*Stat provided additional quality control capabilities for evaluating data prior to producing NPCR relative survival estimates.
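The quantity both tools compute is the relative survival ratio: observed all-cause survival of the cancer cohort divided by the expected survival of a comparable general population drawn from life tables. A Python sketch of the ratio (the actual tool is a SAS macro; values here are hypothetical):

```python
def relative_survival(observed_surv, expected_surv):
    """Relative survival ratio: observed all-cause survival of the cohort
    divided by expected survival from matched population life tables."""
    return observed_surv / expected_surv

def cumulative_relative_survival(obs_intervals, exp_intervals):
    """Cumulative ratio built up interval by interval, as in life-table
    (Dickman-style) relative survival estimation."""
    crs = 1.0
    for o, e in zip(obs_intervals, exp_intervals):
        crs *= o / e
    return crs

rs5 = relative_survival(0.45, 0.90)                       # hypothetical 5-year proportions
crs = cumulative_relative_survival([0.9, 0.9], [0.95, 0.95])
```

A ratio of 0.5 here would mean the cohort's survival is half that expected in the absence of the cancer diagnosis.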
Joiner, Kevin L; Nam, Soohyun; Whittemore, Robin
2017-07-01
The objective was to describe Diabetes Prevention Program (DPP)-based lifestyle interventions delivered via electronic, mobile, and certain types of telehealth (eHealth) and estimate the magnitude of the effect on weight loss. A systematic review was conducted. PubMed and EMBASE were searched for studies published between January 2003 and February 2016 that met inclusion and exclusion criteria. An overall estimate of the effect on mean percentage weight loss across all the interventions was initially conducted. A stratified meta-analysis was also conducted to determine estimates of the effect across the interventions classified according to whether behavioral support by counselors post-baseline was not provided, provided remotely with communication technology, or face-to-face. Twenty-two studies met the inclusion/exclusion criteria, in which 26 interventions were evaluated. Samples were primarily white and college educated. Interventions included Web-based applications, mobile phone applications, text messages, DVDs, interactive voice response telephone calls, telehealth video conferencing, and video on-demand programing. Nine interventions were stand-alone, delivered post-baseline exclusively via eHealth. Seventeen interventions included additional behavioral support provided by counselors post-baseline remotely with communication technology or face-to-face. The estimated overall effect on mean percentage weight loss from baseline to up to 15 months of follow-up across all the interventions was -3.98%. The subtotal estimate across the stand-alone eHealth interventions (-3.34%) was less than the estimate across interventions with behavioral support given by a counselor remotely (-4.31%), and the estimate across interventions with behavioral support given by a counselor in-person (-4.65%). There is promising evidence of the efficacy of DPP-based eHealth interventions on weight loss.
Further studies are needed particularly in racially and ethnically diverse populations with limited levels of educational attainment. Future research should also focus on ways to optimize behavioral support. Copyright © 2017 Elsevier Inc. All rights reserved.
Kim, Hyun Jung; Griffiths, Mansel W; Fazil, Aamir M; Lammerding, Anna M
2009-09-01
Foodborne illness contracted at food service operations is an important public health issue in Korea. In this study, the probabilities for growth of, and enterotoxin production by, Staphylococcus aureus in pork meat-based foods prepared in food service operations were estimated by Monte Carlo simulation. Data on the prevalence and concentration of S. aureus as well as compliance to guidelines for time and temperature controls during food service operations were collected. The growth of S. aureus was initially estimated by using the U.S. Department of Agriculture's Pathogen Modeling Program. A second model based on raw pork meat was derived to compare cell number predictions. The correlation between toxin level and cell number as well as the minimum toxin dose obtained from published data was adopted to quantify the probability of staphylococcal intoxication. When data gaps were found, assumptions were made based on guidelines for food service practices. Baseline risk model and scenario analyses were performed to indicate possible outcomes of staphylococcal intoxication under the scenarios generated based on these data gaps. Staphylococcal growth was predicted during holding before and after cooking, and the highest estimated concentration (4.59 log CFU/g for the 99.9th percentile value) of S. aureus was observed in raw pork initially contaminated with S. aureus and held before cooking. The estimated probability for staphylococcal intoxication was very low, using currently available data. However, scenario analyses revealed an increased possibility of staphylococcal intoxication when initial contamination levels in the raw meat were higher and when the meat was held for longer times both before and after cooking.
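The Monte Carlo structure described above (sample an initial contamination level and a holding time, apply a growth model, read off upper percentiles) can be sketched with a toy linear growth model; all distributions and the growth rate below are hypothetical placeholders, not the study's Pathogen Modeling Program outputs:

```python
import random

def simulate_growth(n_iter=10_000, seed=1):
    """Monte Carlo sketch: initial contamination (log CFU/g) plus
    exponential-phase growth over a variable pre-cooking holding time."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_iter):
        log_n0 = rng.gauss(1.5, 0.5)    # initial log CFU/g (hypothetical)
        hold_h = rng.uniform(0.0, 4.0)  # holding time in hours (hypothetical)
        rate = 0.3                      # log CFU/g per hour at abuse temperature
        finals.append(log_n0 + rate * hold_h)
    finals.sort()
    return finals

f = simulate_growth()
median = f[len(f) // 2]
p999 = f[int(0.999 * len(f))]           # 99.9th percentile, as reported in the study
```

Reporting the 99.9th percentile, as the abstract does, captures the rare worst-case trajectories that drive intoxication risk.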
Final STS-35 Columbia descent BET products and results for LaRC OEX investigations
NASA Technical Reports Server (NTRS)
Oakes, Kevin F.; Findlay, John T.; Jasinski, Rachel A.; Wood, James S.
1991-01-01
Final STS-35 'Columbia' descent Best Estimate Trajectory (BET) products have been developed for Langley Research Center (LaRC) Orbiter Experiments (OEX) investigations. Included are the reconstructed inertial trajectory profile; the Extended BET, which combines the inertial data and, in this instance, the National Weather Service atmospheric information obtained via Johnson Space Center; and the Aerodynamic BET. The inertial BET utilized Inertial Measurement Unit 1 (IMU1) dynamic measurements for deterministic propagation during the ENTREE estimation process. The final estimate was based on the considerable ground based C-band tracking coverage available as well as Tracking Data and Relay Satellite System (TDRSS) Doppler data, a unique use of the latter for endo-atmospheric flight determinations. The actual estimate required simultaneous solutions for the spacecraft position and velocity, spacecraft attitude, and six IMU parameters - three gyro biases and three accelerometer scale factor correction terms. The anchor epoch for this analysis was 19,200 Greenwich Mean Time (GMT) seconds which corresponds to an initial Shuttle altitude of approximately 513 kft. The atmospheric data incorporated were evaluated based on Shuttle derived considerations as well as comparisons with other models. The AEROBET was developed based on the Extended BET, the measured spacecraft configuration information, final mass properties, and the final Orbiter preoperation databook. The latter was updated based on aerodynamic consensus incrementals derived by the latest published FAD. The rectified predictions were compared versus the flight computed values and the resultant differences were correlated versus ensemble results for twenty-two previous STS entry flights.
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. At smaller area levels, sample sizes are insufficient, so direct estimation of poverty indicators produces high standard errors and the resulting analysis is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP maximum likelihood (ML) procedure does not account for the loss of degrees of freedom incurred by estimating β with β̂; this drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper applies EBLUP with the REML procedure to estimate poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean squared error) in order to compare the accuracy of the EBLUP method with direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
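The EBLUP for an area-level (Fay-Herriot) model is a shrinkage estimator: each area's direct estimate is pulled toward a regression-synthetic estimate, with weight determined by the ratio of model variance to total variance. A minimal sketch using a simple moment estimator for the model variance in place of the paper's REML fit:

```python
import numpy as np

def fay_herriot_eblup(y, X, d):
    """Area-level EBLUP sketch: y_i = x_i'beta + u_i + e_i, with known sampling
    variances d_i and model variance A estimated by a moment estimator
    (a simple stand-in for the REML procedure described in the abstract)."""
    n, p = X.shape
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    A = max(0.0, (resid @ resid - (n - p) * d.mean()) / (n - p))
    w = 1.0 / (A + d)                                   # GLS weights given A
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    gamma = A / (A + d)                                 # shrinkage factor per area
    return gamma * y + (1 - gamma) * (X @ beta)         # convex combination

# Synthetic demonstration data (hypothetical, not BPS data)
rng = np.random.default_rng(0)
n = 12
X = np.column_stack([np.ones(n), rng.normal(size=n)])
d = np.full(n, 0.25)                                    # known sampling variances
y = X @ np.array([2.0, 1.0]) + rng.normal(0, 0.5, n) + rng.normal(0, 0.5, n)
eblup = fay_herriot_eblup(y, X, d)
```

Areas with large sampling variance get small gamma and borrow heavily from the regression, which is exactly why EBLUP reduces MSE relative to direct estimation in small areas.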
Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin
2014-03-01
A method for gyroscope drift and robot attitude estimation based on cascaded Kalman-particle filtering is proposed in this paper. Because MEMS gyroscope measurements are noisy and error-prone, the gyroscope is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, augmented with gyroscope drift dynamics, are employed as the system state-space model. Nonlinear attitude kinematics, drift, and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear, high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman filtering (KF) is employed for the linear and Gaussian subsystem consisting of angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as input to the second, particle filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method. © 2013 ISA. Published by ISA. All rights reserved.
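The KF stage of such a cascade treats the slowly varying gyro bias as a near-constant state observed through noisy measurements. A deliberately simplified 1-D sketch (the real design is 3-D, quaternion-based, and observes drift only indirectly; here the drift is observed directly and all noise parameters are hypothetical):

```python
import random

class Kalman1D:
    """Minimal 1-D Kalman filter for a near-constant gyro-drift state: a toy
    stand-in for the KF stage of the cascaded KF/PF observer."""
    def __init__(self, x0=0.0, p0=1.0, q=1e-6, r=0.01):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        self.p += self.q                 # predict: drift assumed nearly constant
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with noisy drift observation
        self.p *= (1.0 - k)
        return self.x

rng = random.Random(0)
true_drift = 0.05                        # rad/s bias (hypothetical)
kf = Kalman1D()
for _ in range(500):
    est = kf.update(true_drift + rng.gauss(0.0, 0.1))
```

After a few hundred updates the estimate settles near the true bias; in the cascade, this filtered drift and angular velocity feed the downstream particle filter handling the nonlinear attitude kinematics.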
Mapping apparent stress and energy radiation over fault zones of major earthquakes
McGarr, A.; Fletcher, Joe B.
2002-01-01
Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). 
The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.
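The per-subfault quantity mapped above is the apparent stress, the rigidity times radiated energy divided by seismic moment. A one-line worked example with hypothetical subfault values chosen to land in the range discussed in the abstract:

```python
def apparent_stress(mu, energy, moment):
    """Apparent stress (Pa) = rigidity (Pa) * radiated energy (J) / seismic moment (N*m)."""
    return mu * energy / moment

mu = 3.0e10     # Pa, typical crustal rigidity
e_s = 1.0e14    # J, radiated energy of one subfault (hypothetical)
m0 = 1.0e18     # N*m, seismic moment of that subfault (hypothetical)
sigma_a = apparent_stress(mu, e_s, m0)   # 3 MPa
```

A subfault with twice the whole-earthquake energy-to-moment ratio would show roughly double this value, which is the heterogeneity pattern the maps reveal.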
Sequenza: allele-specific copy number and mutation profiles from tumor sequencing data.
Favero, F; Joshi, T; Marquard, A M; Birkbak, N J; Krzystanek, M; Li, Q; Szallasi, Z; Eklund, A C
2015-01-01
Exome or whole-genome deep sequencing of tumor DNA along with paired normal DNA can potentially provide a detailed picture of the somatic mutations that characterize the tumor. However, analysis of such sequence data can be complicated by the presence of normal cells in the tumor specimen, by intratumor heterogeneity, and by the sheer size of the raw data. In particular, determination of copy number variations from exome sequencing data alone has proven difficult; thus, single nucleotide polymorphism (SNP) arrays have often been used for this task. Recently, algorithms to estimate absolute, but not allele-specific, copy number profiles from tumor sequencing data have been described. We developed Sequenza, a software package that uses paired tumor-normal DNA sequencing data to estimate tumor cellularity and ploidy, and to calculate allele-specific copy number profiles and mutation profiles. We applied Sequenza, as well as two previously published algorithms, to exome sequence data from 30 tumors from The Cancer Genome Atlas. We assessed the performance of these algorithms by comparing their results with those generated using matched SNP arrays and processed by the allele-specific copy number analysis of tumors (ASCAT) algorithm. Comparison between Sequenza/exome and SNP/ASCAT revealed strong correlation in cellularity (Pearson's r = 0.90) and ploidy estimates (r = 0.42, or r = 0.94 after manually inspecting alternative solutions). This performance was noticeably superior to previously published algorithms. In addition, in artificial data simulating normal-tumor admixtures, Sequenza detected the correct ploidy in samples with tumor content as low as 30%. The agreement between Sequenza and SNP array-based copy number profiles suggests that exome sequencing alone is sufficient not only for identifying small scale mutations but also for estimating cellularity and inferring DNA copy number aberrations. © The Author 2014.
Published by Oxford University Press on behalf of the European Society for Medical Oncology.
Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai; Ahrens, Richard C
2018-04-01
Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.
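The sample-size machinery behind such design comparisons is a Monte Carlo power calculation: simulate the trial many times at a candidate size, count how often the analysis succeeds, and increase n until the target is reached. A generic two-arm stand-in (not the specific methacholine-challenge bioequivalence model; all parameters hypothetical):

```python
import math
import random

def power(n_per_arm, effect, sd=1.0, n_sims=2000, seed=42):
    """Monte Carlo power of a two-arm z-test for a difference in means."""
    rng = random.Random(seed)
    se = sd * math.sqrt(2.0 / n_per_arm)
    hits = 0
    for _ in range(n_sims):
        ma = sum(rng.gauss(0.0, sd) for _ in range(n_per_arm)) / n_per_arm
        mb = sum(rng.gauss(effect, sd) for _ in range(n_per_arm)) / n_per_arm
        if abs(mb - ma) / se > 1.96:
            hits += 1
    return hits / n_sims

def sample_size_for(target_power, effect, start=5, step=5):
    """Smallest n per arm (on a coarse grid) reaching the target power."""
    n = start
    while power(n, effect) < target_power:
        n += step
    return n

p_small = power(10, 1.0)
p_large = power(40, 1.0)
n_needed = sample_size_for(0.8, 1.0)
```

In the study, the same loop is run once per candidate design; a design that reaches the target power at a smaller n is the cheaper one, which is how the 30-42% cost reductions were estimated.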
Fossil-Fuel CO2 Emissions Database and Exploration System
NASA Astrophysics Data System (ADS)
Krassovski, M.; Boden, T.; Andres, R. J.; Blasing, T. J.
2012-12-01
The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) quantifies the release of carbon from fossil-fuel use and cement production at global, regional, and national spatial scales. The CDIAC emission time series estimates are based largely on annual energy statistics published at the national level by the United Nations (UN). CDIAC has developed a relational database to house collected data and information and a web-based interface to help users worldwide identify, explore and download desired emission data. The available information is divided into two major groups: time series and gridded data. The time series data are offered for global, regional and national scales. Publications containing historical energy statistics make it possible to estimate fossil fuel CO2 emissions back to 1751. Etemad et al. (1991) published a summary compilation that tabulates coal, brown coal, peat, and crude oil production by nation and year. Footnotes in the Etemad et al. (1991) publication extend the energy statistics time series back to 1751. Summary compilations of fossil fuel trade were published by Mitchell (1983, 1992, 1993, 1995). Mitchell's work tabulates solid and liquid fuel imports and exports by nation and year. These pre-1950 production and trade data were digitized and CO2 emission calculations were made following the procedures discussed in Marland and Rotty (1984) and Boden et al. (1995). The gridded data present annual and monthly estimates. The annual data present a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2008. The monthly fossil-fuel CO2 emissions estimates from 1950-2008 provided in this database are derived from time series of global, regional, and national fossil-fuel CO2 emissions (Boden et al. 2011), the references therein, and the methodology described in Andres et al. (2011).
The data accessible here take these tabular, national, mass-emissions data and distribute them spatially on a one degree latitude by one degree longitude grid. The within-country spatial distribution is achieved through a fixed population distribution as reported in Andres et al. (1996). This presentation introduces the newly built database and web interface, reflecting the present state and functionality of the Fossil-Fuel CO2 Emissions Database and Exploration System as well as future plans for expansion.
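Emission calculations of the Marland-and-Rotty type reduce to a chain of factors: apparent fuel consumption times energy content times a carbon emission factor times the fraction oxidized. A sketch with illustrative factors (the published procedures use fuel- and year-specific values):

```python
def co2_emissions_tC(fuel_tonnes, energy_TJ_per_tonne, carbon_tC_per_TJ, frac_oxidized):
    """Carbon emissions (tonnes C) from fuel statistics, in the spirit of
    Marland and Rotty (1984): consumption x energy content x emission
    factor x fraction oxidized. All factors here are illustrative."""
    return fuel_tonnes * energy_TJ_per_tonne * carbon_tC_per_TJ * frac_oxidized

# Hypothetical example: 1 Mt of hard coal
emissions = co2_emissions_tC(
    fuel_tonnes=1.0e6,
    energy_TJ_per_tonne=0.0293,   # ~29.3 GJ/t
    carbon_tC_per_TJ=25.8,
    frac_oxidized=0.98,
)   # ~0.74 Mt C
```

Summing such terms over fuels and years, per nation, yields the mass-emissions tables that the gridded products then distribute spatially.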
A model-based correction for outcome reporting bias in meta-analysis.
Copas, John; Dwan, Kerry; Kirkham, Jamie; Williamson, Paula
2014-04-01
It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported, and so may omit, for example, a study that considered several outcome measures but only reported those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study (Kirkham and others, 2010. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias moves estimated treatment effects toward the null and hence leads to more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed effects approximation that can be used to give an initial estimate of the effect of ORB in practice.
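The likelihood-based ORB correction itself is beyond a few lines, but the uncorrected inverse-variance fixed-effects estimate that such a correction adjusts can be sketched with the standard textbook formula (this is generic meta-analysis code, not the authors' implementation):

```python
import math

def fixed_effects_pool(estimates, std_errors):
    """Inverse-variance weighted fixed-effects meta-analysis.

    Each study is weighted by 1/SE^2; the pooled standard error is
    the square root of the reciprocal of the total weight."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se
```

An ORB correction would then shift this pooled estimate toward the null and widen its interval, reflecting the studies that measured the outcome but did not report it.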
Santini, Luca; Cornulier, Thomas; Bullock, James M; Palmer, Stephen C F; White, Steven M; Hodgson, Jenny A; Bocedi, Greta; Travis, Justin M J
2016-07-01
Estimating population spread rates across multiple species is vital for projecting biodiversity responses to climate change. A major challenge is to parameterise spread models for many species. We introduce an approach that addresses this challenge, coupling a trait-based analysis with spatial population modelling to project spread rates for 15 000 virtual mammals with life histories that reflect those seen in the real world. Covariances among life-history traits are estimated from an extensive terrestrial mammal data set using Bayesian inference. We elucidate the relative roles of different life-history traits in driving modelled spread rates, demonstrating that any one alone will be a poor predictor. We also estimate that around 30% of mammal species have potential spread rates slower than the global mean velocity of climate change. This novel trait-space-demographic modelling approach has broad applicability for tackling many key ecological questions for which we have the models but are hindered by data availability. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new stratified sampling procedure is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Because the distribution of varroa mites on the sticky board is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Improvements in relative error are reported on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
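A circle-on-grid design of this kind implies a simple area-expansion estimator: count the mites inside the sampled circles and scale up by the inverse of the sampled area fraction. The sketch below shows only that expansion step under made-up numbers; the authors' actual procedure additionally exploits the spatial structure of the board.

```python
def estimate_total(counts_in_circles, circle_area, board_area):
    """Expand the count observed in sampled circles to a whole-board
    estimate, assuming the circles cover a known fraction of the board."""
    sampled_area = circle_area * len(counts_in_circles)
    fraction = sampled_area / board_area
    return sum(counts_in_circles) / fraction
```

With three circles of 10 cm² on a 100 cm² board, counting 10 mites in total yields an estimate of about 33 mites for the whole board.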
Biogeochemical evidence of vigorous mixing in the abyssal ocean
NASA Astrophysics Data System (ADS)
Lampitt, Richard S.; Popova, Ekaterina E.; Tyrrell, Toby
2003-05-01
The metabolic activities of biological communities living at the abyssal seabed create a strong source of nutrients and a sink for oxygen. If the published estimates of vertical mixing based on instantaneous microstructure measurements are correct, then near the abyssal seabed, away from rough topographic features, there should be enhanced concentrations of nitrate and phosphate and depletion of oxygen. Recent data on the vertical concentration profiles of inorganic nutrients and oxygen over the bottom 1000 m of the water column (World Ocean Circulation Experiment - WOCE) provide no such evidence. It is concluded that the effective vertical mixing rates are much more vigorous than previously indicated and may even be higher than estimates of average basin-scale rates based on temperature and salinity distributions. We propose that the enhanced mixing associated with rough topography influences the entire volume of the abyssal ocean on short time scales (e.g., one month to one year).
DEVELOPMENT OF A MULTIMODAL MONTE CARLO BASED TREATMENT PLANNING SYSTEM.
Kumada, Hiroaki; Takada, Kenta; Sakurai, Yoshinori; Suzuki, Minoru; Takata, Takushi; Sakurai, Hideyuki; Matsumura, Akira; Sakae, Takeji
2017-10-26
To establish boron neutron capture therapy (BNCT), the University of Tsukuba is developing a treatment device and peripheral devices required in BNCT, such as a treatment planning system. We are developing a new multimodal Monte Carlo based treatment planning system (developing code: Tsukuba Plan). Tsukuba Plan allows for dose estimation in proton therapy, X-ray therapy and heavy ion therapy in addition to BNCT because the system employs PHITS as the Monte Carlo dose calculation engine. Regarding BNCT, several verifications of the system are being carried out for its practical usage. The verification results demonstrate that Tsukuba Plan allows for accurate estimation of thermal neutron flux and gamma-ray dose as fundamental radiations of dosimetry in BNCT. In addition to the practical use of Tsukuba Plan in BNCT, we are investigating its application to other radiation therapies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
SpotCaliper: fast wavelet-based spot detection with accurate size estimation.
Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael
2016-04-15
SpotCaliper is a novel wavelet-based image-analysis software providing a fast automatic detection scheme for circular patterns (spots), combined with the precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. http://bigwww.epfl.ch/algorithms/spotcaliper/ zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Forecasting Construction Cost Index based on visibility graph: A network approach
NASA Astrophysics Data System (ADS)
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI lead to inaccurate estimations from time to time. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecasted based on link prediction. According to the experimental results, the proposed method shows satisfactory performance, since the error measures are acceptable. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. The results suggest that the proposed method can provide considerably accurate CCI predictions, contributing to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
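The series-to-network step can be illustrated with the standard natural visibility graph construction: two data points are linked if every point between them lies strictly below the straight line connecting them. This is a generic O(n²) sketch of that well-known construction, not the authors' implementation, and the link-prediction forecasting step is omitted.

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: node i is linked to
    node j (i < j) if every intermediate point lies strictly below the
    line segment joining (i, series[i]) and (j, series[j])."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges
```

For the toy series [1, 4, 2, 3], the tall second point blocks the view between the first and third points, so (0, 2) is not an edge, while (1, 3) is.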
An empirical examination of WISE/NEOWISE asteroid analysis and results
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2017-10-01
Observations made by the WISE space telescope and subsequent analysis by the NEOWISE project represent the largest corpus of asteroid data to date, describing the diameter, albedo, and other properties of the ~164,000 asteroids in the collection. I present a critical reanalysis of the WISE observational data and of NEOWISE results published in numerous papers and in the JPL Planetary Data System (PDS). This analysis reveals shortcomings and a lack of clarity, both in the original analysis and in the presentation of results. The procedures used to generate NEOWISE results fall short of established thermal modelling standards. Rather than using a uniform protocol, 10 modelling methods were applied to 12 combinations of WISE band data. Over half the NEOWISE results are based on a single band of data. Most NEOWISE curve fits are of poor quality, frequently missing many or all of the data points. About 30% of the single-band results miss all the data; 43% of the results derived from the most common multiple-band combinations miss all the data in at least one band. The NEOWISE data processing procedures rely on inconsistent assumptions and introduce bias by systematically discarding much of the original data. I show that the true uncertainties of the WISE observational data are ~1.2 to 1.9 times larger than previously described, and that the error estimates do not fit a normal distribution. These issues call into question the validity of the NEOWISE Monte-Carlo error analysis. Comparing published NEOWISE diameters to published estimates using radar, occultation, or spacecraft measurements (ROS) reveals 150 asteroids for which the NEOWISE diameters were copied exactly from the ROS source. My findings show that the accuracy of diameter estimates in the NEOWISE results depends heavily on the choice of data bands and model. Systematic errors in the diameter estimates are much larger than previously described.
Systematic errors for diameters in the PDS range from -3% to +27%. Random errors range from -14% to +19% when using all four WISE bands, and from -45% to +74% in cases using only the W2 band. The results presented here show that much work remains to be done towards understanding asteroid data from WISE/NEOWISE.
Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report
2016-01-01
This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.
A review of life expectancy and infant mortality estimations for Australian Aboriginal people
2014-01-01
Background Significant variation exists in published Aboriginal mortality and life expectancy (LE) estimates due to differing and evolving methodologies required to correct for inadequate recording of Aboriginality in death data, under-counting of Aboriginal people in population censuses, and unexplained growth in the Aboriginal population attributed to changes in the propensity of individuals to identify as Aboriginal at population censuses. The objective of this paper is to analyse variation in reported Australian Aboriginal mortality in terms of LE and infant mortality rates (IMR), compared with all Australians. Methods Published data for Aboriginal LE and IMR were obtained and analysed for data quality and method of estimation. Trends in reported LE and IMR estimates were assessed and compared with those in the entire Australian population. Results LE estimates derived from different methodologies vary by as much as 7.2 years for the same comparison period. Indirect methods for estimating Aboriginal LE have produced LE estimates sensitive to small changes in underlying assumptions, some of which are subject to circular reasoning. Most indirect methods appear to under-estimate Aboriginal LE. Estimated LE gaps between Aboriginal people and the overall Australian population have varied between 11 and 20 years. Latest mortality estimates, based on linking census and death data, are likely to over-estimate Aboriginal LE. Temporal LE changes by each methodology indicate that Aboriginal LE has improved at rates similar to the Australian population overall. Consequently the gap in LE between Aboriginal people and the total Australian population appears to be unchanged since the early 1980s, and at the end of the first decade of the 21st century remains at least 11–12 years. 
In contrast, focusing on the 1990–2010 period, the Aboriginal IMR declined steeply over 2001–08, from more than 12 to around 8 deaths per 1,000 live births, the same level as Australia overall in 1993–95. The IMR gap between Aboriginal people and the total Australian population, while still unacceptable, has declined considerably, from over 8 before 2000 to around 4 per 1,000 live births by 2008. Conclusions Regardless of the estimation method used, mortality and LE gaps between Aboriginal and non-Aboriginal people are substantial, but remain difficult to estimate accurately. PMID:24383435
Al-Gindan, Yasmin Y.; Hankey, Catherine R.; Govan, Lindsay; Gallagher, Dympna; Heymsfield, Steven B.; Lean, Michael E. J.
2017-01-01
The reference organ-level body composition measurement method is MRI. Practical estimations of total adipose tissue mass (TATM), total adipose tissue fat mass (TATFM) and total body fat are valuable for epidemiology, but validated prediction equations based on MRI are not currently available. We aimed to derive and validate new anthropometric equations to estimate MRI-measured TATM/TATFM/total body fat and compare them with existing prediction equations using older methods. The derivation sample included 416 participants (222 women), aged between 18 and 88 years with BMI between 15.9 and 40.8 kg/m2. The validation sample included 204 participants (110 women), aged between 18 and 86 years with BMI between 15.7 and 36.4 kg/m2. Both samples included mixed ethnic/racial groups. All the participants underwent whole-body MRI to quantify TATM (dependent variable) and anthropometry (independent variables). Prediction equations developed using stepwise multiple regression were further investigated for agreement and bias before validation in separate data sets. The simplest equations with optimal R2 and Bland–Altman plots demonstrated good agreement without bias in the validation analyses: men: TATM (kg) = 0.198 weight (kg) + 0.478 waist (cm) − 0.147 height (cm) − 12.8 (validation: R2 0.79, CV = 20 %, standard error of the estimate (SEE) = 3.8 kg) and women: TATM (kg) = 0.789 weight (kg) + 0.0786 age (years) − 0.342 height (cm) + 24.5 (validation: R2 0.84, CV = 13 %, SEE = 3.0 kg). Published anthropometric prediction equations, based on MRI and computed tomographic scans, correlated strongly with MRI-measured TATM (R2 0.70–0.82). Estimated TATFM correlated well with published prediction equations for total body fat based on underwater weighing (R2 0.70–0.80), with mean bias of 2.5–4.9 kg, correctable with log-transformation in most equations.
In conclusion, new equations, using simple anthropometric measurements, estimated MRI-measured TATM with correlations and agreements suitable for use in groups and populations across a wide range of fatness. PMID:26435103
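The two published prediction equations translate directly into code; the example inputs in the test are arbitrary, and the coefficients are exactly those reported in the abstract.

```python
def tatm_men(weight_kg, waist_cm, height_cm):
    """Men: TATM (kg) = 0.198*weight + 0.478*waist - 0.147*height - 12.8"""
    return 0.198 * weight_kg + 0.478 * waist_cm - 0.147 * height_cm - 12.8

def tatm_women(weight_kg, age_years, height_cm):
    """Women: TATM (kg) = 0.789*weight + 0.0786*age - 0.342*height + 24.5"""
    return 0.789 * weight_kg + 0.0786 * age_years - 0.342 * height_cm + 24.5
```

Note the two sexes use different predictors: waist circumference for men, age for women.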
Modeling and simulation of soft sensor design for real-time speed and position estimation of PMSM.
Omrane, Ines; Etien, Erik; Dib, Wissam; Bachelier, Olivier
2015-07-01
This paper deals with the design of a speed soft sensor for a permanent magnet synchronous motor. At high speed, a model-based soft sensor is used and gives excellent results. However, it fails to deliver satisfactory performance at zero or very low speed. A high-frequency soft sensor is used at low speed. We suggest using the model-based soft sensor together with the high-frequency soft sensor to overcome the limitations of the former in the low-speed range. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Zhong, Zhixiong; Zhu, Yanzheng; Ahn, Choon Ki
2018-07-01
In this paper, we address the problem of reachable set estimation for continuous-time Takagi-Sugeno (T-S) fuzzy systems subject to unknown output delays. Based on the reachable set concept, a new controller design method is also discussed for such systems. An effective method is developed to attenuate the negative impact of the unknown output delays, which can degrade the performance/stability of systems. First, an augmented fuzzy observer is proposed to enable synchronous estimation of the system state and of the disturbance term owing to the unknown output delays, which ensures that the reachable set of the estimation error is bounded via the intersection operation of ellipsoids. Then, a compensation technique is employed to eliminate the influence on system performance stemming from the unknown output delays. Finally, the effectiveness and correctness of the obtained theories are verified by the tracking control of autonomous underwater vehicles. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Stochastic stability of sigma-point Unscented Predictive Filter.
Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong
2015-07-01
In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation, going beyond conventional sigma-point filters, which are confined to the Kalman filter framework. To facilitate the new method, the algorithm flow of the UPF is given first. Then, theoretical analyses demonstrate that the estimation accuracy of the model error and of the system state for the UPF is higher than that of the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance stays stable if the system's initial estimation error, disturbing noise terms, and model error are small enough, which is the core of the UPF theory. All of the results have been demonstrated by numerical simulations for a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
French, Michael T.; Popovici, Ioana; Tapsell, Lauren
2008-01-01
Federal, State, and local government agencies require current and accurate cost information for publicly funded substance abuse treatment programs to guide program assessments and reimbursement decisions. The Center for Substance Abuse Treatment (CSAT) published a list of modality-specific cost bands for this purpose in 2002. However, the upper and lower values in these ranges are so wide that they offer little practical guidance for funding agencies. Thus, the dual purpose of this investigation was to assemble the most current and comprehensive set of economic cost estimates from the readily-available literature and then use these estimates to develop updated modality-specific cost bands for more reasonable reimbursement policies. Although cost estimates were scant for some modalities, the recommended cost bands are based on the best available economic research, and we believe these new ranges will be more useful and pertinent for all stakeholders of publicly-funded substance abuse treatment. PMID:18294803
Kovalchik, Stephanie A; Cumberland, William G
2012-05-01
Subgroup analyses are important to medical research because they shed light on the heterogeneity of treatment effects. A treatment-covariate interaction in an individual patient data (IPD) meta-analysis is the most reliable means to estimate how a subgroup factor modifies a treatment's effectiveness. However, owing to the challenges in collecting participant data, an approach based on aggregate data might be the only option. In these circumstances, it would be useful to assess the relative efficiency and power loss of a subgroup analysis without patient-level data. We present methods that use aggregate data to estimate the standard error of an IPD meta-analysis' treatment-covariate interaction for regression models of a continuous or dichotomous patient outcome. Numerical studies indicate that the estimators have good accuracy. An application to a previously published meta-regression illustrates the practical utility of the methodology. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Validation of equations for pleural effusion volume estimation by ultrasonography.
Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed
2017-12-01
To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of effusion ( H ), distance between collapsed lung and chest wall ( C ) and distance between lung and diaphragm ( D ). Cases whose effusion was aspirated to dryness were included and drained volume was recorded. Intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was ( H + D ) × 70 (ICC 0.83). The simplest and yet accurate equation was H × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating distance between lung base and diaphragm into estimation improves accuracy from 79% with the first method to 83% with the latter.
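The two best-performing equations from this study translate directly to code (measurements in cm, estimated volumes in mL); the example values in the test are arbitrary.

```python
def effusion_volume_hd(h_cm, d_cm):
    """Volume ≈ (H + D) * 70: lateral effusion height plus
    lung-diaphragm distance; the most accurate tested equation (ICC 0.83)."""
    return (h_cm + d_cm) * 70

def effusion_volume_h(h_cm):
    """Volume ≈ H * 100: height alone; the simplest accurate
    equation (ICC 0.79)."""
    return h_cm * 100
```

For a 5 cm effusion with a 3 cm lung-diaphragm distance, the two equations give 560 mL and 500 mL respectively.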
Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B
2017-11-28
Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that it is difficult to obtain reliable estimates of uncertainties from standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra, leading to rigorous estimates of uncertainties of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model which describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. This approach was validated with the results of an experiment using Bonner spheres carried out at the experimental hall of the OncoRay proton therapy facility in Dresden. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
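The general idea of Bayesian parameter estimation for a parameterized spectrum model can be illustrated with a toy one-parameter grid posterior: candidate parameter values are scored by a Gaussian likelihood against the sphere readings. The model, names and numbers below are purely illustrative, not the authors' actual parameterization.

```python
import numpy as np

def grid_posterior(amplitudes, response, readings, sigma):
    """Posterior over one amplitude parameter of a linear model
    readings ≈ a * response, with Gaussian noise and a flat prior."""
    log_like = np.array([
        -0.5 * np.sum(((readings - a * response) / sigma) ** 2)
        for a in amplitudes
    ])
    post = np.exp(log_like - log_like.max())  # normalize for stability
    return post / post.sum()

amplitudes = np.linspace(0.0, 4.0, 81)
response = np.array([1.0, 2.0, 3.0])   # detector response to a unit source
readings = 2.0 * response              # noiseless readings from true a = 2
posterior = grid_posterior(amplitudes, response, readings, 0.1)
```

The width of the posterior directly yields the parameter (and hence dose) uncertainty, which is the advantage over standard unfolding.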
[Dental manpower prediction in Israel for 2017].
Vered, Y; Zini, A; Mann, J
2010-07-01
A recent study published by the authors indicated that, according to the Israeli Central Bureau of Statistics, in 2008 Israel had 5800 active dentists, a figure well below that published by the Ministry of Health. Based on this figure, using the manpower-to-population ratio method, the following results were obtained: the predicted number of dentists in 2017 would be 6090, based on the estimated number of Israeli graduates, the estimated number of dentists who would arrive in Israel as immigrants or as Israelis who studied abroad, an attrition rate of 3%, and the assumption that the number of dentists leaving the country is negligible. Table 2, based on the manpower-to-population ratio, indicates that by 2017 Israel would have 1 dentist per 1400 population, a ratio which is still far above what many countries present, but high for Israel. This might reflect a dramatic change, from employment in public clinics back to private practices. The results clearly indicate that a shortage of dentists is predicted in the near future, and major brainstorming is urgently required to evaluate these results.
Stevenson, Matt; Pandor, Abdullah; Martyn-St James, Marrissa; Rafia, Rachid; Uttley, Lesley; Stevens, John; Sanderson, Jean; Wong, Ruth; Perkins, Gavin D; McMullan, Ronan; Dark, Paul
2016-06-01
Sepsis can lead to multiple organ failure and death. Timely and appropriate treatment can reduce in-hospital mortality and morbidity. To determine the clinical effectiveness and cost-effectiveness of three tests [LightCycler SeptiFast Test MGRADE(®) (Roche Diagnostics, Risch-Rotkreuz, Switzerland); SepsiTest(TM) (Molzym Molecular Diagnostics, Bremen, Germany); and the IRIDICA BAC BSI assay (Abbott Diagnostics, Lake Forest, IL, USA)] for the rapid identification of bloodstream bacteria and fungi in patients with suspected sepsis compared with standard practice (blood culture with or without matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry). Thirteen electronic databases (including MEDLINE, EMBASE and The Cochrane Library) were searched from January 2006 to May 2015 and supplemented by hand-searching relevant articles. A systematic review and meta-analysis of effectiveness studies were conducted. A review of published economic analyses was undertaken and a de novo health economic model was constructed. A decision tree was used to estimate the costs and quality-adjusted life-years (QALYs) associated with each test; all other parameters were estimated from published sources. The model was populated with evidence from the systematic review or individual studies, if this was considered more appropriate (base case 1). In a secondary analysis, estimates (based on experience and opinion) from seven clinicians regarding the benefits of earlier test results were sought (base case 2). A NHS and Personal Social Services perspective was taken, and costs and benefits were discounted at 3.5% per annum. Scenario analyses were used to assess uncertainty. For the review of diagnostic test accuracy, 62 studies of varying methodological quality were included.
A meta-analysis of 54 studies comparing SeptiFast with blood culture found that SeptiFast had an estimated summary specificity of 0.86 [95% credible interval (CrI) 0.84 to 0.89] and sensitivity of 0.65 (95% CrI 0.60 to 0.71). Four studies comparing SepsiTest with blood culture found that SepsiTest had an estimated summary specificity of 0.86 (95% CrI 0.78 to 0.92) and sensitivity of 0.48 (95% CrI 0.21 to 0.74), and four studies comparing IRIDICA with blood culture found that IRIDICA had an estimated summary specificity of 0.84 (95% CrI 0.71 to 0.92) and sensitivity of 0.81 (95% CrI 0.69 to 0.90). Owing to the deficiencies in study quality for all interventions, diagnostic accuracy data should be treated with caution. No randomised clinical trial evidence was identified that indicated that any of the tests significantly improved key patient outcomes, such as mortality or duration in an intensive care unit or hospital. Base case 1 estimated that none of the three tests provided a benefit to patients compared with standard practice and thus all tests were dominated. In contrast, in base case 2 it was estimated that all cost per QALY-gained values were below £20,000; the IRIDICA BAC BSI assay had the highest estimated incremental net benefit, but results from base case 2 should be treated with caution as these are not evidence based. Robust data to accurately assess the clinical effectiveness and cost-effectiveness of the interventions are currently unavailable. The clinical effectiveness and cost-effectiveness of the interventions cannot be reliably determined with the current evidence base. Appropriate studies, which allow information from the tests to be implemented in clinical practice, are required. This study is registered as PROSPERO CRD42015016724. The National Institute for Health Research Health Technology Assessment programme.
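The "cost per QALY gained" comparisons above rest on the incremental cost-effectiveness ratio (ICER), judged against the usual £20,000-per-QALY threshold. A minimal sketch with made-up numbers:

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per extra
    QALY of the new test versus standard practice."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical: new test costs £2,000 more and gains 0.1 QALYs,
# giving an ICER of £20,000/QALY, right at the threshold.
example = icer(12000.0, 10000.0, 5.1, 5.0)
```

A strategy is "dominated" (as in base case 1) when it costs more and yields no QALY gain, so no finite favourable ICER exists.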
Garrison, Louis P; Lewin, Jack; Young, Christopher H; Généreux, Philippe; Crittendon, Janna; Mann, Marita R; Brindis, Ralph G
2015-01-01
Coronary artery calcification (CAC) is a well-established risk factor for the occurrence of adverse ischemic events. However, the economic impact of the presence of CAC is unknown. Through an economic model analysis, we sought to estimate the incremental impact of CAC on medical care costs and patient mortality for de novo percutaneous coronary intervention (PCI) patients in the 2012 cohort of the Medicare elderly (≥65) population. This aggregate burden-of-illness study is incidence-based, focusing on cost and survival outcomes for an annual Medicare cohort based on the recently introduced ICD9 code for CAC. The cost analysis uses a one-year horizon, and the survival analysis considers lost life years and their economic value. For calendar year 2012, an estimated 200,945 index (de novo) PCI procedures were performed in this cohort. An estimated 16,000 Medicare beneficiaries (7.9%) were projected to have had severe CAC, generating an additional cost in the first year following their PCI of $3500, on average, or $56 million in total. In terms of mortality, the model projects that an additional 397 deaths would be attributable to severe CAC in 2012, resulting in 3770 lost life years, representing an estimated loss of about $377 million, when valuing lost life years at $100,000 each. These model-based CAC estimates, considering both moderate and severe CAC patients, suggest an annual burden of illness approaching $1.3 billion in this PCI cohort. The potential clinical and cost consequences of CAC warrant additional clinical and economic attention not only on PCI strategies for particular patients but also on reporting and coding to achieve better evidence-based decision-making. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
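The mortality valuation in this abstract can be reproduced with simple arithmetic; note that the ~9.5 lost life-years per death is implied by, not stated in, the reported figures.

```python
# Figures reported for the 2012 Medicare elderly PCI cohort
deaths_attributable = 397        # deaths attributed to severe CAC
lost_life_years = 3770           # total lost life years
value_per_life_year = 100_000    # dollars per lost life year

mortality_burden = lost_life_years * value_per_life_year   # $377 million
implied_years_per_death = lost_life_years / deaths_attributable
```

Adding the reported $56 million first-year cost shows why the total burden is dominated by the mortality valuation.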
Revised spatially distributed global livestock emissions
NASA Astrophysics Data System (ADS)
Asrar, G.; Wolf, J.; West, T. O.
2015-12-01
Livestock play an important role in agricultural carbon cycling through consumption of biomass and emissions of methane. Quantification and spatial distribution of methane and carbon dioxide produced by livestock is needed to develop bottom-up estimates for carbon monitoring. These estimates serve as stand-alone international emissions estimates, as input to global emissions modeling, and as comparisons or constraints to flux estimates from atmospheric inversion models. Recent results for the US suggest that the 2006 IPCC default coefficients may underestimate livestock methane emissions. In this project, revised coefficients were calculated for cattle and swine in all global regions, based on reported changes in body mass, quality and quantity of feed, milk production, and management of living animals and manure for these regions. New estimates of livestock methane and carbon dioxide emissions were calculated using the revised coefficients and global livestock population data. Spatial distribution of population data and associated fluxes was conducted using the MODIS Land Cover Type 5, version 5.1 (i.e. MCD12Q1 data product), and a previously published downscaling algorithm for reconciling inventory and satellite-based land cover data at 0.05 degree resolution. Preliminary results for 2013 indicate greater emissions than those calculated using the IPCC 2006 coefficients. Global total enteric fermentation methane increased by 6%, while manure management methane increased by 38%, with variation among species and regions resulting in improved spatial distributions of livestock emissions. These new estimates of total livestock methane are comparable to other recently reported studies for the entire US and the State of California. These new regional/global estimates will improve the ability to reconcile top-down and bottom-up estimates of methane production as well as provide updated global estimates for use in development and evaluation of Earth system models.
The economic burden of child sexual abuse in the United States.
Letourneau, Elizabeth J; Brown, Derek S; Fang, Xiangming; Hassan, Ahmed; Mercy, James A
2018-05-01
The present study provides an estimate of the U.S. economic impact of child sexual abuse (CSA). Costs of CSA were measured from the societal perspective and include health care costs, productivity losses, child welfare costs, violence/crime costs, special education costs, and suicide death costs. We separately estimated quality-adjusted life year (QALY) losses. For each category, we used the best available secondary data to develop cost per case estimates. All costs were estimated in U.S. dollars and adjusted to the reference year 2015. Based on an estimated 20 new fatal cases and 40,387 new substantiated nonfatal cases of CSA in 2015, the lifetime economic burden of CSA is approximately $9.3 billion; the lifetime cost for victims of fatal CSA is on average $1,128,334 per female victim and $1,482,933 per male victim, and the average lifetime cost for victims of nonfatal CSA is $282,734 per female victim. For male victims of nonfatal CSA, there was insufficient information on productivity losses, contributing to a lower average estimated lifetime cost of $74,691 per male victim. If we included QALYs, these costs would increase by approximately $40,000 per victim. With the exception of male productivity losses, all estimates were based on robust, replicable incidence-based costing methods. The availability of accurate, up-to-date estimates should contribute to policy analysis, facilitate comparisons with other public health problems, and support future economic evaluations of CSA-specific policy and practice. In particular, we hope the availability of credible and contemporary estimates will support increased attention to primary prevention of CSA. Copyright © 2018. Published by Elsevier Ltd.
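The headline figure is incidence times per-case lifetime cost. The abstract does not state the female/male split of nonfatal cases, so the 75% female share below is an assumption chosen purely to show the arithmetic; with it, the total lands near the reported $9.3 billion.

```python
# Back-of-envelope reconstruction of the lifetime-burden arithmetic.
# The 75% female share of cases is an ASSUMPTION for illustration only;
# the per-case costs and case counts are taken from the abstract.

FATAL_CASES = 20
NONFATAL_CASES = 40_387
COST_FATAL_FEMALE = 1_128_334   # USD, lifetime, per victim
COST_FATAL_MALE = 1_482_933
COST_NONFATAL_FEMALE = 282_734
COST_NONFATAL_MALE = 74_691

female_share = 0.75  # assumed

fatal_cost = FATAL_CASES * (female_share * COST_FATAL_FEMALE
                            + (1 - female_share) * COST_FATAL_MALE)
nonfatal_cost = NONFATAL_CASES * (female_share * COST_NONFATAL_FEMALE
                                  + (1 - female_share) * COST_NONFATAL_MALE)
total = fatal_cost + nonfatal_cost
print(f"~${total / 1e9:.1f} billion")  # near the reported $9.3 billion
```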
Deschamps, Kevin; Eerdekens, Maarten; Desmet, Dirk; Matricali, Giovanni Arnoldo; Wuite, Sander; Staes, Filip
2017-08-16
Recent studies that estimated foot segment kinetic patterns produced inconclusive data and did not dissociate the kinetics of the Chopart and Lisfranc joints. The current study therefore aimed to reproduce independent, recently published three-segment foot kinetic data (Study 1) and, in a second stage, to expand the estimation to a four-segment model (Study 2). Concerning the reproducibility study, two recently published three-segment foot models (Bruening et al., 2014; Saraswat et al., 2014) were reproduced and kinetic parameters were incorporated in order to calculate joint moments and powers of paediatric cohorts during gait. Ground reaction forces were measured with an integrated force/pressure plate measurement set-up and a recently published proportionality scheme was applied to determine subarea total ground reaction forces. Regarding Study 2, moments and powers were estimated with respect to the Instituto Ortopedico Rizzoli four-segment model. The proportionality scheme was expanded in this study and the impact of joint centre location on kinetic data was evaluated. Findings related to Study 1 showed in general good agreement with the kinetic data published by Bruening et al. (2014). Contrarily, the peak ankle, midfoot and hallux powers published by Saraswat et al. (2014) are disputed. Findings of Study 2 revealed that the Chopart joint encompasses both power absorption and generation, whereas the Lisfranc joint mainly contributes to power generation. The results highlight the necessity for further studies in the field of foot kinetic models and provide a first estimation of the kinetic behaviour of the Lisfranc joint. Copyright © 2017 Elsevier Ltd. All rights reserved.
Piñol, C
2016-05-01
The BENEFIT study has demonstrated the benefits of early treatment with interferon beta 1b (IFNβ-1b). The objective of this study was to estimate the efficiency of early vs delayed IFNβ-1b treatment in patients with clinically isolated syndrome (CIS) suggestive of multiple sclerosis (MS) in Spain. A Markov model reflecting the social perspective was developed with time horizons ranging from 2 years to lifetime. A cohort of 1000 patients with CIS, whose health status had been measured on the Expanded Disability Symptom Scale (EDSS), included patients who received early IFNβ-1b treatment and those who did not. Data from the BENEFIT study were used to model EDSS progression and transitions to MS. Costs were estimated from published literature. Patient utilities were derived from EQ-5D data and published data. Mortality was estimated using life tables and EDSS data. Costs (€ at 2013 rates) and outcomes were discounted at 3% per annum. A probabilistic sensitivity analysis was performed. In the base case, both the incremental cost utility ratio (ICUR) and the incremental cost effectiveness ratio (ICER) of IFNβ-1b versus no treatment were dominant (more effective and less costly) from a social perspective. From the perspective of the Spanish Health System, the ICUR was € 40,702/QALY and the ICER was € 13/relapse avoided. Early treatment with IFNβ-1b after a CIS versus delayed treatment is efficient from a social perspective, but it may not be efficient from the perspective of the NHS which does not take non health-related costs into account. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
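The Markov cohort machinery behind such an analysis is compact: a transition matrix moves the cohort between health states each cycle, and discounted costs and QALYs accumulate. The sketch below uses a three-state toy model with invented transition probabilities, costs, and utilities (not the BENEFIT/EDSS model), and deliberately omits drug acquisition costs; the negative ratio it produces illustrates what "dominant" means, not the study's actual result.

```python
import numpy as np

# Annual transition matrix over states CIS, MS, dead (illustrative values only)
P = np.array([[0.90, 0.10, 0.00],
              [0.00, 0.93, 0.07],
              [0.00, 0.00, 1.00]])
cost = np.array([2_000.0, 12_000.0, 0.0])   # annual cost per state (illustrative EUR)
utility = np.array([0.80, 0.55, 0.00])      # QALY weight per state (illustrative)

def run(start, years=20, disc=0.03):
    """Accumulate discounted costs and QALYs for a cohort distribution."""
    state = np.asarray(start, dtype=float)
    c = q = 0.0
    for t in range(years):
        d = (1.0 + disc) ** -t           # 3% per annum discounting
        c += d * (state @ cost)
        q += d * (state @ utility)
        state = state @ P
    return c, q

# Assumed effect of early treatment: more of the cohort starts (and stays) in CIS
c_tx, q_tx = run([1.0, 0.0, 0.0])
c_no, q_no = run([0.7, 0.3, 0.0])
icur = (c_tx - c_no) / (q_tx - q_no)   # negative: less costly AND more effective
```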
Tsagkari, Mirela; Couturier, Jean-Luc; Kokossis, Antonis; Dubois, Jean-Luc
2016-09-08
Biorefineries offer a promising alternative to fossil-based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital-intensive projects that involve state-of-the-art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well-documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early-stage capital cost estimation tool suitable for biorefinery processes. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
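One of the classic rapid methods reviewed in this literature is capacity-exponent scaling (the "six-tenths rule"), optionally combined with a cost index such as CEPCI to adjust for inflation. The sketch below uses invented plant costs and capacities to show the arithmetic; it is a generic illustration, not one of the six methods evaluated in the study.

```python
# Capacity-exponent scaling: C2 = C1 * (S2 / S1)**n, with n ~ 0.6 for many
# process plants. Index ratio adjusts a historical cost to the estimate year.
# All numbers are illustrative.

def scaled_cost(known_cost, known_capacity, new_capacity, exponent=0.6,
                index_known=1.0, index_new=1.0):
    return (known_cost * (new_capacity / known_capacity) ** exponent
            * (index_new / index_known))

# A 200 kt/yr reference plant cost $100M; estimate a 400 kt/yr plant:
est = scaled_cost(100e6, 200, 400)
print(round(est / 1e6, 1))  # ~151.6, not 200: economies of scale
```

The discrepancies the study reports arise exactly here: the exponent and the reference data are calibrated on conventional process plants, and extrapolating them to first-of-a-kind biorefinery technologies can violate those implicit assumptions.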
Revealing nonergodic dynamics in living cells from a single particle trajectory
NASA Astrophysics Data System (ADS)
Lanoiselée, Yann; Grebenkov, Denis S.
2016-05-01
We propose improved ergodicity and mixing estimators to identify nonergodic dynamics from a single particle trajectory. The estimators are based on the time-averaged characteristic function of the increments and can thus capture additional information on the process as compared to the conventional time-averaged mean-square displacement. The estimators are first investigated and validated for several models of anomalous diffusion, such as ergodic fractional Brownian motion and diffusion on percolating clusters, and nonergodic continuous-time random walks and scaled Brownian motion. The estimators are then applied to two sets of earlier published trajectories of mRNA molecules inside live Escherichia coli cells and of Kv2.1 potassium channels in the plasma membrane. These statistical tests did not reveal nonergodic features in the former set, while some trajectories of the latter set could be classified as nonergodic. Time averages along such trajectories are thus not representative and may be strongly misleading. Since the estimators do not rely on ensemble averages, the nonergodic features can be revealed separately for each trajectory, providing a more flexible and reliable analysis of single-particle tracking experiments in microbiology.
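The core ingredient named here, the time-averaged characteristic function of increments, is easy to compute along one trajectory. The sketch below checks it against the known ensemble form for ordinary Brownian motion (an ergodic reference case); it is only the building block, not the full estimators of the paper.

```python
import numpy as np

def time_avg_char_func(x, lag, k):
    """Time-averaged characteristic function of increments along one trajectory:
    (1/(N-lag)) * sum_t exp(i * k * (x[t+lag] - x[t]))."""
    inc = x[lag:] - x[:-lag]
    return np.mean(np.exp(1j * k * inc))

# Brownian motion with diffusion coefficient D: increments ~ N(0, 2*D*dt)
rng = np.random.default_rng(0)
D, dt, n = 0.5, 0.01, 200_000
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))

k, lag = 1.0, 10
estimate = time_avg_char_func(x, lag, k).real
# Ensemble prediction for Gaussian increments: exp(-k^2 * sigma^2 / 2),
# with sigma^2 = 2 * D * lag * dt
theory = np.exp(-k ** 2 * D * lag * dt)
```

For an ergodic process the two quantities agree at long times; systematic disagreement across lags is the kind of signature the proposed estimators are built to detect.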
Moreo, Michael T.; Justet, Leigh
2008-01-01
Ground-water withdrawal estimates from 1913 through 2003 for the Death Valley regional ground-water flow system are compiled in an electronic database to support a regional, three-dimensional, transient ground-water flow model. This database updates a previously published database that compiled estimates of ground-water withdrawals for 1913-1998. The same methodology is used to construct each database. The primary differences between the two databases are an additional 5 years of ground-water withdrawal data, restriction of well locations in the updated database to the Death Valley regional ground-water flow system model boundary, and application rates that are 0 to 1.5 feet per year lower than the original estimates. The lower application rates result from revised estimates of crop consumptive use, which are based on updated estimates of potential evapotranspiration. In 2003, about 55,700 acre-feet of ground water was pumped in the DVRFS, of which 69 percent was used for irrigation, 13 percent for domestic, and 18 percent for public supply, commercial, and mining activities.
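The irrigation component of such a database is typically estimated as irrigated acreage times an application rate (depth of water per year), with the rate derived from crop consumptive use and hence from potential evapotranspiration. The numbers below are illustrative, not DVRFS values; they only show why a lower application rate propagates directly into lower withdrawal estimates.

```python
# Withdrawal (acre-feet/yr) = irrigated area (acres) x application rate (ft/yr).
# Values are illustrative placeholders, not data from the DVRFS database.

def irrigation_withdrawal_af(acres, application_rate_ft_per_yr):
    """Annual irrigation withdrawal in acre-feet."""
    return acres * application_rate_ft_per_yr

original = irrigation_withdrawal_af(8_000, 5.0)
revised = irrigation_withdrawal_af(8_000, 4.0)  # revised ET lowers the rate by 1 ft/yr
print(original - revised)  # acre-feet/yr removed from the estimate
```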
Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Shouichi; Matsunaga, Yuta; Haba, Tomonobu; Kawaguchi, Ai; Daioku, Tomihiko; Toyama, Hiroshi; Kato, Ryoichi
2017-05-01
Adequate dose management during computed tomography is important. In the present study, the dosimetric application software ImPACT was added to a functional calculator of the size-specific dose estimate and was part of the scan settings for the auto exposure control (AEC) technique. This study aimed to assess the practicality and accuracy of the modified ImPACT software for dose estimation. We compared the conversion factors identified by the software with the values reported by the American Association of Physicists in Medicine Task Group 204, and we noted similar results. Moreover, doses were calculated with the AEC technique and a fixed-tube current of 200 mA for the chest-pelvis region. The modified ImPACT software could estimate each organ dose, which was based on the modulated tube current. The ability to perform beneficial modifications indicates the flexibility of the ImPACT software. The ImPACT software can be further modified for estimation of other doses. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
ADHD and math - The differential effect on calculation and estimation.
Ganor-Stern, Dana; Steinhorn, Ofir
2018-05-31
Adults with ADHD were compared to controls when solving multiplication problems exactly and when estimating the results of multidigit multiplication problems relative to reference numbers. The ADHD participants were slower than controls in the exact calculation and in the estimation tasks, but not less accurate. The ADHD participants were similar to controls in showing enhanced accuracy and speed for smaller problem sizes, for trials in which the reference numbers were smaller (vs. larger) than the exact answers and for reference numbers that were far (vs. close) from the exact answer. The two groups similarly used the approximated calculation and the sense of magnitude strategies. They differed however in strategy execution, mainly of the approximated calculation strategy, which requires working memory resources. The increase in reaction time associated with using the approximated calculation strategy was larger for the ADHD compared to the control participants. Thus, ADHD seems to selectively impair calculation processes in estimation tasks that rely on working memory, but it does not hamper estimation skills that are based on sense of magnitude. The educational implications of these findings are discussed. Copyright © 2018. Published by Elsevier B.V.
Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares
NASA Astrophysics Data System (ADS)
Heidari, Manoutchehr; Moench, Allen
1997-05-01
Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.
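The paper's forward model is Neuman's partially penetrating well solution, which is involved; as a hedged stand-in, the sketch below fits the simpler classic Theis solution to synthetic drawdown data using bounded nonlinear least squares, which illustrates the same estimation structure (minimize squared residuals subject to parameter bounds). All parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import exp1

def theis_drawdown(t, T, S, Q=500.0, r=30.0):
    """Theis solution: s = Q/(4*pi*T) * W(u), u = r^2 * S / (4*T*t),
    with W(u) the well function (exponential integral E1)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic "observed" drawdown from known parameters plus 1% noise
t = np.logspace(-2, 1, 40)            # days
true_T, true_S = 120.0, 2e-4          # transmissivity, storativity (illustrative)
rng = np.random.default_rng(1)
obs = theis_drawdown(t, true_T, true_S) * (1 + 0.01 * rng.normal(size=t.size))

# Bounded NLS: minimize squared residuals subject to bounds on the parameters
fit = least_squares(
    lambda p: theis_drawdown(t, p[0], p[1]) - obs,
    x0=[50.0, 1e-3],
    bounds=([1.0, 1e-6], [1e4, 1e-1]),
)
T_hat, S_hat = fit.x
```

With clean synthetic data the fit recovers the generating parameters closely, which mirrors the paper's first validation step before moving to field data.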
Chao, Anne; Chiu, Chun-Huo; Colwell, Robert K; Magnago, Luiz Fernando S; Chazdon, Robin L; Gotelli, Nicholas J
2017-11-01
Estimating the species, phylogenetic, and functional diversity of a community is challenging because rare species are often undetected, even with intensive sampling. The Good-Turing frequency formula, originally developed for cryptography, estimates in an ecological context the true frequencies of rare species in a single assemblage based on an incomplete sample of individuals. Until now, this formula has never been used to estimate undetected species, phylogenetic, and functional diversity. Here, we first generalize the Good-Turing formula to incomplete sampling of two assemblages. The original formula and its two-assemblage generalization provide a novel and unified approach to notation, terminology, and estimation of undetected biological diversity. For species richness, the Good-Turing framework offers an intuitive way to derive the non-parametric estimators of the undetected species richness in a single assemblage, and of the undetected species shared between two assemblages. For phylogenetic diversity, the unified approach leads to an estimator of the undetected Faith's phylogenetic diversity (PD, the total length of undetected branches of a phylogenetic tree connecting all species), as well as a new estimator of undetected PD shared between two phylogenetic trees. For functional diversity based on species traits, the unified approach yields a new estimator of undetected Walker et al.'s functional attribute diversity (FAD, the total species-pairwise functional distance) in a single assemblage, as well as a new estimator of undetected FAD shared between two assemblages. Although some of the resulting estimators have been previously published (but derived with traditional mathematical inequalities), all taxonomic, phylogenetic, and functional diversity estimators are now derived under the same framework. 
All the derived estimators are theoretically lower bounds of the corresponding undetected diversities; our approach reveals the sufficient conditions under which the estimators are nearly unbiased, thus offering new insights. Simulation results are reported to numerically verify the performance of the derived estimators. We illustrate all estimators and assess their sampling uncertainty with an empirical dataset for Brazilian rain forest trees. These estimators should be widely applicable to many current problems in ecology, such as the effects of climate change on spatial and temporal beta diversity and the contribution of trait diversity to ecosystem multi-functionality. © 2017 by the Ecological Society of America.
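The best-known instance of this Good-Turing logic is the Chao1 lower bound for species richness in a single assemblage, which infers the undetected richness from the singleton and doubleton counts of an abundance sample. A minimal sketch:

```python
# Chao1 estimator: S_obs + f1^2 / (2*f2), where f1 and f2 are the numbers of
# species observed exactly once and exactly twice. This is a lower bound on
# true richness, nearly unbiased under the conditions discussed in the paper.

def chao1(abundances):
    """Lower-bound estimate of true species richness from sample abundances."""
    s_obs = sum(1 for n in abundances if n > 0)
    f1 = sum(1 for n in abundances if n == 1)   # singletons
    f2 = sum(1 for n in abundances if n == 2)   # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0          # bias-corrected form when f2 = 0

sample = [1, 1, 1, 2, 2, 4, 7, 10]              # abundances of 8 observed species
print(chao1(sample))                            # 8 + 3^2/(2*2) = 10.25
```

The phylogenetic and functional estimators in the paper replace "species counts" with branch lengths and pairwise trait distances, but the singleton/doubleton structure of the correction is the same.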
Forecasting the future burden of opioids for osteoarthritis.
Ackerman, I N; Zomer, E; Gilmartin-Thomas, J F-M; Liew, D
2018-03-01
To quantify the current national burden of opioids for osteoarthritis (OA) pain in Australia in terms of number of dispensed opioid prescriptions and associated costs, and to forecast the likely burden to the year 2030/31. Epidemiological modelling. Published data were obtained on rates of opioid prescribing for people with OA and national OA prevalence projections. Trends in opioid dispensing from 2006 to 2016, and average costs for common opioid subtypes were obtained from the Pharmaceutical Benefits Scheme and Medicare Australia Statistics. Using these inputs, a model was developed to estimate the likely number of dispensed opioid prescriptions and costs to the public healthcare system by 2030/31. In 2015/16, an estimated 1.1 million opioid prescriptions were dispensed in Australia for 403,954 people with OA (of a total 2.2 million Australians with OA). Based on recent dispensing trends and OA prevalence projections, the number of dispensed opioid prescriptions is expected to nearly triple to 3,032,332 by 2030/31, for an estimated 562,610 people with OA. The estimated cost to the Australian healthcare system was $AUD25.2 million in 2015/16, rising to $AUD72.4 million by 2030/31. OA-related opioid dispensing and associated costs are set to increase substantially in Australia from 2015/16 to 2030/31. Use of opioids for OA pain is concerning given joint disease chronicity and the risk of adverse events, particularly among older people. These projections represent a conservative estimate of the full financial burden given additional costs associated with opioid-related harms and out-of-pocket costs borne by patients. Copyright © 2017 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
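The headline projection is simple arithmetic on the reported inputs. The sketch below treats the average dispensed price as constant (an assumption; the reported 2030/31 cost implies a slightly higher average, presumably reflecting the opioid-subtype mix), which is enough to reproduce the "nearly triple" growth factor and land near the projected cost.

```python
# Reconstruction of the headline projection from the reported figures.
# Constant average price per prescription is an ASSUMPTION for illustration.

scripts_2016 = 1_100_000
scripts_2031 = 3_032_332
growth = scripts_2031 / scripts_2016
print(f"{growth:.2f}x")                    # ~2.76x: "nearly triple"

avg_cost_2016 = 25.2e6 / scripts_2016      # ~ $22.9 AUD per dispensed prescription
projected_cost_2031 = scripts_2031 * avg_cost_2016
print(f"${projected_cost_2031 / 1e6:.1f}M")  # ~ $69.5M, near the reported $72.4M
```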
Hatswell, Anthony J; Thompson, Gwilym J; Maroudas, Penny A; Sofrygin, Oleg; Delea, Thomas E
2017-01-01
Ofatumumab (Arzerra®, Novartis) is a treatment for chronic lymphocytic leukemia refractory to fludarabine and alemtuzumab [double refractory (DR-CLL)]. Ofatumumab was licensed on the basis of an uncontrolled Phase II study, Hx-CD20-406, in which patients receiving ofatumumab survived for a median of 13.9 months. However, the lack of an internal control arm presents an obstacle for the estimation of comparative effectiveness. The objective of the study was to present a method to estimate the cost effectiveness of ofatumumab in the treatment of DR-CLL. As no suitable historical control was available for modelling, the outcomes from non-responders to ofatumumab were used to model the effect of best supportive care (BSC). This was done via a Cox regression to control for differences in baseline characteristics between groups. This analysis was included in a partitioned survival model built in Microsoft® Excel with utilities and costs taken from published sources; costs and quality-adjusted life years (QALYs) were discounted at a rate of 3.5% per annum. Using the outcomes seen in non-responders, ofatumumab is expected to add approximately 0.62 life years (1.50 vs. 0.88). Using published utility values this translates to an additional 0.30 QALYs (0.77 vs. 0.47). At the list price, ofatumumab had a cost per QALY of £130,563, and a cost per life year of £63,542. The model was sensitive to changes in assumptions regarding overall survival estimates and utility values. This study demonstrates the potential of using data for non-responders to model outcomes for BSC in cost-effectiveness evaluations based on single-arm trials. Further research is needed on the estimation of comparative effectiveness using uncontrolled clinical studies.
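The two reported ratios are consistent with an incremental cost of roughly £39,000, which is not stated directly in the abstract; the sketch below treats it as an assumed input and shows only the cost-effectiveness arithmetic.

```python
# Cost-effectiveness ratio arithmetic from the reported increments.
# The incremental cost is an ASSUMPTION (back-calculated, ~GBP 39k).

delta_ly = 1.50 - 0.88        # incremental life years vs modelled BSC
delta_qaly = 0.77 - 0.47      # incremental QALYs
incremental_cost = 39_300.0   # GBP, assumed for illustration

cost_per_qaly = incremental_cost / delta_qaly
cost_per_ly = incremental_cost / delta_ly
print(round(cost_per_qaly), round(cost_per_ly))  # near the reported 130,563 and 63,542
```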
An emperor penguin population estimate: the first global, synoptic survey of a species from space.
Fretwell, Peter T; Larue, Michelle A; Morin, Paul; Kooyman, Gerald L; Wienecke, Barbara; Ratcliffe, Norman; Fox, Adrian J; Fleming, Andrew H; Porter, Claire; Trathan, Phil N
2012-01-01
Our aim was to estimate the population of emperor penguins (Aptenodytes forsteri) using a single synoptic survey. We examined the whole continental coastline of Antarctica using a combination of medium resolution and Very High Resolution (VHR) satellite imagery to identify emperor penguin colony locations. Where colonies were identified, VHR imagery was obtained in the 2009 breeding season. The remotely-sensed images were then analysed using a supervised classification method to separate penguins from snow, shadow and guano. Actual counts of penguins from eleven ground truthing sites were used to convert these classified areas into numbers of penguins using a robust regression algorithm. We found four new colonies and confirmed the location of three previously suspected sites giving a total number of emperor penguin breeding colonies of 46. We estimated the breeding population of emperor penguins at each colony during 2009 and provide a population estimate of ~238,000 breeding pairs (compared with the last previously published count of 135,000-175,000 pairs). Based on published values of the relationship between breeders and non-breeders, this translates to a total population of ~595,000 adult birds. There is a growing consensus in the literature that global and regional emperor penguin populations will be affected by changing climate, a driver thought to be critical to their future survival. However, a complete understanding is severely limited by the lack of detailed knowledge about much of their ecology, and importantly a poor understanding of their total breeding population. To address the second of these issues, our work now provides a comprehensive estimate of the total breeding population that can be used in future population models and will provide a baseline for long-term research.
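The breeders-to-total conversion is a two-step scaling: each breeding pair is two adults, scaled up by a breeders-to-non-breeders relationship. The 1.25 multiplier below is back-calculated from the two reported figures, not quoted from the published relationship itself.

```python
# Pairs -> adults conversion implied by the reported numbers.
# adult_multiplier = 595,000 / (2 * 238,000) = 1.25 (back-calculated, not quoted)

breeding_pairs = 238_000
adult_multiplier = 1.25
total_adults = breeding_pairs * 2 * adult_multiplier
print(int(total_adults))  # 595000 adult birds
```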
Estimates of alcohol-related oesophageal cancer burden in Japan: systematic review and meta-analyses
Shield, Kevin D; Higuchi, Susumu; Yoshimura, Atsushi; Larsen, Elisabeth; Rehm, Maximilien X; Rehm, Jürgen
2015-01-01
Abstract Objective To refine estimates of the burden of alcohol-related oesophageal cancer in Japan. Methods We searched PubMed for published reviews and original studies on alcohol intake, aldehyde dehydrogenase polymorphisms, and risk for oesophageal cancer in Japan, published before 2014. We conducted random-effects meta-analyses, including subgroup analyses by aldehyde dehydrogenase variants. We estimated deaths and loss of disability-adjusted life years (DALYs) from oesophageal cancer using exposure distributions for alcohol based on age, sex and relative risks per unit of exposure. Findings We identified 14 relevant studies. Three cohort studies and four case-control studies had dose–response data. Evidence from cohort studies showed that people who consumed the equivalent of 100 g/day of pure alcohol had an 11.71-fold (95% confidence interval, CI: 2.67–51.32) risk of oesophageal cancer compared to those who never consumed alcohol. Evidence from case-control studies showed that the increase in risk was 33.11-fold (95% CI: 8.15–134.43) in the population at large. The difference by study design is explained by the 159-fold (95% CI: 27.2–938.2) risk among those with an inactive aldehyde dehydrogenase enzyme variant. Applying these dose–response estimates to the national profile of alcohol intake yielded 5279 oesophageal cancer deaths and 102 988 DALYs lost – almost double the estimates produced by the most recent global burden of disease exercise. Conclusion Use of global dose–response data results in an underestimate of the burden of disease from oesophageal cancer in Japan. Where possible, national burden of disease studies should use results from the population concerned. PMID:26229204
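The random-effects pooling behind such meta-analyses is typically DerSimonian-Laird on the log relative-risk scale. The sketch below implements that generic machinery with invented study inputs; it is not a re-analysis of the Japanese studies.

```python
import math

# DerSimonian-Laird random-effects pooling of log relative risks.
# Study inputs (log RRs and standard errors) are illustrative placeholders.

def dersimonian_laird(log_rr, se):
    w = [1 / s ** 2 for s in se]                                # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
    df = len(log_rr) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                               # between-study variance
    w_star = [1 / (s ** 2 + tau2) for s in se]                  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_rr)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se_pooled),
                              math.exp(pooled + 1.96 * se_pooled))

rr, ci = dersimonian_laird(log_rr=[math.log(9.0), math.log(14.0), math.log(11.0)],
                           se=[0.45, 0.60, 0.50])
```

Subgroup analyses (e.g. by aldehyde dehydrogenase variant) simply apply the same pooling within each stratum.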
Nishiura, Hiroshi; Inaba, Hisashi
2011-03-07
Empirical estimates of the incubation period of influenza A (H1N1-2009) have been limited. We estimated the incubation period among confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 pandemic (n=72). We addressed censoring and employed an infection-age structured argument to explicitly model the daily frequency of illness onset after departure. We assumed uniform and exponential distributions for the frequency of exposure in Hawaii and, for the latter assumption, retrieved the hazard rate of infection from local outbreak data. The maximum likelihood estimates of the median incubation period range from 1.43 to 1.64 days according to different modeling assumptions, consistent with a published estimate based on a New York school outbreak. The likelihood values of the different modeling assumptions do not differ greatly from each other, although models with the exponential assumption yield slightly shorter incubation periods than those with the uniform exposure assumption. Differences between our proposed approach and a published method for doubly interval-censored analysis highlight the importance of accounting for the dependence of the frequency of exposure on the survival function of incubating individuals among imported cases. A truncation of the density function of the incubation period due to an absence of illness onset during the exposure period also needs to be considered. When the data generating process is similar to that among imported cases, and when the incubation period is close to or shorter than the length of exposure, accounting for these aspects is critical for long exposure times. Copyright © 2010 Elsevier Ltd. All rights reserved.
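The censoring issue is easy to see in simulation: for an imported case, onset minus departure is only a lower bound on the incubation period, because infection occurred at some unobserved time before departure. The toy model below assumes a uniform exposure window and a lognormal incubation distribution with median 1.5 days (echoing the estimates above); these are illustrative choices, not the paper's fitted model.

```python
import numpy as np

# Toy data-generating process: uniform exposure during a stay, lognormal
# incubation. Window length and distribution parameters are ASSUMED.
rng = np.random.default_rng(2)
stay = 7.0                                   # days of possible exposure (assumed)
n = 50_000
exposure = rng.uniform(0.0, stay, n)         # infection time during the stay
incubation = rng.lognormal(mean=np.log(1.5), sigma=0.5, size=n)
onset = exposure + incubation

# "Imported cases": only onsets after departure (end of stay) are observed abroad
observed = onset > stay
obs_lower_bound = onset[observed] - stay     # onset minus departure

print(np.median(incubation))                 # true median, ~1.5 days
print(np.median(obs_lower_bound))            # systematically smaller
```

The gap between the two medians is what a censoring-aware likelihood (such as the infection-age structured model described above) must recover.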
Richardson, Michael L; Petscavage, Jonelle M
2011-11-01
The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears has been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
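When verification depends only on the test result, a Begg and Greenes-type correction recovers unbiased sensitivity and specificity: P(disease | test result) is estimable from the verified subset and can be recombined across all tested patients. The counts below are invented to show the mechanics, including how the naive verified-only analysis overstates sensitivity.

```python
# Begg & Greenes-type verification-bias correction. Counts are illustrative:
# n_* patients tested, v_* surgically verified, d_* verified with a tear.

def corrected_se_sp(n_pos, v_pos, d_pos, n_neg, v_neg, d_neg):
    p_d_pos = d_pos / v_pos              # P(tear | MRI positive), from verified subset
    p_d_neg = d_neg / v_neg              # P(tear | MRI negative)
    tp = p_d_pos * n_pos                 # expected diseased among ALL MRI-positives
    fn = p_d_neg * n_neg
    fp = (1 - p_d_pos) * n_pos
    tn = (1 - p_d_neg) * n_neg
    return tp / (tp + fn), tn / (tn + fp)

se_naive = 90 / (90 + 5)                 # verified cases only: inflated
se_corr, sp_corr = corrected_se_sp(n_pos=120, v_pos=100, d_pos=90,
                                   n_neg=200, v_neg=40, d_neg=5)
print(round(se_naive, 3), round(se_corr, 3))  # naive > corrected, as the text warns
```

Because MRI-negative patients are rarely sent to surgery, v_neg is small in practice, which is exactly the mechanism that inflates naive sensitivity estimates.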