Sample records for rate estimates based

  1. Estimating roadside encroachment rates with the combined strengths of accident- and encroachment-based approaches

    DOT National Transportation Integrated Search

    2001-09-01

    In two recent studies, Miaou proposed a method for estimating vehicle roadside encroachment rates using accident-based models and further illustrated its use in estimating roadside encroachment rates for rural two-lane undivided roads...

  2. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    NASA Astrophysics Data System (ADS)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasts, and fisheries management. A catch-curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require the assumption of constant Z over time; instead, Z is assumed constant within each window of n consecutive years, and the Z values for different n-year windows are estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimates of the Z value and the trend of Z. The most appropriate value of n can differ depending on these factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis, as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if either contains error, but the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997 and obtained reasonable estimates of time-based Z.
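    The classical catch-curve idea underlying the record above can be sketched briefly: if Z is constant over the ages used, CPUE at age a is proportional to exp(-Z·a), so -Z is the slope of a log-linear regression of CPUE on age. A minimal illustration on synthetic data (not the authors' multi-cohort, moving-window estimator):

```python
import math

def catch_curve_z(ages, cpue):
    """Estimate total mortality Z as the negative OLS slope of ln(CPUE) vs. age,
    assuming Z is constant across the ages used."""
    x = list(ages)
    y = [math.log(c) for c in cpue]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return -slope

# Synthetic cohort with true Z = 0.5: CPUE(a) = 100 * exp(-0.5 * a)
ages = [2, 3, 4, 5, 6]
cpue = [100 * math.exp(-0.5 * a) for a in ages]
print(round(catch_curve_z(ages, cpue), 3))  # → 0.5
```

    The paper's contribution is to apply this within sliding n-year windows of age-based CPUE rather than assuming one Z for the whole series.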

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Kong, Weidong; Brundage, Michael

    Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT_1Y) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RT_LIFETIME) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT_1Y, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RT_LIFETIME was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RT_LIFETIME for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RT_LIFETIME for 5 selected sites differed widely. RT_LIFETIME in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario.

  4. Individual-Based Completion Rates for Apprentices. Technical Paper

    ERIC Educational Resources Information Center

    Karmel, Tom

    2011-01-01

    Low completion rates for apprentices and trainees have received considerable attention recently, and it has been argued that NCVER seriously understates completion rates. In this paper, Tom Karmel uses NCVER data on recommencements to estimate individual-based completion rates. It is estimated that around one-quarter of trade apprentices swap…

  5. Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Queen, Steven Z.; VanEepoel, John M.; Sanner, Robert M.

    2005-01-01

    During the Hubble Robotic Servicing Mission, knowledge of the Hubble Space Telescope (HST) attitude and rates is necessary to achieve the capture of HST by the Hubble Robotic Vehicle (HRV). The attitude and rates must be determined without the HST gyros or HST attitude estimates. The HRV will be equipped with vision-based sensors capable of estimating the relative attitude between HST and HRV. The HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly; therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates, both utilizing the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed that estimates the HST rate through an estimation of the inertial angular momentum. Second, a linearized approach is developed, based on more traditional extended Kalman filter techniques. Simulation test results for both methods are given.
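    As a toy illustration of recovering a rate from attitude-only measurements, the finite difference of two successive attitude quaternions yields an angular-velocity estimate. This is only a crude stand-in for the nonlinear (angular-momentum) and extended-Kalman-filter estimators the paper develops; the scalar-first quaternion convention and the small-angle approximation are assumptions of this sketch.

```python
import math

def quat_mul(q, r):
    """Hamilton product of two scalar-first quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rate_from_attitudes(q1, q2, dt):
    """Small-angle angular rate from successive attitudes: the error quaternion
    dq = q1^-1 ⊗ q2 has vector part ≈ (omega * dt) / 2 for small rotations."""
    conj = (q1[0], -q1[1], -q1[2], -q1[3])  # inverse of a unit quaternion
    dq = quat_mul(conj, q2)
    return [2.0 * c / dt for c in dq[1:]]

q1 = (1.0, 0.0, 0.0, 0.0)
q2 = (math.cos(0.005), 0.0, 0.0, math.sin(0.005))  # 0.01 rad rotation about z
print(rate_from_attitudes(q1, q2, 1.0))  # ≈ [0, 0, 0.01] rad/s
```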

  6. On the unified estimation of turbulence eddy dissipation rate using Doppler cloud radars and lidars: Radar and Lidar Turbulence Estimation

    DOE PAGES

    Borque, Paloma; Luke, Edward; Kollias, Pavlos

    2016-05-27

    Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. The agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficients of 0.86 and 0.78 for the two cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced and that the eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity, but their performance deteriorates as precipitation-size particles are introduced into the radar volume and broaden the DRW values. Based on this finding, a methodology to estimate the Doppler spectra broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.

  7. On the unified estimation of turbulence eddy dissipation rate using Doppler cloud radars and lidars: Radar and Lidar Turbulence Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borque, Paloma; Luke, Edward; Kollias, Pavlos

    Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. The agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficients of 0.86 and 0.78 for the two cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced and that the eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity, but their performance deteriorates as precipitation-size particles are introduced into the radar volume and broaden the DRW values. Based on this finding, a methodology to estimate the Doppler spectra broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
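    The inertial-subrange idea behind all three ε retrievals in these two records can be sketched in a few lines: if velocity spectral density samples S(k) follow Kolmogorov's S(k) = C_k ε^(2/3) k^(-5/3), each sample can be inverted for ε and the results averaged. The Kolmogorov constant C_k ≈ 0.5 and the simple averaging are assumptions of this sketch; the paper's spectral processing is considerably more involved.

```python
def dissipation_rate(k, s, c_k=0.5):
    """Estimate eddy dissipation rate eps (m^2 s^-3) from spectral density samples
    assumed to lie in the inertial subrange, where S(k) = c_k * eps^(2/3) * k^(-5/3)."""
    ests = [(si * ki ** (5.0 / 3.0) / c_k) ** 1.5 for ki, si in zip(k, s)]
    return sum(ests) / len(ests)

# Synthetic inertial-subrange spectrum with true eps = 1e-3 m^2 s^-3
true_eps = 1e-3
k = [0.1 * i for i in range(1, 6)]
s = [0.5 * true_eps ** (2.0 / 3.0) * ki ** (-5.0 / 3.0) for ki in k]
print(dissipation_rate(k, s))  # ≈ 0.001
```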

  8. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex, and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation in these cases was 94.7%, with accuracy increasing as more skeletal material was available for analysis and with the education level and certification of the examiner. Nine of 19 incorrect assessments came from cases in which only one skeletal element was available, suggesting that an "undetermined" result may be more appropriate in such cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
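    The headline figure above is a simple proportion; a hedged sketch of how such an accuracy rate and its sampling uncertainty might be computed follows. The 341-of-360 split is inferred from the reported 94.7% (0.947 × 360 ≈ 341) and is an assumption, as is the choice of a Wilson score interval.

```python
import math

def accuracy_with_wilson_ci(correct, total, z=1.96):
    """Point accuracy plus an approximate 95% Wilson score interval."""
    p = correct / total
    denom = 1 + z * z / total
    centre = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return p, centre - half, centre + half

# 341 of 360 correct reproduces roughly the 94.7% reported above
p, lo, hi = accuracy_with_wilson_ci(341, 360)
print(f"{p:.1%} ({lo:.1%}-{hi:.1%})")
```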

  9. Two Approaches to Estimation of Classification Accuracy Rate under Item Response Theory

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2013-01-01

    Within the framework of item response theory (IRT), there are two recent lines of work on the estimation of classification accuracy (CA) rate. One approach estimates CA when decisions are made based on total sum scores, the other based on latent trait estimates. The former is referred to as the Lee approach, and the latter, the Rudner approach,…

  10. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), in which a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires numerous multiplication and addition operations for transform block sizes of orders 4, 8, 16, and 32, and it requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For CU-level rate and distortion estimation, two orthogonal 4×4 and 8×8 matrices, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of nonzero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed that uses a pseudoentropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with 9.8% loss relative to HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
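    To make the WHT-as-cheap-transform idea concrete, here is a sketch of its core step: a 4×4 Walsh-Hadamard transform built from additions only, with the count of nonzero quantized coefficients standing in for the texture-rate estimate. The paper's correction matrices, 8×8 case, distortion model, and pseudoentropy code are not reproduced; this shows only the general mechanism.

```python
def wht4(block):
    """Unnormalized 4x4 Walsh-Hadamard transform H * X * H^T; H contains only
    ±1 entries, so a hardware butterfly needs additions and shifts only."""
    h = [[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, -1, 1], [1, -1, 1, -1]]
    tmp = [[sum(h[i][k] * block[k][j] for k in range(4)) for j in range(4)]
           for i in range(4)]
    return [[sum(tmp[i][k] * h[j][k] for k in range(4)) for j in range(4)]
            for i in range(4)]

def texture_rate_proxy(block, qstep):
    """Count of nonzero quantized transform coefficients, a stand-in for the
    texture-rate estimate described above."""
    coeffs = wht4(block)
    return sum(1 for row in coeffs for c in row if abs(c) // qstep > 0)

flat = [[1] * 4 for _ in range(4)]
print(texture_rate_proxy(flat, qstep=4))  # → 1 (only the DC coefficient survives)
```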

  11. Estimated rates of groundwater recharge to the Chicot, Evangeline and Jasper aquifers by using environmental tracers in Montgomery and adjacent counties, Texas, 2008 and 2011

    USGS Publications Warehouse

    Oden, Timothy D.; Truini, Margot

    2013-01-01

    Recharge rates estimated from environmental tracer data are dependent upon several hydrogeologic variables and have inherent uncertainties. By using the recharge estimates derived from samples collected from 14 wells completed in the Chicot aquifer for which apparent groundwater ages could be determined, recharge to the Chicot aquifer ranged from 0.2 to 7.2 inches (in.) per year (yr). Based on data from one well, estimated recharge to the unconfined zone of the Evangeline aquifer (outcrop) was 0.1 in./yr. Based on data collected from eight wells, estimated rates of recharge to the confined zone of the Evangeline aquifer ranged from less than 0.1 to 2.8 in./yr. Based on data from one well, estimated recharge to the unconfined zone of the Jasper aquifer (outcrop) was 0.5 in./yr. Based on data collected from nine wells, estimated rates of recharge to the confined zone of the Jasper aquifer ranged from less than 0.1 to 0.1 in./yr. The complexity of the hydrogeology in the area, uncertainty in the conceptual model, and numerical assumptions required in the determination of the recharge rates all pose limitations and need to be considered when evaluating these data on a countywide or regional scale. The estimated recharge rates calculated for this study are specific to each well location and should not be extrapolated or inferred as a countywide average. Local variations in the hydrogeology and surficial conditions can affect the recharge rate at a local scale.

  12. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  13. A rate-constrained fast full-search algorithm based on block sum pyramid.

    PubMed

    Song, Byung Cheol; Chun, Kang-Wook; Ra, Jong Beom

    2005-03-01

    This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to a rate-constrained criterion. The algorithm provides estimation performance identical to that of a conventional FSA with a rate constraint, while achieving a considerable reduction in computation.
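    The elimination principle can be sketched at the top level of the pyramid: the absolute difference of block sums lower-bounds the sum of absolute differences (|Σa − Σb| ≤ Σ|a − b|), so any candidate whose bound already exceeds the best SAD so far is discarded without a full SAD computation. This sketch omits the rate term and the lower pyramid levels of the actual algorithm.

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block_sum(b):
    return sum(sum(r) for r in b)

def fast_full_search(ref_blocks, cur):
    """Full search with sum-based elimination: candidates whose sum-difference
    lower bound is already >= the best SAD are skipped (top pyramid level)."""
    cur_sum = block_sum(cur)
    best_idx, best_sad = None, float("inf")
    for i, cand in enumerate(ref_blocks):
        if abs(cur_sum - block_sum(cand)) >= best_sad:
            continue  # lower bound rules this candidate out without computing SAD
        d = sad(cur, cand)
        if d < best_sad:
            best_idx, best_sad = i, d
    return best_idx, best_sad

cur = [[1, 2], [3, 4]]
refs = [[[9, 9], [9, 9]], [[1, 2], [3, 4]], [[0, 2], [3, 4]]]
print(fast_full_search(refs, cur))  # → (1, 0): exact match found, last candidate skipped
```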

  14. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate, with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient; it contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Using Appendicitis to Improve Estimates of Childhood Medicaid Participation Rates.

    PubMed

    Silber, Jeffrey H; Zeigler, Ashley E; Reiter, Joseph G; Hochman, Lauren L; Ludwig, Justin M; Wang, Wei; Calhoun, Shawna R; Pati, Susmita

    2018-03-23

    Administrative data are often used to estimate state Medicaid/Children's Health Insurance Program duration of enrollment and insurance continuity, but they are generally not used to estimate participation (the fraction of eligible children enrolled) because administrative data do not include reasons for disenrollment and cannot observe eligible never-enrolled children, causing estimates of the eligible unenrolled to be inaccurate. Analysts are therefore forced either to utilize survey information that is not generally linkable to administrative claims, or to rely on duration and continuity measures derived from administrative data and forgo estimating claims-based participation. We introduce appendectomy-based participation (ABP), which takes advantage of a natural experiment around statewide appendicitis admissions to estimate statewide participation rates from claims with improved accuracy. We used the Medicaid Analytic eXtract (MAX) for 2008-2010 and the American Community Survey (ACS) for 2008-2010 from 43 states to calculate ABP, the continuity ratio, duration, and ACS-based participation. In the validation study, the median participation rate using ABP was 86%, versus 87% for ACS-based participation estimates using logical edits and 84% without logical edits. The correlation between ABP and ACS with or without logical edits was 0.86 (P < .0001). Using regression analysis, ABP alone was a significant predictor of ACS (P < .0001) with or without logical edits, and adding duration and/or the continuity ratio did not significantly improve the model. Using the ABP rate derived from administrative claims (MAX) is a valid method to estimate statewide public insurance participation rates in children. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  16. Passive fetal heart rate monitoring apparatus and method with enhanced fetal heart beat discrimination

    NASA Technical Reports Server (NTRS)

    Zahorian, Stephen A. (Inventor); Livingston, David L. (Inventor); Pretlow, III, Robert A. (Inventor)

    1996-01-01

    An apparatus is presented for acquiring signals emitted by a fetus, identifying fetal heart beats, and determining a fetal heart rate. Multiple sensor signals are outputted by a passive fetal heart rate monitoring sensor. Multiple parallel nonlinear filters filter these sensor signals to identify fetal heart beats in the signal data. A processor determines a fetal heart rate based on these identified fetal heart beats, applying a figure-of-merit weighting to the heart rate estimates based on the identified heart beats from each filter for each signal. The fetal heart rate thus determined is outputted to a display, storage, or communications channel. A method for enhanced fetal heart beat discrimination includes acquiring signals from a fetus, identifying fetal heart beats from the signals by multiple parallel nonlinear filtering, and determining a fetal heart rate based on the identified fetal heart beats. A figure-of-merit operation in this method provides for weighting a plurality of fetal heart rate estimates based on the identified fetal heart beats and selecting the highest-ranking fetal heart rate estimate.

  17. Passive fetal heart rate monitoring apparatus and method with enhanced fetal heart beat discrimination

    NASA Technical Reports Server (NTRS)

    Zahorian, Stephen A. (Inventor); Livingston, David L. (Inventor); Pretlow, Robert A., III (Inventor)

    1994-01-01

    An apparatus is presented for acquiring signals emitted by a fetus, identifying fetal heart beats, and determining a fetal heart rate. Multiple sensor signals are outputted by a passive fetal heart rate monitoring sensor. Multiple parallel nonlinear filters filter these sensor signals to identify fetal heart beats in the signal data. A processor determines a fetal heart rate based on these identified fetal heart beats, applying a figure-of-merit weighting to the heart rate estimates based on the identified heart beats from each filter for each signal. The fetal heart rate thus determined is outputted to a display, storage, or communications channel. A method for enhanced fetal heart beat discrimination includes acquiring signals from a fetus, identifying fetal heart beats from the signals by multiple parallel nonlinear filtering, and determining a fetal heart rate based on the identified fetal heart beats. A figure-of-merit operation in this method provides for weighting a plurality of fetal heart rate estimates based on the identified fetal heart beats and selecting the highest-ranking fetal heart rate estimate.
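    The figure-of-merit selection described in both patent records can be sketched as follows. The specific merit used here (inverse of inter-beat-interval variability, rewarding regular beat trains) is an illustrative assumption, not the patents' formula; each candidate beat train would come from one (filter, sensor) pair.

```python
def rate_and_merit(beat_times):
    """Heart-rate estimate (bpm) from detected beat times, plus a simple figure
    of merit: the inverse of inter-beat-interval variability (illustrative)."""
    ibis = [t2 - t1 for t1, t2 in zip(beat_times, beat_times[1:])]
    mean_ibi = sum(ibis) / len(ibis)
    var = sum((x - mean_ibi) ** 2 for x in ibis) / len(ibis)
    return 60.0 / mean_ibi, 1.0 / (1.0 + var)

def select_heart_rate(candidate_beat_trains):
    """Keep the rate estimate whose figure of merit ranks highest."""
    return max((rate_and_merit(b) for b in candidate_beat_trains),
               key=lambda rm: rm[1])[0]

regular = [0.0, 0.4, 0.8, 1.2]   # perfectly regular train: 150 bpm, high merit
jittery = [0.0, 0.3, 0.9, 1.2]   # same mean rate, lower merit
print(select_heart_rate([regular, jittery]))  # ≈ 150.0 bpm
```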

  18. Satellite angular velocity estimation based on star images and optical flow techniques.

    PubMed

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-09-25

    An optical-flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information when attitude determination is not possible, such as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented that are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components.

  19. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    PubMed Central

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-01-01

    An optical-flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information when attitude determination is not possible, such as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented that are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components. PMID:24072023
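    The least-squares step shared by both copies of this record can be sketched directly: for a star with unit direction u, a rigid rotation gives du/dt = ω × u = −[u]_× ω, which is linear in ω, so tracked star directions from two frames stack into an overdetermined linear system. The optical-flow front end and calibration are omitted here; assuming unit vectors are already available is a simplification.

```python
import numpy as np

def estimate_omega(u_prev, u_next, dt):
    """Least-squares angular velocity from tracked star unit vectors.
    For small rotations du/dt ≈ omega × u = -[u]_x omega; stack one 3x3 block
    per star and solve the overdetermined system with lstsq."""
    A, b = [], []
    for u0, u1 in zip(u_prev, u_next):
        ux = np.array([[0.0, -u0[2], u0[1]],
                       [u0[2], 0.0, -u0[0]],
                       [-u0[1], u0[0], 0.0]])  # skew-symmetric [u]_x
        A.append(-ux)
        b.append((np.asarray(u1) - np.asarray(u0)) / dt)
    omega, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return omega

true_w = np.array([0.0, 0.0, 0.01])  # rad/s about boresight
dt = 0.1
u_prev = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
u_next = [u + np.cross(true_w, u) * dt for u in u_prev]
print(estimate_omega(u_prev, u_next, dt))  # ≈ [0, 0, 0.01]
```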

  20. Influence of thyroid function on glomerular filtration rate and other estimates of kidney function in two pediatric patients.

    PubMed

    Uemura, Osamu; Iwata, Naoyuki; Nagai, Takuhito; Yamakawa, Satoshi; Hibino, Satoshi; Yamamoto, Masaki; Nakano, Masaru; Tanaka, Kazuki

    2018-05-01

    To determine the optimal method of evaluating kidney function in patients with thyroid dysfunction, this study compared the estimated glomerular filtration rate derived from serum creatinine, cystatin C, or β2-microglobulin with inulin or creatinine clearance in two pediatric patients, one with hypothyroidism and the other with hyperthyroidism. Kidney function was decreased in the hypothyroid child and enhanced in the hyperthyroid child, and it normalized with drug treatment that restored normal thyroid function. Kidney function cannot be accurately evaluated using cystatin C-based or β2-microglobulin-based estimated glomerular filtration rate in patients with thyroid dysfunction, as these tests overestimated glomerular filtration rate in the patient with hypothyroidism and underestimated it in the patient with hyperthyroidism, perhaps through a metabolic rate-mediated mechanism. In both our patients, 24-h urinary creatinine excretion was identical before and after treatment, suggesting that creatinine production is not altered in patients with thyroid dysfunction. Therefore, kidney function in patients with thyroid dysfunction should be evaluated using creatinine-based estimated glomerular filtration rate.
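    For reference, one widely used creatinine-based pediatric estimate is the bedside Schwartz equation, eGFR = k × height(cm) / SCr(mg/dL) with k = 0.413, in mL/min/1.73 m². The case report above does not state which creatinine-based equation was used, so this is illustrative only.

```python
def egfr_bedside_schwartz(height_cm, scr_mg_dl, k=0.413):
    """Creatinine-based estimated GFR for children (bedside Schwartz equation),
    in mL/min/1.73 m^2. Not necessarily the equation used in the case report."""
    return k * height_cm / scr_mg_dl

print(egfr_bedside_schwartz(140.0, 0.5))  # ≈ 115.6 for a 140 cm child, SCr 0.5 mg/dL
```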

  1. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA was not absorbing in the visible region of interest, and thus no closure rank deficiency problem existed. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, based on the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original-data-based, score-based, and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when appropriate constraints and adjustable initial reagent concentrations were applied.
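    A concentration-based objective of the kind compared above can be sketched with a toy consecutive scheme A + B → I → P (second order in the first step). The forward-Euler integration and coarse grid search here stand in for the paper's ODE solver and NGL/M optimizer, and the scheme itself is illustrative rather than the o-ABA/DIAZO mechanism.

```python
def simulate(k1, k2, a0, b0, dt, n):
    """Euler integration of A + B -k1-> I, I -k2-> P; returns (A, B, I, P) per step."""
    a, b, i, p = a0, b0, 0.0, 0.0
    traj = []
    for _ in range(n):
        r1 = k1 * a * b   # second-order formation of the intermediate
        r2 = k2 * i       # first-order decay of the intermediate
        a, b, i, p = a - r1 * dt, b - r1 * dt, i + (r1 - r2) * dt, p + r2 * dt
        traj.append((a, b, i, p))
    return traj

def fit_rate_constants(traj, a0, b0, dt, grid):
    """Concentration-based objective: grid-search (k1, k2) minimizing the squared
    error between simulated and observed concentration trajectories."""
    def sse(k1, k2):
        sim = simulate(k1, k2, a0, b0, dt, len(traj))
        return sum((x - y) ** 2 for s, o in zip(sim, traj) for x, y in zip(s, o))
    return min(((k1, k2) for k1 in grid for k2 in grid), key=lambda k: sse(*k))

traj = simulate(0.5, 0.2, 1.0, 1.0, 0.01, 200)  # synthetic "observed" data
print(fit_rate_constants(traj, 1.0, 1.0, 0.01, [k / 10 for k in range(1, 7)]))  # → (0.5, 0.2)
```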

  2. Estimating evaporative vapor generation from automobiles based on parking activities.

    PubMed

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S

    2015-07-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then used with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than that calculated without considering parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
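    The weighting step described above can be sketched as follows. The encoding of the parking distribution as (start hour, hours parked, fleet fraction) triples is an assumption, and the hourly incremental rates are taken as given; Wade-Reddy's equation, which would produce those rates from temperature, is not reproduced here.

```python
def weighted_vapor_generation(parking_dist, hourly_rates):
    """Weighted-average diurnal vapor generation per vehicle: each parking bin's
    generation is the sum of hourly incremental rates over the hours parked,
    weighted by the fraction of the fleet in that bin.
    parking_dist: iterable of (start_hour, duration_hours, fleet_fraction)."""
    total = 0.0
    for start, duration, frac in parking_dist:
        total += frac * sum(hourly_rates[(start + h) % 24] for h in range(duration))
    return total

flat_rates = [1.0] * 24  # illustrative: 1 g/vehicle per parked hour, all hours
parking = [(8, 2, 0.5), (10, 4, 0.5)]  # half the fleet parks 2 h, half parks 4 h
print(weighted_vapor_generation(parking, flat_rates))  # → 3.0
```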

  3. A Bayesian approach to infer nitrogen loading rates from crop and land-use types surrounding private wells in the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Ransom, Katherine M.; Bell, Andrew M.; Barber, Quinn E.; Kourakos, George; Harter, Thomas

    2018-05-01

    This study is focused on nitrogen loading from a wide variety of crop and land-use types in the Central Valley, California, USA, an intensively farmed region with high agricultural crop diversity. Nitrogen loading rates for several crop types have been measured based on field-scale experiments, and recent research has calculated nitrogen loading rates for crops throughout the Central Valley based on a mass balance approach. However, research is lacking to infer nitrogen loading rates for the broad diversity of crop and land-use types directly from groundwater nitrate measurements. Relating groundwater nitrate measurements to specific crops must account for the uncertainty about and multiplicity in contributing crops (and other land uses) to individual well measurements, and for the variability of nitrogen loading within farms and from farm to farm for the same crop type. In this study, we developed a Bayesian regression model that allowed us to estimate land-use-specific groundwater nitrogen loading rate probability distributions for 15 crop and land-use groups based on a database of recent nitrate measurements from 2149 private wells in the Central Valley. The water and natural, rice, and alfalfa and pasture groups had the lowest median estimated nitrogen loading rates, each with a median estimate below 5 kg N ha-1 yr-1. Confined animal feeding operations (dairies) and citrus and subtropical crops had the greatest median estimated nitrogen loading rates at approximately 269 and 65 kg N ha-1 yr-1, respectively. In general, our probability-based estimates compare favorably with previous direct measurements and with mass-balance-based estimates of nitrogen loading. Nitrogen mass-balance-based estimates are larger than our groundwater nitrate derived estimates for manured and nonmanured forage, nuts, cotton, tree fruit, and rice crops. 
These discrepancies are thought to be due to groundwater age mixing, dilution from infiltrating river water, or denitrification between the time when nitrogen leaves the root zone (point of reference for mass-balance-derived loading) and the time and location of groundwater measurement.
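    The core idea of relating well nitrate to contributing land uses can be illustrated outside the authors' Bayesian framework. The sketch below is not their model: it treats each well's nitrate-derived loading signal as a mixture of land-use-specific rates weighted by land-use fractions and recovers the rates by ordinary least squares; all well data are hypothetical.

    ```python
    # Minimal sketch (not the authors' Bayesian model): each well's signal is
    # a fraction-weighted mix of land-use loading rates; recover the rates by
    # least squares. All numbers below are hypothetical.

    def solve_2x2(a, b, c, d, e, f):
        """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
        det = a * d - b * c
        return ((e * d - f * b) / det, (a * f - c * e) / det)

    # (fraction of crop A, fraction of crop B) around each well,
    # and the observed loading signal at the well (kg N/ha/yr).
    wells = [
        ((0.8, 0.2), 44.0),
        ((0.5, 0.5), 35.0),
        ((0.1, 0.9), 23.0),
    ]

    # Normal equations for least squares: (X^T X) beta = X^T y
    xtx = [[0.0, 0.0], [0.0, 0.0]]
    xty = [0.0, 0.0]
    for (fa, fb), y in wells:
        xtx[0][0] += fa * fa; xtx[0][1] += fa * fb
        xtx[1][0] += fb * fa; xtx[1][1] += fb * fb
        xty[0] += fa * y;     xty[1] += fb * y

    rate_a, rate_b = solve_2x2(xtx[0][0], xtx[0][1], xtx[1][0], xtx[1][1],
                               xty[0], xty[1])
    print(round(rate_a, 1), round(rate_b, 1))  # 50.0 20.0
    ```

    A Bayesian treatment would replace the point solution with full posterior distributions per land-use group, which is what lets the study report medians and uncertainty.
    
    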

  4. Using parentage analysis to estimate rates of straying and homing in Chinook salmon (Oncorhynchus tshawytscha).

    PubMed

    Ford, Michael J; Murdoch, Andrew; Hughes, Michael

    2015-03-01

    We used parentage analysis based on microsatellite genotypes to measure rates of homing and straying of Chinook salmon (Oncorhynchus tshawytscha) among five major spawning tributaries within the Wenatchee River, Washington. On the basis of analysis of 2248 natural-origin and 11594 hatchery-origin fish, we estimated that the rate of homing to natal tributaries by natural-origin fish ranged from 0% to 99% depending on the tributary. Hatchery-origin fish released in one of the five tributaries homed to that tributary at a far lower rate than the natural-origin fish (71% compared to 96%). For hatchery-released fish, stray rates based on parentage analysis were consistent with rates estimated using physical tag recoveries. Stray rates among major spawning tributaries were generally higher than stray rates of tagged fish to areas outside of the Wenatchee River watershed. Within the Wenatchee watershed, rates of straying by natural-origin fish were significantly affected by spawning tributary and by parental origin: progeny of naturally spawning hatchery-produced fish strayed at significantly higher rates than progeny whose parents were themselves of natural origin. Notably, none of the 170 offspring that were products of mating by two natural-origin fish strayed from their natal tributary. Indirect estimates of gene flow based on FST statistics were correlated with but higher than the estimates from the parentage data. Tributary-specific estimates of effective population size were also correlated with the number of spawners in each tributary. Published [2015]. This article is a U.S. Government work and is in the public domain in the USA.

  5. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  6. Users guide for STHARVEST: software to estimate the cost of harvesting small timber.

    Treesearch

    Roger D. Fight; Xiaoshan Zhang; Bruce R. Hartsough

    2003-01-01

    The STHARVEST computer application is Windows-based, public-domain software used to estimate costs for harvesting small-diameter stands or the small-diameter component of a mixed-sized stand. The equipment production rates were developed from existing studies. Equipment operating cost rates were based on November 1998 prices for new equipment and wage rates for the...

  7. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
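    A line-length-weighted estimator of the encounter rate variance, of the general form discussed in this literature, can be computed directly from per-transect counts; the transect data below are hypothetical.

    ```python
    # Sketch of a line-length-weighted encounter rate variance estimator:
    # var(n/L) estimated from between-transect variation in n_k/l_k,
    # weighted by line length. Transect data are hypothetical.

    def encounter_rate_var(lengths, counts):
        """Estimate var(n/L) from per-transect lengths l_k and counts n_k."""
        K = len(lengths)
        L = sum(lengths)
        n = sum(counts)
        er = n / L  # overall encounter rate n/L
        s = sum(l * l * (c / l - er) ** 2 for l, c in zip(lengths, counts))
        return K / (L * L * (K - 1)) * s

    lengths = [4.0, 5.0, 6.0, 5.0]   # transect lengths
    counts = [8, 12, 9, 11]          # detections per transect
    v = encounter_rate_var(lengths, counts)
    print(round(v, 5))  # 0.04667
    ```

    Under a systematic design the same formula is typically applied as if lines were placed at random, which is exactly the approximation whose bias the paper studies.
    
    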

  8. Estimation of longitudinal force, lateral vehicle speed and yaw rate for four-wheel independent driven electric vehicles

    NASA Astrophysics Data System (ADS)

    Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong

    2018-02-01

    Accurate estimation of longitudinal force, lateral vehicle speed and yaw rate is of great significance to torque allocation and stability control for four-wheel independent driven electric vehicle (4WID-EVs). A fusion method is proposed to estimate the longitudinal force, lateral vehicle speed and yaw rate for 4WID-EVs. The electric driving wheel model (EDWM) is introduced into the longitudinal force estimation, the longitudinal force observer (LFO) is designed firstly based on the adaptive high-order sliding mode observer (HSMO), and the convergence of LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which the strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via Carsim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method. The performance of LFO in practice is verified by the experiment on chassis dynamometer bench.

  9. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. The approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the video into its three RGB color channels. The data extracted from the color video are first pre-processed, e.g., by normalization and sphering. Spectral analysis of the source components recovered by Independent Component Analysis (ICA) with the JADE algorithm then yields the cardiac pulse rate estimate. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to that from a commercial pulse oximetry sensor and achieved high accuracy and correlation. The root mean square error of the estimates is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of cardiac pulse rate.
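    The spectral step can be illustrated without the ICA stage: given one recovered source component, the pulse rate is the dominant frequency in the physiological band. The frame rate and synthetic trace below are hypothetical stand-ins for a real webcam recording.

    ```python
    # Sketch of the spectral-analysis step only: locate the dominant frequency
    # of a (synthetic) source component within the plausible heart rate band.
    import math

    fps = 30.0                          # assumed camera frame rate
    t = [i / fps for i in range(300)]   # 10 s of samples
    pulse_hz = 1.2                      # 72 bpm ground truth
    signal = [math.sin(2 * math.pi * pulse_hz * x) for x in t]

    def dominant_freq(sig, rate, f_lo=0.75, f_hi=4.0, step=0.01):
        """Naive DFT power scan over 45-240 bpm; returns the peak frequency."""
        best_f, best_p, f = f_lo, -1.0, f_lo
        while f <= f_hi:
            re = sum(s * math.cos(2 * math.pi * f * i / rate)
                     for i, s in enumerate(sig))
            im = sum(s * math.sin(2 * math.pi * f * i / rate)
                     for i, s in enumerate(sig))
            if re * re + im * im > best_p:
                best_f, best_p = f, re * re + im * im
            f += step
        return best_f

    bpm = dominant_freq(signal, fps) * 60.0
    print(round(bpm))  # 72
    ```

    In practice an FFT would replace the brute-force scan, and the component with the sharpest spectral peak among the ICA outputs would be chosen.
    
    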

  10. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
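    The penetration rate itself is straightforward to compute empirically: it is the expected fraction of the database examined before the true match is reached. The ranks and database size below are hypothetical.

    ```python
    # Sketch: empirical penetration rate of a ranked search.
    # true_ranks are hypothetical 1-based ranks of the correct entry.

    def penetration_rate(true_ranks, db_size):
        """Mean fraction of the database searched before the true match."""
        return sum(r / db_size for r in true_ranks) / len(true_ranks)

    ranks = [1, 3, 2, 10, 4]  # rank of the correct template for five queries
    p = penetration_rate(ranks, db_size=100)
    print(round(p, 3))  # 0.04
    ```

    The paper's contribution is modeling this quantity analytically, from match probabilities over a prioritized hierarchical grid, rather than measuring it after the fact.
    
    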

  11. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
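    The Kasai autocorrelation estimator mentioned above has a compact closed form: the Doppler frequency is proportional to the phase of the lag-one autocorrelation of the complex signal. The sketch below runs it on a noise-free synthetic signal (no OCT specifics), where it recovers the frequency exactly up to aliasing.

    ```python
    # Sketch of the Kasai (lag-one autocorrelation) frequency estimator on a
    # synthetic complex exponential; sampling rate and frequency are hypothetical.
    import cmath, math

    fs = 10_000.0   # sampling (A-line) rate in Hz
    f_true = 800.0  # Doppler frequency to recover
    z = [cmath.exp(2j * math.pi * f_true * n / fs) for n in range(128)]

    def kasai_freq(sig, rate):
        """f = rate/(2*pi) * arg( sum_n sig[n+1] * conj(sig[n]) )."""
        acc = sum(b * a.conjugate() for a, b in zip(sig, sig[1:]))
        return rate * cmath.phase(acc) / (2 * math.pi)

    f_est = kasai_freq(z, fs)
    print(round(f_est, 1))  # 800.0
    ```

    The paper's point is that once additive noise enters, this estimator's variance degrades at high acquisition rates, whereas an MLE built on an explicit noise model does not.
    
    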

  12. Inhalation and Ingestion Intakes with Associated Dose Estimates for Level II and Level III Personnel Using Capstone Study Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szrom, Fran; Falo, Gerald A.; Lodde, Gordon M.

    2009-03-01

    Depleted uranium (DU) intake rates and subsequent dose rates were estimated for personnel entering armored combat vehicles perforated with DU penetrators (level II and level III personnel) using data generated during the Capstone DU Aerosol Study. Inhalation intake rates and associated dose rates were estimated from cascade impactors worn by sample recovery personnel and from cascade impactors that served as area monitors. Ingestion intake rates and associated dose rates were estimated from cotton gloves worn by sample recovery personnel and from wipe test samples from the interior of vehicles perforated with large caliber DU munitions. The mean DU inhalation intake rate for level II personnel ranged from 0.447 mg h-1 based on breathing zone monitor data (in and around a perforated vehicle) to 14.5 mg h-1 based on area monitor data (in a perforated vehicle). The mean DU ingestion intake rate for level II ranged from 4.8 mg h-1 to 38.9 mg h-1 based on the wipe test data, including surface-to-glove transfer factors derived from the Capstone data. Based on glove contamination data, the mean DU ingestion intake rates for level II and level III personnel were 10.6 mg h-1 and 1.78 mg h-1, respectively. Effective dose rates and peak kidney uranium concentration rates were calculated based on the intake rates. The peak kidney uranium concentration rate cannot be multiplied by the total exposure duration when multiple intakes occur because uranium will clear from the kidney between the exposures.

  13. Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80

    NASA Astrophysics Data System (ADS)

    Pruet, Jason; Fuller, George M.

    2003-11-01

    We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.

  14. Direct estimate of the spontaneous germ line mutation rate in African green monkeys.

    PubMed

    Pfeifer, Susanne P

    2017-12-01

    Here, I provide the first direct estimate of the spontaneous mutation rate in an Old World monkey, using a seven-individual, three-generation pedigree of African green monkeys. Eight de novo mutations were identified within ∼1.5 Gbp of accessible genome, corresponding to an estimated point mutation rate of 0.94 × 10-8 per site per generation and suggesting an effective population size of ∼12000 for the species. This estimate represents a significant improvement in our knowledge of the population genetics of the African green monkey, one of the most important nonhuman primate models in biomedical research. Furthermore, by comparing mutation rates in Old World monkeys with the only other direct estimates in primates to date, those for humans and chimpanzees, it is possible to uniquely address how mutation rates have evolved over longer time scales. While the estimated spontaneous mutation rate for African green monkeys is slightly lower than the rate of 1.2 × 10-8 per base pair per generation reported in chimpanzees, it is similar to the lower range of rates of 0.96 × 10-8 to 1.28 × 10-8 per base pair per generation recently estimated from whole-genome pedigrees in humans. This result suggests a long-term constraint on mutation rate that is quite different from similar evidence pertaining to recombination rate evolution in primates. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  15. NATURAL VOLATILE ORGANIC COMPOUND EMISSION RATE ESTIMATES FOR U.S. WOODLAND LANDSCAPES

    EPA Science Inventory

    Volatile organic compound (VOC) emission rate factors are estimated for 49 tree genera based on a review of foliar emission rate measurements. oliar VOC emissions are grouped into three categories: isoprene, monoterpenes and other VOC'S. ypical emission rates at a leaf temperatur...

  16. Combining Decision Rules from Classification Tree Models and Expert Assessment to Estimate Occupational Exposure to Diesel Exhaust for a Case-Control Study

    PubMed Central

    Friesen, Melissa C.; Wheeler, David C.; Vermeulen, Roel; Locke, Sarah J.; Zaebst, Dennis D.; Koutros, Stella; Pronk, Anjoeka; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Malats, Nuria; Schwenn, Molly; Johnson, Alison; Armenti, Karla R.; Rothman, Nathanial; Stewart, Patricia A.; Kogevinas, Manolis; Silverman, Debra T.

    2016-01-01

    Objectives: To efficiently and reproducibly assess occupational diesel exhaust exposure in a Spanish case-control study, we examined the utility of applying decision rules that had been extracted from expert estimates and questionnaire response patterns using classification tree (CT) models from a similar US study. Methods: First, previously extracted CT decision rules were used to obtain initial ordinal (0–3) estimates of the probability, intensity, and frequency of occupational exposure to diesel exhaust for the 10 182 jobs reported in a Spanish case-control study of bladder cancer. Second, two experts reviewed the CT estimates for 350 jobs randomly selected from strata based on each CT rule’s agreement with the expert ratings in the original study [agreement rate, from 0 (no agreement) to 1 (perfect agreement)]. Their agreement with each other and with the CT estimates was calculated using weighted kappa (κw) and guided our choice of jobs for subsequent expert review. Third, an expert review comprised all jobs with lower confidence (low-to-moderate agreement rates or discordant assignments, n = 931) and a subset of jobs with a moderate to high CT probability rating and with moderately high agreement rates (n = 511). Logistic regression was used to examine the likelihood that an expert provided a different estimate than the CT estimate based on the CT rule agreement rates, the CT ordinal rating, and the availability of a module with diesel-related questions. Results: Agreement between estimates made by two experts and between estimates made by each of the experts and the CT estimates was very high for jobs with estimates that were determined by rules with high CT agreement rates (κw: 0.81–0.90). For jobs with estimates based on rules with lower agreement rates, moderate agreement was observed between the two experts (κw: 0.42–0.67) and poor-to-moderate agreement was observed between the experts and the CT estimates (κw: 0.09–0.57). In total, the expert review of 1442 jobs changed 156 probability estimates, 128 intensity estimates, and 614 frequency estimates. The expert was more likely to provide a different estimate when the CT rule agreement rate was <0.8, when the CT ordinal ratings were low to moderate, or when a module with diesel questions was available. Conclusions: Our reliability assessment provided important insight into where to prioritize additional expert review; as a result, only 14% of the jobs underwent expert review, substantially reducing the exposure assessment burden. Overall, we found that we could efficiently, reproducibly, and reliably apply CT decision rules from one study to assess exposure in another study. PMID:26732820
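    The study's agreement statistic, weighted kappa, can be sketched for ordinal 0–3 ratings with linear disagreement weights; the two raters' ratings below are hypothetical.

    ```python
    # Sketch: linearly weighted kappa between two raters' ordinal 0-3 ratings.
    # Ratings are hypothetical; weights grow with the distance between categories.
    from collections import Counter

    def weighted_kappa(r1, r2, k=4):
        """1 - (observed weighted disagreement) / (chance-expected disagreement)."""
        n = len(r1)
        obs = Counter(zip(r1, r2))
        p1, p2 = Counter(r1), Counter(r2)
        w = lambda i, j: abs(i - j) / (k - 1)  # linear disagreement weight
        num = sum(w(i, j) * c for (i, j), c in obs.items()) / n
        den = sum(w(i, j) * p1[i] * p2[j]
                  for i in range(k) for j in range(k)) / n**2
        return 1 - num / den

    a = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
    b = [0, 1, 2, 3, 1, 1, 0, 2, 2, 0]
    kw = weighted_kappa(a, b)
    print(round(kw, 2))  # 0.74
    ```

    Weighting matters for ordinal exposure ratings because a 0-vs-3 disagreement should count more heavily than a 2-vs-3 disagreement.
    
    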

  17. Quantifying the major mechanisms of recent gene duplications in the human and mouse genomes: a novel strategy to estimate gene duplication rates

    PubMed Central

    Pan, Deng; Zhang, Liqing

    2007-01-01

    Background The rate of gene duplication is an important parameter in the study of evolution, but the influence of gene conversion and technical problems have confounded previous attempts to provide a satisfying estimate. We propose a new strategy to estimate the rate that involves separate quantification of the rates of two different mechanisms of gene duplication and subsequent combination of the two rates, based on their respective contributions to the overall gene duplication rate. Results Previous estimates of gene duplication rates are based on small gene families. Therefore, to assess the applicability of this to families of all sizes, we looked at both two-copy gene families and the entire genome. We studied unequal crossover and retrotransposition, and found that these mechanisms of gene duplication are largely independent and account for a substantial amount of duplicated genes. Unequal crossover contributed more to duplications in the entire genome than retrotransposition did, but this contribution was significantly less in two-copy gene families, and duplicated genes arising from this mechanism are more likely to be retained. Combining rates of duplication using the two mechanisms, we estimated the overall rates to be from approximately 0.515 to 1.49 × 10-3 per gene per million years in human, and from approximately 1.23 to 4.23 × 10-3 in mouse. The rates estimated from two-copy gene families are always lower than those from the entire genome, and so it is not appropriate to use small families to estimate the rate for the entire genome. Conclusion We present a novel strategy for estimating gene duplication rates. Our results show that different mechanisms contribute differently to the evolution of small and large gene families. PMID:17683522

  18. Soil ingestion rates for children under 3 years old in Taiwan.

    PubMed

    Chien, Ling-Chu; Tsou, Ming-Chien; Hsi, Hsing-Cheng; Beamer, Paloma; Bradham, Karen; Hseu, Zeng-Yei; Jien, Shih-Hao; Jiang, Chuen-Bin; Dang, Winston; Özkaynak, Halûk

    2017-01-01

    Soil and dust ingestion rates by children are among the most critical exposure factors in determining risks to children from exposures to environmental contaminants in soil and dust. We believe this is the first published soil ingestion study for children in Taiwan using tracer element methodology. In this study, 66 children under 3 years of age were enrolled from Taiwan. Three days of fecal samples and a 24-h duplicate food sample were collected. Soil and household dust samples were also collected from the children's homes. Soil ingestion rates were estimated based on silicon (Si) and titanium (Ti). The average soil ingestion rate was 9.6±19.2 mg/day based on Si as a tracer. The estimated soil ingestion rates based on Si did not differ significantly by children's age or gender, although the average soil ingestion rates clearly increased with children's age category. The estimated soil ingestion rates based on Si were significantly and positively correlated with the sum of indoor and outdoor hand-to-mouth frequency rates. The average soil ingestion rates based on Si were generally lower than the results from previous studies of US children. Ti may not be a suitable tracer for estimating soil ingestion rates in Taiwan because titanium dioxide is a common food additive. To the best of our knowledge, this is the first study to investigate the correlations between soil ingestion rates and mouthing behaviors in Taiwan or other parts of Asia. It is also the first study that allows comparison of available soil ingestion data from different countries and/or different cultures. Hand-to-mouth frequency and health habits are important for estimating children's soil ingestion exposure. The results of this study are particularly important when assessing children's exposure and potential health risk from nearby contaminated soils in Taiwan.
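    Tracer-based soil ingestion estimates rest on a simple mass balance: soil ingested equals the tracer excreted in feces, minus the tracer contributed by food, divided by the tracer's concentration in soil. The Si values below are hypothetical, chosen only to illustrate the arithmetic.

    ```python
    # Sketch of the tracer mass balance behind soil ingestion estimates.
    # All values are hypothetical; units noted inline.

    def soil_ingestion_mg_per_day(fecal_tracer_mg, food_tracer_mg,
                                  soil_tracer_frac):
        """soil_tracer_frac: tracer mass fraction of soil (mg tracer / mg soil)."""
        return (fecal_tracer_mg - food_tracer_mg) / soil_tracer_frac

    # Si excreted 60 mg/day, Si from food 57 mg/day, soil assumed 28% Si by mass
    si = soil_ingestion_mg_per_day(60.0, 57.0, 0.28)
    print(round(si, 1))  # 10.7 mg/day
    ```

    The fecal-minus-food difference is small relative to both terms, which is why tracer choice matters: a tracer also present in food (like Ti from titanium dioxide additives) inflates the numerator and biases the estimate.
    
    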

  19. Hospitalization costs of severe bacterial pneumonia in children: comparative analysis considering different costing methods

    PubMed Central

    Nunes, Sheila Elke Araujo; Minamisava, Ruth; Vieira, Maria Aparecida da Silva; Itria, Alexander; Pessoa, Vicente Porfirio; de Andrade, Ana Lúcia Sampaio Sgambatti; Toscano, Cristiana Maria

    2017-01-01

    ABSTRACT Objective To determine and compare hospitalization costs of bacterial community-acquired pneumonia cases via different costing methods under the Brazilian Public Unified Health System perspective. Methods Cost-of-illness study based on primary data collected from a sample of 59 children aged between 28 days and 35 months and hospitalized due to bacterial pneumonia. Direct medical and non-medical costs were considered and three costing methods employed: micro-costing based on medical record review, micro-costing based on therapeutic guidelines and gross-costing based on the Brazilian Public Unified Health System reimbursement rates. Cost estimates obtained via the different methods were compared using the Friedman test. Results Cost estimates of inpatient cases of severe pneumonia amounted to R$ 780,70/$Int. 858.7 (medical record review), R$ 641,90/$Int. 706.90 (therapeutic guidelines) and R$ 594,80/$Int. 654.28 (Brazilian Public Unified Health System reimbursement rates). Costs estimated via micro-costing (medical record review or therapeutic guidelines) did not differ significantly (p=0.405), while estimates based on reimbursement rates were significantly lower than estimates based on therapeutic guidelines (p<0.001) or record review (p=0.006). Conclusion Brazilian Public Unified Health System costs estimated via different costing methods differ significantly, with gross-costing yielding lower cost estimates. Given that cost estimates from the two micro-costing methods are similar, and that costing based on therapeutic guidelines is easier to apply and less expensive, this method may be a valuable alternative for estimating hospitalization costs of bacterial community-acquired pneumonia in children. PMID:28767921

  20. Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?

    PubMed

    Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A

    2015-06-01

    Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how choice of inlet location and scaling law affect such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL min(-1) was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. Choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL min(-1). A square law was found to be more consistent with physiological flow rates than a cube law. Although impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
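    The two scaling laws compared in the paper are easy to state explicitly: rescale an assumed cohort-average flow rate by (D/D_avg)^n, with n = 2 for fixed velocity and n = 3 for fixed wall shear stress. The 245 mL/min average is from the abstract; the diameters below, including the assumed average inlet diameter, are hypothetical.

    ```python
    # Sketch of diameter-based inflow scaling for image-based CFD models.
    # Q_AVG is from the abstract; D_AVG and the test diameters are hypothetical.

    def scaled_inflow(q_avg, d, d_avg, n):
        """Rescale an average flow rate by a diameter power law."""
        return q_avg * (d / d_avg) ** n

    Q_AVG = 245.0  # mL/min, assumed average ICA flow rate
    D_AVG = 4.0    # mm, hypothetical average inlet diameter

    for d in (3.5, 4.0, 4.5):
        q2 = scaled_inflow(Q_AVG, d, D_AVG, 2)  # square law (fixed velocity)
        q3 = scaled_inflow(Q_AVG, d, D_AVG, 3)  # cube law (fixed wall shear)
        print(d, round(q2, 1), round(q3, 1))
    ```

    The spread between the two laws grows with the deviation of the measured diameter from the average, which is one way the choice of truncation site feeds into inflow uncertainty.
    
    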

  1. Incidence of breast cancer and estimates of overdiagnosis after the initiation of a population-based mammography screening program.

    PubMed

    Coldman, Andrew; Phillips, Norm

    2013-07-09

    There has been growing interest in the overdiagnosis of breast cancer as a result of mammography screening. We report incidence rates in British Columbia before and after the initiation of population screening and provide estimates of overdiagnosis. We obtained the numbers of breast cancer diagnoses from the BC Cancer Registry and screening histories from the Screening Mammography Program of BC for women aged 30-89 years between 1970 and 2009. We calculated age-specific rates of invasive breast cancer and ductal carcinoma in situ. We compared these rates by age, calendar period and screening participation. We obtained 2 estimates of overdiagnosis from cumulative cancer rates among women between the ages of 40 and 89 years: the first estimate compared participants with nonparticipants; the second estimate compared observed and predicted population rates. We calculated participation-based estimates of overdiagnosis to be 5.4% for invasive disease alone and 17.3% when ductal carcinoma in situ was included. The corresponding population-based estimates were -0.7% and 6.7%. Participants had higher rates of invasive cancer and ductal carcinoma in situ than nonparticipants but lower rates after screening stopped. Population incidence rates for invasive cancer increased after 1980; by 2009, they had returned to levels similar to those of the 1970s among women under 60 years of age but remained elevated among women 60-79 years old. Rates of ductal carcinoma in situ increased in all age groups. The extent of overdiagnosis of invasive cancer in our study population was modest and primarily occurred among women over the age of 60 years. However, overdiagnosis of ductal carcinoma in situ was elevated for all age groups. The estimation of overdiagnosis from observational data is complex and subject to many influences. The use of mammography screening in older women has an increased risk of overdiagnosis, which should be considered in screening decisions.

  2. Incidence and admission rates for severe malaria and their impact on mortality in Africa.

    PubMed

    Camponovo, Flavia; Bever, Caitlin A; Galactionova, Katya; Smith, Thomas; Penny, Melissa A

    2017-01-03

    Appropriate treatment of life-threatening Plasmodium falciparum malaria requires in-patient care. Although the proportion of severe cases accessing in-patient care in endemic settings strongly affects overall case fatality rates and thus disease burden, this proportion is generally unknown. At present, estimates of malaria mortality are driven by prevalence or overall clinical incidence data, ignoring differences in case fatality resulting from variations in access. Consequently, the overall impact of preventive interventions on disease burden have not been validly compared with those of improvements in access to case management or its quality. Using a simulation-based approach, severe malaria admission rates and the subsequent severe malaria disease and mortality rates for 41 malaria endemic countries of sub-Saharan Africa were estimated. Country differences in transmission and health care settings were captured by use of high spatial resolution data on demographics and falciparum malaria prevalence, as well as national level estimates of effective coverage of treatment for uncomplicated malaria. Reported and modelled estimates of cases, admissions and malaria deaths from the World Malaria Report, along with predicted burden from simulations, were combined to provide revised estimates of access to in-patient care and case fatality rates. There is substantial variation between countries' in-patient admission rates and estimated levels of case fatality rates. It was found that for many African countries, most patients admitted for in-patient treatment would not meet strict criteria for severe disease and that for some countries only a small proportion of the total severe cases are admitted. Estimates are highly sensitive to the assumed community case fatality rates. Re-estimation of national level malaria mortality rates suggests that there is substantial burden attributable to inefficient in-patient access and treatment of severe disease. 
The model-based methods proposed here offer a standardized approach to estimate the numbers of severe malaria cases and deaths based on national level reporting, allowing for coverage of both curative and preventive interventions. This makes possible direct comparisons of the potential benefits of scaling-up either category of interventions. The profound uncertainties around these estimates highlight the need for better data.

  3. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
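    The key adjustment is in the variance: under a compound Poisson model the variance of the case count is driven by the sum of squared incident sizes rather than the case total, which widens interval estimates whenever multiple-fatality incidents occur. The incident data below are hypothetical.

    ```python
    # Sketch: 95% interval half-widths for a mortality rate under the simple
    # Poisson model versus a compound Poisson adjustment. Data are hypothetical.
    import math

    incident_sizes = [1] * 90 + [2] * 4 + [5]  # 90 single-, 4 double-, 1 five-fatality
    pop = 1_000_000
    cases = sum(incident_sizes)                # 103 deaths
    rate = cases / pop * 100_000               # deaths per 100,000

    var_poisson = cases                                # simple Poisson: var = n
    var_compound = sum(k * k for k in incident_sizes)  # compound: sum of n_i^2

    for label, var in (("Poisson", var_poisson), ("compound", var_compound)):
        half = 1.96 * math.sqrt(var) / pop * 100_000   # interval half-width
        print(label, round(rate, 1), round(half, 2))
    ```

    With no multiple-case incidents the two variances coincide, which is the sense in which the adjusted estimators reduce to the familiar Poisson-based ones.
    
    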

  4. Ocean mixing beneath Pine Island Glacier ice shelf, West Antarctica

    NASA Astrophysics Data System (ADS)

    Kimura, Satoshi; Jenkins, Adrian; Dutrieux, Pierre; Forryan, Alexander; Naveira Garabato, Alberto C.; Firing, Yvonne

    2016-12-01

    Ice shelves around Antarctica are vulnerable to an increase in ocean-driven melting, with the melt rate depending on ocean temperature and the strength of flow inside the ice-shelf cavities. We present measurements of velocity, temperature, salinity, turbulent kinetic energy dissipation rate, and thermal variance dissipation rate beneath Pine Island Glacier ice shelf, West Antarctica. These measurements were obtained by CTD, ADCP, and turbulence sensors mounted on an Autonomous Underwater Vehicle (AUV). The highest turbulent kinetic energy dissipation rate is found near the grounding line. The thermal variance dissipation rate increases closer to the ice-shelf base, with a maximum value found ˜0.5 m away from the ice. The measurements of turbulent kinetic energy dissipation rate near the ice are used to estimate basal melting of the ice shelf. The dissipation-rate-based melt rate estimates are sensitive to the stability correction parameter in the linear approximation of the universal function of Monin-Obukhov similarity theory for stratified boundary layers. We argue that our estimates of basal melting from dissipation rates are within the range of previous estimates of basal melting.

  5. Malaria transmission rates estimated from serological data.

    PubMed Central

    Burattini, M. N.; Massad, E.; Coutinho, F. A.

    1993-01-01

    A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The transmission rates estimated were applied to a simple compartmental model in order to mimic the malaria transmission. The model has shown a good retrieving capacity for serological and parasite prevalence data. PMID:8270011
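A minimal cousin of this kind of serological estimation is the classical catalytic model with a constant force of infection. The grid-search fit below is an illustrative sketch with hypothetical function names, not the authors' age-dependent, minimally stochastic model:

```python
import math

def seroprevalence(age, lam):
    """Catalytic model: P(seropositive by age a) = 1 - exp(-lam * a),
    for a constant force of infection lam (per year)."""
    return 1.0 - math.exp(-lam * age)

def fit_force_of_infection(ages, positives, totals):
    """Crude grid-search maximum likelihood for lam over (0, 0.2],
    given seropositive counts out of totals sampled in each age group."""
    best_lam, best_ll = None, -math.inf
    for i in range(1, 2001):
        lam = i / 10000.0
        ll = 0.0
        for a, k, n in zip(ages, positives, totals):
            p = min(max(seroprevalence(a, lam), 1e-12), 1.0 - 1e-12)
            # binomial log-likelihood (constant terms dropped)
            ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam
```

Age-stratified seroprevalence data generated by the model itself are recovered to within the grid resolution.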

  6. Elimination Rates of Dioxin Congeners in Former Chlorophenol Workers from Midland, Michigan

    PubMed Central

    Collins, James J.; Bodner, Kenneth M.; Wilken, Michael; Bodnar, Catherine M.

    2012-01-01

    Background: Exposure reconstructions and risk assessments for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and other dioxins rely on estimates of elimination rates. Limited data are available on elimination rates for congeners other than TCDD. Objectives: We estimated apparent elimination rates using a simple first-order one-compartment model for selected dioxin congeners based on repeated blood sampling in a previously studied population. Methods: Blood samples collected from 56 former chlorophenol workers in 2004–2005 and again in 2010 were analyzed for dioxin congeners. We calculated the apparent elimination half-life in each individual for each dioxin congener and examined factors potentially influencing elimination rates and the impact of estimated ongoing background exposures on rate estimates. Results: Mean concentrations of all dioxin congeners in the sampled participants declined between sampling times. Median apparent half-lives of elimination based on changes in estimated mass in the body were generally consistent with previous estimates and ranged from 6.8 years (1,2,3,7,8,9-hexachlorodibenzo-p-dioxin) to 11.6 years (pentachlorodibenzo-p-dioxin), with a composite half-life of 9.3 years for TCDD toxic equivalents. None of the factors examined, including age, smoking status, body mass index or change in body mass index, initial measured concentration, or chloracne diagnosis, was consistently associated with the estimated elimination rates in this population. Inclusion of plausible estimates of ongoing background exposures decreased apparent half-lives by approximately 10%. Available concentration-dependent toxicokinetic models for TCDD underpredicted observed elimination rates for concentrations < 100 ppt. Conclusions: The estimated elimination rates from this relatively large serial sampling study can inform occupational and environmental exposure and serum evaluations for dioxin compounds. PMID:23063871

  7. Self-rated health: small area large area comparisons amongst older adults at the state, district and sub-district level in India.

    PubMed

    Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi

    2014-03-01

    We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors, and were comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimating the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.

  8. Estimating Rates of Motor Vehicle Crashes Using Medical Encounter Data: A Feasibility Study

    DTIC Science & Technology

    2015-11-05

    used to develop more detailed predictive risk models as well as strategies for preventing specific types of MVCs. Systematic Review of Evidence... used to estimate rates of accident-related injuries more generally,9 but not with specific reference to MVCs. For the present report, rates of...precise rate estimates based on person-years rather than active duty strength, (e) multivariable effects of specific risk /protective factors after

  9. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, were studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
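Under the Poisson-occurrence assumption, the two hazard functions named above have simple closed forms. The sketch below assumes the unbounded Gutenberg-Richter law purely for illustration; it shows the point estimates only, not the paper's interval-estimation machinery:

```python
import math

def exceedance_probability(lam, t_years, m, m0, b):
    """P(at least one event with magnitude >= m in t_years), assuming
    Poisson occurrence with mean activity rate lam (events/yr above the
    completeness magnitude m0) and an unbounded Gutenberg-Richter law."""
    p_ge_m = 10.0 ** (-b * (m - m0))   # P(M >= m | event), GR law
    rate_m = lam * p_ge_m              # thinned Poisson rate of events >= m
    return 1.0 - math.exp(-rate_m * t_years)

def mean_return_period(lam, m, m0, b):
    """Mean return period (years) of events with magnitude >= m."""
    return 1.0 / (lam * 10.0 ** (-b * (m - m0)))
```

For example, with 10 events/yr above magnitude 3 and b = 1, magnitude-5 events recur every 10 years on average.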

  10. Data-Rate Estimation for Autonomous Receiver Operation

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.; Simon, M. K.

    2005-01-01

    In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.

  11. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.

  12. The rate and character of spontaneous mutation in an RNA virus.

    PubMed Central

    Malpica, José M; Fraile, Aurora; Moreno, Ignacio; Obies, Clara I; Drake, John W; García-Arenal, Fernando

    2002-01-01

    Estimates of spontaneous mutation rates for RNA viruses are few and uncertain, most notably due to their dependence on tiny mutation reporter sequences that may not well represent the whole genome. We report here an estimate of the spontaneous mutation rate of tobacco mosaic virus using an 804-base cognate mutational target, the viral MP gene that encodes the movement protein (MP). Selection against newly arising mutants was countered by providing MP function from a transgene. The estimated genomic mutation rate was on the lower side of the range previously estimated for lytic animal riboviruses. We also present the first unbiased riboviral mutational spectrum. The proportion of base substitutions is the same as that in a retrovirus but is lower than that in most DNA-based organisms. Although the MP mutant frequency was 0.02-0.05, 35% of the sequenced mutants contained two or more mutations. Therefore, the mutation process in populations of TMV and perhaps of riboviruses generally differs profoundly from that in populations of DNA-based microbes and may be strongly influenced by a subpopulation of mutator polymerases. PMID:12524327

  13. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    NASA Astrophysics Data System (ADS)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated based only on cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application of, and comparison between, the Rao-Yu model and the dynamic model for estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage was reduced over time.

  14. National Hospital Discharge Survey: 2001 annual summary with detailed diagnosis and procedure data.

    PubMed

    Kozak, Lola Jean; Owings, Maria F; Hall, Margaret J

    2004-06-01

    This report presents 2001 national estimates and selected trend data on the use of non-Federal short-stay hospitals in the United States. Estimates are provided by selected patient and hospital characteristics, diagnoses, and surgical and nonsurgical procedures performed. Admission source and type, collected for the first time in the 2001 National Hospital Discharge Survey, are shown. The estimates are based on data collected through the National Hospital Discharge Survey (NHDS). The survey has been conducted annually since 1965. In 2001, data were collected for approximately 330,000 discharges. Of the 477 eligible non-Federal short-stay hospitals in the sample, 448 (94 percent) responded to the survey. Estimates of diagnoses and procedures are presented according to International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code numbers. Rates are computed with 2001 population estimates based on the 2000 census. The appendix includes a comparison of rates computed with 1990 and 2000 census-based population estimates. An estimated 32.7 million inpatients were discharged from non-Federal short-stay hospitals in 2001. They used 159.4 million days of care and had an average length of stay of 4.9 days. Common first-listed discharge diagnoses included delivery, psychoses, pneumonia, malignant neoplasm, and coronary atherosclerosis. Males had higher rates for procedures such as cardiac catheterization and coronary artery bypass graft, and females had higher rates for procedures such as cholecystectomy and total knee replacement. The rates of all cesarean deliveries, primary and repeat, rose from 1995 to 2001; the rate of vaginal birth after cesarean delivery dropped 37 percent during this period.

  15. On a problematic procedure to manipulate response biases in recognition experiments: the case of "implied" base rates.

    PubMed

    Bröder, Arndt; Malejka, Simone

    2017-07-01

    The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.
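Response bias in binary recognition tests is commonly quantified with equal-variance signal detection indices. The sketch below shows that standard computation; it is a generic illustration, not necessarily the analysis the authors used:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance signal detection indices from a binary recognition
    test: d' (sensitivity) and criterion c (response bias).
    A liberal bias, e.g. induced by a high target base rate, yields c < 0."""
    z = NormalDist().inv_cdf          # probit transform
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c
```

Comparing c across base-rate conditions is one way to check whether a manipulation actually shifted participants' response bias.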

  16. An assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue, Ireland

    NASA Astrophysics Data System (ADS)

    Harrington, Seán T.; Harrington, Joseph R.

    2013-03-01

    This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstone, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment loads are estimated using the rating curve approach and compared to the turbidity-based loads for each river. Loads are also estimated using stage and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of - 76% to + 63% are found on the River Bandon, with errors of - 65% to + 359% found on the River Owenabue. The average monthly error on the River Bandon is - 12%, with an average error of + 87% on the River Owenabue.
The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river. Historic load estimates (with a 95% confidence interval) were hindcast from the flow record and average annual loads of 7253 ± 673 tonnes on the River Bandon and 1935 ± 325 tonnes on the River Owenabue were estimated to be passing the gauging stations.
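The core of the rating curve approach — a least-squares fit in log-log space, then load integration over the continuous flow record — can be sketched as follows. This is illustrative only: the study also used stage- and seasonally-separated curves, and back-transforming a log-log regression in practice calls for a bias correction that is omitted here.

```python
import math

def fit_rating_curve(flows, concentrations):
    """Least-squares fit of log10(C) = a + b*log10(Q), the standard
    suspended sediment rating curve (Q in m3/s, C in mg/L)."""
    xs = [math.log10(q) for q in flows]
    ys = [math.log10(c) for c in concentrations]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def load_tonnes(flows_m3s, a, b, dt_s=900):
    """Suspended sediment load over a record of 15-min flow readings:
    Q [m3/s] * 1000 [L/m3] * C(Q) [mg/L] * dt [s] summed, then mg -> tonnes."""
    total_mg = sum(q * 1000.0 * 10.0 ** (a + b * math.log10(q)) * dt_s
                   for q in flows_m3s)
    return total_mg / 1e9
```

Fitting to data that follow an exact power law recovers its parameters, which is a useful sanity check before applying the curve to a real record.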

  17. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part II: Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2006-01-01

    Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5 deg.-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5 deg.-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5 deg.-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5 deg.-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.

  18. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 2; Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2004-01-01

    Rainfall rate estimates from space-borne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5 deg.-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5 deg.-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with: (a) additional contextual information brought to the estimation problem, and/or (b) physically consistent and representative databases supporting the algorithm. 
A model of the random error in instantaneous, 0.5 deg.-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar. Error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5 deg.-resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of cloud-resolving model simulated profiles supporting the algorithm.

  19. Minimax estimation of qubit states with Bures risk

    NASA Astrophysics Data System (ADS)

    Acharya, Anirudh; Guţă, Mădălin

    2018-04-01

    The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/\sqrt{n} for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques, which allow us to derive upper and lower bounds on its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n^{-1}\log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates; in particular, the 'standard' rate n^{-1} cannot be attained.

  20. Small area estimation for estimating the number of infant mortality in West Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Anggreyani, Arie; Indahwati, Kurnia, Anang

    2016-02-01

    Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey to provide information regarding birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in the districts of West Java. SAE is a special case of Generalized Linear Mixed Models (GLMM). In this case, the incidence of infant mortality follows a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model to overcome the overdispersion problem. The small area estimation used the basic area-level model. A mean square error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.

  1. New, national bottom-up estimate for tree-based biological ...

    EPA Pesticide Factsheets

    Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha-1 yr-1) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating

  2. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
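Assuming exponential (constant-hazard) failure times, the point and interval estimates described can be sketched as below. The log-scale normal approximation and the function names are illustrative choices, not necessarily those of the report:

```python
import math

def failure_rate_ci(failures, unit_hours, z=1.96):
    """Failure rate per unit-hour with an approximate CI, using the
    normal approximation on the log scale: Var(log lam_hat) ~ 1/failures."""
    lam = failures / unit_hours
    se_log = 1.0 / math.sqrt(failures)
    return lam, (lam * math.exp(-z * se_log), lam * math.exp(z * se_log))

def failure_rate_ratio_ci(f1, t1, f2, t2, z=1.96):
    """Ratio of two groups' failure rates with an approximate CI;
    an interval excluding 1.0 suggests the groups perform differently."""
    rr = (f1 / t1) / (f2 / t2)
    se_log = math.sqrt(1.0 / f1 + 1.0 / f2)
    return rr, (rr * math.exp(-z * se_log), rr * math.exp(z * se_log))
```

The same two functions apply unchanged to system-level testing or on-orbit monitoring, since only failure counts and accumulated operating hours enter the computation.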

  3. Growth, chamber building rate and reproduction time of Palaeonummulites venosus under natural conditions.

    NASA Astrophysics Data System (ADS)

    Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino

    2017-04-01

    Investigations on Palaeonummulites venosus using the natural laboratory approach for determining chamber building rate, test diameter increase rate, reproduction time and longevity are based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' during the investigation period of 15 months was used to calculate Michaelis-Menten functions applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. The individual dates of birth were estimated using the inverse averaged chamber building rate and the inverse diameter increase rate fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on chamber building rate and diameter increase rate both indicated continuous reproduction throughout the year with two peaks, the stronger in May/June marking the beginning of the summer generation (generation 1) and the weaker in November marking the beginning of the winter generation (generation 2). This reproduction scheme explains the existence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between an individual's birth date and the sampling date, appears to be roughly one year, as obtained by both estimations based on the chamber building rate and the diameter increase rate.
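The inversion step — recovering an individual's age, and hence birth date, from its chamber number via the averaged Michaelis-Menten growth function — can be sketched as follows. Parameter values used in testing are placeholders, not the paper's fitted values:

```python
def mm_chambers(age_days, n_max, k_days):
    """Michaelis-Menten averaged chamber number at a given age:
    N(t) = n_max * t / (k_days + t)."""
    return n_max * age_days / (k_days + age_days)

def age_from_chambers(n_chambers, n_max, k_days):
    """Invert the growth function: estimated age (days) of an individual
    with a given chamber count; its birth date is then the sampling date
    minus this age."""
    return k_days * n_chambers / (n_max - n_chambers)
```

Applying the inverse to every individual in a monthly sample yields the distribution of birth dates from which the reproduction peaks are read off.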

  4. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.

  5. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    PubMed

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
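
The equal-probability discrete gamma referred to above is conventionally computed by taking the gamma quantile at the midpoint of each of K equal-probability bins (the quantile-median method commonly attributed to Yang, 1994), then rescaling so the mean rate is 1. A minimal sketch:

```python
from scipy.stats import gamma

def discrete_gamma_rates(alpha: float, k: int):
    """Equal-probability discrete gamma rate categories, normalized to mean 1.

    Uses the quantile-median method: the rate for category i is the gamma
    quantile at probability (i + 0.5) / k, with shape alpha and mean 1.
    """
    quantiles = [(i + 0.5) / k for i in range(k)]
    rates = gamma.ppf(quantiles, a=alpha, scale=1.0 / alpha)
    return rates / rates.mean()  # enforce a mean rate of exactly 1
```

Because the largest category's rate is a mid-bin quantile, it sits below the distribution's upper tail, which is consistent with the underestimation of the largest rates noted in the abstract when few categories are used.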

  6. Estimating fatality rates in occupational light vehicle users using vehicle registration and crash data.

    PubMed

    Stuckey, Rwth; LaMontagne, Anthony D; Glass, Deborah C; Sim, Malcolm R

    2010-04-01

    To estimate occupational light vehicle (OLV) fatality numbers using vehicle registration and crash data and compare these with previous estimates based on workers' compensation data. New South Wales (NSW) Roads and Traffic Authority (RTA) vehicle registration and crash data were obtained for 2004. NSW is the only Australian jurisdiction with mandatory work-use registration, which was used as a proxy for work-relatedness. OLV fatality rates were calculated with registration data as the denominator, and comparisons were made with published 2003/04 fatalities based on workers' compensation data. Thirty-four NSW RTA OLV-user fatalities were identified, a rate of 4.5 deaths per 100,000 organisationally registered OLVs, whereas the Australian Safety and Compensation Council (ASCC) reported 28 OLV deaths Australia-wide. More OLV-user fatalities were identified from vehicle registration-based data than from workers' compensation estimates, and the registration data are likely to provide an improved estimate of fatalities specific to OLV use. OLV use is an important cause of traumatic fatalities that would be better identified through vehicle-registration data, which provide a stronger evidence base from which to develop policy responses. © 2010 The Authors. Journal Compilation © 2010 Public Health Association of Australia.
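
The fatality rate reported above is a simple crude rate per 100,000 registered vehicles. In the sketch below the registration count is back-calculated from the reported figures for illustration; it is not a number taken from the paper.

```python
def rate_per_100k(events: int, population: float) -> float:
    """Crude rate per 100,000 population units (here: registered vehicles)."""
    return events / population * 100_000

# 34 fatalities over a hypothetical ~755,500 organisationally registered
# OLVs reproduces the reported 4.5 deaths per 100,000.
print(round(rate_per_100k(34, 755_500), 1))  # → 4.5
```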

  7. A trigger-based design for evaluating the safety of in utero antiretroviral exposure in uninfected children of human immunodeficiency virus-infected mothers.

    PubMed

    Williams, Paige L; Seage, George R; Van Dyke, Russell B; Siberry, George K; Griner, Raymond; Tassiopoulos, Katherine; Yildirim, Cenk; Read, Jennifer S; Huo, Yanling; Hazra, Rohan; Jacobson, Denise L; Mofenson, Lynne M; Rich, Kenneth

    2012-05-01

    The Pediatric HIV/AIDS Cohort Study's Surveillance Monitoring of ART Toxicities Study is a prospective cohort study conducted at 22 US sites between 2007 and 2011 that was designed to evaluate the safety of in utero antiretroviral drug exposure in children not infected with human immunodeficiency virus who were born to mothers who were infected. This ongoing study uses a "trigger-based" design; that is, initial assessments are conducted on all children, and only those meeting certain thresholds or "triggers" undergo more intensive evaluations to determine whether they have had an adverse event (AE). The authors present the estimated rates of AEs for each domain of interest in the Surveillance Monitoring of ART Toxicities Study. They also evaluated the efficiency of this trigger-based design for estimating AE rates and for testing associations between in utero exposures to antiretroviral drugs and AEs. The authors demonstrate that estimated AE rates from the trigger-based design are unbiased after correction for the sensitivity of the trigger for identifying AEs. Even without correcting for bias based on trigger sensitivity, the trigger approach is generally more efficient for estimating AE rates than is evaluating a random sample of the same size. Minor losses in efficiency when comparing AE rates between persons exposed and unexposed in utero to particular antiretroviral drugs or drug classes were observed under most scenarios.

  8. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777

  9. The influence of taxon sampling on Bayesian divergence time inference under scenarios of rate heterogeneity among lineages.

    PubMed

    Soares, André E R; Schrago, Carlos G

    2015-01-07

    Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxon sampling affected divergence time inference, but that its impact was significant only if the rates deviated from those derived for the strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision is more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Genital Chlamydia Prevalence in Europe and Non-European High Income Countries: Systematic Review and Meta-Analysis

    PubMed Central

    Redmond, Shelagh M.; Alexander-Kisslig, Karin; Woodhall, Sarah C.; van den Broek, Ingrid V. F.; van Bergen, Jan; Ward, Helen; Uusküla, Anneli; Herrmann, Björn; Andersen, Berit; Götz, Hannelore M.; Sfetcu, Otilia; Low, Nicola

    2015-01-01

    Background Accurate information about the prevalence of Chlamydia trachomatis is needed to assess national prevention and control measures. Methods We systematically reviewed population-based cross-sectional studies that estimated chlamydia prevalence in European Union/European Economic Area (EU/EEA) Member States and non-European high income countries from January 1990 to August 2012. We examined results in forest plots, explored heterogeneity using the I2 statistic, and conducted random effects meta-analysis if appropriate. Meta-regression was used to examine the relationship between study characteristics and chlamydia prevalence estimates. Results We included 25 population-based studies from 11 EU/EEA countries and 14 studies from five other high income countries. Four EU/EEA Member States reported on nationally representative surveys of sexually experienced adults aged 18–26 years (response rates 52–71%). In women, chlamydia point prevalence estimates ranged from 3.0–5.3%; the pooled average of these estimates was 3.6% (95% CI 2.4, 4.8, I2 0%). In men, estimates ranged from 2.4–7.3% (pooled average 3.5%; 95% CI 1.9, 5.2, I2 27%). Estimates in EU/EEA Member States were statistically consistent with those in other high income countries (I2 0% for women, 6% for men). There was statistical evidence of an association between survey response rate and estimated chlamydia prevalence; estimates were higher in surveys with lower response rates (p = 0.003 in women, 0.018 in men). Conclusions Population-based surveys that estimate chlamydia prevalence are at risk of participation bias owing to low response rates. Estimates obtained in nationally representative samples of the general population of EU/EEA Member States are similar to estimates from other high income countries. PMID:25615574
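
The random-effects pooling used for such prevalence estimates is commonly the DerSimonian-Laird method; the minimal sketch below illustrates it with invented inputs (the review's exact transformation and weighting of prevalences may differ).

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    est, var = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1.0 / var
    fixed = (w * est).sum() / w.sum()            # fixed-effect pooled mean
    q = (w * (est - fixed) ** 2).sum()           # Cochran's Q
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(est) - 1)) / c)    # between-study variance
    w_star = 1.0 / (var + tau2)                  # random-effects weights
    pooled = (w_star * est).sum() / w_star.sum()
    se = (1.0 / w_star.sum()) ** 0.5
    return pooled, se
```

With equal within-study variances the random-effects pooled value reduces to the simple mean of the study estimates, which makes a convenient sanity check.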

  11. Waiting for the Bus: When Base-Rates Refuse to Be Neglected

    ERIC Educational Resources Information Center

    Teigen, Karl Halvor; Keren, Gideon

    2007-01-01

    The paper reports the results from 16 versions of a simple probability estimation task, where probability estimates derived from base-rate information have to be modified by case knowledge. In the bus problem [adapted from Falk, R., Lipson, A., & Konold, C. (1994), the ups and downs of the hope function in a fruitless search. In G. Wright & P.…

  12. A Measure for the Reliability of a Rating Scale Based on Longitudinal Clinical Trial Data

    ERIC Educational Resources Information Center

    Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert

    2007-01-01

    A new measure for reliability of a rating scale is introduced, based on the classical definition of reliability, as the ratio of the true score variance and the total variance. Clinical trial data can be employed to estimate the reliability of the scale in use, whenever repeated measurements are taken. The reliability is estimated from the…

  13. Low-Cost 3-D Flow Estimation of Blood With Clutter.

    PubMed

    Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali

    2017-05-01

    Volumetric flow rate estimation is an important ultrasound medical imaging modality that is used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods, but suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal; but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme to combine the low complexity sum-of-absolute-difference and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and replacing singular value decomposition with less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter for beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and standard deviation of 3.1% relative to the actual flow rate.
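
The first step described above, integrating velocity estimates over a cross-sectional plane to obtain volumetric flow, can be checked against the closed form for a parabolic (Poiseuille) profile, where Q = v_max·πR²/2. The vessel radius and peak velocity below are arbitrary illustrative values.

```python
import numpy as np

# Parabolic profile in a vessel of radius R: v(r) = v_max * (1 - (r/R)^2).
R, v_max = 0.004, 0.5                  # 4 mm radius, 0.5 m/s peak velocity
r = np.linspace(0.0, R, 10_001)
v = v_max * (1.0 - (r / R) ** 2)

# Q = integral of v(r) * 2*pi*r dr over [0, R] (trapezoidal rule).
integrand = v * 2.0 * np.pi * r
dr = r[1] - r[0]
q_numeric = float((integrand[:-1] + integrand[1:]).sum() * dr / 2.0)
q_exact = v_max * np.pi * R**2 / 2.0   # closed-form Poiseuille flow rate
```

In practice the velocity samples come from the speckle-tracking estimator rather than an analytic profile, and the kernel power weighting mentioned above replaces the uniform trapezoidal weights.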

  14. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  15. 14 CFR Appendix A to Part 1215 - Estimated Service Rates in 1997 Dollars for TDRSS Standard Services (Based on NASA Escalation...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Services (Based on NASA Escalation Estimate) Time: Project conceptualization (at least two years before... TDRSS Standard Services (Based on NASA Escalation Estimate) A Appendix A to Part 1215 Aeronautics and... the service requirements by NASA Headquarters, communications for the reimbursable development of a...

  16. 14 CFR Appendix A to Part 1215 - Estimated Service Rates in 1997 Dollars for TDRSS Standard Services (Based on NASA Escalation...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Services (Based on NASA Escalation Estimate) Time: Project conceptualization (at least two years before... TDRSS Standard Services (Based on NASA Escalation Estimate) A Appendix A to Part 1215 Aeronautics and... the service requirements by NASA Headquarters, communications for the reimbursable development of a...

  17. Dynamically heterogenous partitions and phylogenetic inference: an evaluation of analytical strategies with cytochrome b and ND6 gene sequences in cranes.

    PubMed

    Krajewski, C; Fain, M G; Buckley, L; King, D G

    1999-11-01

    Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogenous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogenous data sets. Copyright 1999 Academic Press.

  18. Estimating survival rates with time series of standing age‐structure data

    USGS Publications Warehouse

    Udevitz, Mark S.; Gogan, Peter J.

    2012-01-01

    It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
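
The single-sample special case mentioned above has a simple form: in a stable, stationary population, the ratio of successive age-class counts in one standing age distribution estimates annual survival, S_a ≈ n_{a+1}/n_a. The counts below are invented for illustration.

```python
# Standing age distribution of a hypothetical stationary population.
counts = [500, 400, 320, 256, 128]   # individuals aged 0..4

# Age-specific annual survival from successive age-class ratios.
survival = [counts[a + 1] / counts[a] for a in range(len(counts) - 1)]
print(survival)  # → [0.8, 0.8, 0.8, 0.5]
```

The maximum likelihood estimators developed in the paper generalize this ratio idea to unstable populations by combining two or more years of age-structure data with other demographic information.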

  19. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  1. Fixed-rate layered multicast congestion control

    NASA Astrophysics Data System (ADS)

    Bing, Zhang; Bing, Yuan; Zengji, Liu

    2006-10-01

    A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to a different number of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines an accurate measurement with a rough estimation. A feedback suppression scheme based on a random timer mechanism is given to avoid feedback implosion during the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness, and intra-protocol fairness, and provides high link utilization.

  2. Toward Hypertension Prediction Based on PPG-Derived HRV Signals: a Feasibility Study.

    PubMed

    Lan, Kun-Chan; Raknim, Paweeya; Kao, Wei-Fong; Huang, Jyh-How

    2018-04-21

    Heart rate variability (HRV) is often used to assess the risk of cardiovascular disease, and data for this purpose can be obtained via electrocardiography (ECG). However, collecting heart rate data via photoplethysmography (PPG) is now much easier. We investigate the feasibility of using the PPG-based heart rate to estimate HRV and predict diseases. We obtain three months of PPG-based heart rate data from subjects with and without hypertension, and calculate the HRV based on various forms of time and frequency domain analysis. We then apply a data mining technique to this estimated HRV data to see whether it is possible to correctly identify patients with hypertension. We use six HRV parameters to predict hypertension, and find that SDNN has the best predictive power. We show that early disease prediction is possible by collecting one's PPG-based heart rate information.
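
SDNN, the parameter the study found most predictive, is simply the standard deviation of the normal-to-normal (NN) interbeat intervals. A minimal sketch with hypothetical PPG-derived intervals:

```python
import statistics

def sdnn_ms(nn_intervals_ms):
    """SDNN: sample standard deviation of NN intervals, in milliseconds."""
    return statistics.stdev(nn_intervals_ms)

# Hypothetical beat-to-beat intervals derived from a PPG pulse series (ms).
nn = [812, 845, 790, 867, 820, 835, 801, 858]
print(round(sdnn_ms(nn), 1))  # → 27.4
```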

  3. Fetal QRS detection and heart rate estimation: a wavelet-based approach.

    PubMed

    Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

    2014-08-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR.

  4. Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability

    NASA Astrophysics Data System (ADS)

    Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko

    In this paper, we propose an algorithm to estimate sleep quality based on heart rate variability using chaos analysis. Polysomnography (PSG) is a conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect by estimating sleep quality from multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is laborious because the huge volume of sensed data must be evaluated manually. Meanwhile, attention has focused on people who make mistakes or cause accidents owing to loss of regular sleep and homeostasis. A simple home system for checking one's own sleep is therefore required, and an estimation algorithm for such a system needs to be developed. We therefore propose an algorithm that estimates sleep quality based only on heart rate variability, which can be measured in an uncontrolled environment by a simple sensor such as a pressure sensor or an infrared sensor, by experimentally finding the relationship between chaos indices and sleep quality. A system including this estimation algorithm can inform users of the patterns and quality of their daily sleep, so that they can arrange their schedules in advance, pay more attention to sleep based on the results, and consult a doctor.
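
A chaos index commonly applied to heart rate variability is the largest Lyapunov exponent or, more simply, a nonlinearity measure over the RR series; the abstract does not specify which indices were used, so the sketch below shows only the generic preprocessing step of forming an RR-interval series from beat times (all values hypothetical).

```python
# Beat timestamps (seconds) as might come from a pressure or infrared sensor.
beats = [0.00, 0.81, 1.64, 2.43, 3.27, 4.08, 4.91, 5.70]

# RR-interval series (seconds): the input to any HRV / chaos analysis.
rr = [b2 - b1 for b1, b2 in zip(beats, beats[1:])]
print([round(x, 2) for x in rr])  # → [0.81, 0.83, 0.79, 0.84, 0.81, 0.83, 0.79]
```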

  5. The problem of estimating recent genetic connectivity in a changing world.

    PubMed

    Samarasin, Pasan; Shuter, Brian J; Wright, Stephen I; Rodd, F Helen

    2017-02-01

    Accurate understanding of population connectivity is important to conservation because dispersal can play an important role in population dynamics, microevolution, and assessments of extirpation risk and population rescue. Genetic methods are increasingly used to infer population connectivity because advances in technology have made them more advantageous (e.g., cost effective) relative to ecological methods. Given the reductions in wildlife population connectivity since the Industrial Revolution and more recent drastic reductions from habitat loss, it is important to know the accuracy of and biases in genetic connectivity estimators when connectivity has declined recently. Using simulated data, we investigated the accuracy and bias of 2 common estimators of migration (movement of individuals among populations) rate. We focused on the timing of the connectivity change and the magnitude of that change on the estimates of migration by using a coalescent-based method (Migrate-n) and a disequilibrium-based method (BayesAss). Contrary to expectations, when historically high connectivity had declined recently: (i) both methods over-estimated recent migration rates; (ii) the coalescent-based method (Migrate-n) provided better estimates of recent migration rate than the disequilibrium-based method (BayesAss); (iii) the coalescent-based method did not accurately reflect long-term genetic connectivity. Overall, our results highlight the problems with comparing coalescent and disequilibrium estimates to make inferences about the effects of recent landscape change on genetic connectivity among populations. We found that contrasting these 2 estimates to make inferences about genetic-connectivity changes over time could lead to inaccurate conclusions. © 2016 Society for Conservation Biology.

  6. Indirectly estimated absolute lung cancer mortality rates by smoking status and histological type based on a systematic review

    PubMed Central

    2013-01-01

    Background National smoking-specific lung cancer mortality rates are unavailable, and studies presenting estimates are limited, particularly by histology. This hinders interpretation. We attempted to rectify this by deriving estimates indirectly, combining data from national rates and epidemiological studies. Methods We estimated study-specific absolute mortality rates and variances by histology and smoking habit (never/ever/current/former) based on relative risk estimates derived from studies published in the 20th century, coupled with WHO mortality data for age 70–74 for the relevant country and period. Studies with populations grossly unrepresentative nationally were excluded. 70–74 was chosen based on analyses of large cohort studies presenting rates by smoking and age. Variations by sex, period and region were assessed by meta-analysis and meta-regression. Results 148 studies provided estimates (Europe 59, America 54, China 22, other Asia 13), 54 providing estimates by histology (squamous cell carcinoma, adenocarcinoma). For all smoking habits and lung cancer types, mortality rates were higher in males, the excess less evident for never smokers. Never smoker rates were clearly highest in China, and showed some increasing time trend, particularly for adenocarcinoma. Ever smoker rates were higher in parts of Europe and America than in China, with the time trend very clear, especially for adenocarcinoma. Variations by time trend and continent were clear for current smokers (rates being higher in Europe and America than Asia), but less clear for former smokers. Models involving continent and trend explained much variability, but non-linearity was sometimes seen (with rates lower in 1991–99 than 1981–90), and there was regional variation within continent (with rates in Europe often high in UK and low in Scandinavia, and higher in North than South America). 
Conclusions The indirect method may be questioned, because of variations in definition of smoking and lung cancer type in the epidemiological database, changes over time in diagnosis of lung cancer types, lack of national representativeness of some studies, and regional variation in smoking misclassification. However, the results seem consistent with the literature, and provide additional information on variability by time and region, including evidence of a rise in never smoker adenocarcinoma rates relative to squamous cell carcinoma rates. PMID:23570286
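The indirect derivation described in the Methods can be illustrated with a minimal sketch: a national overall mortality rate is partitioned into never- and ever-smoker rates using a relative risk and a smoking prevalence. All numbers below are hypothetical illustrations, not values from the review.

```python
# Split a national overall mortality rate R into never- and ever-smoker
# rates, given smoking prevalence p and relative risk RR:
#   R = p * r_ever + (1 - p) * r_never,  with  r_ever = RR * r_never.
# All inputs are hypothetical.

def indirect_rates(overall_rate, smoking_prevalence, rr):
    """Solve for (r_never, r_ever) from the decomposition above."""
    r_never = overall_rate / ((1 - smoking_prevalence) + smoking_prevalence * rr)
    r_ever = rr * r_never
    return r_never, r_ever

r_never, r_ever = indirect_rates(overall_rate=330.0,  # deaths per 100,000
                                 smoking_prevalence=0.5,
                                 rr=10.0)
print(r_never, r_ever)  # 60.0 600.0
```

The weighted average of the two derived rates reproduces the national rate by construction, which is a useful internal check on any such indirect estimate.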

  7. Integrating data from multiple sources for insights into demographic processes: Simulation studies and proof of concept for hierarchical change-in-ratio models.

    PubMed

    Nilsen, Erlend B; Strand, Olav

    2018-01-01

We developed a model for estimating demographic rates and population abundance based on multiple data sets revealing information about population age and sex structure. Such models have previously been described in the literature as change-in-ratio models, but we extend their applicability by i) using time series data that allow the full temporal dynamics to be modelled, ii) casting the model in an explicit hierarchical modelling framework, and iii) estimating parameters by Bayesian inference. Based on sensitivity analyses, we conclude that the approach developed here obtains estimates of demographic rates with high precision whenever unbiased data on population structure are available. Our simulations revealed that this held even when data on population abundance were not available or not included in the modelling framework. Nevertheless, when data on population structure are biased due to differing observability of the age and sex categories, estimates of all demographic rates are affected. Estimates of population size are particularly sensitive to such biases, whereas demographic rates can be estimated relatively precisely even with biased observation data, as long as the bias is not severe. We then used the models to estimate demographic rates and population abundance for two Norwegian reindeer (Rangifer tarandus) populations for which age-sex data were available for all harvested animals, population structure surveys were carried out in early summer (after calving) and late fall (after the hunting season), and population size was counted in winter. We found that demographic rates were similar regardless of whether population count data were included in the modelling, but that the estimated population size was affected by this decision. 
This suggests that monitoring programs focusing on population age and sex structure will benefit from collecting additional data that allow estimation of observability for the different age and sex classes. In addition, our sensitivity analysis suggests that directing monitoring towards changes in demographic rates may be more feasible than monitoring abundance in many situations where data on population age and sex structure can be collected.
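The hierarchical models above generalize the classic two-sample change-in-ratio (Kelker) estimator, which can be sketched directly. All numbers are simulated for illustration, not from the reindeer study.

```python
# Classic two-sample change-in-ratio (Kelker) estimator: pre-removal
# abundance N1 from the change in the male proportion after a known
# harvest. Numbers are simulated.

def cir_abundance(p1, p2, removed_males, removed_total):
    """N1 = (Rm - p2 * R) / (p1 - p2), where R is the total removal."""
    return (removed_males - p2 * removed_total) / (p1 - p2)

# Simulated truth: N1 = 1000 animals, 600 males (p1 = 0.6).
# Harvest removes 200 males and 50 females (250 total).
p2 = (600 - 200) / (1000 - 250)  # post-harvest male proportion
n1 = cir_abundance(p1=0.6, p2=p2, removed_males=200, removed_total=250)
print(round(n1))  # 1000
```

With perfectly observed proportions the estimator recovers the true abundance exactly; the abstract's point is that biased observability of age/sex classes distorts p1 and p2 and therefore the estimates.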

  8. A New Geological Slip Rate Estimate for the Calico Fault, Eastern California: Implications for Geodetic Versus Geologic Rate Estimates in the Eastern California Shear Zone

    NASA Astrophysics Data System (ADS)

    Wetmore, P. H.; Xie, S.; Gallant, E.; Owen, L. A.; Dixon, T. H.

    2017-12-01

Fault slip rate is fundamental to accurate seismic hazard assessment. In the Mojave Desert section of the Eastern California Shear Zone previous studies have suggested a discrepancy between short-term geodetic and long-term geologic slip rate estimates. Understanding the origin of this discrepancy could lead to better understanding of stress evolution, and improve earthquake hazard estimates in general. We measured offsets in alluvial fans along the Calico fault near Newberry Springs, California, and used exposure age dating based on the cosmogenic nuclide 10Be to date the offset landforms. We derive a mean slip rate of 3.6 mm/yr, representing an average over the last few hundred thousand years, significantly faster than previous estimates. Considering numerous faults in the Mojave Desert and limited geologic slip rate estimates, it is premature to claim a geologic versus geodetic "discrepancy" for the ECSZ. More slip rate data, from all faults within the ECSZ, are needed to provide a statistically meaningful assessment of the geologic rates for each of the faults comprising the ECSZ.
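The arithmetic behind a geologic slip rate is simply landform offset divided by landform age. The 3.6 mm/yr above is the study's mean; the offset and exposure age below are hypothetical values chosen only to show the calculation.

```python
# Geologic slip rate from a displaced landform: offset / age.
# Offset and 10Be exposure age below are hypothetical.

def slip_rate_mm_per_yr(offset_m, age_yr):
    return offset_m * 1000.0 / age_yr

rate = slip_rate_mm_per_yr(offset_m=720.0,   # offset of an alluvial fan
                           age_yr=200_000)   # cosmogenic exposure age
print(rate)  # 3.6
```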

  9. Generalizability of Evidence-Based Assessment Recommendations for Pediatric Bipolar Disorder

    PubMed Central

    Jenkins, Melissa M.; Youngstrom, Eric A.; Youngstrom, Jennifer Kogos; Feeny, Norah C.; Findling, Robert L.

    2013-01-01

Bipolar disorder is frequently clinically diagnosed in youths who do not actually satisfy DSM-IV criteria, yet cases that would satisfy full DSM-IV criteria are often undetected clinically. Evidence-based assessment methods that incorporate Bayesian reasoning have demonstrated improved diagnostic accuracy and consistency; however, their clinical utility is largely unexplored. The present study examines the effectiveness of promising evidence-based decision making compared to the clinical gold standard. Participants were 562 youths, ages 5-17 and predominantly African American, drawn from a community mental health clinic. Research diagnoses combined semi-structured interviews with youths’ psychiatric, developmental, and family mental health histories. Independent Bayesian estimates, relying on published risk estimates from other samples, discriminated bipolar diagnoses, Area Under Curve=.75, p<.00005. The Bayesian estimates and clinical confidence ratings correlated at rs=.30. Agreement about an evidence-based assessment intervention “threshold model” (wait/assess/treat) had kappa=.24, p<.05. No potential moderators of agreement between the Bayesian estimates and confidence ratings, including type of bipolar illness, were significant. Bayesian risk estimates were highly correlated with logistic regression estimates using optimal sample weights, r=.81, p<.0005. Clinical and Bayesian approaches agree in terms of overall concordance and deciding the next clinical action, even when Bayesian predictions are based on published estimates from clinically and demographically different samples. Evidence-based assessment methods may be useful in settings that cannot routinely employ gold-standard assessments, and they may help decrease rates of overdiagnosis while promoting earlier identification of true cases. PMID:22004538
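The Bayesian updating at the core of such evidence-based assessment multiplies prior odds by a diagnostic likelihood ratio to get posterior odds. The base rate and likelihood ratio below are hypothetical, not figures from the study.

```python
# Bayesian updating of a diagnostic probability:
#   posterior odds = prior odds * diagnostic likelihood ratio.
# Base rate and likelihood ratio are hypothetical.

def posterior_probability(base_rate, likelihood_ratio):
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

p = posterior_probability(base_rate=0.05, likelihood_ratio=7.0)
print(round(p, 3))  # 0.269
```

A positive screen with a likelihood ratio of 7 raises a 5% base rate to roughly a 27% posterior probability, which is the kind of shift that moves a case across a wait/assess/treat threshold.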

  10. Estimating an exchange rate between the EQ-5D-3L and ASCOT.

    PubMed

    Stevens, Katherine; Brazier, John; Rowen, Donna

    2018-06-01

The aim was to estimate an exchange rate between EQ-5D-3L and the Adult Social Care Outcome Tool (ASCOT) using preference-based mapping via common time trade-off (TTO) valuations. EQ-5D and ASCOT are useful for examining cost-effectiveness within the health and social care sectors, respectively, but there is a policy need to understand overall benefits and compare across sectors to assess relative value for money. Standard statistical mapping is unsuitable since it relies on conceptual overlap of the measures, but EQ-5D and ASCOT have different conceptualisations of quality of life. We use a preference-based mapping approach to estimate the exchange rate using common TTO valuations for both measures. A sample of health states from each measure was valued using TTO by 200 members of the UK adult general population. Regression analyses are used to generate separate equations between EQ-5D-3L and ASCOT values, using their original value sets and the TTO values elicited here. These are solved as simultaneous equations to estimate the relationship between EQ-5D-3L and ASCOT. The relationship for moving from ASCOT to EQ-5D-3L is a linear transformation with an intercept of -0.0488 and gradient of 0.978. This enables QALY gains generated by ASCOT and EQ-5D to be compared across different interventions. This paper estimated an exchange rate between ASCOT and EQ-5D-3L using a preference-based mapping approach that does not compromise the descriptive systems of the two measures. This contributes to the development of preference-based mapping through the use of TTO as the common metric used to estimate the exchange rate between measures.
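The reported exchange rate is just a linear transformation; with the intercept and gradient quoted in the abstract it can be applied directly.

```python
# Moving from ASCOT to EQ-5D-3L values via the linear transformation
# reported above: intercept -0.0488, gradient 0.978.

def ascot_to_eq5d(ascot_value):
    return -0.0488 + 0.978 * ascot_value

eq5d = ascot_to_eq5d(1.0)   # full ASCOT health
print(round(eq5d, 4))       # 0.9292
```

Full health on ASCOT (a value of 1.0) maps to 0.9292 on the EQ-5D-3L scale, so QALY gains measured on either instrument can be expressed on a common scale before comparison.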

  11. Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications

    DOEpatents

    Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI

    2012-05-29

    A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
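The claim above chains several monitored quantities into a regeneration decision. A minimal sketch of that decision loop follows, with the patent's "determined equation" reduced to a hypothetical lookup and every parameter, threshold, and decay rate invented for illustration.

```python
# Hedged sketch of the monitoring loop: select an efficiency-decay model
# from temperature and fuel dosing rate, project conversion efficiency
# forward from its initial value, and trigger regeneration when it falls
# below a threshold. All numbers are invented, not from the patent.

def select_decay_per_h(temp_c, dosing_rate_g_per_h):
    # Placeholder for the patent's "determined equation" selection step.
    return 0.02 if temp_c < 300 else 0.01

def needs_regeneration(eff0, temp_c, dosing_rate_g_per_h, hours,
                       threshold=0.6):
    decay = select_decay_per_h(temp_c, dosing_rate_g_per_h)
    projected_eff = eff0 - decay * hours
    return projected_eff < threshold

print(needs_regeneration(eff0=0.9, temp_c=250, dosing_rate_g_per_h=1.0,
                         hours=20))  # True
```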

  12. Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects.

    PubMed

    Sim, Soo Young; Joo, Kwang Min; Kim, Han Byul; Jang, Seungjin; Kim, Beomoh; Hong, Seungbum; Kim, Sungwan; Park, Kwang Suk

    2017-03-01

Core body temperature is a reliable marker for circadian rhythm. As characteristics of the circadian body temperature rhythm change during diverse health problems, such as sleep disorders and depression, body temperature monitoring is often used in clinical diagnosis and treatment. However, the use of current thermometers in circadian rhythm monitoring is impractical in daily life. As heart rate is a physiological signal relevant to thermoregulation, we investigated the feasibility of heart rate monitoring in estimating circadian body temperature rhythm. Various heart rate parameters and core body temperature were simultaneously acquired in 21 healthy, ambulatory subjects during their routine life. The performance of regression analysis and the extended Kalman filter on daily body temperature and circadian indicator (mesor, amplitude, and acrophase) estimation was evaluated. For daily body temperature estimation, mean R-R interval (RRI), mean heart rate (MHR), or normalized MHR provided a mean root mean square error of approximately 0.40 °C in both techniques. For mesor estimation, regression analysis showed better performance than the extended Kalman filter. However, the extended Kalman filter, combined with RRI or MHR, provided better accuracy in terms of amplitude and acrophase estimation. We suggest that this noninvasive and convenient method for estimating the circadian body temperature rhythm could reduce discomfort during body temperature monitoring in daily life. This, in turn, could facilitate more clinical studies based on circadian body temperature rhythm.
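The circadian indicators named above (mesor, amplitude, acrophase) are conventionally obtained by cosinor analysis, fitting a 24-h cosine to the series. The sketch below assumes hourly samples over whole days, in which case the fit has a closed form; the temperature series is synthetic, not the study's data.

```python
import math

# Closed-form single-component cosinor fit for hourly samples spanning
# whole 24-h periods: y(t) = mesor + amplitude * cos(w*t - w*acrophase).
# Synthetic data, for illustration only.

def cosinor_fit(y, period_h=24.0):
    n = len(y)
    w = 2 * math.pi / period_h
    mesor = sum(y) / n
    a = 2 / n * sum(v * math.cos(w * t) for t, v in enumerate(y))
    b = 2 / n * sum(v * math.sin(w * t) for t, v in enumerate(y))
    amplitude = math.hypot(a, b)
    acrophase_h = math.atan2(b, a) / w % period_h  # peak time, hours
    return mesor, amplitude, acrophase_h

# Two days of synthetic hourly core temperature peaking at 16:00.
w = 2 * math.pi / 24
temps = [36.8 + 0.35 * math.cos(w * (t - 16)) for t in range(48)]
m, amp, phi = cosinor_fit(temps)
print(round(m, 2), round(amp, 2), round(phi, 1))  # 36.8 0.35 16.0
```

The closed form relies on the cosine and sine regressors being orthogonal over full periods; with irregular sampling a general least-squares fit (or, as in the study, an extended Kalman filter) is needed instead.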

  13. Individual differences in rate of encoding predict estimates of visual short-term memory capacity (K).

    PubMed

    Jannati, Ali; McDonald, John J; Di Lollo, Vincent

    2015-06-01

The capacity of visual short-term memory (VSTM) is commonly estimated by K scores obtained with a change-detection task. Contrary to common belief, K may be influenced not only by capacity but also by the rate at which stimuli are encoded into VSTM. Experiment 1 showed that, contrary to earlier conclusions, estimates of VSTM capacity obtained with a change-detection task are constrained by temporal limitations. In Experiment 2, we used change-detection and backward-masking tasks to obtain separate within-subject estimates of K and of rate of encoding, respectively. A median split based on rate of encoding revealed significantly higher K estimates for fast encoders. Moreover, a significant correlation was found between K and the estimated rate of encoding. The present findings raise the prospect that the reported relationships between K and such cognitive concepts as fluid intelligence may be mediated not only by VSTM capacity but also by rate of encoding. (c) 2015 APA, all rights reserved.
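The K score referenced above is conventionally computed with Cowan's formula for single-probe change detection; the hit and false-alarm rates below are hypothetical.

```python
# Cowan's K for single-probe change detection:
#   K = set size * (hit rate - false alarm rate).
# Example rates are hypothetical.

def cowan_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate)

k = cowan_k(set_size=4, hit_rate=0.85, false_alarm_rate=0.10)
print(round(k, 2))  # 3.0
```

The article's point is that this score also reflects how quickly items are encoded: a slow encoder probed before encoding completes will show a depressed hit rate, and hence a lower K, even with intact capacity.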

  14. Calibrating a population-based job-exposure matrix using inspection measurements to estimate historical occupational exposure to lead for a population-based cohort in Shanghai, China

    PubMed Central

    Koh, Dong-Hee; Bhatti, Parveen; Coble, Joseph B.; Stewart, Patricia A; Lu, Wei; Shu, Xiao-Ou; Ji, Bu-Tian; Xue, Shouzheng; Locke, Sarah J.; Portengen, Lutzen; Yang, Gong; Chow, Wong-Ho; Gao, Yu-Tang; Rothman, Nathaniel; Vermeulen, Roel; Friesen, Melissa C.

    2012-01-01

The epidemiologic evidence for the carcinogenicity of lead is inconsistent and requires improved exposure assessment to estimate risk. We evaluated historical occupational lead exposure for a population-based cohort of women (n=74,942) by calibrating a job-exposure matrix (JEM) with lead fume (n=20,084) and lead dust (n=5,383) measurements collected over four decades in Shanghai, China. Using mixed-effects models, we calibrated the JEM intensity ratings to the measurements, with fixed-effects terms for year and JEM rating. We developed job/industry-specific estimates from the random-effects terms for job and industry. The model estimates were applied to subjects’ jobs when the JEM probability rating was high for either job or industry; remaining jobs were considered unexposed. The models predicted that exposure increased monotonically with JEM intensity rating and decreased 20–50-fold over time. The cumulative calibrated JEM estimates and job/industry-specific estimates were highly correlated (Pearson correlation=0.79–0.84). Overall, 5% of the person-years and 8% of the women were exposed to lead fume; 2% of the person-years and 4% of the women were exposed to lead dust. The most common lead-exposed jobs were in electronic equipment manufacturing. These historical lead estimates should enhance our ability to detect associations between lead exposure and cancer risk in future epidemiologic analyses. PMID:22910004

  15. Estimating Divergence Dates and Substitution Rates in the Drosophila Phylogeny

    PubMed Central

    Obbard, Darren J.; Maclennan, John; Kim, Kang-Wook; Rambaut, Andrew; O’Grady, Patrick M.; Jiggins, Francis M.

    2012-01-01

    An absolute timescale for evolution is essential if we are to associate evolutionary phenomena, such as adaptation or speciation, with potential causes, such as geological activity or climatic change. Timescales in most phylogenetic studies use geologically dated fossils or phylogeographic events as calibration points, but more recently, it has also become possible to use experimentally derived estimates of the mutation rate as a proxy for substitution rates. The large radiation of drosophilid taxa endemic to the Hawaiian islands has provided multiple calibration points for the Drosophila phylogeny, thanks to the "conveyor belt" process by which this archipelago forms and is colonized by species. However, published date estimates for key nodes in the Drosophila phylogeny vary widely, and many are based on simplistic models of colonization and coalescence or on estimates of island age that are not current. In this study, we use new sequence data from seven species of Hawaiian Drosophila to examine a range of explicit coalescent models and estimate substitution rates. We use these rates, along with a published experimentally determined mutation rate, to date key events in drosophilid evolution. Surprisingly, our estimate for the date for the most recent common ancestor of the genus Drosophila based on mutation rate (25–40 Ma) is closer to being compatible with independent fossil-derived dates (20–50 Ma) than are most of the Hawaiian-calibration models and also has smaller uncertainty. We find that Hawaiian-calibrated dates are extremely sensitive to model choice and give rise to point estimates that range between 26 and 192 Ma, depending on the details of the model. Potential problems with the Hawaiian calibration may arise from systematic variation in the molecular clock due to the long generation time of Hawaiian Drosophila compared with other Drosophila and/or uncertainty in linking island formation dates with colonization dates. 
As either source of error will bias estimates of divergence time, we suggest mutation rate estimates be used until better models are available. PMID:22683811
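The rate-to-date conversion underlying mutation-rate calibration is T = d / (2r): pairwise divergence divided by twice the per-lineage substitution rate. The divergence, per-generation rate, and generation time below are assumed values for illustration, not the paper's estimates.

```python
# Date a divergence from a per-generation mutation rate:
#   T (years) = d / (2 * rate_per_year),
#   rate_per_year = rate_per_generation * generations_per_year.
# All inputs are assumed values, not the paper's estimates.

def divergence_time_yr(divergence, rate_per_gen, generations_per_yr):
    rate_per_yr = rate_per_gen * generations_per_yr
    return divergence / (2 * rate_per_yr)

t = divergence_time_yr(divergence=0.10,        # substitutions per site
                       rate_per_gen=3.5e-9,    # assumed mutation rate
                       generations_per_yr=10)  # assumed generation time
print(round(t / 1e6, 2), "Myr")  # 1.43 Myr
```

The factor of 2 counts substitutions accumulating along both lineages since the split; the sensitivity of T to the assumed generation time is exactly the issue the abstract raises for long-generation Hawaiian Drosophila.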

  16. Non-contact estimation of heart rate and oxygen saturation using ambient light.

    PubMed

    Bal, Ufuk

    2015-01-01

We propose a robust method for automated computation of heart rate (HR) from digital color video recordings of the human face. In order to extract photoplethysmographic signals, two orthogonal vectors of RGB color space are used. We used a dual-tree complex wavelet transform-based denoising algorithm to reduce artifacts (e.g. artificial lighting, movement, etc.). Most of the previous work on skin color based HR estimation performed experiments with healthy volunteers and focused on solving motion artifacts. In addition to healthy volunteers, we performed experiments with child patients in pediatric intensive care units. In order to investigate the possible factors that affect non-contact HR monitoring in a clinical environment, we studied the relation between hemoglobin levels and HR estimation errors. Low hemoglobin causes underestimation of HR. Nevertheless, we conclude that our method can provide acceptable accuracy to estimate mean HR of patients in a clinical environment, where the measurements can be performed remotely. In addition to mean heart rate estimation, we performed experiments to estimate oxygen saturation. We observed strong correlations between our SpO2 estimations and the commercial oximeter readings.

  18. Preliminary data suggest rates of male military sexual trauma may be higher than previously reported.

    PubMed

    Sheppard, Sean C; Hickling, Edward J; Earleywine, Mitch; Hoyt, Tim; Russo, Amanda R; Donati, Matthew R; Kip, Kevin E

    2015-11-01

Stigma associated with disclosing military sexual trauma (MST) makes estimating an accurate base rate difficult. Anonymous assessment may help alleviate stigma. Although anonymous research has found higher rates of male MST, no study has evaluated whether providing anonymity sufficiently mitigates the impact of stigma on accurate reporting. This study used the unmatched count technique (UCT), a form of randomized response technique, to gain information about the accuracy of base rate estimates of male MST derived via anonymous assessment of Operation Enduring Freedom (OEF)/Operation Iraqi Freedom (OIF) combat veterans. A cross-sectional convenience sample of 180 OEF/OIF male combat veterans, recruited via online websites for military populations, provided data about history of MST via traditional anonymous self-report and the UCT. The UCT revealed a rate of male MST more than 15 times higher than the rate derived via traditional anonymous assessment (17.2% vs. 1.1%). These data suggest that anonymity does not adequately mitigate the impact of stigma on disclosure of male MST. Results, though preliminary, suggest that published rates of male MST may substantially underestimate the true rate of this problem. The UCT has significant potential to improve base rate estimation of sensitive behaviors in the military. (c) 2015 APA, all rights reserved.
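The unmatched count technique estimates prevalence without any respondent ever disclosing the sensitive item: each person reports only how many items on a list apply to them, and the treatment list adds the sensitive item to the control list, so the prevalence estimate is the difference in mean counts. The counts below are simulated, not the study's data.

```python
# Unmatched count technique (UCT) estimator:
#   prevalence = mean(treatment-list counts) - mean(control-list counts).
# Counts below are simulated.

def uct_prevalence(control_counts, treatment_counts):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

control = [2, 1, 3, 2, 2, 1, 3, 2]        # 4 neutral items
treatment = [2, 2, 3, 2, 3, 1, 3, 2]      # same 4 items + sensitive item
print(uct_prevalence(control, treatment))  # 0.25
```

Because no individual count reveals which items apply, the technique removes the disclosure decision entirely, which is why it can surface higher rates than even anonymous direct questioning.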

  19. The discrepancy between emotional vs. rational estimates of body size, actual size, and ideal body ratings: theoretical and clinical implications.

    PubMed

    Thompson, J K; Dolce, J J

    1989-05-01

Thirty-two asymptomatic college females were assessed on multiple aspects of body image. Subjects' estimation of the size of three body sites (waist, hips, thighs) was affected by instructional protocol. Emotional instructions, asking subjects to rate how they "felt" about their body, elicited size estimates that were larger than actual and ideal size measures. Size ratings based on rational instructions were no different from actual sizes, but were larger than ideal ratings. There were no differences between actual and ideal sizes. The results are discussed with regard to methodological issues involved in body image research. In addition, a working hypothesis that differentiates affective/emotional from cognitive/rational aspects of body size estimation is offered to complement current theories of body image. Implications of the findings for the understanding of body image and its relationship to eating disorders are discussed.

  20. Specificity control for read alignments using an artificial reference genome-guided false discovery rate.

    PubMed

    Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y

    2014-01-01

Accurate estimation, comparison, and evaluation of read mapping error rates are crucial steps in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.

  1. Combining a Job-Exposure Matrix with Exposure Measurements to Assess Occupational Exposure to Benzene in a Population Cohort in Shanghai, China

    PubMed Central

    Friesen, Melissa C.; Coble, Joseph B.; Lu, Wei; Shu, Xiao-Ou; Ji, Bu-Tian; Xue, Shouzheng; Portengen, Lutzen; Chow, Wong-Ho; Gao, Yu-Tang; Yang, Gong; Rothman, Nathaniel; Vermeulen, Roel

    2012-01-01

    Background: Generic job-exposure matrices (JEMs) are often used in population-based epidemiologic studies to assess occupational risk factors when only the job and industry information of each subject is available. JEM ratings are often based on professional judgment, are usually ordinal or semi-quantitative, and often do not account for changes in exposure over time. We present an empirical Bayesian framework that combines ordinal subjective JEM ratings with benzene measurements. Our aim was to better discriminate between job, industry, and time differences in exposure levels compared to using a JEM alone. Methods: We combined 63 221 short-term area air measurements of benzene exposure (1954–2000) collected during routine health and safety inspections in Shanghai, China, with independently developed JEM intensity ratings for each job and industry using a mixed-effects model. The fixed-effects terms included the JEM intensity ratings for job and industry (both ordinal, 0–3) and a time trend that we incorporated as a b-spline. The random-effects terms included job (n = 33) and industry nested within job (n = 399). We predicted the benzene concentration in two ways: (i) a calibrated JEM estimate was calculated using the fixed-effects model parameters for calendar year and JEM intensity ratings; (ii) a job-/industry-specific estimate was calculated using the fixed-effects model parameters and the best linear unbiased predictors from the random effects for job and industry using an empirical Bayes estimation procedure. Finally, we applied the predicted benzene exposures to a prospective population-based cohort of women in Shanghai, China (n = 74 942). Results: Exposure levels were 13 times higher in 1965 than in 2000 and declined at a rate that varied from 4 to 15% per year from 1965 to 1985, followed by a small peak in the mid-1990s. 
The job-/industry-specific estimates had greater differences between exposure levels than the calibrated JEM estimates (97.5th percentile/2.5th percentile exposure level, BGR95B: 20.4 versus 3.0, respectively). The calibrated JEM and job-/industry-specific estimates were moderately correlated in any given year (Pearson correlation, rp = 0.58). We classified only those jobs and industries with a job or industry JEM exposure probability rating of 3 (>50% of workers exposed) as exposed. As a result, 14.8% of the subjects and 8.7% of the employed person-years in the study population were classified as benzene exposed. The cumulative exposure metrics based on the calibrated JEM and job-/industry-specific estimates were highly correlated (rp = 0.88). Conclusions: We provide a useful framework for combining quantitative exposure data with expert-based exposure ratings in population-based studies that maximized the information from both sources. Our framework calibrated the ratings to a concentration scale between ratings and across time and provided a mechanism to estimate exposure when a job/industry group reported by a subject was not represented in the exposure database. It also allowed the job/industry groups’ exposure levels to deviate from the pooled average for their respective JEM intensity ratings. PMID:21976309
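A simplified sketch of the calibration step may help: the study fit mixed-effects models with a b-spline time trend and random terms for job and industry nested within job, but the fixed-effects core — regressing log concentration on calendar year and the ordinal JEM intensity rating — can be shown with ordinary least squares on synthetic data. Everything below (data, coefficients, linear trend) is invented for illustration.

```python
import numpy as np

# Fixed-effects core of the JEM calibration: regress log concentration on
# calendar year and ordinal JEM intensity rating (0-3). The study used
# mixed-effects models with random job/industry terms and a b-spline
# trend; this sketch uses synthetic data and a linear trend.

rng = np.random.default_rng(0)
n = 500
year = rng.integers(1954, 2001, n).astype(float)
rating = rng.integers(0, 4, n).astype(float)        # ordinal JEM rating
true_year_slope, true_rating_step = -0.05, 0.8      # invented truth
log_c = (100 + true_year_slope * year + true_rating_step * rating
         + rng.normal(0, 0.3, n))                   # noisy log levels

X = np.column_stack([np.ones(n), year, rating])
beta, *_ = np.linalg.lstsq(X, log_c, rcond=None)

def calibrated_log_conc(yr, jem_rating):
    """Calibrated JEM estimate on the log-concentration scale."""
    return beta[0] + beta[1] * yr + beta[2] * jem_rating

print(beta[1], beta[2])  # close to -0.05 and 0.8
```

The calibration thus converts ordinal expert ratings onto a measured concentration scale while letting exposure decline over calendar time, which is what allows the "13 times higher in 1965 than in 2000" statement above to be quantified.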

  2. Are Methods for Estimating Primary Production and the Growth Rates of Phytoplankton Approaching Agreement?

    NASA Astrophysics Data System (ADS)

    Cullen, J. J.

    2016-02-01

    During the 1980s, estimates of primary productivity and the growth rates of phytoplankton in oligotrophic waters were controversial, in part because rates based on seasonal accumulations of oxygen in the shallow oxygen maximum were reported to be much higher than could be accounted for with measurements of photosynthesis based on incubations with C-14. Since then, much has changed: tested and standardized methods have been employed to collect comprehensive time-series observations of primary production and related oceanographic properties in oligotrophic waters of the North Pacific subtropical gyre and the Sargasso Sea; technical and theoretical advances have led to new tracer-based estimates of photosynthesis (e.g., oxygen/argon and triple isotopes of dissolved oxygen); and biogeochemical sensor systems on ocean gliders and profiling floats can describe with unprecedented resolution the dynamics of phytoplankton, oxygen and nitrate as driven by growth, loss processes including grazing, and vertical migration for nutrient acquisition. Meanwhile, the estimation of primary productivity, phytoplankton biomass and phytoplankton growth rates from remote sensing of ocean color has matured, complementing biogeochemical models that describe and predict these key properties of plankton dynamics. In a selective review focused on well-studied oligotrophic waters, I compare methods for estimating the primary productivity and growth rates of phytoplankton to see if they are converging on agreement, not only in the estimated rates, but also in the underlying assumptions, such as the ratio of gross- to net primary production — and how this relates to the measurement — and the ratio of chlorophyll to carbon in phytoplankton. Examples of agreement are encouraging, but some stark contrasts illustrate the need for improved mechanistic understanding of exactly what each method is measuring.

  3. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by a Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimation of the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators computed with the estimated nonlinear model and with its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and of the volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free-floating and managed-floating exchange rates).

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyldesley, Scott, E-mail: styldesl@bccancer.bc.c; Delaney, Geoff; Foroudi, Farshad

Purpose: Estimates of the need for radiotherapy (RT) using different methods (criterion-based benchmarking [CBB] and the Canadian [C-EBEST] and Australian [A-EBEST] epidemiologically based estimates) exist for various cancer sites. We compared these model estimates to actual RT rates for lung, breast, and prostate cancers in British Columbia (BC). Methods and Materials: All cases of lung, breast, and prostate cancers in BC from 1997 to 2004 and all patients receiving RT within 1 year (RT1Y) and within 5 years (RT5Y) of diagnosis were identified. The RT1Y and RT5Y proportions in health regions with a cancer center for the most recent year were then calculated. RT rates were compared with CBB and EBEST estimates of RT needs. Variation was assessed by time and region. Results: The RT1Y rates in regions with a cancer center for lung, breast, and prostate cancers were 51%, 58%, and 33%, compared with 45%, 57%, and 32% for C-EBEST and 41%, 61%, and 37% for CBB models. The RT5Y rates in regions with a cancer center for lung, breast, and prostate cancers were 59%, 61%, and 40%, compared with 61%, 66%, and 61% for C-EBEST and 75%, 83%, and 60% for A-EBEST models. The RT1Y rates increased for breast and prostate cancers. Conclusions: C-EBEST and CBB model estimates are closer to the actual RT rates than the A-EBEST estimates. Application of these model estimates by health care decision makers should be undertaken with an understanding of the methods used and the assumptions on which they were based.

  5. Predation rates by North Sea cod (Gadus morhua) - Predictions from models on gastric evacuation and bioenergetics

    USGS Publications Warehouse

    Hansson, S.; Rudstam, L. G.; Kitchell, J.F.; Hilden, M.; Johnson, B.L.; Peppard, P.E.

    1996-01-01

    We compared four different methods for estimating predation rates by North Sea cod (Gadus morhua). Three estimates, based on gastric evacuation rates, came from an ICES multispecies working group and the fourth from a bioenergetics model. The bioenergetics model was developed from a review of literature on cod physiology. The three gastric evacuation rate models produced very different prey consumption estimates for small (2 kg) fish. For most size and age classes, the bioenergetics model predicted food consumption rates intermediate to those predicted by the gastric evacuation models. Using the standard ICES model and the average population abundance and age structure for 1974-1989, annual prey consumption by the North Sea cod population (age greater than or equal to 1) was 840 kilotons. The other two evacuation rate models produced estimates of 1020 and 1640 kilotons, respectively. The bioenergetics model estimate was 1420 kilotons. The major differences between models were due to consumption rate estimates for younger age groups of cod. (C) 1996 International Council for the Exploration of the Sea

  6. Estimation of physical work load by statistical analysis of the heart rate in a conveyor-belt worker.

    PubMed

    Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J

    1994-12-01

    Physical work load was estimated in a female conveyor-belt worker in a bottling plant. Estimation was based on continuous measurement and on calculation of average heart rate values in three-minute and one-hour periods and during the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature, for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtraction of the resting heart rate and the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min, corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average performed mechanical work was 12.2 +/- 4.2 W, i.e., the mechanical efficiency (work performed as a share of energy expenditure) was 8.3 +/- 1.5%.
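    The subtraction logic described in this record can be sketched in a few lines. The heart rates and the calibration factor below are hypothetical stand-ins, not the worker's data; the per-beat factor would come from the per-subject regression through the resting point and the heart rate measured at 50 W.

```python
# Hedged sketch of the subtraction approach: the work component of the heart
# rate is the working HR minus the resting HR, and energy expenditure
# attributable to work is read off a per-subject calibration. The factor
# kj_per_beat (kJ/min per beat/min) is hypothetical.
def work_pulse(hr_work, hr_rest):
    """Work component of the heart rate, in beats/min."""
    return hr_work - hr_rest

def work_energy_expenditure(hr_work, hr_rest, kj_per_beat):
    """Energy expenditure attributable to the work component, in kJ/min."""
    return work_pulse(hr_work, hr_rest) * kj_per_beat

print(work_pulse(102.0, 78.0))                   # 24.0 beats/min
print(work_energy_expenditure(102.0, 78.0, 0.4)) # 9.6 kJ/min
```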

  7. Estimating site occupancy rates when detection probabilities are less than one

    USGS Publications Warehouse

    MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.

    2002-01-01

    Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1. We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
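    The likelihood behind such occupancy models can be sketched as a zero-inflated binomial: a site with no detections is either occupied-but-missed or truly absent. The data and the crude grid-search fit below are illustrative, not the Frogwatch USA analysis.

```python
import itertools, math, random

# psi = occupancy rate, p = per-survey detection probability; each of the
# sites is surveyed K times. The binomial coefficient is omitted from the
# likelihood because it does not depend on psi or p.
def neg_log_lik(psi, p, detections, K):
    nll = 0.0
    for d in detections:
        if d > 0:                      # occupied and detected d times
            lik = psi * p**d * (1 - p)**(K - d)
        else:                          # occupied-but-missed, or truly absent
            lik = psi * (1 - p)**K + (1 - psi)
        nll -= math.log(lik)
    return nll

def fit(detections, K, steps=50):
    grid = [i / steps for i in range(1, steps)]
    return min(itertools.product(grid, grid),
               key=lambda t: neg_log_lik(t[0], t[1], detections, K))

random.seed(1)
K, true_psi, true_p = 5, 0.7, 0.4
dets = [sum(random.random() < true_p for _ in range(K))
        if random.random() < true_psi else 0
        for _ in range(100)]
psi_hat, p_hat = fit(dets, K)
print(psi_hat, p_hat)  # estimates near the true 0.7 and 0.4
```

Note that the naive estimate (the proportion of sites with at least one detection) underestimates psi whenever p < 1, which is exactly the bias the record describes for American toads.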

  8. Usefulness of cancer-free survival in estimating the lifetime attributable risk of cancer incidence from radiation exposure.

    PubMed

    Seo, Songwon; Lee, Dal Nim; Jin, Young Woo; Lee, Won Jin; Park, Sunhoo

    2018-05-11

    Risk projection models estimating the lifetime cancer risk from radiation exposure are generally based on exposure dose, age at exposure, attained age, gender and study-population-specific factors such as baseline cancer risks and survival rates. Because such models have mostly been based on the Life Span Study cohort of Japanese atomic bomb survivors, the baseline risks and survival rates in the target population should be considered when applying the cancer risk. The survival function used in the risk projection models that are commonly used in the radiological protection field to estimate the cancer risk from medical or occupational exposure is based on all-cause mortality. Thus, it may not be accurate for estimating the lifetime risk of high-incidence but not life-threatening cancer with a long-term survival rate. Herein, we present the lifetime attributable risk (LAR) estimates of all solid cancers except thyroid cancer, thyroid cancer, and leukemia except chronic lymphocytic leukemia in South Korea for lifetime exposure to 1 mGy per year using the cancer-free survival function, as recently applied in the Fukushima health risk assessment by the World Health Organization. Compared with the estimates of LARs using an overall survival function solely based on all-cause mortality, the LARs of all solid cancers except thyroid cancer, and thyroid cancer evaluated using the cancer-free survival function, decreased by approximately 13% and 1% for men and 9% and 5% for women, respectively. The LAR of leukemia except chronic lymphocytic leukemia barely changed for either gender owing to the small absolute difference between its incidence and mortality. Given that many cancers have a high curative rate and low mortality rate, using a survival function solely based on all-cause mortality may cause an overestimation of the lifetime risk of cancer incidence. The lifetime fractional risk was robust against the choice of survival function.
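    A toy calculation can illustrate why the choice of survival function matters in a lifetime attributable risk (LAR) sum: each age band's excess incidence rate is weighted by the probability of reaching that age, and a cancer-free survival curve lies below an all-cause curve. All numbers below are invented for illustration, not Korean life tables or the paper's rates.

```python
# Each age band contributes (excess incidence rate) x (survival to that age).
# A cancer-free survival curve also removes prior cancer cases, so it is
# lower and yields a smaller LAR for high-incidence cancers.
def lar(excess_incidence, survival):
    return sum(r * s for r, s in zip(excess_incidence, survival))

excess = [0.001, 0.002, 0.003, 0.004, 0.003]    # hypothetical excess rates by age band
s_all_cause = [0.98, 0.95, 0.88, 0.75, 0.50]    # survival from all-cause mortality only
s_cancer_free = [0.97, 0.93, 0.84, 0.69, 0.43]  # also removes prior cancer cases

print(lar(excess, s_all_cause))    # larger: overstates the incidence risk
print(lar(excess, s_cancer_free))  # smaller, as in the comparison above
```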

  9. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  10. Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.

    2015-01-01

    Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…
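    The sensitivity of OLS slopes to extreme values can be shown with the Theil-Sen estimator (the median of all pairwise slopes), one common robust alternative; the estimators studied in the paper may differ. The weekly words-correct-per-minute scores are invented, with one extreme probe at week 4.

```python
import statistics

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def theil_sen_slope(xs, ys):
    # Median of the slopes of all point pairs: one extreme probe affects
    # only a minority of pairs, so the median is barely moved.
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))]
    return statistics.median(slopes)

weeks = list(range(10))
wcpm = [20 + 2 * w for w in weeks]   # true improvement: 2 words/week
wcpm[4] = 80                         # a single extreme probe score

print(ols_slope(weeks, wcpm))        # pulled away from 2.0 by the outlier
print(theil_sen_slope(weeks, wcpm))  # 2.0
```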

  11. Determination of Time Dependent Virus Inactivation Rates

    NASA Astrophysics Data System (ADS)

    Chrysikopoulos, C. V.; Vogler, E. T.

    2003-12-01

    A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
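    The bootstrap-percentile step of such a procedure can be sketched as follows: resample (time, log concentration) pairs, refit a slope each time, and take percentiles of the resampled slopes as the confidence interval. The decay data are synthetic, not the poliovirus data set, and the universal-kriging drift estimation is not reproduced here.

```python
import random

# Least-squares slope of log-normalized concentration vs time; the
# inactivation rate coefficient is the negative of this slope.
def slope(pts):
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    mc = sum(c for _, c in pts) / n
    return (sum((t - mt) * (c - mc) for t, c in pts)
            / sum((t - mt) ** 2 for t, _ in pts))

random.seed(7)
data = [(t, -0.3 * t + random.gauss(0, 0.05)) for t in range(10)]  # rate ~0.3/day

# 2000 bootstrap resamples of the (time, log C) pairs, with replacement.
boot = sorted(slope([random.choice(data) for _ in data]) for _ in range(2000))
lo, hi = boot[50], boot[1949]    # ~2.5th and ~97.5th percentiles
print(slope(data), (lo, hi))
```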

  12. Rates of microbial metabolism in deep coastal plain aquifers

    USGS Publications Warehouse

    Chapelle, F.H.; Lovley, D.R.

    1990-01-01

    Rates of microbial metabolism in deep anaerobic aquifers of the Atlantic coastal plain of South Carolina were investigated by both microbiological and geochemical techniques. Rates of [2-¹⁴C]acetate and [U-¹⁴C]glucose oxidation as well as geochemical evidence indicated that metabolic rates were faster in the sandy sediments composing the aquifers than in the clayey sediments of the confining layers. In the sandy aquifer sediments, estimates of the rates of CO2 production (millimoles of CO2 per liter per year) based on the oxidation of [2-¹⁴C]acetate were 9.4 × 10⁻³ to 2.4 × 10⁻¹ for the Black Creek aquifer, 1.1 × 10⁻² for the Middendorf aquifer, and <7 × 10⁻⁵ for the Cape Fear aquifer. These estimates were at least 2 orders of magnitude lower than previously published estimates that were based on the accumulation of CO2 in laboratory incubations of similar deep subsurface sediments. In contrast, geochemical modeling of groundwater chemistry changes along aquifer flowpaths gave rate estimates that ranged from 10⁻⁴ to 10⁻⁶ mmol of CO2 per liter per year. The age of these sediments (ca. 80 million years) and their organic carbon content suggest that average rates of CO2 production could have been no more than 10⁻⁴ mmol per liter per year. Thus, laboratory incubations may greatly overestimate the in situ rates of microbial metabolism in deep subsurface environments. This has important implications for the use of laboratory incubations in attempts to estimate biorestoration capacities of deep aquifers. The rate estimates from geochemical modeling indicate that deep aquifers are among the most oligotrophic aquatic environments in which there is ongoing microbial metabolism.

  13. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    NASA Astrophysics Data System (ADS)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.

  14. Savannah River Site generic data base development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanton, C.H.; Eide, S.A.

    This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.
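    One conventional way to summarize several source estimates as a lognormal (mean, error factor) pair can be sketched as below, where the error factor is taken as exp(1.645·σ), the ratio of the 95th percentile to the median. The report's actual aggregation scheme may differ, and the rates are invented.

```python
import math, statistics

# Fit a lognormal to the log of the source rates and report the lognormal
# mean together with the error factor EF = exp(1.645 * sigma).
def lognormal_summary(rates):
    logs = [math.log(r) for r in rates]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    mean = math.exp(mu + sigma**2 / 2)   # mean of the lognormal
    ef = math.exp(1.645 * sigma)         # 95th percentile / median
    return mean, ef

# Hypothetical per-hour fail-to-start rates from different data sources:
sources = [1e-6, 3e-6, 2e-6, 8e-6, 1.5e-6]
mean, ef = lognormal_summary(sources)
print(f"mean = {mean:.2e}/h, EF = {ef:.1f}")
```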

  15. Inferring invasive species abundance using removal data from management actions

    USGS Publications Warehouse

    Davis, Amy J.; Hooten, Mevin B.; Miller, Ryan S.; Farnsworth, Matthew L.; Lewis, Jesse S.; Moxcey, Michael; Pepin, Kim M.

    2016-01-01

    Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480–19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to accurately estimate abundance was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates.
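    The core removal-model idea can be illustrated with a much simpler constant-probability variant (this is a sketch only, not the authors' Bayesian hierarchical model with varying effort): each pass removes a binomial fraction of the remaining animals, and abundance N and per-pass removal probability p are fit jointly. The catch counts are invented.

```python
import math

# Product-binomial log likelihood of a fixed catch sequence given N and p:
# pass i removes Binomial(remaining, p) animals.
def log_lik(N, p, catches):
    ll, remaining = 0.0, N
    for c in catches:
        if c > remaining:
            return -math.inf
        ll += (math.lgamma(remaining + 1) - math.lgamma(c + 1)
               - math.lgamma(remaining - c + 1)
               + c * math.log(p) + (remaining - c) * math.log(1 - p))
        remaining -= c
    return ll

catches = [60, 36, 22, 13]   # four removal passes (hypothetical counts)

# Crude maximum likelihood by grid search over N and p.
N_hat, p_hat = max(((N, p / 100)
                    for N in range(sum(catches), 400)
                    for p in range(1, 100)),
                   key=lambda t: log_lik(t[0], t[1], catches))
print(N_hat, p_hat)  # abundance and per-pass removal probability
```

The declining catches carry the information: a geometric decline in catch per pass pins down p, and the cumulative catch plus p pins down N.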

  16. Evaluation of the metabolic rate based on the recording of the heart rate.

    PubMed

    Malchaire, Jacques; d'Ambrosio Alfano, Francesca Romana; Palella, Boris Igor

    2017-06-08

    The assessment of harsh working conditions requires a correct evaluation of the metabolic rate. This paper revises the basis described in the ISO 8996 standard for the evaluation of the metabolic rate at a work station from the recording of the heart rate of a worker during a representative period of time. From a review of the literature, formulas different from those given in the standard are proposed to estimate the maximum working capacity, the maximum heart rate, the heart rate and the metabolic rate at rest and the relation (HR vs. M) at the basis of the estimation of the equivalent metabolic rate, as a function of the age, height and weight of the person. A Monte Carlo simulation is used to determine, from the approximations of these parameters and formulas, the imprecision of the estimated equivalent metabolic rate. The results show that the standard deviation of this estimate varies from 10 to 15%.
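    The Monte Carlo step can be sketched as simple uncertainty propagation: draw each input from a distribution reflecting its imprecision and look at the spread of the resulting metabolic rate. The linear HR-vs-M relation and all spreads below are illustrative stand-ins, not the ISO 8996 formulas or the paper's values.

```python
import random, statistics

# Equivalent metabolic rate from an assumed linear relation HR = HR0 + slope*M.
def metabolic_rate(hr_work, hr_rest, slope):
    """Metabolic rate in W/m2; slope is in beats/min per W/m2."""
    return (hr_work - hr_rest) / slope

random.seed(3)
draws = [metabolic_rate(random.gauss(110, 3),      # recorded working HR
                        random.gauss(70, 4),       # resting HR
                        random.gauss(0.25, 0.02))  # calibration slope
         for _ in range(10_000)]
m = statistics.mean(draws)
cv = statistics.stdev(draws) / m
print(f"M = {m:.0f} W/m2, CV = {cv:.0%}")
```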

  17. Evaluation of the metabolic rate based on the recording of the heart rate

    PubMed Central

    MALCHAIRE, Jacques; ALFANO, Francesca Romana d’AMBROSIO; PALELLA, Boris Igor

    2017-01-01

    The assessment of harsh working conditions requires a correct evaluation of the metabolic rate. This paper revises the basis described in the ISO 8996 standard for the evaluation of the metabolic rate at a work station from the recording of the heart rate of a worker during a representative period of time. From a review of the literature, formulas different from those given in the standard are proposed to estimate the maximum working capacity, the maximum heart rate, the heart rate and the metabolic rate at rest and the relation (HR vs. M) at the basis of the estimation of the equivalent metabolic rate, as a function of the age, height and weight of the person. A Monte Carlo simulation is used to determine, from the approximations of these parameters and formulas, the imprecision of the estimated equivalent metabolic rate. The results show that the standard deviation of this estimate varies from 10 to 15%. PMID:28250334

  18. Estimating wildland fire rate of spread in a spatially nonuniform environment

    Treesearch

    Francis M Fujioka

    1985-01-01

    Estimating rate of fire spread is a key element in planning for effective fire control. Land managers use the Rothermel spread model, but the model assumptions are violated when fuel, weather, and topography are nonuniform. This paper compares three averaging techniques--arithmetic mean of spread rates, spread based on mean fuel conditions, and harmonic mean of spread...

  19. Clinical Relevance of Differences in Glomerular Filtration Rate Estimations in Frail Older People by Creatinine- vs. Cystatin C-Based Formulae.

    PubMed

    Jacobs, Anne; Benraad, Carolien; Wetzels, Jack; Rikkert, Marcel Olde; Kramers, Cornelis

    2017-06-01

    The risk of incorrect medication dosing is high in frail older people. Therefore, accurate assessment of the glomerular filtration rate is important. The objective of this study was to compare the estimated glomerular filtration rate using creatinine- and cystatin C-based formulae, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations, in frail older people. We hypothesized that frailty determines the difference between the creatinine- and cystatin C-based formulae. The mean difference between CKD-EPI creatinine and cystatin C was determined using (cross-sectional) data of 55 patients (mean age 73 years) admitted to a psychiatric ward for older adults. The level of agreement of these estimations was assessed by a Bland-Altman analysis. In all patients, the Rockwood's Frailty Index was derived and correlated with the mean difference between CKD-EPI creatinine and cystatin C. The mean difference between CKD-EPI creatinine (mean 71.2 mL/min/1.73 m²) and CKD-EPI cystatin C (mean 57.6 mL/min/1.73 m²) was 13.6 mL/min/1.73 m² (p < 0.0001). The two standard deviation limit in the Bland-Altman plot was large (43.2 mL/min/1.73 m²), which represents a low level of agreement. The Frailty Index did not correlate with the mean difference between the creatinine- and cystatin C-based glomerular filtration rate (Pearson correlation coefficient 0.182, p = 0.184). There was a significant gap between a creatinine- and cystatin C-based estimation of glomerular filtration rate, irrespective of frailty. The range of differences between the commonly used estimated glomerular filtration rate formulae might result in clinically relevant differences in drug prescription and differences in chronic kidney disease staging.
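    A Bland-Altman analysis of paired estimates reduces to the mean of the per-patient differences plus 95% limits of agreement (mean difference ± 1.96 SD of the differences). The eGFR values below are synthetic, not the study data.

```python
import statistics

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    md = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

egfr_creatinine = [82, 65, 74, 90, 58, 71, 66, 88]   # mL/min/1.73 m2
egfr_cystatin_c = [70, 50, 61, 75, 49, 55, 52, 70]
md, (lo, hi) = bland_altman(egfr_creatinine, egfr_cystatin_c)
print(md, (lo, hi))  # creatinine-based estimates run systematically higher
```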

  20. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
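    The two catch-rate estimators compared above differ only in where the averaging happens, which the following sketch makes concrete on five invented angler interviews given as (chinook caught, hours fished). Note how a single short high-rate trip inflates the plain mean-of-ratios.

```python
trips = [(1, 0.4), (1, 2.0), (3, 6.0), (0, 1.0), (2, 5.0)]

# Ratio of means (ROM): total catch over total effort.
rom = sum(c for c, h in trips) / sum(h for c, h in trips)

# Mean of ratios (MOR): average of each angler's individual catch rate.
mor = sum(c / h for c, h in trips) / len(trips)

# MOR excluding short-duration (<= 0.5 h) trips.
long_rates = [c / h for c, h in trips if h > 0.5]
mor_excl = sum(long_rates) / len(long_rates)

print(rom, mor, mor_excl)  # the 0.4 h trip dominates the plain MOR
```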

  1. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. Then, we applied our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Comparing facility-level methane emission rate estimates at natural gas gathering and boosting stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughn, Timothy L.; Bell, Clay S.; Yacovitch, Tara I.

    Coordinated dual-tracer, aircraft-based, and direct component-level measurements were made at midstream natural gas gathering and boosting stations in the Fayetteville shale (Arkansas, USA). On-site component-level measurements were combined with engineering estimates to generate comprehensive facility-level methane emission rate estimates ('study on-site estimates (SOE)') comparable to tracer and aircraft measurements. Combustion slip (unburned fuel entrained in compressor engine exhaust), which was calculated based on 111 recent measurements of representative compressor engines, accounts for an estimated 75% of cumulative SOEs at gathering stations included in comparisons. Measured methane emissions from regenerator vents on glycol dehydrator units were substantially larger than predicted by modelling software; the contribution of dehydrator regenerator vents to the cumulative SOE would increase from 1% to 10% if based on direct measurements. Concurrent measurements at 14 normally-operating facilities show relative agreement between tracer and SOE, but indicate that tracer measurements estimate lower emissions (regression of tracer to SOE = 0.91 (95% CI = 0.83-0.99), R² = 0.89). Tracer and SOE 95% confidence intervals overlap at 11/14 facilities. Contemporaneous measurements at six facilities suggest that aircraft measurements estimate higher emissions than SOE. Aircraft and study on-site estimate 95% confidence intervals overlap at 3/6 facilities. The average facility level emission rate (FLER) estimated by tracer measurements in this study is 17-73% higher than a prior national study by Marchese et al.

  3. Comparing facility-level methane emission rate estimates at natural gas gathering and boosting stations

    DOE PAGES

    Vaughn, Timothy L.; Bell, Clay S.; Yacovitch, Tara I.; ...

    2017-02-09

    Coordinated dual-tracer, aircraft-based, and direct component-level measurements were made at midstream natural gas gathering and boosting stations in the Fayetteville shale (Arkansas, USA). On-site component-level measurements were combined with engineering estimates to generate comprehensive facility-level methane emission rate estimates ('study on-site estimates (SOE)') comparable to tracer and aircraft measurements. Combustion slip (unburned fuel entrained in compressor engine exhaust), which was calculated based on 111 recent measurements of representative compressor engines, accounts for an estimated 75% of cumulative SOEs at gathering stations included in comparisons. Measured methane emissions from regenerator vents on glycol dehydrator units were substantially larger than predicted by modelling software; the contribution of dehydrator regenerator vents to the cumulative SOE would increase from 1% to 10% if based on direct measurements. Concurrent measurements at 14 normally-operating facilities show relative agreement between tracer and SOE, but indicate that tracer measurements estimate lower emissions (regression of tracer to SOE = 0.91 (95% CI = 0.83-0.99), R² = 0.89). Tracer and SOE 95% confidence intervals overlap at 11/14 facilities. Contemporaneous measurements at six facilities suggest that aircraft measurements estimate higher emissions than SOE. Aircraft and study on-site estimate 95% confidence intervals overlap at 3/6 facilities. The average facility level emission rate (FLER) estimated by tracer measurements in this study is 17-73% higher than a prior national study by Marchese et al.

  4. Estimating the Burden of Osteoarthritis to Plan for the Future.

    PubMed

    Marshall, Deborah A; Vanderby, Sonia; Barnabe, Cheryl; MacDonald, Karen V; Maxwell, Colleen; Mosher, Dianne; Wasylak, Tracy; Lix, Lisa; Enns, Ed; Frank, Cy; Noseworthy, Tom

    2015-10-01

    With aging and obesity trends, the incidence and prevalence of osteoarthritis (OA) are expected to rise in Canada, increasing the demand for health resources. Resource planning to meet this increasing need requires estimates of the anticipated number of OA patients. Using administrative data from Alberta, we estimated OA incidence and prevalence rates and examined their sensitivity to alternative case definitions. We identified cases in a linked data set spanning 1993 to 2010 (population registry, Discharge Abstract Database, physician claims, Ambulatory Care Classification System, and prescription drug data) using diagnostic codes and drug identification numbers. In the base case, incident cases were captured for patients with an OA diagnostic code for at least 2 physician visits within 2 years or any hospital admission. Seven alternative case definitions were applied and compared. Age- and sex-standardized incidence and prevalence rates were estimated to be 8.6 and 80.3 cases per 1,000 population, respectively, in the base case. Physician claims data alone captured 88% of OA cases. Prevalence rate estimates required 15 years of longitudinal data to plateau. Compared to the base case, estimates are sensitive to alternative case definitions. Administrative databases are a key source for estimating the burden and epidemiologic trends of chronic diseases such as OA in Canada. Despite their limitations, these data provide valuable information for estimating disease burden and planning health services. Estimates of OA are mostly defined through physician claims data and require a long period of longitudinal data. © 2015, American College of Rheumatology.

  5. Evaluation of bursal depth as an indicator of age class of harlequin ducks

    USGS Publications Warehouse

    Mather, D.D.; Esler, Daniel N.

    1999-01-01

    We contrasted the estimated age class of recaptured Harlequin Ducks (Histrionicus histrionicus) (n = 255) based on bursal depth with expected age class based on bursal depth at first capture and time since first capture. Although neither estimated nor expected ages can be assumed to be correct, rates of discrepancies between the two for within-year recaptures indicate sampling error, while between-year recaptures test assumptions about rates of bursal involution. Within-year, between-year, and overall discrepancy rates were 10%, 24%, and 18%, respectively. Most (86%) between-year discrepancies occurred for birds expected to be after-third-year (ATY) but estimated to be third-year (TY). Of these ATY-TY discrepancies, 22 of 25 (88%) birds had bursal depths of 2 or 3 mm. Further, five of six between-year recaptures that were known to be ATY but estimated to be TY had 2 mm bursas. Reclassifying birds with 2 or 3 mm bursas as ATY resulted in reduction in between-year (24% to 10%) and overall (18% to 11%) discrepancy rates. We conclude that age determination of Harlequin Ducks based on bursal depth, particularly using our modified criteria, is a relatively consistent and reliable technique.

  6. Determining the rate of forest conversion in Mato Grosso, Brazil, using Landsat MSS and AVHRR data

    NASA Technical Reports Server (NTRS)

    Nelson, Ross; Horning, Ned; Stone, Thomas A.

    1987-01-01

    AVHRR-LAC thermal data and Landsat MSS and TM spectral data were used to estimate the rate of forest clearing in Mato Grosso, Brazil, between 1981 and 1984. The Brazilian state was stratified into forest and nonforest. A list sampling procedure was used in the forest stratum to select Landsat MSS scenes for processing based on estimates of fire activity in the scenes. Fire activity in 1984 was estimated using AVHRR-LAC thermal data. State-wide estimates of forest conversion indicate that between 1981 and 1984, 353,966 ± 77,000 ha (0.4 percent of the state area) were converted per year. No evidence of reforestation was found in this digital sample. The relationship between forest clearing rate (based on MSS-TM analysis) and fire activity (estimated using AVHRR data) was noisy (R-squared = 0.41). The results suggest that AVHRR data may be put to better use as a stratification tool than as a subsidiary variable in list sampling.

  7. Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct

    PubMed Central

    Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan

    2013-01-01

    Objective To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources Rates of 28 quality indicators (QIs) calculated from the minimum dataset from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings Shrunken-rate composite scores, because they take into account unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial year observed-rate composite scores. Conclusion Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct. PMID:22716650
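    The shrinkage idea can be illustrated with a much simpler empirical-Bayes beta-binomial stand-in for the paper's Bayesian multivariate normal-binomial model (ignoring the correlation among QIs): each facility's observed rate is pulled toward the overall rate, more strongly when its denominator is small.

```python
# Posterior mean of a facility's QI rate under a Beta prior centered on the
# overall rate; prior_strength acts as a pseudo-sample size and is a
# hypothetical tuning value, not one estimated from the paper's data.
def shrink(events, n, overall_rate, prior_strength=50):
    a = prior_strength * overall_rate
    b = prior_strength * (1 - overall_rate)
    return (a + events) / (a + b + n)

overall = 0.10
small = shrink(3, 12, overall)     # observed rate 0.25 on only 12 residents
large = shrink(75, 300, overall)   # observed rate 0.25 on 300 residents
print(small, large)  # the small facility is shrunk much closer to 0.10
```

The same observed rate of 0.25 is treated very differently: with n = 12 it is mostly noise and gets shrunk heavily; with n = 300 it is credible evidence and barely moves.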

  8. Population growth rates of reef sharks with and without fishing on the great barrier reef: robust estimation with multiple models.

    PubMed

    Hisano, Mizue; Connolly, Sean R; Robbins, William D

    2011-01-01

    Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.
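The bootstrapping step described above can be sketched in a few lines. The derived quantity here is a toy survival rate rather than the study's demographic models, and every number is invented; the point is only the mechanics of resampling and reading off a percentile interval:

```python
# Percentile-bootstrap sketch: resample the data underlying a demographic
# rate, recompute the statistic each time, and take empirical percentiles
# as the confidence interval. Toy data, not the study's analysis.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def bootstrap_ci(sample, statistic, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for statistic(sample)."""
    stats = []
    for _ in range(n_boot):
        resample = [random.choice(sample) for _ in sample]
        stats.append(statistic(resample))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-individual annual survival indicators (1 = survived):
survival_obs = [1] * 70 + [0] * 30            # observed survival rate 0.70
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(survival_obs, mean)
print(f"survival ~ {mean(survival_obs):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Overlap (or lack of it) between such intervals from different mortality-estimation methods is what "evaluate the extent of agreement between estimates" refers to.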

  9. Population Growth Rates of Reef Sharks with and without Fishing on the Great Barrier Reef: Robust Estimation with Multiple Models

    PubMed Central

    Hisano, Mizue; Connolly, Sean R.; Robbins, William D.

    2011-01-01

    Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing. PMID:21966402

  10. A simple mathematical method to estimate ammonia emission from in-house windrowing of poultry litter.

    PubMed

    Ro, Kyoung S; Szogi, Ariel A; Moore, Philip A

    2018-05-12

    In-house windrowing between flocks is an emerging sanitary management practice to partially disinfect the built-up litter in broiler houses. However, this practice may also increase ammonia (NH3) emission from the litter due to the increase in litter temperature. The objectives of this study were to develop mathematical models to estimate NH3 emission rates from broiler houses practicing in-house windrowing between flocks. Equations to estimate mass-transfer areas from differently shaped windrowed litter (triangular, rectangular, and semi-cylindrical prisms) were developed. Using these equations, the heights of windrows yielding the smallest mass-transfer area were estimated. A smaller mass-transfer area is preferred because it reduces both emission rates and heat loss. The heights yielding the minimum mass-transfer area were 0.8 and 0.5 m for triangular and rectangular windrows, respectively. Only one height (0.6 m) was theoretically possible for semi-cylindrical windrows because the base and the height were not independent. The mass-transfer areas were integrated with published process-based mathematical models to estimate total house NH3 emission rates during in-house windrowing of poultry litter. The NH3 emission rate change calculated from the integrated model compared well with the observed values, except for the very high initial NH3 emission rate caused by mechanically disturbing the litter to form the windrows. This approach can be used to conveniently estimate broiler house NH3 emission rates during in-house windrowing between flocks by simply measuring litter temperatures.
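The area minimization described above can be reproduced with elementary geometry. The formulas below and the assumed cross-sectional area A = 0.6 m² (litter volume divided by windrow length) are my reconstruction, not taken from the paper, but under these assumptions the optimal heights land close to the reported 0.8, 0.5, and 0.6 m:

```python
# Exposed mass-transfer area per unit windrow length, floor excluded,
# for a fixed cross-sectional area A. Assumed A = 0.6 m^2 (illustrative).
import math

A = 0.6  # assumed cross-sectional area, m^2

# Triangular cross-section (base b, height h): A = b*h/2, exposed area =
# two slant sides = 2*sqrt((b/2)^2 + h^2) = 2*sqrt((A/h)^2 + h^2).
# Setting the derivative to zero gives h = sqrt(A).
h_tri = math.sqrt(A)

# Rectangular cross-section (base b, height h): A = b*h, exposed area =
# b + 2h = A/h + 2h, minimized at h = sqrt(A/2).
h_rect = math.sqrt(A / 2)

# Semi-cylindrical cross-section of radius r: A = pi*r^2/2 fixes r, and
# the height equals r, so base and height are not independent and there
# is nothing left to optimize.
h_semi = math.sqrt(2 * A / math.pi)

print(round(h_tri, 1), round(h_rect, 1), round(h_semi, 1))  # 0.8 0.5 0.6
```

This also makes the abstract's remark about the semi-cylinder concrete: with the cross-sectional area fixed, its single free parameter is already determined.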

  11. Nomograms for predicting graft function and survival in living donor kidney transplantation based on the UNOS Registry.

    PubMed

    Tiong, H Y; Goldfarb, D A; Kattan, M W; Alster, J M; Thuita, L; Yu, C; Wee, A; Poggio, E D

    2009-03-01

    We developed nomograms that predict transplant renal function at 1 year (Modification of Diet in Renal Disease equation [estimated glomerular filtration rate]) and 5-year graft survival after living donor kidney transplantation. Data for living donor renal transplants were obtained from the United Network for Organ Sharing registry for 2000 to 2003. Nomograms were designed using linear or Cox regression models to predict 1-year estimated glomerular filtration rate and 5-year graft survival based on pretransplant information including demographic factors, immunosuppressive therapy, immunological factors and organ procurement technique. A third nomogram was constructed to predict 5-year graft survival using additional information available by 6 months after transplantation. These data included delayed graft function, any treated rejection episodes and the 6-month estimated glomerular filtration rate. The nomograms were internally validated using 10-fold cross-validation. The renal function nomogram had an r-square value of 0.13. It worked best when predicting estimated glomerular filtration rate values between 50 and 70 ml per minute per 1.73 m(2). The 5-year graft survival nomograms had a concordance index of 0.71 for the pretransplant nomogram and 0.78 for the 6-month posttransplant nomogram. Calibration was adequate for all nomograms. Nomograms based on data from the United Network for Organ Sharing registry have been validated to predict the 1-year estimated glomerular filtration rate and 5-year graft survival. These nomograms may facilitate individualized patient care in living donor kidney transplantation.

  12. Atmospheric Sulfur Hexafluoride: Sources, Sinks and Greenhouse Warming

    NASA Technical Reports Server (NTRS)

    Sze, Nien Dak; Wang, Wei-Chyung; Shia, George; Goldman, Aaron; Murcray, Frank J.; Murcray, David G.; Rinsland, Curtis P.

    1993-01-01

    Model calculations using estimated reaction rates of sulfur hexafluoride (SF6) with OH and O(1D) indicate that the atmospheric lifetime due to these processes may be very long (25,000 years). An upper limit for the UV cross section would suggest a photolysis lifetime much longer than 1000 years. The possibility of other removal mechanisms is discussed. The estimated lifetimes are consistent with other estimated values based on recent laboratory measurements. There appears to be no known natural source of SF6. An estimate of the current production rate of SF6 is about 5 kt/yr. Based on historical emission rates, we calculated a present-day atmospheric concentration for SF6 of about 2.5 parts per trillion by volume (pptv) and compared the results with available atmospheric measurements. It is difficult to estimate the atmospheric lifetime of SF6 based on mass balance of the emission rate and observed abundance. There are large uncertainties concerning what portion of the SF6 produced is released to the atmosphere. Even if the emission rate were precisely known, it would be difficult to distinguish among lifetimes longer than 100 years, since the current abundance of SF6 is due to emission in the past three decades. More information on the measured trends over the past decade and on the observed vertical and latitudinal distributions of SF6 in the lower stratosphere will help to narrow the uncertainty in the lifetime. Based on the laboratory-measured IR absorption cross section for SF6, we showed that SF6 is about 3 times more effective as a greenhouse gas than CFC-11 on a per-molecule basis. However, its effect on atmospheric warming will be minimal because of its very small concentration. We estimated the future concentration of SF6 in 2010 to be 8 and 10 pptv based on two projected emission scenarios. The corresponding equilibrium warming of 0.0035 °C and 0.0043 °C is to be compared with the estimated warming due to CO2 increase of about 0.8 °C in the same period.
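The point about lifetimes longer than 100 years being indistinguishable can be shown with a one-box mass balance. Taking the 5 kt/yr production figure quoted above as a constant 30-year emission history (a deliberate simplification; the two lifetime values are also illustrative):

```python
# One-box model dB/dt = E - B/tau: after only ~30 years of emission, the
# burdens implied by a 100-year and a 3200-year lifetime are similar, so
# the observed abundance cannot discriminate between them.

def burden_after(years, emission_kt_per_yr, lifetime_yr, dt=0.1):
    """Integrate dB/dt = E - B/tau with simple Euler steps (kt)."""
    b, t = 0.0, 0.0
    while t < years:
        b += (emission_kt_per_yr - b / lifetime_yr) * dt
        t += dt
    return b

b_short = burden_after(30, 5.0, lifetime_yr=100)
b_long = burden_after(30, 5.0, lifetime_yr=3200)
print(round(b_short, 1), round(b_long, 1))
# The two burdens differ by less than ~15%, well inside the emission
# uncertainties the abstract describes.
```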

  13. Individual Differences in Base Rate Neglect: A Fuzzy Processing Preference Index

    PubMed Central

    Wolfe, Christopher R.; Fisher, Christopher R.

    2013-01-01

    Little is known about individual differences in integrating numeric base-rates and qualitative text in making probability judgments. Fuzzy-Trace Theory predicts a preference for fuzzy processing. We conducted six studies to develop the FPPI, a reliable and valid instrument assessing individual differences in this fuzzy processing preference. It consists of 19 probability estimation items plus 4 "M-Scale" items that distinguish simple pattern matching from "base rate respect." Cronbach's Alpha was consistently above 0.90. Validity is suggested by significant correlations between FPPI scores and three other measures: "Rule Based" Process Dissociation Procedure scores; the number of conjunction fallacies in joint probability estimation; and logic index scores on syllogistic reasoning. Replicating norms collected in a university study with a web-based study produced negligible differences in FPPI scores, indicating robustness. The predicted relationships between individual differences in base rate respect and both conjunction fallacies and syllogistic reasoning were partially replicated in two web-based studies. PMID:23935255

  14. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST)-based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
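The core idea of an incrementally updated Fourier sum over nonuniform sample times can be sketched directly. This toy version only accumulates samples (the published RFT additionally handles windowing/forgetting of old beats), but it shows the two properties claimed in the abstract: no interpolation onto a uniform grid, and O(N) work per new sample for N frequencies:

```python
# Incremental DFT over nonuniform sample times: maintain
# S(f) = sum_k x_k * exp(-2*pi*i*f*t_k), updated one sample at a time.
import cmath
import math

class RecursiveDFT:
    def __init__(self, freqs):
        self.freqs = list(freqs)
        self.coeffs = [0j] * len(self.freqs)

    def update(self, t, x):
        """Fold one sample taken at (possibly irregular) time t into S(f);
        O(N) for N tracked frequencies."""
        for i, f in enumerate(self.freqs):
            self.coeffs[i] += x * cmath.exp(-2j * math.pi * f * t)

# Jittered (nonuniform) sampling of a 2 Hz sinusoid over ~4 s:
freqs = [1.0, 2.0, 3.0]
rdft = RecursiveDFT(freqs)
times = [0.1 * k + 0.03 * math.sin(7 * k) for k in range(40)]
for t in times:
    rdft.update(t, math.sin(2 * math.pi * 2.0 * t))

mags = [abs(c) for c in rdft.coeffs]
print(max(range(3), key=lambda i: mags[i]))  # index 1, i.e. the 2 Hz bin
```

For heart rate variability, the irregular times would be the beat instants and x the derived tachogram values; a Lomb-Scargle periodogram reaches a similar spectrum but recomputes from scratch at higher cost.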

  15. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds that Moody's new data show. To overcome this flaw, the kernel density estimation is introduced, and we compare the simulation results from histograms, Beta distribution estimation and kernel density estimation, concluding that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
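The Beta-versus-KDE contrast is easy to demonstrate on a synthetic bimodal "recovery rate" sample (the study itself used Moody's corporate loan/bond data; the data, bandwidth, and mode-counting procedure below are my illustrative choices):

```python
# Fit a Beta distribution by method of moments and a Gaussian KDE to the
# same bimodal sample on (0,1), then count interior local maxima of each.
import math

data = [0.15, 0.18, 0.20, 0.22, 0.25, 0.17, 0.21, 0.19,
        0.75, 0.78, 0.80, 0.82, 0.85, 0.77, 0.81, 0.79]

# Beta fit by method of moments:
m = sum(data) / len(data)
v = sum((x - m) ** 2 for x in data) / len(data)
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

def beta_pdf(x):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / B

# Gaussian kernel density estimate:
h = 0.05  # bandwidth, chosen by eye for this sketch

def kde_pdf(x):
    k = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) for xi in data) / (len(data) * h)

def n_modes(pdf, lo=0.01, hi=0.99, steps=200):
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [pdf(x) for x in xs]
    return sum(1 for i in range(1, steps)
               if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])

print(n_modes(beta_pdf), n_modes(kde_pdf))
# The moment-matched Beta ends up U-shaped here (a, b < 1) with no
# interior mode at all, while the KDE recovers both bumps.
```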

  16. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody’s. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds that Moody’s new data show. To overcome this flaw, the kernel density estimation is introduced, and we compare the simulation results from histograms, Beta distribution estimation and kernel density estimation, concluding that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  17. Population-Scale Sequencing Data Enable Precise Estimates of Y-STR Mutation Rates

    PubMed Central

    Willems, Thomas; Gymrek, Melissa; Poznik, G. David; Tyler-Smith, Chris; Erlich, Yaniv

    2016-01-01

    Short tandem repeats (STRs) are mutation-prone loci that span nearly 1% of the human genome. Previous studies have estimated the mutation rates of highly polymorphic STRs by using capillary electrophoresis and pedigree-based designs. Although this work has provided insights into the mutational dynamics of highly mutable STRs, the mutation rates of most others remain unknown. Here, we harnessed whole-genome sequencing data to estimate the mutation rates of Y chromosome STRs (Y-STRs) with 2–6 bp repeat units that are accessible to Illumina sequencing. We genotyped 4,500 Y-STRs by using data from the 1000 Genomes Project and the Simons Genome Diversity Project. Next, we developed MUTEA, an algorithm that infers STR mutation rates from population-scale data by using a high-resolution SNP-based phylogeny. After extensive intrinsic and extrinsic validations, we harnessed MUTEA to derive mutation-rate estimates for 702 polymorphic STRs by tracing each locus over 222,000 meioses, resulting in the largest collection of Y-STR mutation rates to date. Using our estimates, we identified determinants of STR mutation rates and built a model to predict rates for STRs across the genome. These predictions indicate that the load of de novo STR mutations is at least 75 mutations per generation, rivaling the load of all other known variant types. Finally, we identified Y-STRs with potential applications in forensics and genetic genealogy, assessed the ability to differentiate between the Y chromosomes of father-son pairs, and imputed Y-STR genotypes. PMID:27126583
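A naive counting sketch conveys the scale of this estimation problem, with the caveat that MUTEA actually infers rates by maximum likelihood on a SNP-based phylogeny rather than by direct counting, and every number below except the 222,000 meioses is invented for illustration:

```python
# Naive per-locus mutation-rate estimate: k observed mutations across n
# transmissions (meioses) gives rate k/n, with an approximate 95% Poisson
# interval of (k +/- 1.96*sqrt(k)) / n. Hypothetical k; n as quoted above.
import math

def rate_with_ci(k, n):
    rate = k / n
    half = 1.96 * math.sqrt(k) / n
    return rate, rate - half, rate + half

meioses = 222_000                 # traced meioses, as quoted in the record
rate, lo, hi = rate_with_ci(k=500, n=meioses)
print(f"{rate:.2e} per meiosis, 95% CI ({lo:.2e}, {hi:.2e})")

# The genome-wide de novo STR load is the sum of per-locus rates; e.g. an
# assumed ~1.5 million STR loci at an average 5e-5 mutations/locus/
# generation would give the order of magnitude reported above:
print(round(1.5e6 * 5e-5))       # 75 mutations per generation
```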

  18. A revised timescale for human evolution based on ancient mitochondrial genomes

    PubMed Central

    Johnson, Philip L.F.; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G.; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes

    2016-01-01

    Background: Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Results: Here we use mitochondrial genome sequences from 10 securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) of less than 62,000-95,000 years ago. Conclusion: Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population split times, they can provide valid upper bounds; our results exclude most of the older dates for African and non-African split times recently suggested by de novo mutation rate estimates in the nuclear genome. PMID:23523248

  19. A revised timescale for human evolution based on ancient mitochondrial genomes.

    PubMed

    Fu, Qiaomei; Mittnik, Alissa; Johnson, Philip L F; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes

    2013-04-08

    Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Here, we use mitochondrial genome sequences from ten securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) that occurred less than 62-95 kya. Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population divergence times, they can provide valid upper bounds. Our results exclude most of the older dates for African and non-African population divergences recently suggested by de novo mutation rate estimates in the nuclear genome.

  20. Development and corroboration of a bioenergetics model for northern pikeminnow (Ptychocheilus oregonensis) feeding on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, J.H.; Ward, D.L.

    1999-01-01

    A bioenergetics model was developed and corroborated for northern pikeminnow Ptychocheilus oregonensis, an important predator on juvenile salmonids in the Pacific Northwest. Predictions of modeled predation rate on salmonids were compared with field data from three areas of John Day Reservoir (Columbia River). To make bioenergetics model estimates of predation rate, three methods were used to approximate the change in mass of average predators during 30-d growth periods: observed change in mass between the first and the second month, predicted change in mass calculated with seasonal growth rates, and predicted change in mass based on an annual growth model. For all reservoir areas combined, bioenergetics model predictions of predation on salmon were 19% lower than field estimates based on observed masses, 45% lower than estimates based on seasonal growth rates, and 15% lower than estimates based on the annual growth model. For each growth approach, the largest differences in field-versus-model predation occurred at the midreservoir area (-84% to -67% difference). Model predictions of the rate of predation on salmonids were examined for sensitivity to parameter variation, swimming speed, sampling bias caused by gear selectivity, and asymmetric size distributions of predators. The specific daily growth rate of northern pikeminnow predicted by the model was highest in July and October and decreased during August. The bioenergetics model for northern pikeminnow performed well compared with models for other fish species that have been tested with field data. This model should be a useful tool for evaluating management actions such as predator removal, examining the influence of temperature on predation rates, and exploring interactions between predators in the Columbia River basin.

  1. Trophic transfer efficiency of methylmercury and inorganic mercury to lake trout Salvelinus namaycush from its prey

    USGS Publications Warehouse

    Madenijian, C.P.; David, S.R.; Krabbenhoft, D.P.

    2012-01-01

    Based on a laboratory experiment, we estimated the net trophic transfer efficiency of methylmercury to lake trout Salvelinus namaycush from its prey to be equal to 76.6 %. Under the assumption that gross trophic transfer efficiency of methylmercury to lake trout from its prey was equal to 80 %, we estimated that the rate at which lake trout eliminated methylmercury was 0.000244 day⁻¹. Our laboratory estimate of methylmercury elimination rate was 5.5 times lower than the value predicted by a published regression equation developed from estimates of methylmercury elimination rates for fish available from the literature. Thus, our results, in conjunction with other recent findings, suggested that methylmercury elimination rates for fish have been overestimated in previous studies. In addition, based on our laboratory experiment, we estimated that the net trophic transfer efficiency of inorganic mercury to lake trout from its prey was 63.5 %. The lower net trophic transfer efficiency for inorganic mercury compared with that for methylmercury was partly attributable to the greater elimination rate for inorganic mercury. We also found that the efficiency with which lake trout retained either methylmercury or inorganic mercury from their food did not appear to be significantly affected by the degree of their swimming activity.

  2. A Trigger-based Design for Evaluating the Safety of In Utero Antiretroviral Exposure in Uninfected Children of Human Immunodeficiency Virus-Infected Mothers

    PubMed Central

    Williams, Paige L.; Seage, George R.; Van Dyke, Russell B.; Siberry, George K.; Griner, Raymond; Tassiopoulos, Katherine; Yildirim, Cenk; Read, Jennifer S.; Huo, Yanling; Hazra, Rohan; Jacobson, Denise L.; Mofenson, Lynne M.; Rich, Kenneth

    2012-01-01

    The Pediatric HIV/AIDS Cohort Study’s Surveillance Monitoring of ART Toxicities Study is a prospective cohort study conducted at 22 US sites between 2007 and 2011 that was designed to evaluate the safety of in utero antiretroviral drug exposure in children not infected with human immunodeficiency virus who were born to mothers who were infected. This ongoing study uses a “trigger-based” design; that is, initial assessments are conducted on all children, and only those meeting certain thresholds or “triggers” undergo more intensive evaluations to determine whether they have had an adverse event (AE). The authors present the estimated rates of AEs for each domain of interest in the Surveillance Monitoring of ART Toxicities Study. They also evaluated the efficiency of this trigger-based design for estimating AE rates and for testing associations between in utero exposures to antiretroviral drugs and AEs. The authors demonstrate that estimated AE rates from the trigger-based design are unbiased after correction for the sensitivity of the trigger for identifying AEs. Even without correcting for bias based on trigger sensitivity, the trigger approach is generally more efficient for estimating AE rates than is evaluating a random sample of the same size. Minor losses in efficiency when comparing AE rates between persons exposed and unexposed in utero to particular antiretroviral drugs or drug classes were observed under most scenarios. PMID:22491086
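The sensitivity correction described above amounts to simple arithmetic: if only "triggered" children receive the full adverse-event (AE) workup, the raw rate misses AEs among untriggered children, and dividing by the trigger's sensitivity recovers an unbiased rate. A numerical sketch with hypothetical values:

```python
# Trigger-based AE rate estimation: naive rate = detected AEs / cohort,
# corrected rate = naive rate / trigger sensitivity. All numbers invented.

def corrected_rate(ae_among_triggered, n_total, sensitivity):
    """Unbiased AE rate = observed (trigger-identified) rate / sensitivity."""
    return ae_among_triggered / n_total / sensitivity

n_children = 2000
true_ae = 100                          # ground truth: AE rate 0.05
sensitivity = 0.80                     # trigger fires for 80% of true AEs
detected = int(true_ae * sensitivity)  # 80 AEs found via triggers

naive = detected / n_children                         # biased low
corrected = corrected_rate(detected, n_children, sensitivity)
print(round(naive, 4), round(corrected, 4))  # 0.04 vs the true 0.05
```

The design's efficiency claim is that this corrected estimate, obtained by fully evaluating only the triggered subset, usually beats evaluating a random subsample of the same size.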

  3. Shell productivity of the large benthic foraminifer Baculogypsina sphaerulata, based on the population dynamics in a tropical reef environment

    NASA Astrophysics Data System (ADS)

    Fujita, Kazuhiko; Otomaru, Maki; Lopati, Paeniu; Hosono, Takashi; Kayanne, Hajime

    2016-03-01

    Carbonate production by large benthic foraminifers is sometimes comparable to that of corals and coralline algae, and contributes to sedimentation on reef islands and beaches in the tropical Pacific. Population dynamic data, such as population density and size structure (size-frequency distribution), are vital for an accurate estimation of shell production of foraminifers. However, previous production estimates in tropical environments were based on a limited sampling period with no consideration of seasonality. In addition, no comparisons were made of various estimation methods to determine more accurate estimates. Here we present the annual gross shell production rate of Baculogypsina sphaerulata, estimated based on population dynamics studied over a 2-yr period on an ocean reef flat of Funafuti Atoll (Tuvalu, tropical South Pacific). The population density of B. sphaerulata increased from January to March, when northwest winds predominated and the study site was on the leeward side of reef islands, compared to other seasons when southeast trade winds predominated and the study site was on the windward side. This result suggested that wind-driven flows controlled the population density at the study site. The B. sphaerulata population had a relatively stationary size-frequency distribution throughout the study period, indicating no definite intensive reproductive period in the tropical population. Four methods were applied to estimate the annual gross shell production rates of B. sphaerulata. The production rates estimated by three of the four methods (using monthly biomass, life tables and growth increment rates) were on the order of hundreds of g CaCO3 m⁻² yr⁻¹ or cm³ m⁻² yr⁻¹, and the simple method using turnover rates overestimated the values. This study suggests that seasonal surveys of population density and size structure should be undertaken, as these can produce more accurate estimates of shell productivity of large benthic foraminifers.

  4. Accuracy of the Estimated Core Temperature (ECTemp) Algorithm in Estimating Circadian Rhythm Indicators

    DTIC Science & Technology

    2017-04-12

    …measurement of CT outside of stringent laboratory environments. This study evaluated ECTemp™, a heart-rate-based extended Kalman filter core temperature (CT) estimation algorithm [7, 13, 14]. ECTemp™ utilizes an extended Kalman filter to estimate CT from heart rate. The extended Kalman filter mapping function variance coefficient (Ct) was computed using the following equation: = −9.1428 × …

  5. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error generally yield rather weak results for small sample sizes unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
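The first (averaging-based) method reduces to a plug-in estimate: average the per-classifier posterior vectors, then take the sample mean of 1 − max posterior. A minimal sketch with fabricated posterior values (the paper's versions add corrections for posterior estimation error and output correlation):

```python
# Plug-in Bayes-error estimate from ensemble-averaged posteriors:
# estimate = mean over samples of (1 - max_c avg-posterior(c|x)).

def bayes_error_estimate(per_classifier_posteriors):
    """per_classifier_posteriors[i][j] = posterior vector of classifier i
    on sample j. Returns the plug-in Bayes-error estimate."""
    n_clf = len(per_classifier_posteriors)
    n_samples = len(per_classifier_posteriors[0])
    n_classes = len(per_classifier_posteriors[0][0])
    total = 0.0
    for j in range(n_samples):
        avg = [sum(clf[j][c] for clf in per_classifier_posteriors) / n_clf
               for c in range(n_classes)]
        total += 1.0 - max(avg)
    return total / n_samples

# Three classifiers, four samples, two classes (fabricated posteriors):
clf_a = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.5, 0.5]]
clf_b = [[0.8, 0.2], [0.7, 0.3], [0.3, 0.7], [0.4, 0.6]]
clf_c = [[0.7, 0.3], [0.5, 0.5], [0.1, 0.9], [0.6, 0.4]]
print(round(bayes_error_estimate([clf_a, clf_b, clf_c]), 3))  # 0.325
```

Averaging before taking the max is what distinguishes this from simply averaging each classifier's own plug-in estimate: it smooths out individual posterior-estimation errors first.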

  6. Incorporating movement patterns to improve survival estimates for juvenile bull trout

    USGS Publications Warehouse

    Bowerman, Tracy; Budy, Phaedra

    2012-01-01

    Populations of many fish species are sensitive to changes in vital rates during early life stages, but our understanding of the factors affecting growth, survival, and movement patterns is often extremely limited for juvenile fish. These critical information gaps are particularly evident for bull trout Salvelinus confluentus, a threatened Pacific Northwest char. We combined several active and passive mark–recapture and resight techniques to assess migration rates and estimate survival for juvenile bull trout (70–170 mm total length). We evaluated the relative performance of multiple survival estimation techniques by comparing results from a common Cormack–Jolly–Seber (CJS) model, the less widely used Barker model, and a simple return rate (an index of survival). Juvenile bull trout of all sizes emigrated from their natal habitat throughout the year, and thereafter migrated up to 50 km downstream. With the CJS model, high emigration rates led to an extreme underestimate of apparent survival, a combined estimate of site fidelity and survival. In contrast, the Barker model, which allows survival and emigration to be modeled as separate parameters, produced estimates of survival that were much less biased than the return rate. Estimates of age-class-specific annual survival from the Barker model based on all available data were 0.218±0.028 (estimate±SE) for age-1 bull trout and 0.231±0.065 for age-2 bull trout. This research demonstrates the importance of incorporating movement patterns into survival analyses, and we provide one of the first field-based estimates of juvenile bull trout annual survival in relatively pristine rearing conditions. These estimates can provide a baseline for comparison with future studies in more impacted systems and will help managers develop reliable stage-structured population models to evaluate future recovery strategies.

  7. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  8. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
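    The linearity result can be seen from standard beta-binomial conjugacy (a textbook fact, stated here for illustration, not reproduced from the paper): if the rate $x$ has a beta prior and $k$ jumps are observed in $n$ trials, then

```latex
x \sim \mathrm{Beta}(a, b), \qquad k \mid x \sim \mathrm{Binomial}(n, x)
\;\Longrightarrow\;
\hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid k] = \frac{a + k}{a + b + n},
```

    which is affine in the observed jump count $k$, i.e., a linear MMSE estimator.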

  9. Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Queen, Steven Z.; VanEepoel, John M.; Sanner, Robert M.

    2005-01-01

    In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.

  10. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    USGS Publications Warehouse

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  11. Estimates of over-diagnosis of breast cancer due to population-based mammography screening in South Australia after adjustment for lead time effects.

    PubMed

    Beckmann, Kerri; Duffy, Stephen W; Lynch, John; Hiller, Janet; Farshid, Gelareh; Roder, David

    2015-09-01

    To estimate over-diagnosis due to population-based mammography screening using a lead time adjustment approach, with lead time measures based on symptomatic cancers only. The study population comprised women aged 40-84 in 1989-2009 in South Australia who were eligible for mammography screening. Numbers of observed and expected breast cancer cases were compared after adjustment for lead time. Lead time effects were modelled using age-specific estimates of lead time (derived from interval cancer rates and predicted background incidence, using maximum likelihood methods) and screening sensitivity, projected background breast cancer incidence rates (in the absence of screening), and proportions screened, by age and calendar year. Lead time estimates were 12, 26, 43 and 53 months for women aged 40-49, 50-59, 60-69 and 70-79, respectively. Background incidence rates were estimated to have increased by 0.9% and 1.2% per year for invasive and all breast cancer, respectively. Over-diagnosis among women aged 40-84 was estimated at 7.9% (0.1-12.0%) for invasive cases and 12.0% (5.7-15.4%) when including ductal carcinoma in situ (DCIS). Thus we estimated 8% over-diagnosis for invasive breast cancer and 12% inclusive of DCIS due to mammography screening among women aged 40-84. These estimates may overstate the extent of over-diagnosis if the increasing prevalence of breast cancer risk factors has led to higher background incidence than projected. © The Author(s) 2015.

  12. Estimating methane emissions from landfills based on rainfall, ambient temperature, and waste composition: The CLEEN model.

    PubMed

    Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria

    2015-12-01

    Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates, but still requires only inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%); average rainfall rates of 2, 6, and 12 mm/day; and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to actual values. Copyright © 2015 Elsevier Ltd. All rights reserved.
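    The underlying first-order decay bookkeeping can be sketched as follows (a generic LandGEM-style sum, not the CLEEN regressions themselves; the waste schedule, L0, and k values below are hypothetical):

```python
import math

def methane_generation(annual_waste_tonnes, L0, k, year):
    """First-order-decay methane generation (LandGEM-style), in m^3 CH4/yr.

    annual_waste_tonnes: {acceptance_year: mass accepted (Mg)}
    L0: methane generation potential (m^3 CH4 per Mg waste)
    k:  first-order rate constant (1/yr); in CLEEN this would come from a
        regression on waste composition, rainfall, and temperature
    """
    q = 0.0
    for t_i, m_i in annual_waste_tonnes.items():
        age = year - t_i
        if age >= 0:
            # each year's waste decays exponentially from its acceptance year
            q += L0 * m_i * k * math.exp(-k * age)
    return q

# hypothetical landfill: 10,000 Mg/yr accepted over 2000-2004
waste = {y: 10_000 for y in range(2000, 2005)}
q_2010 = methane_generation(waste, L0=100.0, k=0.05, year=2010)
```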

  13. STIR Version 1.0 User's Guide for Pesticide Inhalation Risk

    EPA Pesticide Factsheets

    STIR estimates inhalation-type exposure based on pesticide-specific information. It also estimates spray droplet exposure using the application method and rate and then compares these exposure estimates to avian and mammalian toxicity data.

  14. Updated Global Burden of Cholera in Endemic Countries

    PubMed Central

    Ali, Mohammad; Nelson, Allyson R.; Lopez, Anna Lena; Sack, David A.

    2015-01-01

    Background: The global burden of cholera is largely unknown because the majority of cases are not reported. The low reporting can be attributed to limited capacity of epidemiological surveillance and laboratories, as well as social, political, and economic disincentives for reporting. We previously estimated 2.8 million cases and 91,000 deaths annually due to cholera in 51 endemic countries. A major limitation in our previous estimate was that the endemic and non-endemic countries were defined based on the countries' reported cholera cases. We overcame the limitation with the use of a spatial modelling technique in defining endemic countries, and accordingly updated the estimates of the global burden of cholera. Methods/Principal Findings: Countries were classified as cholera endemic, cholera non-endemic, or cholera-free based on whether a spatial regression model predicted an incidence rate over a certain threshold in at least three of five years (2008-2012). The at-risk populations were calculated for each country based on the percent of the country without sustainable access to improved sanitation facilities. Incidence rates from population-based published studies were used to calculate the estimated annual number of cases in endemic countries. The number of annual cholera deaths was calculated using inverse variance-weighted average case-fatality rates (CFRs) from literature-based CFR estimates. We found that approximately 1.3 billion people are at risk for cholera in endemic countries. An estimated 2.86 million cholera cases (uncertainty range: 1.3-4.0 million) occur annually in endemic countries. Among these cases, there are an estimated 95,000 deaths (uncertainty range: 21,000-143,000). Conclusion/Significance: The global burden of cholera remains high. Sub-Saharan Africa accounts for the majority of this burden. Our findings can inform programmatic decision-making for cholera control. PMID:26043000
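    The death calculation reduces to inverse-variance weighted pooling of literature CFRs applied to the case estimate; a sketch with hypothetical CFR inputs (the real study pools many site-specific estimates):

```python
def inverse_variance_weighted_mean(estimates, variances):
    # weights are 1/variance; under independence this weighted mean
    # minimizes the variance of the pooled estimate
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# hypothetical literature CFR estimates (proportions) and their variances
cfrs = [0.010, 0.035, 0.060]
vars_ = [1e-5, 4e-5, 9e-5]
pooled_cfr = inverse_variance_weighted_mean(cfrs, vars_)

# apply the pooled CFR to the annual case estimate from the abstract
deaths = 2_860_000 * pooled_cfr
```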

  15. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
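    Two-level (nested) external cross-validation can be sketched as follows: feature selection and model choice happen strictly inside the inner loop, so the outer-loop error stays honest. On a non-informative dataset (random labels) the estimate should sit near the 50% baseline rather than showing optimistic bias. The nearest-centroid classifier and candidate feature counts here are simple stand-ins of mine, not the PAMR setup from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# non-informative data: labels are independent of the 200 features
n, p = 60, 200
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, n)

def top_features(Xtr, ytr, k):
    # rank features by absolute class-mean difference, training data only
    diff = np.abs(Xtr[ytr == 1].mean(axis=0) - Xtr[ytr == 0].mean(axis=0))
    return np.argsort(diff)[-k:]

def centroid_predict(Xtr, ytr, Xte, feat):
    c0 = Xtr[ytr == 0][:, feat].mean(axis=0)
    c1 = Xtr[ytr == 1][:, feat].mean(axis=0)
    d0 = ((Xte[:, feat] - c0) ** 2).sum(axis=1)
    d1 = ((Xte[:, feat] - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

folds = np.array_split(rng.permutation(n), 5)
errs = []
for i, te in enumerate(folds):
    tr = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # inner CV on the training portion alone chooses the feature count
    best_k, best_err = None, np.inf
    for k in (5, 20, 50):
        inner = np.array_split(rng.permutation(len(tr)), 4)
        e = []
        for m, ite in enumerate(inner):
            itr = np.concatenate([f for j, f in enumerate(inner) if j != m])
            feat = top_features(X[tr[itr]], y[tr[itr]], k)
            pred = centroid_predict(X[tr[itr]], y[tr[itr]], X[tr[ite]], feat)
            e.append(np.mean(pred != y[tr[ite]]))
        if np.mean(e) < best_err:
            best_k, best_err = k, np.mean(e)
    # refit on the full outer-training set with the chosen k, test once
    feat = top_features(X[tr], y[tr], best_k)
    pred = centroid_predict(X[tr], y[tr], X[te], feat)
    errs.append(np.mean(pred != y[te]))

print(round(float(np.mean(errs)), 2))  # near 0.5: no optimistic bias
```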

  16. Towards Photoplethysmography-Based Estimation of Instantaneous Heart Rate During Physical Activity.

    PubMed

    Jarchi, Delaram; Casson, Alexander J

    2017-09-01

    Recently numerous methods have been proposed for estimating average heart rate using photoplethysmography (PPG) during physical activity, overcoming the significant interference that motion causes in PPG traces. We propose a new algorithm framework for extracting instantaneous heart rate from wearable PPG and Electrocardiogram (ECG) signals to provide an estimate of heart rate variability during exercise. For ECG signals, we propose a new spectral masking approach which modifies a particle filter tracking algorithm, and for PPG signals constrains the instantaneous frequency obtained from the Hilbert transform to a region of interest around a candidate heart rate measure. Performance is verified using accelerometry and wearable ECG and PPG data from subjects while biking and running on a treadmill. Instantaneous heart rate provides more information than average heart rate alone. The instantaneous heart rate can be extracted during motion to an accuracy of 1.75 beats per min (bpm) from PPG signals and 0.27 bpm from ECG signals. Estimates of instantaneous heart rate can now be generated from PPG signals during motion. These estimates can provide more information on the human body during exercise. Instantaneous heart rate provides a direct measure of vagal nerve and sympathetic nervous system activity and is of substantial use in a number of analyses and applications. Previously it has not been possible to estimate instantaneous heart rate from wrist wearable PPG signals.
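    The PPG-side idea, instantaneous frequency from the analytic signal constrained to a region of interest around a candidate heart rate, can be sketched on a synthetic pulse wave (the sampling rate, noise level, and 90 bpm candidate are assumptions of this illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 125.0                            # assumed wearable sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)

# synthetic pulse-like signal whose rate drifts around 90 bpm (1.5 Hz)
f_true = 1.5 + 0.1 * np.sin(2.0 * np.pi * 0.1 * t)
phase = 2.0 * np.pi * np.cumsum(f_true) / fs
sig = np.sin(phase) + 0.1 * rng.standard_normal(t.size)

def analytic_signal(x):
    """FFT-based analytic signal (what scipy.signal.hilbert computes)."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0          # n is even here
    return np.fft.ifft(spec * h)

# instantaneous frequency = phase derivative of the analytic signal
inst_f = np.diff(np.unwrap(np.angle(analytic_signal(sig)))) * fs / (2.0 * np.pi)

# constrain to a region of interest around a candidate heart rate
# (a hypothetical 90 bpm candidate from an upstream tracking stage)
cand_hz = 90.0 / 60.0
inst_bpm = 60.0 * np.clip(inst_f, cand_hz - 0.5, cand_hz + 0.5)
```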

  17. Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2005-01-01

    In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. The development includes an analysis of the estimator stability given errors in the measured attitude. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given, including scenarios with erroneous measured attitudes. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.

  18. A Bayes linear Bayes method for estimation of correlated event rates.

    PubMed

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
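    For a single rate under the homogeneous Poisson assumption, a conjugate gamma prior gives the closed-form posterior that full Bayesian inference uses; a minimal single-rate illustration (the paper's multivariate gamma prior, homogenization factors, and Bayes linear adjustments are not reproduced here):

```python
def posterior_rate(a, b, count, exposure):
    """Posterior mean and variance of a Poisson event rate under a
    Gamma(a, b) prior (shape a, rate b): after observing `count` events
    in `exposure` time, the posterior is Gamma(a + count, b + exposure)."""
    a_post, b_post = a + count, b + exposure
    return a_post / b_post, a_post / b_post**2

# hypothetical prior and data: prior mean 0.5 events/unit time,
# then 7 events observed over 10 time units
mean, var = posterior_rate(a=2.0, b=4.0, count=7, exposure=10.0)
```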

  19. Estimation of phytoplankton production from space: current status and future potential of satellite remote sensing.

    PubMed

    Joint; Groom

    2000-07-30

    A new generation of ocean colour satellites is now operational, with frequent observation of the global ocean. This paper reviews the potential to estimate marine primary production from satellite images. The procedures involved in retrieving estimates of phytoplankton biomass, as pigment concentrations, are discussed. Algorithms are applied to SeaWiFS ocean colour data to indicate seasonal variations in phytoplankton biomass in the Celtic Sea, on the continental shelf to the south west of the UK. Algorithms to estimate primary production rates from chlorophyll concentration are compared and the advantages and disadvantages discussed. The simplest algorithms utilise correlations between chlorophyll concentration and production rate, and one equation is used to estimate daily primary production rates for the western English Channel and Celtic Sea; these estimates compare favourably with published values. Primary production for the central Celtic Sea in the period April to September inclusive is estimated from SeaWiFS data to be 102 g C m⁻² in 1998 and 93 g C m⁻² in 1999; published estimates, based on in situ incubations, are ca. 80 g C m⁻². The satellite data demonstrate large variations in primary production between 1998 and 1999, with a significant increase in late summer in 1998 which did not occur in 1999. Errors are quantified for the estimation of primary production from simple algorithms based on satellite-derived chlorophyll concentration. These data show the potential to obtain better estimates of marine primary production than are possible with ship-based methods, with the ability to detect short-lived phytoplankton blooms. In addition, the potential to estimate new production from satellite data is discussed.

  20. Revisiting typhoid fever surveillance in low and middle income countries: lessons from systematic literature review of population-based longitudinal studies.

    PubMed

    Mogasale, Vittal; Mogasale, Vijayalaxmi V; Ramani, Enusa; Lee, Jung Seok; Park, Ju Yeon; Lee, Kang Sung; Wierzba, Thomas F

    2016-01-29

    Because the control of typhoid fever is an important public health concern in low and middle income countries, improving typhoid surveillance will help in planning and implementing typhoid control activities such as deployment of new generation Vi conjugate typhoid vaccines. We conducted a systematic literature review of longitudinal population-based blood culture-confirmed typhoid fever studies from low and middle income countries published from 1 January 1990 to 31 December 2013. We quantitatively summarized typhoid fever incidence rates and qualitatively reviewed study methodology that could have influenced rate estimates. We used a meta-analysis approach based on a random effects model in summarizing the hospitalization rates. Twenty-two papers presented longitudinal population-based and blood culture-confirmed typhoid fever incidence estimates from 20 distinct sites in low and middle income countries. The reported incidence and hospitalization rates, as well as the study methodology, were heterogeneous across the sites. We elucidated how the incidence rates were underestimated in published studies. We summarized six categories of under-estimation biases observed in these studies and presented potential solutions. Published longitudinal typhoid fever studies in low and middle income countries are geographically clustered and the methodology employed has a potential for underestimation. Future studies should account for these limitations.

  1. Estimation of hydrolysis rate constants for carbamates ...

    EPA Pesticide Factsheets

    Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA's Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for the roughly 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in the public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa-based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R-squared and root mean square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft based fragment model performs best, followed by the pKa and QSPR models.

  2. Crash data and rates for age-sex groups of drivers, 1996

    DOT National Transportation Integrated Search

    1998-01-01

    The results of this research note are based on 1996 data for fatal crashes, driver licenses, and estimates of total crashes based upon data obtained from the nationally representative sample of crashes gathered in the General Estimates System (GES). T...

  3. Utilization of Radiation Therapy in Norway After the Implementation of The National Cancer Plan—A National, Population-Based Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Åsli, Linn M., E-mail: linn.merete.asli@kreftregisteret.no; Kvaløy, Stein O.; Jetne, Vidar

    2014-11-01

    Purpose: To estimate actual utilization rates of radiation therapy (RT) in Norway, describe time trends (1997-2010), and compare these estimates with corresponding optimal RT rates. Methods and Materials: Data from the population-based Cancer Registry of Norway was used to identify all patients diagnosed with cancer and/or treated by RT for cancer in 1997-2010. Radiation therapy utilization rates (RURs) were calculated as (1) the proportion of incident cancer cases who received RT at least once within 1 year of diagnosis (RUR(1Y)); and (2) the proportion who received RT within 5 years of diagnosis (RUR(5Y)). The number of RT treatment courses per incident cancer case (TCI) was also calculated for all cancer sites combined. The actual RURs were compared with corresponding Australian and Canadian epidemiologic- and evidence-based model estimates and criterion-based benchmark estimates of optimal RURs. The TCIs were compared with TCI estimates from the 1997 Norwegian/National Cancer Plan (NCP). Joinpoint regression was used to identify changes in trends and to estimate annual percentage change (APC) in actual RUR(1Y) and actual TCI. Results: The actual RUR(5Y) (all sites) increased significantly to 29% in 2005 but still differed markedly from the Australian epidemiologic- and evidence-based model estimate of 48%. With the exception of RUR(5Y) for breast cancer and RUR(1Y) for lung cancers, all actual RURs were markedly lower than optimal RUR estimates. The actual TCI increased significantly during the study period, reaching 42.5% in 2010, but was still lower than the 54% recommended in the NCP. The trend for RUR(1Y) (all sites) and TCI changed significantly, with the annual percentage change being largest during the first part of the study period. Conclusions: Utilization rates of RT in Norway increased after the NCP was implemented and RT capacity was increased, but they still seem to be lower than optimal levels.

  4. A geographic information system-based method for estimating cancer rates in non-census defined geographical areas.

    PubMed

    Freeman, Vincent L; Boylan, Emma E; Pugach, Oksana; Mclafferty, Sara L; Tossas-Milligan, Katherine Y; Watson, Karriem S; Winn, Robert A

    2017-10-01

    To address locally relevant cancer-related health issues, health departments frequently need data beyond that contained in standard census area-based statistics. We describe a geographic information system-based method for calculating age-standardized cancer incidence rates in non-census defined geographical areas using publicly available data. Aggregated records of cancer cases diagnosed from 2009 through 2013 in each of Chicago's 77 census-defined community areas were obtained from the Illinois State Cancer Registry. Areal interpolation through dasymetric mapping of census blocks was used to redistribute populations and case counts from community areas to Chicago's 50 politically defined aldermanic wards, and ward-level age-standardized 5-year cumulative incidence rates were calculated. Potential errors in redistributing populations between geographies were limited to <1.5% of the total population, and agreement between our ward population estimates and those from a frequently cited reference set of estimates was high (Pearson correlation r = 0.99, mean difference = -4 persons). A map overlay of safety-net primary care clinic locations and ward-level incidence rates for advanced-staged cancers revealed potential pathways for prevention. Areal interpolation through dasymetric mapping can estimate cancer rates in non-census defined geographies. This can address gaps in local cancer-related health data, inform health resource advocacy, and guide community-centered cancer prevention and control.
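    The areal-interpolation step amounts to splitting each community area's case count across wards in proportion to the census-block populations shared between the two geographies; a toy sketch (all names and counts hypothetical, and age standardization omitted):

```python
# each census block carries a population and lies in exactly one
# community area (CA) and one ward
blocks = [
    {"pop": 500, "ca": "A", "ward": 1},
    {"pop": 300, "ca": "A", "ward": 2},
    {"pop": 200, "ca": "B", "ward": 2},
]
ca_cases = {"A": 40, "B": 10}   # registry case counts by community area

# total population of each CA from its blocks
ca_pop = {}
for b in blocks:
    ca_pop[b["ca"]] = ca_pop.get(b["ca"], 0) + b["pop"]

# redistribute CA case counts to wards weighted by block population share
ward_cases, ward_pop = {}, {}
for b in blocks:
    share = b["pop"] / ca_pop[b["ca"]]
    ward_cases[b["ward"]] = ward_cases.get(b["ward"], 0.0) + share * ca_cases[b["ca"]]
    ward_pop[b["ward"]] = ward_pop.get(b["ward"], 0) + b["pop"]

# crude cumulative incidence per 100,000 population
ward_rate = {w: 1e5 * ward_cases[w] / ward_pop[w] for w in ward_cases}
```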

  5. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5 -resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5deg resolution is relatively small (less than 6% at 5 mm day.1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). 
Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
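
    The database-search step described above can be sketched with a toy single-channel example (the database entries, observation value, and Gaussian weighting width below are all hypothetical; the operational algorithm composites full hydrometeor and heating profiles over multi-channel radiance vectors):

```python
import math

# Hypothetical mini-database: each entry pairs one simulated brightness
# temperature (K) with the surface rain rate (mm/h) of the cloud-model
# profile that produced it.
database = [(210.0, 12.0), (230.0, 6.0), (250.0, 2.0), (270.0, 0.5)]

def bayesian_composite(observed_tb, sigma=5.0):
    """Weight each profile by its radiative consistency with the
    observation and return the weighted-mean rain rate."""
    weights = [math.exp(-0.5 * ((tb - observed_tb) / sigma) ** 2)
               for tb, _ in database]
    total = sum(weights)
    return sum(w * rr for w, (_, rr) in zip(weights, database)) / total

# Profiles radiatively close to the observation dominate the composite.
estimate = bayesian_composite(observed_tb=232.0)
```

    Because the weights fall off with radiance mismatch, the estimate is pulled toward the 230 K / 6 mm/h entry rather than a plain average of the database.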

  6. Detecting symptom exaggeration in combat veterans using the MMPI-2 symptom validity scales: a mixed group validation.

    PubMed

    Tolin, David F; Steenkamp, Maria M; Marx, Brian P; Litz, Brett T

    2010-12-01

    Although validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) have proven useful in the detection of symptom exaggeration in criterion-group validation (CGV) studies, usually comparing instructed feigners with known patient groups, the application of these scales has been problematic when assessing combat veterans undergoing posttraumatic stress disorder (PTSD) examinations. Mixed group validation (MGV) was employed to determine the efficacy of MMPI-2 exaggeration scales in compensation-seeking (CS) and noncompensation-seeking (NCS) veterans. Unlike CGV, MGV allows for a mix of exaggerating and nonexaggerating individuals in each group, does not require that the exaggeration versus nonexaggerating status of any individual be known, and can be adjusted for different base-rate estimates. MMPI-2 responses of 377 male veterans were examined according to CS versus NCS status. MGV was calculated using 4 sets of base-rate estimates drawn from the literature. The validity scales generally performed well (adequate sensitivity, specificity, and efficiency) under most base-rate estimations, and most produced cutoff scores that showed adequate detection of symptom exaggeration, regardless of base-rate assumptions. These results support the use of MMPI-2 validity scales for PTSD evaluations in veteran populations, even under varying base rates of symptom exaggeration.
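
    The base-rate adjustment at the heart of MGV reduces to two equations in two unknowns: each group's observed positive rate equals sensitivity times its assumed base rate plus the false-positive rate times the remainder. A minimal sketch with hypothetical numbers (not the study's data):

```python
def mgv_sens_spec(p1, p2, b1, b2):
    """Mixed group validation: recover sensitivity and specificity of a
    cutoff from the positive rates p1, p2 observed in two groups with
    assumed exaggeration base rates b1, b2 (b1 != b2).
    Solves p_g = sens*b_g + (1 - spec)*(1 - b_g) for g = 1, 2."""
    det = b1 - b2
    sens = (p1 * (1 - b2) - p2 * (1 - b1)) / det
    spec = 1 - (b1 * p2 - b2 * p1) / det
    return sens, spec

# Hypothetical numbers: 60% of the CS group and 15% of the NCS group
# score above the cutoff, with assumed base rates of 0.70 and 0.10.
sens, spec = mgv_sens_spec(p1=0.60, p2=0.15, b1=0.70, b2=0.10)
```

    Note that neither group needs to be a pure "known" criterion group; only the assumed base rates differ, which is exactly what lets MGV be re-run under different base-rate estimates.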

  7. First experiences with methods to measure ammonia emissions from naturally ventilated cattle buildings in the U.K.

    NASA Astrophysics Data System (ADS)

    Demmers, T. G. M.; Burgess, L. R.; Short, J. L.; Phillips, V. R.; Clark, J. A.; Wathes, C. M.

    A method has been developed to measure the emission rate of ammonia from naturally ventilated U.K. livestock buildings. The method is based on measurements of ammonia concentration and estimates of the ventilation rate of the building by continuous release of carbon monoxide tracer within the building. The tracer concentration is measured at nine positions in openings around the perimeter of the building, as well as around a ring sampling line. Two criteria were evaluated to decide whether, at any given time, a given opening in the building acted as an air inlet or as an air outlet. Carbon monoxide concentration difference across an opening was found to be a better criterion than the temperature difference across the opening. Ammonia concentrations were measured continuously at the sampling points using a chemiluminescence analyser. The method was applied to a straw-bedded beef unit and to a slurry-based dairy unit. Both buildings were of space-boarded construction. Ventilation rates estimated by the ring line sample were consistently higher than by the perimeter samples. During calm weather, the ventilation estimates by both samples were similar (10-20 air changes h⁻¹). However, during windy conditions (>5 m s⁻¹) the ventilation rate was overestimated by the ring line sample (average 100 air changes h⁻¹) compared to the perimeter samples (average 50 air changes h⁻¹). The difference was caused by incomplete mixing of the tracer within the building. The ventilation rate estimated from the perimeter samples was used for the calculation of the emission rate. Preliminary estimates of the ammonia emission factor were 6.0 kg NH₃ (500 kg live-weight)⁻¹ (190 d)⁻¹ for the slurry-based dairy unit and 3.7 for the straw-bedded beef unit.
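
    The constant-injection tracer calculation behind the method reduces to two divisions; the numbers below are hypothetical and ignore background concentrations and the incomplete-mixing bias the study documents:

```python
def ventilation_rate(tracer_release_m3_h, tracer_conc_ppm):
    """Constant-injection tracer method: the building ventilation rate
    (m3/h) is the tracer release rate divided by its steady-state
    concentration (above background) at the air outlets."""
    return tracer_release_m3_h / (tracer_conc_ppm * 1e-6)

def ammonia_emission_mg_h(vent_m3_h, nh3_conc_mg_m3):
    """Emission rate = ventilation rate x outlet NH3 concentration."""
    return vent_m3_h * nh3_conc_mg_m3

# Hypothetical steady state: 0.1 m3/h of CO released, 5 ppm CO above
# background at the outlets, 2 mg/m3 NH3 at the outlets.
vent = ventilation_rate(0.1, 5.0)            # 20000 m3/h
emission = ammonia_emission_mg_h(vent, 2.0)  # 40000 mg/h = 40 g/h
```

    The inlet/outlet classification matters because only outlet concentrations should enter these averages; sampling air that is flowing inward dilutes both the tracer and the ammonia signal.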

  8. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. 
(iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
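
    The plug-in method, the simplest estimator compared above, fits in a few lines; the alternating test sequence below illustrates the word-length trade-off noted in the conclusions (a toy illustration, not the paper's code):

```python
import math
from collections import Counter

def plugin_entropy_rate(bits, word_len):
    """Plug-in estimator: empirical entropy of overlapping words of
    length word_len, reported in bits per symbol."""
    words = [tuple(bits[i:i + word_len])
             for i in range(len(bits) - word_len + 1)]
    counts = Counter(words)
    n = len(words)
    h_word = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h_word / word_len

# A strictly alternating sequence has true entropy rate 0, but with
# word_len=2 the estimator sees two equally frequent words and reports
# about 0.5 bits/symbol; longer words reduce this bias.
bits = [0, 1] * 50
rate = plugin_entropy_rate(bits, word_len=2)
```

    Increasing `word_len` drives the estimate toward the true rate here, but on real data longer words quickly undersample the empirical distribution, which is exactly the bias/undersampling tension described above.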

  9. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  10. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Quantifying the effect of a community-based injury prevention program in Queensland using a generalized estimating equation approach.

    PubMed

    Yorkston, Emily; Turner, Catherine; Schluter, Philip J; McClure, Rod

    2007-06-01

    To develop a generalized estimating equation (GEE) model of childhood injury rates to quantify the effectiveness of a community-based injury prevention program implemented in 2 communities in Australia, in order to contribute to the discussion of community-based injury prevention program evaluation. An ecological study was conducted comparing injury rates in two intervention communities in rural and remote Queensland, Australia, with those of 16 control regions. A model of childhood injury was built using hospitalization injury rate data from 1 July 1991 to 30 June 2005 and 16 social variables. The model was built using GEE analysis and was used to estimate parameters and to test the effectiveness of the intervention. When social variables were controlled for, the intervention was associated with a decrease of 0.09 injuries/10,000 children aged 0-4 years (95% CI -0.29 to 0.11) in logarithmically transformed injury rates; however, this decrease was not significant (p = 0.36). The evaluation methods proposed in this study provide a way of determining the effectiveness of a community-based injury prevention program while considering the effect of baseline differences and secular changes in social variables.

  12. Estimating Children’s Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho

    EPA Science Inventory

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study du...

  13. Estimated Demand for Michigan's College and University Graduates of 1987.

    ERIC Educational Resources Information Center

    Shingleton, John D.; Scheetz, L. Patrick

    The current job market for 1987 Michigan college graduates was estimated by placement directors and career counselors at 50 Michigan two-year and four-year colleges and universities. The staff rated supply and demand based on information from graduate surveys, employers, and job listings. For each major, the actual ratings are provided of…

  14. Simulated maximum likelihood method for estimating kinetic rates in gene expression.

    PubMed

    Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin

    2007-01-01

    Kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density based on the discrete nature of stochastic simulations. The genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
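
    The simulated-frequency idea can be sketched on a toy one-step degradation reaction (the rates, counts, and candidate grid here are hypothetical; the paper couples this to full stochastic simulation and genetic-algorithm search rather than the brute-force grid below):

```python
import math
import random

def simulate_end_counts(decay_rate, x0=20, dt=0.1, n_runs=1000, seed=1):
    """One-step degradation reaction X -> 0: each of x0 molecules
    independently survives the interval dt with probability
    exp(-decay_rate * dt); return the surviving counts of n_runs runs."""
    rng = random.Random(seed)
    p_survive = math.exp(-decay_rate * dt)
    return [sum(rng.random() < p_survive for _ in range(x0))
            for _ in range(n_runs)]

def sml_estimate(observed_count, candidate_rates):
    """Simulated maximum likelihood (sketch): approximate the
    transitional probability of the observed count by its simulated
    frequency, and pick the candidate rate that maximizes it."""
    def sim_likelihood(k):
        sims = simulate_end_counts(k)
        return sims.count(observed_count) / len(sims)
    return max(candidate_rates, key=sim_likelihood)

# With 16 of 20 molecules surviving 0.1 time units, a decay rate near
# 2.0 fits far better than 0.5 or 8.0.
best_rate = sml_estimate(observed_count=16, candidate_rates=[0.5, 2.0, 8.0])
```

    The key point is that the transitional density is never written down analytically; it is replaced by the frequency of the observation among simulated trajectories, which is what lets the approach handle discrete, noisy reaction dynamics.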

  15. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. 
If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
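
    The monopole relation underlying the inversion can be sketched in free-field form with synthetic numbers (the study itself inverts full waveforms through 3-D numerical Green's functions that account for topography, not this idealized formula; densities and amplitudes below are illustrative only):

```python
import math

def eruption_mass_kg(pressure_pa, dt, distance_m,
                     air_density=1.2, flow_density=3.0):
    """Free-field acoustic monopole: excess pressure at range r is
    p(t) = rho_air / (4*pi*r) * dQ/dt, where Q is the volume flow rate
    at the vent. Integrating p once recovers Q; integrating
    flow_density * Q gives the cumulative erupted mass."""
    coeff = 4.0 * math.pi * distance_m / air_density
    q = 0.0      # volume flow rate, m3/s
    mass = 0.0
    for p in pressure_pa:
        q += coeff * p * dt
        mass += flow_density * q * dt
    return mass

# Synthetic 1-s, 100-Pa pressure pulse recorded 3 km from the vent.
mass = eruption_mass_kg([100.0] * 100, dt=0.01, distance_m=3000.0)
```

    The sketch makes the abstract's caveat concrete: the acoustic inversion yields a volume flow rate, so the conversion to mass scales linearly with the assumed flow density, which is a leading uncertainty.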

  16. Suppression of the cardiopulmonary resuscitation artefacts using the instantaneous chest compression rate extracted from the thoracic impedance.

    PubMed

    Aramendi, E; Ayala, U; Irusta, U; Alonso, E; Eftestøl, T; Kramer-Johansen, J

    2012-06-01

    To demonstrate that the instantaneous chest compression rate can be accurately estimated from the transthoracic impedance (TTI), and that this estimated rate can be used in a method to suppress cardiopulmonary resuscitation (CPR) artefacts. A database of 372 records, 87 shockable and 285 non-shockable, from out-of-hospital cardiac arrest episodes, corrupted by CPR artefacts, was analysed. Each record contained the ECG and TTI obtained from the defibrillation pads and the compression depth (CD) obtained from a sternal CPR pad. The chest compression rates estimated using TTI and CD were compared. The CPR artefacts were then filtered using the instantaneous chest compression rates estimated from the TTI or CD signals. The filtering results were assessed in terms of the sensitivity and specificity of the shock advice algorithm of a commercial automated external defibrillator. The correlation between the mean chest compression rates estimated using TTI or CD was r=0.98 (95% confidence interval, 0.97-0.98). The sensitivity and specificity after filtering using CD were 95.4% (88.4-98.6%) and 87.0% (82.6-90.5%), respectively. The sensitivity and specificity after filtering using TTI were 95.4% (88.4-98.6%) and 86.3% (81.8-89.9%), respectively. The instantaneous chest compression rate can be accurately estimated from TTI. The sensitivity and specificity after filtering are similar to those obtained using the CD signal. Our CPR suppression method based exclusively on signals acquired through the defibrillation pads is as accurate as methods based on signals obtained from CPR feedback devices. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
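
    The rate-extraction step can be sketched with simple peak detection on a synthetic impedance-like fluctuation (signal shape, threshold, and sampling rate are illustrative only; real TTI needs band-pass filtering before peaks are picked):

```python
import math

def instantaneous_rates(signal, fs, threshold):
    """Detect compression peaks as local maxima above a threshold and
    convert each peak-to-peak interval to a rate in compressions/min."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
    return [60.0 * fs / (b - a) for a, b in zip(peaks, peaks[1:])]

# Synthetic fluctuation sampled at 100 Hz with one peak every 0.6 s,
# i.e. a true rate of 100 compressions/min.
fs = 100
sig = [math.sin(2 * math.pi * t / 60.0) for t in range(600)]
rates = instantaneous_rates(sig, fs, threshold=0.5)
```

    The instantaneous rate series is what drives the artefact-suppression filter; the study's point is that this series can come from the defibrillation pads themselves (TTI) rather than a separate CPR feedback sensor (CD).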

  17. Analysis of temperature profiles for investigating stream losses beneath ephemeral channels

    USGS Publications Warehouse

    Constantz, Jim; Stewart, Amy E.; Niswonger, Richard G.; Sarma, Lisa

    2002-01-01

    Continuous estimates of streamflow are challenging in ephemeral channels. The extremely transient nature of ephemeral streamflows results in shifting channel geometry and degradation in the calibration of streamflow stations. Earlier work suggests that analysis of streambed temperature profiles is a promising technique for estimating streamflow patterns in ephemeral channels. The present work provides a detailed examination of the basis for using heat as a tracer of stream/groundwater exchanges, followed by a description of an appropriate heat and water transport simulation code for ephemeral channels, as well as discussion of several types of temperature analysis techniques to determine streambed percolation rates. Temperature‐based percolation rates for three ephemeral stream sites are compared with available surface water estimates of channel loss for these sites. These results are combined with published results to develop conclusions regarding the accuracy of using vertical temperature profiles in estimating channel losses. Comparisons of temperature‐based streambed percolation rates with surface water‐based channel losses indicate that percolation rates represented 30% to 50% of the total channel loss. The difference is reasonable since channel losses include both vertical and nonvertical components of flow as well as potential evapotranspiration losses. The most significant advantage of the use of sediment‐temperature profiles is their robust and continuous nature, leading to a long‐term record of the timing and duration of channel losses and continuous estimates of streambed percolation. The primary disadvantage is that temperature profiles represent the continuous percolation rate at a single point in an ephemeral channel rather than an average seepage loss from the entire channel.

  18. Probabilistic estimation of residential air exchange rates for ...

    EPA Pesticide Factsheets

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions reflecting within- and between-city differences, helping reduce error in estimates of air pollutant exposure. Published in the Journal of
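
    The core of an LBL-style infiltration calculation is an effective leakage area times a combined stack/wind "infiltration speed"; this sketch uses typical-magnitude coefficients and hypothetical house inputs, and omits the window-opening adjustment described above:

```python
import math

def infiltration_aer(leakage_area_m2, house_volume_m3,
                     delta_t_k, wind_m_s,
                     c_stack=0.015, c_wind=0.0065):
    """LBL-style infiltration sketch: airflow (m3/s) is the effective
    leakage area times sqrt(Cs*|dT| + Cw*U^2); returned as air changes
    per hour. The coefficients are illustrative, not site-fitted."""
    speed = math.sqrt(c_stack * abs(delta_t_k) + c_wind * wind_m_s ** 2)
    flow = leakage_area_m2 * speed
    return 3600.0 * flow / house_volume_m3

# Hypothetical inputs: 0.05 m2 leakage area, 400 m3 house,
# 10 K indoor-outdoor temperature difference, 4 m/s wind.
aer = infiltration_aer(0.05, 400.0, 10.0, 4.0)
```

    Because temperature difference and wind speed vary hour by hour, driving this equation with meteorological time series is what produces the spatially and temporally variable AER distributions the abstract describes.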

  19. Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System

    NASA Technical Reports Server (NTRS)

    Karlgaard, Chris; Schoenenberger, Mark

    2017-01-01

    This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
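
    The density part of such an estimator inverts the axial drag equation; a single-point sketch with hypothetical entry numbers (the actual Kalman-Schmidt filter blends this with the full navigation state, winds, and an aerodynamic database):

```python
def density_from_drag(mass_kg, axial_decel_m_s2, ref_area_m2,
                      axial_force_coeff, airspeed_m_s):
    """Invert the axial drag relation a = 0.5 * rho * V^2 * A * C_A / m
    for freestream density rho (kg/m3), given sensed deceleration."""
    dyn_pressure = mass_kg * axial_decel_m_s2 / (ref_area_m2 * axial_force_coeff)
    return 2.0 * dyn_pressure / airspeed_m_s ** 2

# Hypothetical entry point: 2400 kg capsule, 5.9 m2 reference area,
# C_A = 1.4, sensed deceleration 80 m/s2 at 4000 m/s airspeed.
rho = density_from_drag(2400.0, 80.0, 5.9, 1.4, 4000.0)
```

    This is why the method needs no pressure ports: accelerometers and the assumed aerodynamic model together determine the dynamic pressure, from which density (and with a temperature model, static pressure) follows.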

  20. Do fungi need to be included within environmental radiation protection assessment models?

    PubMed

    Guillén, J; Baeza, A; Beresford, N A; Wood, M D

    2017-09-01

    Fungi are used as biomonitors of forest ecosystems, having comparatively high uptakes of anthropogenic and naturally occurring radionuclides. However, whilst they are known to accumulate radionuclides they are not typically considered in radiological assessment tools for environmental (non-human biota) assessment. In this paper the total dose rate to fungi is estimated using the ERICA Tool, assuming different fruiting body geometries, a single ellipsoid and more complex geometries considering the different components of the fruit body and their differing radionuclide contents based upon measurement data. Anthropogenic and naturally occurring radionuclide concentrations from the Mediterranean ecosystem (Spain) were used in this assessment. The total estimated weighted dose rate was in the range 0.31-3.4 μGy/h (5th-95th percentile), similar to natural exposure rates reported for other wild groups. The total estimated dose was dominated by internal exposure, especially from ²²⁶Ra and ²¹⁰Po. Differences in dose rate between complex geometries and a simple ellipsoid model were negligible. Therefore, the simple ellipsoid model is recommended to assess dose rates to fungal fruiting bodies. Fungal mycelium was also modelled assuming a long filament. Using these geometries, assessments for fungal fruiting bodies and mycelium under different scenarios (post-accident, planned release and existing exposure) were conducted, each being based on available monitoring data. The estimated total dose rate in each case was below the ERICA screening benchmark dose, except for the example post-accident existing exposure scenario (the Chernobyl Exclusion Zone) for which a dose rate in excess of 35 μGy/h was estimated for the fruiting body. Estimated mycelium dose rate in this post-accident existing exposure scenario was close to the 400 μGy/h benchmark for plants, although fungi are generally considered to be less radiosensitive than plants. 
Further research on appropriate mycelium geometries and their radionuclide content is required. Based on the assessments presented in this paper, there is no need to recommend that fungi should be added to the existing assessment tools and frameworks; if required some tools allow a geometry representing fungi to be created and used within a dose assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
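
    The underlying dose-rate bookkeeping is a sum of activity concentrations times geometry-dependent dose conversion coefficients (DCCs); the nuclides, activities, and DCC values below are purely illustrative, not the paper's:

```python
def total_dose_rate(organism_bq_kg, dcc_internal, media_bq_kg, dcc_external):
    """ERICA-style sketch: internal dose from radionuclide activity in
    the organism plus external dose from activity in the surrounding
    medium, each weighted by a dose conversion coefficient
    (uGy/h per Bq/kg)."""
    internal = sum(organism_bq_kg[n] * dcc_internal[n] for n in organism_bq_kg)
    external = sum(media_bq_kg[n] * dcc_external[n] for n in media_bq_kg)
    return internal + external

# Purely illustrative activities and DCCs, not values from the paper.
organism = {"Po-210": 50.0, "Ra-226": 20.0}
dcc_int = {"Po-210": 0.03, "Ra-226": 0.02}
soil = {"Cs-137": 100.0}
dcc_ext = {"Cs-137": 0.0001}
dose = total_dose_rate(organism, dcc_int, soil, dcc_ext)
```

    The organism geometry enters only through the DCC tables, which is why the paper's finding that a simple ellipsoid matches the complex fruit-body geometries makes the assessment so much easier.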

  1. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    DOE PAGES

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; ...

    2016-02-24

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
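
    The rate-based enlargement step can be sketched as a flux-threshold test (species names, flux values, and the tolerance below are hypothetical, not RMG's actual data structures):

```python
def select_species(edge_fluxes, core_flux, epsilon=0.01):
    """Rate-based model enlargement, sketch: promote a candidate
    "edge" species into the core model when its formation flux exceeds
    epsilon times the characteristic (core) flux; everything else is
    excluded from the model."""
    return [name for name, flux in edge_fluxes.items()
            if flux > epsilon * core_flux]

# Hypothetical formation fluxes (mol/m3/s) for candidate edge species.
edge = {"CH3": 5.0, "C2H5": 0.2, "HO2": 0.005}
promoted = select_species(edge, core_flux=10.0)  # threshold = 0.1
```

    Iterating this test as the core grows is what keeps the generated mechanism focused on kinetically significant species instead of enumerating every possible product.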

  2. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2004-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).

  3. Double neutron stars: merger rates revisited

    NASA Astrophysics Data System (ADS)

    Chruslinska, Martyna; Belczynski, Krzysztof; Klencki, Jakub; Benacquista, Matthew

    2018-03-01

    We revisit double neutron star (DNS) formation in the classical binary evolution scenario in light of the recent Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo DNS detection (GW170817). The observationally estimated Galactic DNS merger rate of R_MW = 21(+28,−14) Myr⁻¹, based on three Galactic DNS systems, fully supports our standard input physics model with R_MW = 24 Myr⁻¹. This estimate for the Galaxy translates in a non-trivial way (due to cosmological evolution of progenitor stars in a chemically evolving Universe) into a local (z ≈ 0) DNS merger rate density of R_local = 48 Gpc⁻³ yr⁻¹, which is not consistent with the current LIGO/Virgo DNS merger rate estimate (1540(+3200,−1220) Gpc⁻³ yr⁻¹). Within our study of the parameter space, we find solutions that allow for DNS merger rates as high as R_local ≈ 600(+600,−300) Gpc⁻³ yr⁻¹, which are thus consistent with the LIGO/Virgo estimate. However, our corresponding BH-BH merger rates for the models with high DNS merger rates exceed the current LIGO/Virgo estimate of the local BH-BH merger rate (12-213 Gpc⁻³ yr⁻¹). Apart from being particularly sensitive to the common envelope treatment, DNS merger rates are rather robust against variations of several of the key factors probed in our study (e.g. mass transfer, angular momentum loss, and natal kicks). This might suggest that either common envelope development/survival works differently for DNS (~10-20 M⊙ stars) than for BH-BH (~40-100 M⊙ stars) progenitors, or high black hole (BH) natal kicks are needed to meet observational constraints for both types of binaries. Our conclusion is based on a limited number of (21) evolutionary models and is valid within this particular DNS and BH-BH isolated binary formation scenario.

  4. Evaluating the Impact of Zimbabwe's Prevention of Mother-to-Child HIV Transmission Program: Population-Level Estimates of HIV-Free Infant Survival Pre-Option A.

    PubMed

    Buzdugan, Raluca; McCoy, Sandra I; Watadzaushe, Constancia; Kang Dufour, Mi-Suk; Petersen, Maya; Dirawo, Jeffrey; Mushavi, Angela; Mujuru, Hilda Angela; Mahomva, Agnes; Musarandega, Reuben; Hakobyan, Anna; Mugurungi, Owen; Cowan, Frances M; Padian, Nancy S

    2015-01-01

    We estimated HIV-free infant survival and mother-to-child HIV transmission (MTCT) rates in Zimbabwe, some of the first community-based estimates from a UNAIDS priority country. In 2012 we surveyed mother-infant pairs residing in the catchment areas of 157 health facilities randomly selected from 5 of 10 provinces in Zimbabwe. Enrolled infants were born 9-18 months before the survey. We collected questionnaires, blood samples for HIV testing, and verbal autopsies for deceased mothers/infants. Estimates were assessed among i) all HIV-exposed infants, as part of an impact evaluation of Option A of the 2010 WHO guidelines (rolled out in Zimbabwe in 2011), and ii) the subgroup of infants unexposed to Option A. We compared province-level MTCT rates measured among women in the community with MTCT rates measured using program monitoring data from facilities serving those communities. Among 8568 women with known HIV serostatus, 1107 (12.9%) were HIV-infected. Among all HIV-exposed infants, HIV-free infant survival was 90.9% (95% confidence interval (CI): 88.7-92.7) and MTCT was 8.8% (95% CI: 6.9-11.1). Sixty-six percent of HIV-exposed infants were still breastfeeding. Among the 762 infants born before Option A was implemented, 90.5% (95% CI: 88.1-92.5) were alive and HIV-uninfected at 9-18 months of age, and 9.1% (95%CI: 7.1-11.7) were HIV-infected. In four provinces, the community-based MTCT rate was higher than the facility-based MTCT rate. In Harare, the community and facility-based rates were 6.0% and 9.1%, respectively. By 2012 Zimbabwe had made substantial progress towards the elimination of MTCT. Our HIV-free infant survival and MTCT estimates capture HIV transmissions during pregnancy, delivery and breastfeeding regardless of whether or not mothers accessed health services. These estimates also provide a baseline against which to measure the impact of Option A guidelines (and subsequently Option B+).

  5. Evaluating the Impact of Zimbabwe’s Prevention of Mother-to-Child HIV Transmission Program: Population-Level Estimates of HIV-Free Infant Survival Pre-Option A

    PubMed Central

    Buzdugan, Raluca; McCoy, Sandra I.; Watadzaushe, Constancia; Kang Dufour, Mi-Suk; Petersen, Maya; Dirawo, Jeffrey; Mushavi, Angela; Mujuru, Hilda Angela; Mahomva, Agnes; Musarandega, Reuben; Hakobyan, Anna; Mugurungi, Owen; Cowan, Frances M.; Padian, Nancy S.

    2015-01-01

    Objective We estimated HIV-free infant survival and mother-to-child HIV transmission (MTCT) rates in Zimbabwe, some of the first community-based estimates from a UNAIDS priority country. Methods In 2012 we surveyed mother-infant pairs residing in the catchment areas of 157 health facilities randomly selected from 5 of 10 provinces in Zimbabwe. Enrolled infants were born 9–18 months before the survey. We collected questionnaires, blood samples for HIV testing, and verbal autopsies for deceased mothers/infants. Estimates were assessed among i) all HIV-exposed infants, as part of an impact evaluation of Option A of the 2010 WHO guidelines (rolled out in Zimbabwe in 2011), and ii) the subgroup of infants unexposed to Option A. We compared province-level MTCT rates measured among women in the community with MTCT rates measured using program monitoring data from facilities serving those communities. Findings Among 8568 women with known HIV serostatus, 1107 (12.9%) were HIV-infected. Among all HIV-exposed infants, HIV-free infant survival was 90.9% (95% confidence interval (CI): 88.7–92.7) and MTCT was 8.8% (95% CI: 6.9–11.1). Sixty-six percent of HIV-exposed infants were still breastfeeding. Among the 762 infants born before Option A was implemented, 90.5% (95% CI: 88.1–92.5) were alive and HIV-uninfected at 9–18 months of age, and 9.1% (95%CI: 7.1–11.7) were HIV-infected. In four provinces, the community-based MTCT rate was higher than the facility-based MTCT rate. In Harare, the community and facility-based rates were 6.0% and 9.1%, respectively. Conclusion By 2012 Zimbabwe had made substantial progress towards the elimination of MTCT. Our HIV-free infant survival and MTCT estimates capture HIV transmissions during pregnancy, delivery and breastfeeding regardless of whether or not mothers accessed health services. These estimates also provide a baseline against which to measure the impact of Option A guidelines (and subsequently Option B+). PMID:26248197

  6. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Control or estimation of error rates is commonly based on an assumption of normally distributed noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations, so true positives can be identified through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normal assumption do not agree with actual error rates, because the tails of the noise distribution deviate from normality. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
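
    The abstract's closing point can be sketched numerically: under heavy-tailed noise, a normal-theory tail estimate understates the false discovery rate at a deep hit cutoff, while an empirical null estimated from known-inactive scores tracks it. All counts, distributions, and the cutoff below are illustrative assumptions, not the article's data:

```python
# Sketch (not the article's code): normal-theory vs empirical-null FDR
# estimates when screening noise is t-distributed (heavy-tailed).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
null = rng.standard_t(df=5, size=20000)       # inactive compounds, heavy tails
hits = rng.normal(8.0, 1.0, size=200)         # true actives
scores = np.concatenate([null, hits])

cutoff = 6.0
n_called = int(np.sum(scores > cutoff))       # compounds declared hits

# Normal-assumption estimate of false positives among the called hits
mu, sigma = null.mean(), null.std()
p_norm = 0.5 * (1.0 - erf((cutoff - mu) / (sigma * sqrt(2.0))))
fdr_normal = len(null) * p_norm / n_called

# Empirical-null estimate: observed tail fraction of the null sample
p_emp = np.mean(null > cutoff)
fdr_empirical = len(null) * p_emp / n_called

print(f"normal-based FDR estimate:   {fdr_normal:.4f}")
print(f"empirical-null FDR estimate: {fdr_empirical:.4f}")
```

    With these settings the normal-based estimate is far smaller than the empirical-null one, mirroring the mismatch the authors report in the deep tails.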

  7. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts' diagnoses, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.

  8. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    DTIC Science & Technology

    2017-12-01

    (Fragmentary indexing excerpt; the full abstract is not available.) The report describes stimulus inputs stim_time and stim_label, with values designating each stimulus as a target (true) or nontarget (false); it notes that estimates from simpler approaches depend strongly on the true values of hit rate and false-alarm rate, and that the regression method gives better estimation of hit rate and false-alarm rate.

  9. Estimation of community-level influenza-associated illness in a low resource rural setting in India.

    PubMed

    Saha, Siddhartha; Gupta, Vivek; Dawood, Fatimah S; Broor, Shobha; Lafond, Kathryn E; Chadha, Mandeep S; Rai, Sanjay K; Krishnan, Anand

    2018-01-01

    To estimate rates of community-level influenza-like illness (ILI) and influenza-associated ILI in rural north India. During 2011, we conducted household-based healthcare utilization surveys (HUS) for any acute medical illness (AMI) in the preceding 14 days among residents of 28 villages of Ballabgarh, in north India. Concurrently, we conducted clinic-based surveillance (CBS) in the area for AMI episodes with illness onset ≤3 days and collected nasal and throat swabs for influenza virus testing using real-time polymerase chain reaction. Retrospectively, we applied an ILI case definition (measured/reported fever and cough) to HUS and CBS data. We attributed 14 days of risk-time per person surveyed in HUS and estimated the community ILI rate by dividing the number of ILI cases in HUS by total risk-time. We used CBS data on influenza positivity and applied it to HUS-based community ILI rates by age, month, and clinic type, to estimate the community influenza-associated ILI rates. The HUS of 69,369 residents during the year generated risk-time of 3945 person-years (p-y) and identified 150 (5%, 95% CI: 4-6) ILI episodes (38 ILI episodes/1,000 p-y; 95% CI: 32-44). Among 1,372 ILI cases enrolled from clinics, 126 (9%; 95% CI: 8-11) had laboratory-confirmed influenza (A(H3N2) = 72; B = 54). After adjusting for age, month, and clinic type, the overall influenza-associated ILI rate was 4.8/1,000 p-y; rates were highest among children <5 years (13; 95% CI: 4-29) and persons ≥60 years (11; 95% CI: 2-30). We present a novel way to use HUS and CBS data to generate estimates of the community burden of influenza. Although the confidence intervals overlapped considerably, the higher point estimates for burden among young children and older adults show the utility of exploring the value of influenza vaccination among target groups.
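
    The crude (unadjusted) version of this rate arithmetic can be reproduced directly from the counts reported above; the published 4.8/1,000 p-y figure additionally adjusts by age, month, and clinic type, so the crude product comes out lower:

```python
# Crude reproduction of the rate arithmetic from the abstract's counts.
ili_episodes = 150            # ILI episodes found in the household survey
risk_time_py = 3945           # person-years of risk-time accrued
community_ili_rate = ili_episodes / risk_time_py * 1000        # per 1,000 p-y

flu_positive, ili_enrolled = 126, 1372                         # clinic surveillance
positivity = flu_positive / ili_enrolled                       # influenza positivity

crude_rate = community_ili_rate * positivity
print(f"community ILI rate: {community_ili_rate:.1f} per 1,000 p-y")   # ~38.0
print(f"crude influenza-associated ILI rate: {crude_rate:.2f} per 1,000 p-y")
```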

  10. Inferring invasive species abundance using removal data from management actions.

    PubMed

    Davis, Amy J; Hooten, Mevin B; Miller, Ryan S; Farnsworth, Matthew L; Lewis, Jesse; Moxcey, Michael; Pepin, Kim M

    2016-10-01

    Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480-19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to estimate abundance accurately was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Given the close link between inaccurate removal-rate estimates and inaccurate abundance estimates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve the accuracy of removal rates and hence abundance estimates.
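
    The core removal (depletion) idea can be sketched in a few lines. This is a constant-effort, grid-search maximum-likelihood toy with invented removal counts, not the paper's Bayesian hierarchical model with effort covariates:

```python
# Minimal removal (depletion) estimator: each pass removes a
# Binomial(remaining, p) draw; N and p are fit by grid-search MLE.
from math import lgamma, log, inf

removals = [60, 35, 20, 12]          # animals removed on successive passes (invented)

def log_lik(N, p, removals):
    """Log-likelihood of the removal sequence for abundance N, removal prob p."""
    remaining, ll = N, 0.0
    for r in removals:
        if r > remaining:
            return -inf
        ll += (lgamma(remaining + 1) - lgamma(r + 1) - lgamma(remaining - r + 1)
               + r * log(p) + (remaining - r) * log(1.0 - p))
        remaining -= r
    return ll

total = sum(removals)
grid_p = [0.05 + 0.01 * i for i in range(91)]          # p in [0.05, 0.95]
_, N_hat, p_hat = max((log_lik(N, p, removals), N, p)
                      for N in range(total, 4 * total) for p in grid_p)
print(f"estimated abundance: {N_hat}, per-pass removal probability: {p_hat:.2f}")
```

    The steadily declining catches imply a per-pass removal probability near 0.4 and an initial abundance somewhat above the 127 animals actually removed.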

  11. The effect of rate denominator source on US fatal occupational injury rate estimates.

    PubMed

    Richardson, David; Loomis, Dana; Bailer, A John; Bena, James

    2004-09-01

    The Current Population Survey (CPS) is often used as a source of denominator information for analyses of US fatal occupational injury rates. However, given the relatively small sample size of the CPS, analyses that examine the cross-classification of occupation or industry with demographic or geographic characteristics will often produce highly imprecise rate estimates. The Decennial Census of Population provides an alternative source for rate denominator information. We investigate the comparability of fatal injury rates derived using these two sources of rate denominator information. Information on fatal occupational injuries that occurred between January 1, 1983 and December 31, 1994 was obtained from the National Traumatic Occupational Fatality surveillance system. Annual estimates of employment by occupation, industry, age, and sex were derived from the CPS, and by linear interpolation and extrapolation from the 1980 and 1990 Census of Population. Fatal injury rates derived using these denominator data were compared. Fatal injury rates calculated using Census-based denominator data were within 10% of rates calculated using CPS data for all major occupation groups except farming/forestry/fishing, for which the fatal injury rate calculated using Census-based denominator data was 24.69/100,000 worker-years and the rate calculated using CPS data was 19.97/100,000 worker-years. The choice of denominator data source had minimal influence on estimates of trends over calendar time in the fatal injury rates for most major occupation and industry groups. The Census offers a reasonable source for deriving fatal injury rate denominator data in situations where the CPS does not provide sufficiently precise data, although the Census may underestimate the population-at-risk in some industries as a consequence of seasonal variation in employment.

  12. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function, are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.

  13. Estimating glomerular filtration rate (GFR) in children. The average between a cystatin C- and a creatinine-based equation improves estimation of GFR in both children and adults and enables diagnosing Shrunken Pore Syndrome.

    PubMed

    Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders

    2017-09-01

    Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFR(cystatin C)) and a creatinine-based (eGFR(creatinine)) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFR(cystatin C) and eGFR(creatinine) plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where low eGFR(cystatin C) compared to eGFR(creatinine) has been associated with higher mortality in adults. The present study was undertaken to elucidate whether this concept can also be applied in children. Using iohexol and inulin clearance as gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of an eGFR(cystatin C) and an eGFR(creatinine) estimate. While creatinine-based GFR estimates are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimates in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFR(cystatin C) and eGFR(creatinine) may help identify pediatric patients with Shrunken Pore Syndrome.
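
    The two operations the abstract describes, averaging a cystatin C-based and a creatinine-based estimate and screening their ratio, reduce to one line each. The numbers below are hypothetical, the underlying single-marker equations are not reproduced, and the 0.7 ratio cutoff is one commonly used Shrunken Pore Syndrome screen (a stricter 0.6 is also used):

```python
# Hypothetical eGFR values, mL/min/1.73 m^2, from two single-marker equations
egfr_cys = 55.0     # eGFR(cystatin C)
egfr_crea = 80.0    # eGFR(creatinine)

# The averaging estimator studied in the paper
egfr_mean = (egfr_cys + egfr_crea) / 2

# Shrunken Pore Syndrome screen: eGFR(cystatin C) markedly below eGFR(creatinine)
sps_flag = egfr_cys / egfr_crea < 0.7
print(egfr_mean, sps_flag)          # 67.5 True
```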

  14. A Novel Uncertainty Framework for Improving Discharge Data Quality Using Hydraulic Modelling.

    NASA Astrophysics Data System (ADS)

    Mansanarez, V.; Westerberg, I.; Lyon, S. W.; Lam, N.

    2017-12-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable stage-discharge (SD) rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult, as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. We introduce an uncertainty framework using hydraulic modelling for developing SD rating curves and estimating their uncertainties. The proposed framework incorporates information from both the hydraulic configuration (bed slope, roughness, vegetation) and the information available in the stage-discharge observation data (gaugings). This method provides a direct estimation of the hydraulic configuration (slope, bed roughness and vegetation roughness). Discharge time series are estimated by propagating stage records through the posterior rating curve results. We applied this novel method to two Swedish hydrometric stations, accounting for uncertainties in the gaugings for the hydraulic model. Results from these applications were compared to discharge measurements and official discharge estimations. Sensitivity analysis was performed, focusing on high-flow uncertainty and the factors that could reduce it; in particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken.

  15. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rate.

  16. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    PubMed

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs, facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.

  17. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.

    2015-12-01

    Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  18. Autonomous navigation system based on GPS and magnetometer data

    NASA Technical Reports Server (NTRS)

    Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)

    2004-01-01

    This invention is drawn to an autonomous navigation system using Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving the overall accuracy. Eventually the magnetometer-GPS configuration enables the system to avoid a costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention would provide for black-box spacecraft navigation, producing attitude, orbit, and rate estimates without any ground input, with high accuracy and reliability.

  19. Mathematical model of cycad cones' thermogenic temperature responses: inverse calorimetry to estimate metabolic heating rates.

    PubMed

    Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I

    2012-12-21

    A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants.
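
    A toy version of the inverse-calorimetry idea can be written down from a lumped energy balance, m·c·dT/dt = Q_met(t) − k·(T − T_amb), which inverts to Q_met = m·c·dT/dt + k·(T − T_amb). The parameter values and diel heating profile below are illustrative assumptions, not the cone model's fitted values:

```python
# Toy inverse calorimetry: simulate a temperature trace from a known heating
# profile, then recover the heating rate from the temperatures alone.
import numpy as np

mc, k, T_amb = 500.0, 0.8, 25.0          # J/K, W/K, deg C (assumed values)
dt = 60.0                                 # time step, s
t = np.arange(0, 24 * 3600, dt)
Q_true = 2.0 + 3.0 * np.exp(-((t - 12 * 3600) / 7200.0) ** 2)  # midday peak, W

# Forward simulation (Euler) to produce the "measured" temperature trace
T = np.empty_like(t, dtype=float)
T[0] = T_amb
for i in range(len(t) - 1):
    T[i + 1] = T[i] + dt / mc * (Q_true[i] - k * (T[i] - T_amb))

# Inverse step: estimate the heating rate from temperatures only
dTdt = np.gradient(T, dt)
Q_est = mc * dTdt + k * (T - T_amb)

err = np.max(np.abs(Q_est[1:-1] - Q_true[1:-1]))   # skip one-sided edges
print(f"max recovery error: {err:.3f} W")
```

    Because the heating profile varies slowly relative to the time step, the finite-difference inversion recovers it to well under 0.1 W here; real data would add measurement noise that the paper's parameter-estimation machinery is designed to handle.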

  20. 14 CFR Appendix A to Part 1215 - Estimated Service Rates in 1997 Dollars for TDRSS Standard Services (Based on NASA Escalation...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION TRACKING AND DATA RELAY SATELLITE SYSTEM (TDRSS) Pt..., return telemetry, or tracking, or any combination of these, the base rate is $184.00 per minute for non-U.S. Government users. 2. Multiple Access Forward Service—Base rate is $42.00 per minute for non-U.S...

  1. Calibrating recruitment estimates for mourning doves from harvest age ratios

    USGS Publications Warehouse

    Miller, David A.; Otis, David L.

    2010-01-01

    We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. 
Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.

  2. New Constraints on Models for Time-Variable Displacement Rates on the San Jacinto Fault Zone, Southern California

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Bennett, R.; Matti, J.

    2004-12-01

    Existing geodetic, geomorphic, and geologic studies yield apparently conflicting estimates of fault displacement rates over the last 1.5 m.y. in the greater San Andreas fault (SAF) system of southern California. Do these differences reflect biases in one or more of the inference methods, or is fault displacement really temporally variable? Arguments have been presented for both cases. We investigate the plausibility of variable-rate fault models by combining basin deposit provenance, fault trenching, seismicity, gravity, and magnetic data sets from the San Bernardino basin. These data allow us to trace the path and broad timing of strike-slip fault displacements in buried basement rocks, which in turn allows us to test whether variable-rate fault models fit the displacement path and rate data through the basin. The San Bernardino basin lies between the San Jacinto fault (SJF) and the SAF. Isostatic gravity signatures show a 2 km deep graben centered directly over the modern strand of the SJF, whereas the basin is shallow and asymmetric next to the SAF. This observation indicates that the stresses necessary to create the basin have been centered on the SJF for most of the basin's history. Linear magnetic anomalies, used as geologic markers, are offset ~25 km across the northernmost strands of the SJF, which matches offset estimations south of the basin. These offset anomalies indicate that the SJF and SAF are discrete fault systems that do not directly interact south of the San Gabriel Mountains; therefore spatial slip variability combined with sparse sampling cannot explain the conflicting rate data. Furthermore, analyses of basin deposits indicate that movement on the SJF began between 1.3 and 1.5 Ma, yielding an over-all average displacement rate in the range of 17 to 19 mm/yr, which is higher than some shorter-term estimates based on geodesy and geomorphology. 
Average displacement rates over this same time period for the San Bernardino strand of the SAF, on the other hand, are inferred to be low, consistent with some recent short-term estimates based on geodesy, but in contrast with estimates based on geomorphology. We conclude that either published estimates for the short-term SJF displacement rate do not accurately reflect the full SJF rate, or that the SJF rate has decreased over time, with implications for rate changes on other faults in the region. We explore the latter explanation with models for time-variable displacement rate for the greater SAF system that satisfy all existing data.
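
    The 17-19 mm/yr long-term rate quoted above follows directly from the ~25 km magnetic-anomaly offset and the 1.3-1.5 Ma initiation age; a one-line consistency check:

```python
# Back-of-envelope slip rate: total offset divided by fault age.
offset_mm = 25e6                       # ~25 km of offset expressed in mm
rates = [offset_mm / age_yr for age_yr in (1.3e6, 1.5e6)]
print([round(r, 1) for r in rates])    # [19.2, 16.7] mm/yr
```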

  3. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    PubMed

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on wavelet transform and filter banks. The implementation of the method is as follows: first, we extract the fractal component from the HRV signal using the wavelet transform. Next, we estimate the power spectrum distribution of the fractal component using an auto-regressive model, and we estimate the spectral exponent γ using the least squares method. Finally, according to the formula D = 2 - (γ - 1)/2, we estimate the fractal dimension of the HRV signal. To validate the stability and reliability of the proposed method, we used fractional Brownian motion to simulate 24 fractal signals with fractal dimension 1.6; the results show that the method is stable and reliable.
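
    The final step of this pipeline, spectral exponent to fractal dimension, can be illustrated on a synthetic 1/f^γ signal. A plain periodogram fit stands in here for the paper's wavelet-plus-autoregressive spectral estimate:

```python
# Spectral-synthesis sketch: build a 1/f^gamma process, recover gamma from the
# log-log periodogram slope, and convert via D = 2 - (gamma - 1)/2.
import numpy as np

rng = np.random.default_rng(1)
n, gamma_true = 2 ** 14, 1.8              # D_true = 2 - (1.8 - 1)/2 = 1.6
freqs = np.fft.rfftfreq(n, d=1.0)[1:]     # skip the DC bin

# Spectral synthesis: amplitudes ~ f^(-gamma/2), random phases
spec = freqs ** (-gamma_true / 2) * np.exp(2j * np.pi * rng.random(len(freqs)))
x = np.fft.irfft(np.concatenate([[0], spec]), n)

# Periodogram slope -> gamma estimate -> fractal dimension
psd = np.abs(np.fft.rfft(x)[1:]) ** 2
slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
gamma_hat = -slope
D_hat = 2 - (gamma_hat - 1) / 2
print(f"gamma ~ {gamma_hat:.2f}, D ~ {D_hat:.2f}")   # ~1.8 and ~1.6
```

    The γ = 1.8 choice reproduces the D = 1.6 test value used in the paper's fractional-Brownian-motion validation.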

  4. Subtraction by Distraction: Publishing Value-Added Estimates of Teachers by Name Hinders Education Reform

    ERIC Educational Resources Information Center

    Epstein, Diana; Miller, Raegen T.

    2011-01-01

    In August 2010 the "Los Angeles Times" published a special report on their website featuring performance ratings for nearly 6,000 Los Angeles Unified School District teachers. The move was controversial because the ratings were based on so-called value-added estimates of teachers' contributions to student learning. As with most…

  5. Estimating rates of local species extinction, colonization and turnover in animal communities

    USGS Publications Warehouse

    Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.

    1998-01-01

    Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time; rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.

  6. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, the observed-cases, confirmed-cases-only, and known-confirmation-rate approaches may inflate the type I error, yield biased point estimates, and reduce statistical power. The multiple imputation approach accounts for the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, largely unbiased point estimates, proper variance estimates, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
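The multiple-imputation idea in approach (4) can be sketched in a few lines: draw the confirmation rate from a Beta posterior fitted to the internal validation sample, impute the number of true cases, and repeat. All counts below are hypothetical illustrations; the paper's actual SCCS likelihood step is omitted.

```python
import random

random.seed(7)

# Hypothetical internal validation sample: of 50 chart-reviewed events,
# 40 were confirmed as true adverse events.
n_reviewed, n_confirmed = 50, 40
n_observed = 400  # total EHR-identified events (most never chart-reviewed)

# Draw the confirmation rate from its Beta(1 + confirmed, 1 + unconfirmed)
# posterior, then impute the number of true cases; repeat M times.
M = 1000
imputed_counts = []
for _ in range(M):
    p = random.betavariate(1 + n_confirmed, 1 + (n_reviewed - n_confirmed))
    true_cases = sum(random.random() < p for _ in range(n_observed))
    imputed_counts.append(true_cases)

estimate = sum(imputed_counts) / M
# The spread across imputations carries the uncertainty in the confirmation
# rate forward, which is what the single "known confirmation rate" ignores.
```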

  7. A simple, remote, video based breathing monitor.

    PubMed

    Regev, Nir; Wulich, Dov

    2017-07-01

    Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-sign monitoring capabilities, but none operates remotely. This paper presents a simple yet efficient real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked with the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then applied to each of the many interest points to detect which ones move in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a maximum likelihood fusion algorithm optimally estimates the breathing rate from all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
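A minimal sketch of the Pisarenko step on a synthetic harmonic signal, standing in for one tracked point's motion (the sampling rate, breathing frequency, and noise level are assumptions, not the paper's settings). For a single real tone, the eigenvector of the 3x3 autocorrelation matrix belonging to the smallest eigenvalue is proportional to [1, -2cos(w), 1], which gives the frequency directly:

```python
import numpy as np

np.random.seed(0)
fs = 5.0        # assumed (downsampled) frame rate, Hz
f_true = 0.8    # 48 breaths per minute, an infant-like rate
n = 600
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t) + 0.1 * np.random.randn(n)

# Estimate the first three autocorrelation lags, form the Toeplitz matrix.
r = [np.dot(x[:n - k], x[k:]) / (n - k) for k in range(3)]
R = np.array([[r[0], r[1], r[2]],
              [r[1], r[0], r[1]],
              [r[2], r[1], r[0]]])

# Pisarenko: the smallest-eigenvalue eigenvector annihilates the sinusoid.
_, V = np.linalg.eigh(R)          # eigh sorts eigenvalues ascending
v = V[:, 0]
cos_w = np.clip(-v[1] / (2 * v[0]), -1.0, 1.0)
f_est = np.arccos(cos_w) * fs / (2 * np.pi)
bpm = 60 * f_est                  # estimated breathing rate
```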

  8. Minimum area requirements for an at-risk butterfly based on movement and demography.

    PubMed

    Brown, Leone M; Crone, Elizabeth E

    2016-02-01

    Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
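The diffusion logic behind a critical minimum patch size can be illustrated with the classic KISS/Skellam result: a population diffusing at rate D and growing at intrinsic rate r persists only in patches longer than pi*sqrt(D/r). The parameter values below are illustrative assumptions, not the study's fitted estimates for the Baltimore checkerspot:

```python
import math

# Illustrative values (assumptions, not the paper's fitted parameters):
D = 300.0   # diffusion rate, m^2 per day
r = 0.05    # intrinsic population growth rate, per day

# Classic KISS/Skellam result: below this patch length, loss of individuals
# through the edges outpaces reproduction and the population cannot persist.
L_crit = math.pi * math.sqrt(D / r)     # critical patch length, metres
area_ha = (L_crit ** 2) / 10_000        # square patch expressed in hectares
```

A real CMP analysis, as in the paper, estimates D from flight-path or capture-mark-recapture data and r from demographic observations, rather than assuming them.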

  9. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application in real experimental data is often hindered by slow computation of likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.
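The conventional estimator that MLE-BD accelerates can be sketched directly: the Luria-Delbrück mutant-count distribution is computed with the recursive Ma-Sandri-Sarkar formula and the likelihood is maximized over m, the expected number of mutations per culture. The mutant counts below are hypothetical; note that the recursion cost grows with the largest observed count, which is exactly the bottleneck the birth-death approach avoids:

```python
import math

def ld_pmf(m, n_max):
    """Luria-Delbruck mutant-count pmf via the Ma-Sandri-Sarkar recursion."""
    p = [math.exp(-m)]
    for n in range(1, n_max + 1):
        s = sum(p[k] / ((n - k) * (n - k + 1)) for k in range(n))
        p.append(m * s / n)
    return p

def loglik(m, counts):
    p = ld_pmf(m, max(counts))
    return sum(math.log(p[c]) for c in counts)

# Hypothetical mutant counts from 15 parallel cultures of a fluctuation assay.
counts = [0, 1, 0, 3, 0, 0, 7, 1, 0, 2, 0, 15, 0, 1, 4]

# Simple grid search for the ML estimate of m.
grid = [i / 100 for i in range(1, 301)]
m_hat = max(grid, key=lambda m: loglik(m, counts))
```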

  10. Estimating sensitivity and specificity for technology assessment based on observer studies.

    PubMed

    Nishikawa, Robert M; Pesce, Lorenzo L

    2013-07-01

    The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. A separate question that directly asked the radiologists for their biopsy recommendations served as the "truth," because it captures the actual recall decision and is thus their subjective truth. Sensitivity and specificity were then estimated by thresholding the rating data at different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates obtained by thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating data estimates was lower. By simply asking the observer his or her explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability, and the power of the study can be maximized. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
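Thresholding rating data works as sketched below, on hypothetical 5-point ratings with known truth labels: each threshold produces one (sensitivity, specificity) operating point, and none of these need coincide with the reader's explicit biopsy recommendation.

```python
# Hypothetical 5-point malignancy ratings with matching truth labels
# (1 = definitely benign ... 5 = definitely malignant).
ratings = [1, 2, 2, 3, 4, 5, 5, 3, 2, 4, 5, 1, 3, 4, 5, 2]
truth   = [0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0]

def sens_spec(threshold):
    """Call 'biopsy' when rating >= threshold; score against truth."""
    calls = [int(r >= threshold) for r in ratings]
    tp = sum(c and t for c, t in zip(calls, truth))
    tn = sum((not c) and (not t) for c, t in zip(calls, truth))
    sens = tp / sum(truth)
    spec = tn / (len(truth) - sum(truth))
    return sens, spec

# One operating point per threshold; sweeping the threshold traces the
# ROC curve, but no single point is guaranteed to match the actual
# recommendation behavior of the reader.
points = {th: sens_spec(th) for th in range(1, 6)}
```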

  11. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters: the number of deformation function modes and their coefficients. Global Positioning System (GPS) data from the Japanese GPS Earth Observation Network (GEONET) for the three years 1997-1999 were used to estimate the interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km, and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
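Spectral-decomposition inversion can be sketched with a truncated SVD, where the number of retained modes plays the role of the number of deformation function modes. The Green's function below is a random stand-in, not an elastic dislocation model:

```python
import numpy as np

np.random.seed(1)

# Synthetic Green's function: 40 GPS displacement components, 12 fault patches.
G = np.random.randn(40, 12)
slip_true = np.linspace(0.0, 7.0, 12)          # cm/yr back-slip profile
d = G @ slip_true + 0.05 * np.random.randn(40)  # noisy synthetic observations

U, s, Vt = np.linalg.svd(G, full_matrices=False)

def invert(k):
    """Truncated-SVD inverse keeping only the k strongest modes."""
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

slip_k8 = invert(8)      # damped solution built from 8 modes
slip_full = invert(12)   # ordinary least-squares solution (all modes)
```

Choosing k trades resolution against noise amplification, which is the model-selection question the paper's numerical simulations address.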

  12. Incidence of Cushing's syndrome and Cushing's disease in commercially-insured patients <65 years old in the United States.

    PubMed

    Broder, Michael S; Neary, Maureen P; Chang, Eunice; Cherepanov, Dasha; Ludlam, William H

    2015-06-01

    To estimate the incidence of Cushing's syndrome (CS) and Cushing's disease (CD) in the US. MarketScan Commercial database 2007-2010 (age <65 years) was used. CS patients were defined by ≥2 claims of a CS diagnosis, while CD patients were defined by CS plus a benign pituitary adenoma diagnosis or hypophysectomy in the same calendar year. Annual incidence was calculated by dividing the number of CS or CD cases by the total number of members with the same enrollment requirement during the calendar years. CS incidence rates per million person-years were 48.6 in 2009 and 39.5 in 2010. The lowest rates of CS were in ≤17-year-olds and highest rates were in 35 to 44-year-olds. CD incidence rates were 7.6 in 2009 and 6.2 in 2010. The lowest rates of CD were in ≤17-year-olds and highest rates were in 18 to 24-year-olds. The rates varied by sex (2.3-2.7 in males, 9.8-12.1 in females). In females, the lowest rates ranged from 2.5 to 4.0 in ≤17-year-olds and the highest from 16.7 to 27.2 in 18-24-year-olds. In males, there were too few cases to report estimates by age. In the first large US-based study, the annual incidence of CS in individuals <65 years old was nearly 49 cases per million, substantially higher than previous estimates, which were based primarily on European data. Using similar methods, we estimated the incidence of CD at nearly 8 cases per million US population. These estimates, if confirmed in other epidemiologic databases, represent a new data reference in these rare conditions.
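The incidence computation itself is a simple rate: cases divided by enrolled person-years, scaled to a million. The figures below are hypothetical stand-ins chosen only to reproduce the reported 2010 CS rate of 39.5 per million person-years; they are not the study's actual counts:

```python
# Hedged arithmetic illustration (both numbers are assumptions):
cases_2010 = 650              # assumed number of CS cases in 2010
person_years = 16_450_000     # assumed enrolled person-years in 2010

rate_per_million = cases_2010 / person_years * 1_000_000
```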

  13. Assessment of spatial variation of risks in small populations.

    PubMed Central

    Riggan, W B; Manton, K G; Creason, J P; Woodbury, M A; Stallard, E

    1991-01-01

    Often environmental hazards are assessed by examining the spatial variation of disease-specific mortality or morbidity rates. These rates, when estimated for small local populations, can have a high degree of random variation or uncertainty associated with them. If those rate estimates are used to prioritize environmental clean-up actions or to allocate resources, then those decisions may be influenced by this high degree of uncertainty. Unfortunately, the effect of this uncertainty is not to add "random noise" into the decision-making process, but to systematically bias action toward the smallest populations where uncertainty is greatest and where extreme high and low rate deviations are most likely to be manifest by chance. We present a statistical procedure for adjusting rate estimates for differences in variability due to differentials in local area population sizes. Such adjustments produce rate estimates for areas that have better properties than the unadjusted rates for use in making statistically based decisions about the entire set of areas. Examples are provided for county variation in bladder, stomach, and lung cancer mortality rates for U.S. white males for the period 1970 to 1979. PMID:1820268
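One standard adjustment of this kind is empirical-Bayes (Poisson-gamma) shrinkage, sketched below with hypothetical county data: rates from small populations are pulled strongly toward the overall rate, while rates from large populations barely move. The prior strength is an assumed tuning value, not the paper's fitted prior:

```python
# Hypothetical county death counts and person-years at risk.
deaths = [2, 50, 8, 400, 1]
pop = [1_000, 40_000, 9_000, 350_000, 600]

overall = sum(deaths) / sum(pop)   # pooled rate across all counties

# Gamma prior with mean equal to the overall rate; 'strength' acts like a
# prior sample size in person-years (larger = more shrinkage).
strength = 20_000.0
alpha = overall * strength

# Posterior-mean rate for each county: (deaths + alpha) / (pop + strength).
adjusted = [(d + alpha) / (n + strength) for d, n in zip(deaths, pop)]
```

The adjusted rates have smaller variance in small counties and therefore avoid the systematic bias toward flagging the smallest populations that the abstract describes.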

  14. Integrating acoustic telemetry into mark-recapture models to improve the precision of apparent survival and abundance estimates.

    PubMed

    Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam

    2015-07-01

    Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporating in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.

  15. Impact of biology knowledge on the conservation and management of large pelagic sharks.

    PubMed

    Yokoi, Hiroki; Ijima, Hirotaka; Ohshimo, Seiji; Yokawa, Kotaro

    2017-09-06

    Population growth rate, which depends on several biological parameters, is valuable information for the conservation and management of pelagic sharks, such as blue and shortfin mako sharks. However, reported biological parameters for estimating the population growth rates of these sharks differ by sex and display large variability. To estimate the appropriate population growth rate and clarify relationships between growth rate and relevant biological parameters, we developed a two-sex age-structured matrix population model and estimated the population growth rate using combinations of biological parameters. We performed an elasticity analysis to clarify the sensitivity of the population growth rate. For the blue shark, the estimated median population growth rate was 0.384 with a range of minimum and maximum values of 0.195-0.533, whereas those values of the shortfin mako shark were 0.102 and 0.007-0.318, respectively. The maturity age of male sharks had the largest impact for blue sharks, whereas that of female sharks had the largest impact for shortfin mako sharks. Hypotheses for the survival process of sharks also had a large impact on the population growth rate estimation. Both shark maturity age and survival rate were based on ageing validation data, indicating the importance of validating the quality of these data for the conservation and management of large pelagic sharks.
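The core of such a matrix population model is the dominant eigenvalue of an age-structured (Leslie) projection matrix: lambda > 1 means growth, and r = ln(lambda) is the population growth rate. The survival, maturity-age, and fecundity values below are illustrative shark-like assumptions, not the paper's estimates for either species:

```python
import numpy as np

# Illustrative female-only Leslie matrix (all parameter values assumed).
s = 0.8        # annual survival
a_mat = 5      # maturity age, years
f = 10.0       # female offspring per mature female per year
n_ages = 15

L = np.zeros((n_ages, n_ages))
L[0, a_mat:] = f                 # reproduction by mature age classes
for a in range(n_ages - 1):
    L[a + 1, a] = s              # ageing with survival on the subdiagonal

lam = max(np.linalg.eigvals(L).real)  # dominant eigenvalue
r = np.log(lam)                       # population growth rate
```

Elasticity analysis, as in the paper, asks how lambda responds to proportional changes in entries of L, which is why maturity age and the survival hypothesis dominate the results.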

  16. Decoy-state quantum key distribution with more than three types of photon intensity pulses

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2018-04-01

    The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1% relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a matrix with a high condition number.
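The estimation problem is linear: the gain at intensity mu is a Poisson mixture of the photon-number yields Y_n, so each additional intensity contributes one more equation. The sketch below solves a noiseless truncated system with five assumed intensities; a real decoy-state analysis instead derives bounds on Y_0 and Y_1 that survive statistical fluctuations, and it is the conditioning of this matrix that the abstract refers to:

```python
import numpy as np
from math import exp, factorial

eta, dark = 0.1, 1e-5   # assumed channel transmittance and dark-count rate
N = 5                   # photon-number states kept (illustration only)

# Yield Y_n: probability of a detection given n photons left the source.
Y_true = np.array([dark + 1 - (1 - eta) ** n for n in range(N)])

# Five intensity settings instead of the usual three (assumed values).
mus = [0.05, 0.1, 0.2, 0.4, 0.6]

# Gain model: Q_mu = sum_n Poisson(n | mu) * Y_n, i.e. a linear system A @ Y = Q.
A = np.array([[exp(-mu) * mu ** n / factorial(n) for n in range(N)]
              for mu in mus])
Q = A @ Y_true          # noiseless "observed" gains

# With as many intensities as unknowns, the noiseless system pins down
# every yield exactly, despite the matrix's large condition number.
Y_est = np.linalg.solve(A, Q)
```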

  17. Development of a method to rate the primary safety of vehicles using linked New Zealand crash and vehicle licensing data.

    PubMed

    Keall, Michael D; Newstead, Stuart

    2016-01-01

    Vehicle safety rating systems aim firstly to inform consumers about safe vehicle choices and, secondly, to encourage vehicle manufacturers to aspire to safer levels of vehicle performance. Primary rating systems (that measure the ability of a vehicle to assist the driver in avoiding crashes) have not been developed for a variety of reasons, mainly associated with the difficult task of disassociating driver behavior and vehicle exposure characteristics from the estimation of crash involvement risk specific to a given vehicle. The aim of the current study was to explore different approaches to primary safety estimation, identifying which approaches (if any) may be most valid and most practical, given typical data that may be available for producing ratings. Data analyzed consisted of crash data and motor vehicle registration data for the period 2003 to 2012: 21,643,864 observations (representing vehicle-years) and 135,578 crashed vehicles. Various logistic models were tested as a means to estimate primary safety: Conditional models (conditioning on the vehicle owner over all vehicles owned); full models not conditioned on the owner, with all available owner and vehicle data; reduced models with few variables; induced exposure models; and models that synthesised elements from the latter two models. It was found that excluding young drivers (aged 25 and under) from all primary safety estimates attenuated some high risks estimated for make/model combinations favored by young people. The conditional model had clear biases that made it unsuitable. Estimates from a reduced model based just on crash rates per year (but including an owner location variable) produced estimates that were generally similar to the full model, although there was more spread in the estimates. The best replication of the full model estimates was generated by a synthesis of the reduced model and an induced exposure model. 
This study compared approaches to estimating primary safety that could mimic an analysis based on a very rich data set, using variables that are commonly available when registered fleet data are linked to crash data. This exploratory study has highlighted promising avenues for developing primary safety rating systems for vehicle makes and models.

  18. Recreational Stream Crossing Effects on Sediment Delivery and Macroinvertebrates in Southwestern Virginia, USA

    NASA Astrophysics Data System (ADS)

    Kidd, Kathryn R.; Aust, W. Michael; Copenheaver, Carolyn A.

    2014-09-01

    Trail-based recreation has increased over recent decades, raising the environmental management issue of soil erosion that originates from unsurfaced, recreational trail systems. Trail-based soil erosion that occurs near stream crossings represents a non-point source of pollution to streams. We modeled soil erosion rates along multiple-use (hiking, mountain biking, and horseback riding) recreational trails that approach culvert and ford stream crossings as potential sources of sediment input and evaluated whether recreational stream crossings were impacting water quality based on downstream changes in macroinvertebrate-based indices within the Poverty Creek Trail System of the George Washington and Jefferson National Forest in southwestern Virginia, USA. We found modeled soil erosion rates for non-motorized recreational approaches that were lower than published estimates for an off-road vehicle approach, bare horse trails, and bare forest operational skid trail and road approaches, but were 13 times greater than estimated rates for undisturbed forests and 2.4 times greater than for a 2-year-old clearcut in this region. Estimated soil erosion rates were similar to rates for skid trails and horse trails where best management practices (BMPs) had been implemented. Downstream changes in macroinvertebrate-based indices indicated water quality was lower downstream from crossings than in upstream reference reaches. Our modeled soil erosion rates illustrate that recreational stream crossing approaches have the potential to deliver sediment into adjacent streams, particularly where BMPs are not being implemented or where approaches are not properly managed, and as a result can negatively impact water quality below stream crossings.
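Erosion-rate models of this family are typically USLE-style multiplicative factor models, A = R * K * LS * C * P. The factor values below are illustrative assumptions, not the study's inputs; they are chosen only to show how a bare trail tread compares with an undisturbed forest floor through the cover factor:

```python
# USLE-style estimate: A = R * K * LS * C * P (tons/acre/year).
R = 180.0    # rainfall erosivity (assumed)
K = 0.28     # soil erodibility (assumed)
LS = 1.6     # slope length-steepness factor (assumed)
P = 1.0      # no support practices

C_trail = 0.45     # assumed cover factor, bare compacted trail tread
C_forest = 0.003   # assumed cover factor, undisturbed forest floor

A_trail = R * K * LS * C_trail * P
A_forest = R * K * LS * C_forest * P
ratio = A_trail / A_forest   # how many times faster the bare trail erodes
```

Because the other factors cancel, the ratio here is driven entirely by cover, which is why BMPs that restore cover on approaches have such leverage on sediment delivery.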

  19. Analysis of urban-rural population dynamics for China.

    PubMed

    Shen, J

    1991-12-01

    The population dynamics of China are presented in a multiregional demographic model using regional estimates of mortality and migration based on the 1% population sample survey in 1987. An open ended population account is generated for period cohort a, gender g of region i (urban) and j (rural) using population, birth, death, and migration. Demographic rates and equations for flows of nonsurviving migrants of period cohort a of gender g are estimated using the forward demographic rate definition. Out-migration rates for period cohort a of gender g are defined by migration flow divided by the initial population. The death rates for period cohorts A1 and A are estimated using a single-region method. Death and migration rates are simultaneously estimated with an iterative procedure. The population accounts estimates and demographic rates are provided for the period ending 1986-87 for male births, males in period cohorts 10 and 20, female births, and females in period cohorts 10 and 20. The urban and rural population projection model is based on the population accounts concept and assumes fixed rates of mortality, migration, and normal fertility for the base year 1987. The results of this projection are a population of 1090 million that will grow to 1304 million in 2000, 1720 million in 2050, and 1791 million in 2087. Urban population will expand from 44.2% in 1988 to 46.6% in 2000, and 54.7% in 2087. The labor population of males 18-65 years and females 18-60 years will increase from 58.8% in 1988 to 59.7% in 2000 and decline to 58.4% by 2087. The old age population of males 65 years and older and females 60 years and older will increase from 6.5% in 1988 to 7.9% in 2000, and 16.3% in 2087. The mean age increased from 28.3 years in 1988 to 37 in 2087. Urban population may be underprojected; migration problems are recognized. Fertility also is likely to decline. 
An alternative projection (B) is given to account for the U-shaped distribution and an urban fertility of 1.8 in 2000, increasing to and stabilizing at 2.2 in 2020, such that the population estimate is 1291 million for 2000 and 1524 million for 2087, with a peak of 1573 million in 2048. A faster fertility decline is also used to generate projection C. The author's projections A, B, and C, which are based on more recent data and a more realistic model, are lower than the "objective projection" and the "warning projection" generated by China's Population Census Office based on 1982 census data.

  20. A Bayesian Hierarchical Modeling Scheme for Estimating Erosion Rates Under Current Climate Conditions

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2014-12-01

    Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over the 14-year period 1998-2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million year timescale estimates from thermochronology and cosmogenic nuclides.
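The deterministic core of the hierarchy is the stream power erosion law, E = K * A^m * S^n, in drainage area A and channel slope S; the Bayesian layers place priors on K, m, and n and condition on the slope and precipitation data. A single evaluation with assumed parameter values looks like:

```python
# Stream power erosion law: E = K * A**m * S**n.
# All parameter values below are illustrative assumptions.
K = 2.0e-6           # erodibility coefficient
m, n = 0.5, 1.0      # commonly used area and slope exponents
A_drain = 5.0e7      # drainage area, m^2
S = 0.12             # channel slope (dimensionless)

E = K * A_drain ** m * S ** n   # erosion rate in the units implied by K
```

In the paper's framework, E, K, m, and n carry posterior distributions rather than point values, so the erosion-rate estimate comes with quantified uncertainty.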

  1. Probabilistic reanalysis of twentieth-century sea-level rise.

    PubMed

    Hay, Carling C; Morrow, Eric; Kopp, Robert E; Mitrovica, Jerry X

    2015-01-22

    Estimating and accounting for twentieth-century global mean sea level (GMSL) rise is critical to characterizing current and future human-induced sea-level change. Several previous analyses of tide gauge records--employing different methods to accommodate the spatial sparsity and temporal incompleteness of the data and to constrain the geometry of long-term sea-level change--have concluded that GMSL rose over the twentieth century at a mean rate of 1.6 to 1.9 millimetres per year. Efforts to account for this rate by summing estimates of individual contributions from glacier and ice-sheet mass loss, ocean thermal expansion, and changes in land water storage fall significantly short in the period before 1990. The failure to close the budget of GMSL during this period has led to suggestions that several contributions may have been systematically underestimated. However, the extent to which the limitations of tide gauge analyses have affected estimates of the GMSL rate of change is unclear. Here we revisit estimates of twentieth-century GMSL rise using probabilistic techniques and find a rate of GMSL rise from 1901 to 1990 of 1.2 ± 0.2 millimetres per year (90% confidence interval). Based on individual contributions tabulated in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, this estimate closes the twentieth-century sea-level budget. Our analysis, which combines tide gauge records with physics-based and model-derived geometries of the various contributing signals, also indicates that GMSL rose at a rate of 3.0 ± 0.7 millimetres per year between 1993 and 2010, consistent with prior estimates from tide gauge records. The increase in rate relative to the 1901-90 trend is accordingly larger than previously thought; this revision may affect some projections of future sea-level rise.

  2. Rates of projected climate change dramatically exceed past rates of climatic niche evolution among vertebrate species.

    PubMed

    Quintero, Ignacio; Wiens, John J

    2013-08-01

    A key question in predicting responses to anthropogenic climate change is: how quickly can species adapt to different climatic conditions? Here, we take a phylogenetic approach to this question. We use 17 time-calibrated phylogenies representing the major tetrapod clades (amphibians, birds, crocodilians, mammals, squamates, turtles) and climatic data from distributions of > 500 extant species. We estimate rates of change based on differences in climatic variables between sister species and estimated times of their splitting. We compare these rates to predicted rates of climate change from 2000 to 2100. Our results are striking: matching projected changes for 2100 would require rates of niche evolution that are > 10,000 times faster than rates typically observed among species, for most variables and clades. Despite many caveats, our results suggest that adaptation to projected changes in the next 100 years would require rates that are largely unprecedented based on observed rates among vertebrate species. © 2013 John Wiley & Sons Ltd/CNRS.

  3. What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries

    PubMed Central

    Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B

    2011-01-01

    Background: The 'lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality ('current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with 'gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated 'lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual 'current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
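The difference between the two estimators can be seen on a toy life table (all rates below are invented illustration values, per person-year in 20-year age bands): the 'current probability' method accumulates the expected number of cancers, while the corrected method applies incidence only to those still alive with no prior cancer, so it is necessarily smaller.

```python
# Toy age-band rates (assumptions, not registry data).
incidence = [0.0005, 0.002, 0.01, 0.03, 0.05]   # all cancers, incl. multiples
mortality = [0.001, 0.003, 0.01, 0.04, 0.10]    # all-cause mortality
band = 20  # years per age band

# 'Current probability' method: mean number of cancers over a lifetime.
alive = 1.0
mean_cancers = 0.0
for inc, mort in zip(incidence, mortality):
    for _ in range(band):
        mean_cancers += alive * inc
        alive *= (1 - mort)

# Corrected method: apply incidence only while alive with no prior cancer.
alive_free = 1.0
risk = 0.0
for inc, mort in zip(incidence, mortality):
    for _ in range(band):
        risk += alive_free * inc
        alive_free *= (1 - mort) * (1 - inc)
```

Because alive_free shrinks faster than alive, every increment to risk is no larger than the corresponding increment to mean_cancers, reproducing the paper's direction of correction.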

  4. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.

  5. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling]

    DOE PAGES

    Fee, David; Izbekov, Pavel; Kim, Keehoon; ...

    2017-10-09

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here, we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  6. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.

  7. Heart rate prediction for coronary artery disease patients (CAD): Results of a clinical pilot study.

    PubMed

    Müller-von Aschwege, Frerk; Workowski, Anke; Willemsen, Detlev; Müller, Sebastian M; Hein, Andreas

    2015-01-01

    This paper describes the results of a pilot study with cardiac patients, based on information that can be derived from a smartphone. The idea behind the study is to design a model for estimating the heart rate of a patient before an outdoor walking session for track planning, as well as to use the model for guidance during an outdoor session. The model allows estimation of the heart rate several minutes in advance, to guide the patient and avoid overstrain before it occurs. This paper describes the first results of the clinical pilot study with cardiac patients taking β-blockers. Nine patients were each tested on a treadmill and during three outdoor sessions. Three levels of model refinement were evaluated by cross-validation. The overall result is an average Median Absolute Deviation (MAD) of 4.26 BPM between the measured heart rate and the smartphone-sensor-based model estimate.

  9. Genomics meets applied ecology: Characterizing habitat quality for sloths in a tropical agroecosystem.

    PubMed

    Fountain, Emily D; Kang, Jung Koo; Tempel, Douglas J; Palsbøll, Per J; Pauli, Jonathan N; Zachariah Peery, M

    2018-01-01

    Understanding how habitat quality in heterogeneous landscapes governs the distribution and fitness of individuals is a fundamental aspect of ecology. While mean individual fitness is generally considered a key to assessing habitat quality, a comprehensive understanding of habitat quality in heterogeneous landscapes requires estimates of dispersal rates among habitat types. The increasing accessibility of genomic approaches, combined with field-based demographic methods, provides novel opportunities for incorporating dispersal estimation into assessments of habitat quality. In this study, we integrated genomic kinship approaches with field-based estimates of fitness components and approximate Bayesian computation (ABC) procedures to estimate habitat-specific dispersal rates and characterize habitat quality in two-toed sloths (Choloepus hoffmanni) occurring in a Costa Rican agricultural ecosystem. Field-based observations indicated that birth and survival rates were similar in a sparsely shaded cacao farm and adjacent cattle pasture-forest mosaic. Sloth density was threefold higher in pasture compared with cacao, whereas home range size and overlap were greater in cacao compared with pasture. Dispersal rates were similar between the two habitats, as estimated using ABC procedures applied to the spatial distribution of pairs of related individuals identified using 3,431 single nucleotide polymorphism and 11 microsatellite locus genotypes. Our results indicate that crops produced under a sparse overstorey can, in some cases, constitute lower-quality habitat than pasture-forest mosaics for sloths, perhaps because of differences in food resources or predator communities. Finally, our study demonstrates that integrating field-based demographic approaches with genomic methods can provide a powerful means for characterizing habitat quality for animal populations occurring in heterogeneous landscapes. © 2017 John Wiley & Sons Ltd.

  10. Crude and intrinsic birth rates for Asian countries.

    PubMed

    Rele, J R

    1978-01-01

    An attempt to estimate birth rates for Asian countries. The main source of information in developing countries has been the census age-sex distribution, although inaccuracies in the basic data have made it difficult to reach a high degree of accuracy, and different methods bring widely varying results. The methodology presented here is based on the conventional child-woman ratio from the census age-sex distribution, combined with a rough estimate of the expectation of life at birth. From the established relationship between the child-woman ratio and the intrinsic birth rate, of the form y = a + bx + cx², at each level of life expectation, the intrinsic birth rate is first computed using previously derived coefficients. The crude birth rate is then obtained by applying an adjustment based on the census age-sex distribution. An advantage of this methodology is that the intrinsic birth rate, normally an involved computation, can be obtained relatively easily as a by-product of the crude birth rate. Estimates are presented for each of 33 Asian countries, in some cases over several time periods.
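The two-step estimate described above can be sketched directly. The coefficients and the final adjustment factor below are purely illustrative placeholders, not Rele's published values:

```python
# The quadratic y = a + b*x + c*x**2 links the child-woman ratio (x) to the
# intrinsic birth rate (y) at a given level of life expectancy; one set of
# coefficients applies per life-expectancy level.

def intrinsic_birth_rate(cwr, coeffs):
    a, b, c = coeffs
    return a + b * cwr + c * cwr ** 2

# Hypothetical coefficients for one life-expectancy level (e0 = 50),
# giving births per 1000 population:
coeffs_e50 = (2.0, 55.0, 10.0)
cwr = 0.55        # children aged 0-4 per woman aged 15-49 (made up)

ibr = intrinsic_birth_rate(cwr, coeffs_e50)
# The crude birth rate then follows from an adjustment based on the census
# age-sex distribution (adjustment factor hypothetical here):
cbr = ibr * 1.05
```

The attraction noted in the abstract is visible here: once the coefficients are tabulated, the intrinsic rate falls out of a single polynomial evaluation rather than a full stable-population computation.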

  11. Estimates of reservoir methane emissions based on a spatially balanced probabilistic-survey

    EPA Science Inventory

    Global estimates of methane (CH4) emissions from reservoirs are poorly constrained, partly due to the challenges of accounting for intra-reservoir spatial variability. Reservoir-scale emission rates are often estimated by extrapolating from measurements made at a few locations; h...

  12. Menstrual versus clinical estimate of gestational age dating in the United States: temporal trends and variability in indices of perinatal outcomes.

    PubMed

    Ananth, Cande V

    2007-09-01

    Accurate estimation of gestational age early in pregnancy is paramount for obstetric care decisions and for determining fetal growth and other conditions that may necessitate timing an iatrogenic intervention or delivery. We sought to examine temporal changes in the distributions of two measures of gestational age, namely, those based on menstrual dating and on a clinical estimate. We further sought to evaluate relative comparisons and variability in indices of perinatal outcomes. We utilised the US Natality data files, 1990-2002, comprising women who delivered a singleton livebirth between 22 and 44 weeks gestation (n = 42 689 603). Changes were shown in the distributions of gestational age based on menstrual vs. clinical estimates between 1990 and 2002, as well as changes in the proportions of preterm (<37, <32 and <28 weeks) and post-term (≥42 weeks) births, and small- (SGA; birthweight <10th percentile) and large-for-gestational-age (LGA; birthweight >90th percentile) births. While the absolute rates of preterm birth <37 weeks, SGA and LGA births were lower based on the clinical estimate of gestational age than on menstrual dating, the increases in preterm birth rate between 1990 and 2002 were fairly similar between the two measures of gestational dating. However, the decline in post-term births was larger based on the clinical estimate (-73.8%) than on the menstrual estimate (-36.6%) between 1990 and 2002. While the clinical estimate of gestational age appears to provide a reasonably good approximation to the menstrual estimate, disregarding the clinical estimate of gestational age may ignore the advantages of gestational age assessment in modern obstetrics.

  13. The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.

    PubMed

    Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng

    2014-07-01

    Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
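The pathology of the i.i.d. rate prior described above is easy to demonstrate by simulation. This is an illustration in the spirit of the paper, not its software: the lognormal prior and all numbers are made up, and the "Dirichlet" construction below simply scales a shared mean rate by Dirichlet(1,…,1) proportions so the average rate keeps its own prior.

```python
import random

random.seed(1)

def prior_var_of_mean(draw_mean_rate, n=20000):
    """Monte Carlo estimate of the prior variance of the mean locus rate."""
    draws = [draw_mean_rate() for _ in range(n)]
    m = sum(draws) / n
    return sum((d - m) ** 2 for d in draws) / n

def iid_mean_rate(L):
    # each locus rate drawn independently; average taken afterwards
    return sum(random.lognormvariate(0.0, 0.5) for _ in range(L)) / L

def dirichlet_mean_rate(L):
    # overall mean rate gets its own prior; locus rates are that mean
    # scaled by Dirichlet(1,...,1) proportions, so the average is unchanged
    mu = random.lognormvariate(0.0, 0.5)
    g = [random.expovariate(1.0) for _ in range(L)]
    s = sum(g)
    rates = [mu * L * gi / s for gi in g]
    return sum(rates) / L

var_iid_1 = prior_var_of_mean(lambda: iid_mean_rate(1))
var_iid_16 = prior_var_of_mean(lambda: iid_mean_rate(16))
var_dir_16 = prior_var_of_mean(lambda: dirichlet_mean_rate(16))

assert var_iid_16 < var_iid_1 / 8            # i.i.d.: collapses roughly as 1/L
assert abs(var_dir_16 - var_iid_1) < 0.05    # Dirichlet: spread preserved
```

As L grows, the i.i.d. prior becomes ever more informative about the average rate, which is why a misspecified rate prior can dominate the posterior divergence times in many-locus analyses.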

  14. Hubble Space Telescope Star Tracker and Two-Gyro Control Law Design, Implementation, and On-Orbit Performance. Part 2

    NASA Technical Reports Server (NTRS)

    VanArsdall, John C.

    2005-01-01

    The Hubble Space Telescope (HST) normally requires three gyroscopes for three-axis rate control. The loss of the Space Shuttle Columbia on STS-107 resulted in the cancellation of a shuttle-based HST Servicing Mission 4. Therefore, HST must operate using the on-board hardware until an alternate means of servicing can be accomplished. The probability of gyro failure indicates that fewer than three gyros will be operable before any servicing mission can be performed. To mitigate this, and to extend the HST life expectancy, a rate estimation and control algorithm was developed that requires two gyros to measure rate about two axes, with the remaining axis rate estimated using one of three alternate sensors. Three-axis magnetometers (MSS) are used for coarse rate estimation during large maneuvers and during occultations of other sensors. Fixed-Head Star Trackers (FHSTs) are used for rate estimation during safe mode recovery and during transition to science operations. Fine rate estimation during science operations is performed using the Fine Guidance Sensors (FGSs). The FHST mode (T2G) relies on star vectors as measured by the FHSTs to estimate vehicle rate about the axis not measured by the gyros. Since the FHSTs were not designed to estimate body rate, this method involves a unique set of problems that had to be overcome in the final design, such as the effect of FHST break tracks and moving targets on rate estimation. The solutions to these problems, as well as a detailed description of the design and implementation of the rate estimation, are presented. Also included are the time-domain and frequency-domain analyses of the T2G control law. A high-fidelity HST simulator (HSTSIM) was used to verify T2G performance prior to on-orbit use. Results of these simulations are also presented. Finally, analysis of actual T2G on-orbit test results is presented for design validation.

  15. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.

  16. Growth status and estimated growth rate of youth football players: a community-based study.

    PubMed

    Malina, Robert M; Morano, Peter J; Barron, Mary; Miller, Susan J; Cumming, Sean P

    2005-05-01

    To characterize the growth status of participants in community-sponsored youth football programs and to estimate rates of growth in height and weight. Mixed-longitudinal over 2 seasons. Two communities in central Michigan. Members of 33 youth football teams in 2 central Michigan communities in the 2000 and 2001 seasons (Mid-Michigan PONY Football League). Height and weight of all participants were measured prior to each season, 327 in 2000 and 326 in 2001 (n = 653). The body mass index (kg/m²) was calculated. Heights and weights did not differ from season to season and between the communities; the data were pooled and treated cross-sectionally. Increments of growth in height and weight were estimated for 166 boys with 2 measurements approximately 1 year apart to provide an estimate of growth rate. Growth status (size-attained) of youth football players relative to reference data (CDC) for American boys and estimated growth rate relative to reference values from 2 longitudinal studies of American boys. Median heights of youth football players approximate the 75th percentiles, while median weights approximate the 75th percentiles through 11 years and then drift toward the 90th percentiles of the reference. Median body mass indexes of youth football players fluctuate about the 85th percentiles of the reference. Estimated growth rates in height approximate the reference and may suggest earlier maturation, while estimated growth rates in weight exceed the reference. Youth football players are taller and especially heavier than reference values for American boys. Estimated rates of growth in height approximate medians for American boys and suggest earlier maturation. Estimated rates of growth in weight exceed those of the reference and may place many youth football players at risk for overweight/obesity, which in turn may be a risk factor for injury.

  17. A New Approach for Estimating Entrainment Rate in Cumulus Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu C.; Liu, Y.; Yum, S. S.

    2012-02-16

    A new approach is presented to estimate entrainment rate in cumulus clouds. The new approach is directly derived from the definition of fractional entrainment rate and relates it to the mixing fraction and the height above cloud base. The results derived from the new approach compare favorably with those obtained with a commonly used approach, and have smaller uncertainty. This new approach has several advantages: it eliminates the need for in-cloud measurements of temperature and water vapor content, which are often problematic in current aircraft observations; it has the potential for straightforwardly connecting the estimation of entrainment rate and the microphysical effects of entrainment-mixing processes; it also has the potential for developing a remote sensing technique to infer entrainment rate.
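One plausible reading of the relationship sketched in the abstract (this is a hedged illustration, not the authors' exact derivation): if entrainment dilutes cloudy air continuously with height, the fraction f of undiluted cloud-base air remaining at height h above cloud base decays roughly as f = exp(-λh), so the fractional entrainment rate can be read off from a single mixing-fraction estimate. The mixing fraction and height below are invented observations.

```python
import math

def entrainment_rate(mixing_fraction, height_above_base_m):
    """Fractional entrainment rate (per metre) from the undiluted
    (adiabatic) fraction remaining at a given height above cloud base,
    assuming continuous exponential dilution: f = exp(-lam * h)."""
    return -math.log(mixing_fraction) / height_above_base_m

# Illustrative numbers: 60% undiluted cloud-base air left 500 m above base.
lam = entrainment_rate(mixing_fraction=0.6, height_above_base_m=500.0)
# lam is on the order of 1e-3 per metre for these made-up inputs
```

Note how the inputs are exactly the quantities the abstract names (mixing fraction and height above cloud base), with no in-cloud temperature or water vapor measurement required.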

  18. Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G. P.

    Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.

  19. Serious fungal infections in Pakistan.

    PubMed

    Jabeen, K; Farooqi, J; Mirza, S; Denning, D; Zafar, A

    2017-06-01

    The true burden of fungal infection in Pakistan is unknown. High-risk populations for fungal infections [tuberculosis (TB), diabetes, chronic respiratory diseases, asthma, cancer, transplant and human immunodeficiency virus (HIV) infection] are numerous. Here, we estimate the burden of fungal infections to highlight their public health significance. Whole and at-risk population estimates were obtained from the WHO (TB), BREATHE study (COPD), UNAIDS (HIV), GLOBOCAN (cancer) and Heartfile (diabetes). Published data from Pakistan reporting fungal infections rates in general and specific populations were reviewed and used when applicable. Estimates were made for the whole population or specific populations at risk, as previously described in the LIFE methodology. Of the 184,500,000 people in Pakistan, an estimated 3,280,549 (1.78%) are affected by a serious fungal infection, omitting all cutaneous infection, oral candidiasis and allergic fungal sinusitis, which we could not estimate. Compared with other countries, the rates of candidaemia (21/100,000) and mucormycosis (14/100,000) are estimated to be very high, and are based on data from India. Chronic pulmonary aspergillosis rates are estimated to be high (39/100,000) because of the high TB burden. Invasive aspergillosis was estimated to be around 5.9/100,000. Fungal keratitis is also problematic in Pakistan, with an estimated rate of 44/100,000. Pakistan probably has a high rate of certain life- or sight-threatening fungal infections.

  20. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping-Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors in the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7-minute or longer time scales, the errors reduce dramatically. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain total are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
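The sampling problem described above can be reproduced with a toy simulation (a sketch, not the 2A-56 production code): a tipping-bucket gauge only reports discrete 0.254 mm tips, so one-minute rates differenced from tip counts are noisy, while averaging to a ~7-minute window shrinks the error. The synthetic rain event below is invented.

```python
import math

TIP_MM = 0.254   # rain depth per bucket tip

def true_rate(minute):
    """Synthetic rain rate (mm/h): a smooth bump over a 2-hour event."""
    return 6.0 * math.exp(-((minute - 60) / 25.0) ** 2)

# Accumulate rain and record a tip each time 0.254 mm is reached.
accum, tips_per_min = 0.0, []
for m in range(120):
    accum += true_rate(m) / 60.0        # mm falling in this minute
    tips = int(accum / TIP_MM)
    accum -= tips * TIP_MM
    tips_per_min.append(tips)

est_1min = [t * TIP_MM * 60.0 for t in tips_per_min]   # mm/h, quantized

def mean_abs_err(est, window):
    """Mean absolute error of window-averaged estimates vs the true rate."""
    errs = []
    for start in range(0, 120 - window + 1, window):
        e = sum(est[start:start + window]) / window
        t = sum(true_rate(m) for m in range(start, start + window)) / window
        errs.append(abs(e - t))
    return sum(errs) / len(errs)

err_1 = mean_abs_err(est_1min, 1)
err_7 = mean_abs_err(est_1min, 7)
assert err_7 < err_1   # averaging dramatically reduces the sampling error
```

At low rain rates a tip arrives only every few minutes, so a one-minute estimate is either zero or a 15.24 mm/h spike; this is the same low-rate pathology the abstract reports.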

  1. Impact of electron-captures on nuclei near N = 50 on core-collapse supernovae

    NASA Astrophysics Data System (ADS)

    Titus, R.; Sullivan, C.; Zegers, R. G. T.; Brown, B. A.; Gao, B.

    2018-01-01

    The sensitivity of the late stages of stellar core collapse to electron-capture rates on nuclei is investigated, with a focus on electron-capture rates on 74 nuclei with neutron number close to 50, just above doubly magic 78Ni. It is demonstrated that variations in key characteristics of the evolution, such as the lepton fraction, electron fraction, entropy, stellar density, and in-fall velocity are about 50% due to uncertainties in the electron-capture rates on nuclei in this region, although thousands of nuclei are included in the simulations. The present electron-capture rate estimates used for the nuclei in this high-sensitivity region of the chart of isotopes are primarily based on a simple approximation, and it is shown that the estimated rates are likely too high, by an order of magnitude or more. Electron-capture rates based on Gamow-Teller strength distributions calculated in microscopic theoretical models will be required to obtain better estimates. Gamow-Teller distributions extracted from charge-exchange experiments performed at intermediate energies serve to guide the development and benchmark the models. A previously compiled weak-rate library that is used in the astrophysical simulations was updated as part of the work presented here, by adding additional rate tables for nuclei near stability for mass numbers between 60 and 110.

  2. Impact of Sofosbuvir-Based Regimens for the Treatment of Hepatitis C After Liver Transplant on Renal Function: Results of a Canadian National Retrospective Study.

    PubMed

    Faisal, Nabiha; Bilodeau, Marc; Aljudaibi, Bandar; Hirch, Geri; Yoshida, Eric M; Hussaini, Trana; Ghali, Maged P; Congly, Stephen E; Ma, Mang M; Lilly, Leslie B

    2018-04-04

    We assessed the impact of sofosbuvir-based regimens on renal function in liver transplant recipients with recurrent hepatitis C virus and the role of renal function in the efficacy and safety of these regimens. In an expanded pan-Canadian cohort, 180 liver transplant recipients were treated with sofosbuvir-based regimens for hepatitis C virus recurrence from January 2014 to May 2015. Mean age was 58 ± 6.85 years, and 50% had F3/4 fibrosis. Patients were stratified into 4 groups based on baseline estimated glomerular filtration rate (calculated by the Modification of Diet in Renal Disease formula): < 30, 30 to 45, 46 to 60, and > 60 mL/min/1.73 m². The primary outcome was posttreatment change in renal function from baseline. Secondary outcomes included sustained virologic response at 12 weeks posttreatment and anemia-related and serious adverse events. Posttreatment renal function improved in most patients (58%). Renal function declined in 22% of patients, a decline that was more marked in those with estimated glomerular filtration rate < 30 mL/min/1.73 m², advanced cirrhosis (P = .05), and aggressive hepatitis C virus/fibrosing cholestatic hepatitis (P < .05). High rates (80%-88%) of sustained virologic response at 12 weeks posttreatment were seen across all renal function strata, although rates were somewhat lower in cirrhosis. Cirrhotic patients with glomerular filtration rates < 30 mL/min/1.73 m² had sustained virologic response rates at 12 weeks posttreatment comparable to the overall patient group. Rates of anemia-related adverse events and transfusion requirements increased across decreasing estimated glomerular filtration rate groups, with notably more occurrences with ribavirin-based regimens. Sofosbuvir-based regimens improved overall renal function in liver transplant recipients with sustained virologic response, suggesting an association with subclinical hepatitis C virus-related renal disease.
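The stratification above rests on the MDRD estimate of GFR; a minimal sketch of the commonly used four-variable MDRD study equation follows (the coefficients are the widely published ones, but the patient values are invented, and the exact band boundaries used by the study are inferred from the abstract):

```python
def egfr_mdrd(creatinine_mg_dl, age, female=False, black=False):
    """Four-variable MDRD study equation; returns eGFR in mL/min/1.73 m².
    Serum creatinine in mg/dL."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def stratum(egfr):
    """Renal-function bands as described in the abstract."""
    if egfr < 30:
        return "<30"
    if egfr <= 45:
        return "30-45"
    if egfr <= 60:
        return "46-60"
    return ">60"

# Hypothetical patient: 58-year-old male, serum creatinine 1.4 mg/dL.
e = egfr_mdrd(creatinine_mg_dl=1.4, age=58, female=False)
band = stratum(e)
```

Note the equation is normalized to 1.73 m² of body surface area, which is why the units throughout the abstract are mL/min/1.73 m².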

  3. Large, Moderate or Small? The Challenge of Measuring Mass Eruption Rates in Volcanic Eruptions

    NASA Astrophysics Data System (ADS)

    Gudmundsson, M. T.; Dürig, T.; Hognadottir, T.; Hoskuldsson, A.; Bjornsson, H.; Barsotti, S.; Petersen, G. N.; Thordarson, T.; Pedersen, G. B.; Riishuus, M. S.

    2015-12-01

    The potential impact of a volcanic eruption is highly dependent on its eruption rate. In explosive eruptions ash may pose an aviation hazard that can extend several thousand kilometers away from the volcano. Models of ash dispersion depend on estimates of the volcanic source, but such estimates are prone to high error margins. Recent explosive eruptions, including the 2010 eruption of Eyjafjallajökull in Iceland, have provided a wealth of data that can help in narrowing these error margins. Within the EU-funded FUTUREVOLC project, a multi-parameter system is currently under development, based on an array of ground and satellite-based sensors and models to estimate mass eruption rates in explosive eruptions in near-real time. Effusive eruptions are usually considered less of a hazard as lava flows travel slower than eruption clouds and affect smaller areas. However, major effusive eruptions can release large amounts of SO2 into the atmosphere, causing regional pollution. In very large effusive eruptions, hemispheric cooling and continent-scale pollution can occur, as happened in the Laki eruption in 1783 AD. The Bárdarbunga-Holuhraun eruption in 2014-15 was the largest effusive event in Iceland since Laki and at times caused high concentrations of SO2. As a result civil protection authorities had to issue warnings to the public. Harmful gas concentrations repeatedly persisted for many hours at a time in towns and villages at distances out to 100-150 km from the vents. As gas fluxes scale with lava fluxes, monitoring of eruption rates is therefore of major importance to constrain not only lava but also volcanic gas emissions. This requires repeated measurements of lava area and thickness. However, most mapping methods are problematic once lava flows become very large. Satellite data on thermal emissions from eruptions have been used with success to estimate eruption rate. SAR satellite data hold potential for delivering lava volume and eruption rate estimates, although the availability and repeat times of radar platforms are still low compared with, e.g., the thermal satellites. In the 2014-15 eruption, lava volume was estimated repeatedly from an aircraft-based system that combines a radar altimeter with an on-board DGPS, yielding several estimates of lava volume and time-averaged mass eruption rate.

  4. Comparative evaluation of urban storm water quality models

    NASA Astrophysics Data System (ADS)

    Vaze, J.; Chiew, Francis H. S.

    2003-10-01

    The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.

  5. Comparison of estimation methods for creating small area rates of acute myocardial infarction among Medicare beneficiaries in California.

    PubMed

    Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V

    2015-09-01

    Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options--from simple spatial smoothers to model-based methods--for estimating such rates exists, there are relatively few side-by-side comparisons, especially with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial Empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear Empirical Bayes methods may represent a good alternative for non-specialist investigators. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simply exponential and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.

  7. Testing survey-based methods for rapid monitoring of child mortality, with implications for summary birth history data.

    PubMed

    Brady, Eoghan; Hill, Kenneth

    2017-01-01

    Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death, and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). Mean absolute relative error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates. Mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates based on the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.

  8. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
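
The cluster-bootstrap resampling scheme in this entry can be illustrated independently of Cox's model; the sketch below resamples whole clusters with replacement and uses a clustered mean as a stand-in statistic (the data and cluster structure are invented):

```python
# Cluster bootstrap: draw whole clusters with replacement, pool the drawn
# observations, recompute the statistic, and take the SD of the replicates
# as the SE. A clustered mean stands in here for a Cox coefficient.
import random
import statistics

def cluster_bootstrap_se(clusters, stat, n_boot=500, seed=42):
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resampled = [rng.choice(clusters) for _ in clusters]  # clusters, not individuals
        reps.append(stat([x for c in resampled for x in c]))  # pool and recompute
    return statistics.stdev(reps)

clusters = [[1.0, 1.2], [3.0, 3.1], [5.0, 5.2], [0.5, 0.7]]
print(round(cluster_bootstrap_se(clusters, statistics.mean), 2))
```

Resampling at the cluster level preserves the within-cluster correlation inside each resampled unit, which is why it corrects the SE where observation-level resampling would not.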

  9. An inverse method to estimate emission rates based on nonlinear least-squares-based ensemble four-dimensional variational data assimilation with local air concentration measurements.

    PubMed

    Geng, Xiaobing; Xie, Zhenghui; Zhang, Lijun; Xu, Mei; Jia, Binghao

    2018-03-01

    An inverse source estimation method is proposed to reconstruct emission rates using local air concentration sampling data. It involves the nonlinear least squares-based ensemble four-dimensional variational data assimilation (NLS-4DVar) algorithm and a transfer coefficient matrix (TCM) created using FLEXPART, a Lagrangian atmospheric dispersion model. The method was tested by twin experiments and experiments with actual Cs-137 concentrations measured around the Fukushima Daiichi Nuclear Power Plant (FDNPP). Emission rates can be reconstructed sequentially with the progression of a nuclear accident, which is important in the response to a nuclear emergency. With pseudo observations generated continuously, most of the emission rates were estimated accurately, except under conditions when the wind blew off land toward the sea and at extremely slow wind speeds near the FDNPP. Because of the long duration of accidents and variability in meteorological fields, monitoring networks composed of land stations only in a local area are unable to provide enough information to support an emergency response. The errors in the estimation compared to the real observations from the FDNPP nuclear accident stemmed from a shortage of observations, lack of data control, and an inadequate atmospheric dispersion model without improvement and appropriate meteorological data. The proposed method should be developed further to meet the requirements of a nuclear emergency response. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Experimental and Estimated Rate Constants for the Reactions of Hydroxyl Radicals with Several Halocarbons

    NASA Technical Reports Server (NTRS)

    DeMore, W.B.

    1996-01-01

    Relative rate experiments are used to measure rate constants and temperature dependencies of the reactions of OH with CH3F (41), CH2FCl (31), CH2BrCl (30B1), CH2Br2 (30B2), CHBr3 (20B3), CF2BrCHFCl (123aB1(alpha)), and CF2ClCHCl2 (122). Rate constants for additional compounds of these types are estimated using an empirical rate constant estimation method which is based on measured rate constants for a wide range of halocarbons. The experimental data are combined with the estimated and previously reported rate constants to illustrate the effects of F, Cl, and Br substitution on OH rate constants for a series of 19 halomethanes and 25 haloethanes. Application of the estimation technique is further illustrated for some higher hydrofluorocarbons (HFCs), including CHF2CF2CF2CF2H (338pcc), CF3CHFCHFCF2CF3 (43-10mee), CF3CH2CH2CF3 (356ffa), CF3CH2CF2CH2CF3 (458mfcf), CF3CH2CHF2 (245fa), and CF3CH2CF2CH3 (365mfc). The predictions are compared with literature data for these compounds.

  11. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    PubMed

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important to solve the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-Fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals at reasonable computation cost.

  12. The value of Bayes' theorem for interpreting abnormal test scores in cognitively healthy and clinical samples.

    PubMed

    Gavett, Brandon E

    2015-03-01

    The base rates of abnormal test scores in cognitively normal samples have been a focus of recent research. The goal of the current study is to illustrate how Bayes' theorem uses these base rates--along with the same base rates in cognitively impaired samples and prevalence rates of cognitive impairment--to yield probability values that are more useful for making judgments about the absence or presence of cognitive impairment. Correlation matrices, means, and standard deviations were obtained from the Wechsler Memory Scale--4th Edition (WMS-IV) Technical and Interpretive Manual and used in Monte Carlo simulations to estimate the base rates of abnormal test scores in the standardization and special groups (mixed clinical) samples. Bayes' theorem was applied to these estimates to identify probabilities of normal cognition based on the number of abnormal test scores observed. Abnormal scores were common in the standardization sample (65.4% scoring below a scaled score of 7 on at least one subtest) and more common in the mixed clinical sample (85.6% scoring below a scaled score of 7 on at least one subtest). Probabilities varied according to the number of abnormal test scores, base rates of normal cognition, and cutoff scores. The results suggest that interpretation of base rates obtained from cognitively healthy samples must also account for data from cognitively impaired samples. Bayes' theorem can help neuropsychologists answer questions about the probability that an individual examinee is cognitively healthy based on the number of abnormal test scores observed.
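
The Bayes'-theorem step described above reduces to a one-line posterior calculation. The sketch below reuses the abstract's base rates (65.4% and 85.6% of examinees with at least one abnormal score) but assumes a hypothetical 80% prevalence of normal cognition:

```python
# Bayes'-theorem step for judging cognitive status from abnormal-score base
# rates. The two conditional rates come from the abstract (>= 1 subtest
# scaled score < 7: 65.4% of the standardization sample, 85.6% of the mixed
# clinical sample); the 80% prevalence of normal cognition is an assumption.

def p_normal_given_abnormal(p_abn_if_normal, p_abn_if_impaired, p_normal):
    """P(cognitively normal | observed abnormal-score pattern)."""
    num = p_abn_if_normal * p_normal
    return num / (num + p_abn_if_impaired * (1.0 - p_normal))

p = p_normal_given_abnormal(0.654, 0.856, 0.80)
print(round(p, 3))  # 0.753
```

Because abnormal scores are common even in healthy examinees, the posterior probability of normal cognition stays high unless the assumed prevalence of impairment is large.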

  13. Cascade and parallel combination (CPC) of adaptive filters for estimating heart rate during intensive physical exercise from photoplethysmographic signal

    PubMed Central

    Islam, Mohammad Tariqul; Tanvir Ahmed, Sk.; Zabir, Ishmam; Shahnaz, Celia

    2018-01-01

    The photoplethysmographic (PPG) signal is gaining popularity for monitoring heart rate in wearable devices because of the sensor's simple construction and low cost. The task becomes very difficult due to the presence of various motion artefacts. In this study, an algorithm based on a cascade and parallel combination (CPC) of adaptive filters is proposed in order to reduce the effect of motion artefacts. First, preliminary noise reduction is performed by averaging the two channel PPG signals. Next, in order to reduce the effect of motion artefacts, a cascaded filter structure consisting of three cascaded adaptive filter blocks is developed, where three-channel accelerometer signals are used as references for the motion artefacts. To further reduce the effect of noise, a scheme based on a convex combination of two such cascaded adaptive noise cancelers is introduced, where two widely used adaptive filters, namely recursive least squares and least mean squares filters, are employed. Heart rates are estimated from the noise-reduced PPG signal in the spectral domain. Finally, an efficient heart rate tracking algorithm is designed based on the nature of heart rate variability. The performance of the proposed CPC method is tested on a widely used public database. It is found that the proposed method offers very low estimation error and smooth heart rate tracking with a simple algorithmic approach. PMID:29515812

  14. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    USGS Publications Warehouse

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^(-0.916), prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^(0.73) L∞^(-0.33), prediction error = 0.6, length in cm) otherwise.
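
The two recommended estimators are simple enough to apply directly; a minimal sketch (the example inputs are illustrative, not values from the >200-species dataset):

```python
# The two recommended empirical estimators, as given in the abstract:
# a tmax-based form and a growth-based form. Inputs: tmax in years,
# K in 1/yr, L_inf in cm; M is the instantaneous natural mortality rate.

def m_from_tmax(tmax_years):
    return 4.899 * tmax_years ** -0.916

def m_from_growth(k_vb, l_inf_cm):
    return 4.118 * k_vb ** 0.73 * l_inf_cm ** -0.33

# Illustrative inputs, not from the paper's dataset:
print(round(m_from_tmax(20), 3))           # stock with a 20-yr maximum age
print(round(m_from_growth(0.2, 50.0), 3))  # K = 0.2/yr, L_inf = 50 cm
```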

  15. Catchment-scale groundwater recharge and vegetation water use efficiency

    NASA Astrophysics Data System (ADS)

    Troch, P. A. A.; Dwivedi, R.; Liu, T.; Meira, A.; Roy, T.; Valdés-Pineda, R.; Durcik, M.; Arciniega, S.; Brena-Naranjo, J. A.

    2017-12-01

    Precipitation undergoes a two-step partitioning when it falls on the land surface. At the land surface and in the shallow subsurface, rainfall or snowmelt can run off as infiltration/saturation excess or as quick subsurface flow. The rest is stored temporarily in the root zone. From the root zone, water can leave the catchment as evapotranspiration or percolate further and recharge deep storage (e.g. a fractured bedrock aquifer). Quantifying the average amount of water that recharges deep storage and sustains low flows is extremely challenging, as we lack reliable methods to quantify this flux at the catchment scale. It was recently shown, however, that for semi-arid catchments in Mexico, an index of vegetation water use efficiency, i.e. the Horton index (HI), could predict deep storage dynamics. Here we test this finding using 247 MOPEX catchments across the conterminous US, including energy-limited catchments. Our results show that the observed HI is indeed a reliable predictor of deep storage dynamics in space and time. We further investigate whether the HI can also predict average recharge rates across the conterminous US. We find that the HI can reliably predict the average recharge rate, estimated from the 50th percentile flow of the flow duration curve. Our results compare favorably with estimates of average recharge rates from the US Geological Survey. Previous research has shown that HI can be reliably estimated based on the aridity index, mean slope, and mean elevation of a catchment (Voepel et al., 2011). We recalibrated Voepel's model and used it to predict the HI for our 247 catchments. We then used these predicted values of the HI to estimate average recharge rates for our catchments, and compared them with those estimated from observed HI. We find that the accuracies of our predictions based on observed and predicted HI are similar.
This provides a method for estimating catchment-scale average recharge rates from easily derived catchment characteristics, such as climate and topography, without requiring discharge measurements.

  16. Comparison of in situ versus in vitro methods of fiber digestion at 120 and 288 hours to quantify the indigestible neutral detergent fiber fraction of corn silage samples.

    PubMed

    Bender, R W; Cook, D E; Combs, D K

    2016-07-01

    Ruminal digestion of neutral detergent fiber (NDF) is affected in part by the proportion of NDF that is indigestible (iNDF), and the rate at which the potentially digestible NDF (pdNDF) is digested. Indigestible NDF in forages is commonly determined as the NDF residue remaining after long-term in situ or in vitro incubations. The rate of pdNDF digestion can be determined by measuring the degradation of NDF in ruminal in vitro or in situ incubations at multiple time points, and fitting the change in residual pdNDF over time with log-transformed linear first-order or nonlinear mathematical treatments. The estimate of indigestible fiber is important because it sets the pool size of potentially digestible fiber, which in turn affects the estimate of the proportion of potentially digestible fiber remaining in the time series analysis. Our objective was to compare estimates of iNDF based on in vitro (IV) and in situ (IS) measurements at 2 fermentation end points (120 and 288 h). Further objectives were to compare the subsequent rate, lag, and estimated total-tract NDF digestibility (TTNDFD) when iNDF from each method was used with a 7-time-point in vitro incubation of NDF to model fiber digestion. Thirteen corn silage samples were dried and ground through a 1-mm screen in a Wiley mill. A 2×2 factorial trial was conducted to determine the effect of incubation time and method of iNDF analysis on iNDF concentration; the 2 factors were method of iNDF analysis (IS vs. IV) and incubation time (120 vs. 288 h). Four sample replicates were used, and approximately 0.5 g/sample was weighed into each Ankom F 0285 bag (Ankom Technology, Macedon, NY; pore size = 25 µm) for all techniques. The IV-120 technique gave a higher estimate of iNDF (37.8% of NDF) than the IS-120 (32.1% of NDF), IV-288 (31.2% of NDF), or IS-288 (25.7% of NDF) techniques. Each of the estimates of iNDF was then used to calculate the rate of degradation of pdNDF from a 7-time-point in vitro incubation.
When the IV-120 NDF residue was used, the subsequent rates of pdNDF digestion were fastest (2.8%/h) but the estimate of lag was longest (10.3 h), compared with when iNDF was based on the IS-120 or IV-288 NDF residues (rates of 2.3%/h and 2.4%/h; lag times of 9.7 and 9.8 h, respectively). The rate of pdNDF degradation was slowest (2.1%/h) when the IS-288 NDF residue was used as the estimate of iNDF. The estimate of lag based on IS-288 (9.4 h) was similar to the lag estimates calculated when IS-120 or IV-288 were used as the estimate of iNDF. The TTNDFD estimates did not differ between treatments (35.5%), however, because differences in estimated pools of iNDF resulted in subsequent changes in rates and lag times of fiber digestion that tended to cancel out. Estimates of fiber digestion kinetic parameters and TTNDFD were similar when fit to either the linear or nonlinear fiber degradation models. All techniques also yielded estimates of iNDF that were higher than iNDF predicted from the commonly used ratio of 2.4 × lignin. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
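
The log-transformed linear first-order treatment mentioned above can be sketched as an ordinary least-squares fit of ln(pdNDF residue) against time; the silage curve below is synthetic, with assumed iNDF, pool size, rate, and lag:

```python
# First-order fit of fiber digestion: residual pdNDF(t) = NDF(t) - iNDF
# decays as pdNDF0 * exp(-k * (t - lag)), so ln(pdNDF) is linear in time
# with slope -k. All data below are invented (units: % of NDF, hours).
import math

def fit_first_order(times_h, ndf_residue, indf, pdndf0):
    """Log-linear least-squares fit; returns (k per hour, lag in hours)."""
    ys = [math.log(r - indf) for r in ndf_residue]   # ln(pdNDF residue)
    n = len(times_h)
    mx = sum(times_h) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in times_h)
    sxy = sum((x - mx) * (y - my) for x, y in zip(times_h, ys))
    slope = sxy / sxx
    k = -slope                                       # fractional rate per hour
    lag = (my - slope * mx - math.log(pdndf0)) / k   # where fit meets ln(pool)
    return k, lag

# Synthetic curve: iNDF = 30, pdNDF pool = 70 (% of NDF), k = 0.03/h, lag = 8 h
ts = [12, 24, 48, 72, 96]
res = [30 + 70 * math.exp(-0.03 * (t - 8)) for t in ts]
k, lag = fit_first_order(ts, res, 30.0, 70.0)
print(round(k, 3), round(lag, 1))  # 0.03 8.0
```

Note how the assumed iNDF enters every log term: overestimating iNDF shrinks the residual pool and steepens the fitted slope, which is the pool-size/rate trade-off the abstract describes.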

  17. Lincoln estimates of mallard (Anas platyrhynchos) abundance in North America.

    PubMed

    Alisauskas, Ray T; Arnold, Todd W; Leafloor, James O; Otis, David L; Sedinger, James S

    2014-01-01

    Estimates of range-wide abundance, harvest, and harvest rate are fundamental for sound inferences about the role of exploitation in the dynamics of free-ranging wildlife populations, but the reliability of existing survey methods for abundance estimation is rarely assessed using alternative approaches. North American mallard populations have been surveyed each spring since 1955 using internationally coordinated aerial surveys, but population size can also be estimated with Lincoln's method using banding and harvest data. We estimated the late summer population size of adult and juvenile male and female mallards in western, midcontinent, and eastern North America using Lincoln's method of dividing (i) the total estimated harvest by the estimated harvest rate, calculated as (ii) the direct band recovery rate divided by (iii) the band reporting rate. Our goal was to compare estimates based on Lincoln's method with traditional estimates based on aerial surveys. Lincoln estimates of adult males and females alive in the period June-September were 4.0 (range: 2.5-5.9), 1.8 (range: 0.6-3.0), and 1.8 (range: 1.3-2.7) times larger than the respective aerial survey estimates for the western, midcontinent, and eastern mallard populations, and the two population estimates were only modestly correlated with each other (western: r = 0.70, 1993-2011; midcontinent: r = 0.54, 1961-2011; eastern: r = 0.50, 1993-2011). Higher Lincoln estimates are predictable given that the geographic scope of inference from Lincoln estimates is the entire population range, whereas sampling frames for aerial surveys are incomplete. Although each estimation method has a number of important potential biases, our review suggests that underestimation of total population size by aerial surveys is the most likely explanation.
In addition to providing measures of total abundance, Lincoln's method provides estimates of fecundity and population sex ratio and could be used in integrated population models to provide greater insights about population dynamics and management of North American mallards and most other harvested species.
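
Lincoln's estimator as described in this entry is a two-step division; a minimal sketch with illustrative (not mallard-survey) numbers:

```python
# Lincoln's method from the abstract: population N = total estimated harvest
# divided by harvest rate, where harvest rate = direct band recovery rate /
# band reporting rate. All numbers below are illustrative, not survey values.

def lincoln_estimate(total_harvest, recovery_rate, reporting_rate):
    harvest_rate = recovery_rate / reporting_rate
    return total_harvest / harvest_rate

# e.g. 1.2 M birds harvested, 6% direct recovery, 80% band reporting:
print(round(lincoln_estimate(1_200_000, 0.06, 0.8)))
```

The reporting-rate correction matters: ignoring it (treating recovery rate as harvest rate) would inflate the population estimate by the factor 1/reporting_rate.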

  18. TR-BREATH: Time-Reversal Breathing Rate Estimation and Detection.

    PubMed

    Chen, Chen; Han, Yi; Chen, Yan; Lai, Hung-Quoc; Zhang, Feng; Wang, Beibei; Liu, K J Ray

    2018-03-01

    In this paper, we introduce TR-BREATH, a time-reversal (TR)-based contact-free breathing monitoring system. It is capable of breathing detection and multiperson breathing rate estimation within a short period of time using off-the-shelf WiFi devices. The proposed system exploits the channel state information (CSI) to capture the miniature variations in the environment caused by breathing. To magnify the CSI variations, TR-BREATH projects CSIs into the TR resonating strength (TRRS) feature space and analyzes the TRRS by the Root-MUSIC and affinity propagation algorithms. Extensive indoor experimental results demonstrate a perfect detection rate of breathing. With only 10 s of measurement, a mean accuracy of can be obtained for single-person breathing rate estimation under the non-line-of-sight (NLOS) scenario. Furthermore, it achieves a mean accuracy of in breathing rate estimation for a dozen people under the line-of-sight scenario and a mean accuracy of in breathing rate estimation of nine people under the NLOS scenario, both with 63 s of measurement. Moreover, TR-BREATH can estimate the number of people with an error around 1. We also demonstrate that TR-BREATH is robust against packet loss and motions. With the prevalence of WiFi, TR-BREATH can be applied for in-home and real-time breathing monitoring.

  19. Estimation of mortality for stage-structured zooplankton populations: What is to be done?

    NASA Astrophysics Data System (ADS)

    Ohman, Mark D.

    2012-05-01

    Estimation of zooplankton mortality rates in field populations is a challenging task that some contend is inherently intractable. This paper examines several of the objections that are commonly raised to efforts to estimate mortality. We find that there are circumstances in the field where it is possible to sequentially sample the same population and to resolve biologically caused mortality, albeit with error. Precision can be improved with sampling directed by knowledge of the physical structure of the water column, combined with adequate sample replication. Intercalibration of sampling methods can make it possible to sample across the life history in a quantitative manner. Rates of development can be constrained by laboratory-based estimates of stage durations from temperature- and food-dependent functions, mesocosm studies of molting rates, or approximation of development rates from growth rates, combined with the vertical distributions of organisms in relation to food and temperature gradients. Careful design of field studies guided by the assumptions of specific estimation models can lead to satisfactory mortality estimates, but model uncertainty also needs to be quantified. We highlight additional issues requiring attention to further advance the field, including the need for linked cooperative studies of the rates and causes of mortality of co-occurring holozooplankton and ichthyoplankton.

  20. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
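
The expected number of annual failures follows from integrating the ROCOF of a power-law (Weibull-intensity) NHPP over the usage interval; a sketch with illustrative shape and scale parameters:

```python
# NHPP with Weibull (power-law) intensity u(t) = (beta/theta)*(t/theta)**(beta-1):
# integrating u over [t1, t2] gives (t2/theta)**beta - (t1/theta)**beta,
# the expected failure count in that interval. beta > 1 indicates a
# deteriorating system. The parameter values below are illustrative only.

def expected_failures(t1, t2, beta, theta):
    """Expected number of failures over the usage interval [t1, t2] hours."""
    return (t2 / theta) ** beta - (t1 / theta) ** beta

# Expected failures during the fifth year of service (8760 h per year):
ef = expected_failures(4 * 8760, 5 * 8760, beta=1.8, theta=10_000)
print(round(ef, 2))
```

Multiplying this count by a per-failure cost gives the expected annual maintenance cost the abstract uses for overhaul/replacement decisions.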

  1. [Differences in mortality between indigenous and non-indigenous persons in Brazil based on the 2010 Population Census].

    PubMed

    Campos, Marden Barbosa de; Borges, Gabriel Mendes; Queiroz, Bernardo Lanza; Santos, Ricardo Ventura

    2017-06-12

    There have been no previous estimates of differences in adult or overall mortality for indigenous peoples in Brazil, although such indicators are extremely important for reducing social inequities in health in this population segment. Brazil has made significant strides in recent decades to fill the gaps in data on indigenous peoples in the national statistics. The aim of this paper is to present estimated mortality rates for indigenous and non-indigenous persons in different age groups, based on data from the 2010 Population Census. The estimates used the question on deaths from specific household surveys. The results indicate important differences in mortality rates between indigenous and non-indigenous persons in all the selected age groups and in both sexes. These differences are more pronounced in childhood, especially in girls. The indicators corroborate the fact that indigenous peoples in Brazil are in a situation of extreme vulnerability in terms of their health, based on these unprecedented estimates of the size of these differences.

  2. Estimating and validating harvesting system production through computer simulation

    Treesearch

    John E. Baumgras; Curt C. Hassler; Chris B. LeDoux

    1993-01-01

    A Ground Based Harvesting System Simulation model (GB-SIM) has been developed to estimate stump-to-truck production rates and multiproduct yields for conventional ground-based timber harvesting systems in Appalachian hardwood stands. Simulation results reflect inputs that define harvest site and timber stand attributes, wood utilization options, and key attributes of...

  3. Estimating Children's Soil/Dust Ingestion Rates through ...

    EPA Pesticide Factsheets

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/du

  4. High mitochondrial mutation rates estimated from deep-rooting Costa Rican pedigrees

    PubMed Central

    Madrigal, Lorena; Melendez-Obando, Mauricio; Villegas-Palma, Ramon; Barrantes, Ramiro; Raventos, Henrieta; Pereira, Reynaldo; Luiselli, Donata; Pettener, Davide; Barbujani, Guido

    2012-01-01

    Estimates of mutation rates for the noncoding hypervariable Region I (HVR-I) of mitochondrial DNA (mtDNA) vary widely, depending on whether they are inferred from phylogenies (assuming that molecular evolution is clock-like) or directly from pedigrees. All pedigree-based studies so far were conducted on populations of European origin. In this paper we analyzed 19 deep-rooting pedigrees in a population of mixed origin in Costa Rica. We calculated two estimates of the HVR-I mutation rate, one considering all apparent mutations, and one disregarding changes at sites known to be mutational hot spots and eliminating genealogy branches that might be suspected to include errors or unrecognized adoptions along the female lines. At the end of this procedure, we still observed a mutation rate equal to 1.24 × 10−6 per site per year, i.e., at least three times as high as estimates derived from phylogenies. Our results confirm that mutation rates observed in pedigrees are much higher than those estimated assuming a neutral model of long-term HVR-I evolution. We argue that, until the cause of these discrepancies is fully understood, both lower estimates (i.e., those derived from phylogenetic comparisons) and higher, direct estimates such as those obtained in this study should be considered when modeling evolutionary and demographic processes. PMID:22460349
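
A pedigree-based mutation rate is, at its core, observed mutations divided by site-transmission-years; the counts below are hypothetical stand-ins chosen only to land near the reported 1.24 × 10−6 per site per year, not the study's data:

```python
# Back-of-envelope form of a pedigree-based mutation rate: mutations
# observed along scored maternal transmissions, divided by (sites scored *
# total transmission-years). Counts are hypothetical, not from the paper.

def mutation_rate(n_mutations, n_sites, transmission_years):
    return n_mutations / (n_sites * transmission_years)

r = mutation_rate(12, 360, 26_900)  # e.g. 12 mutations over 360 HVR-I sites
print(f"{r:.2e}")
```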

  5. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
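The three-parameter function being estimated can be written, under a common logistic parameterization with a fixed guess rate (an illustrative form; the study's exact parameterization is not given in the abstract):

```python
import math

def psychometric(x, threshold, slope, lapse, guess=0.5):
    """Three-parameter psychometric function: probability of a correct
    response at stimulus level x, with the guess rate fixed (e.g. 0.5
    for a two-interval task). The lapse rate caps the ceiling below 1."""
    core = 1.0 / (1.0 + math.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * core

# At the threshold the function sits halfway between the guess rate
# and its lapse-limited ceiling of 1 - lapse.
p_mid = psychometric(x=10.0, threshold=10.0, slope=0.5, lapse=0.02)
```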

  6. 14 CFR Appendix A to Part 1215 - Estimated Service Rates in 1997 Dollars for TDRSS Standard Services (Based on NASA Escalation...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION TRACKING AND DATA RELAY SATELLITE SYSTEM (TDRSS) Pt..., return telemetry, or tracking, or any combination of these, the base rate is $184.00 per minute for non-U...

  7. Harvesting costs for management planning for ponderosa pine plantations.

    Treesearch

    Roger D. Fight; Alex Gicqueau; Bruce R. Hartsough

    1999-01-01

    The PPHARVST computer application is Windows-based, public-domain software used to estimate harvesting costs for management planning for ponderosa pine (Pinus ponderosa Dougl. ex Laws.) plantations. The equipment production rates were developed from existing studies. Equipment cost rates were based on 1996 prices for new...

  8. Oxygen transfer rate estimation in oxidation ditches from clean water measurements.

    PubMed

    Abusam, A; Keesman, K J; Meinema, K; Van Straten, G

    2001-06-01

    Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
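For contrast with the loop-of-CSTRs approach, the classical clean-water determination that the abstract says breaks down for oxidation ditches fits KLa from the reaeration curve dC/dt = KLa·(Cs − C); a least-squares sketch on synthetic data (not the paper's estimator, which identifies k = KLa·VA instead):

```python
import math

def fit_kla(times, conc, c_sat):
    """Classical clean-water reaeration fit: dC/dt = KLa*(Cs - C) gives
    ln(Cs - C) = ln(Cs - C0) - KLa*t, so KLa is the negative slope of
    a log-linear least-squares regression."""
    y = [math.log(c_sat - c) for c in conc]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) / \
            sum((t - tbar) ** 2 for t in times)
    return -slope

# Synthetic clean-water data with KLa = 0.25 per hour, Cs = 9 mg/L, C0 = 1 mg/L.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
cs = [9.0 - 8.0 * math.exp(-0.25 * t) for t in ts]
kla = fit_kla(ts, cs, 9.0)
```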

  9. Methods of adjusting the stable estimates of fertility for the effects of mortality decline.

    PubMed

    Abou-Gamrah, H

    1976-03-01

    Summary: The paper shows how stable population methods, based on the age structure and the rate of increase, may be used to estimate the demographic measures of a quasi-stable population. After a discussion of known methods for adjusting the stable estimates to allow for the effects of mortality decline, two new methods are presented, the application of which requires less information. The first method does not need any supplementary information, and the second method requires an estimate of the difference between the last two five-year intercensal rates of increase, i.e. five times the annual change of the rate of increase during the last ten years. For these new methods we do not need to know the onset year of mortality decline, as in the Coale-Demeny method, or a long series of rates of increase, as in Zachariah's method.

  10. The Galactic Nova Rate Revisited

    NASA Astrophysics Data System (ADS)

    Shafter, A. W.

    2017-01-01

    Despite its fundamental importance, a reliable estimate of the Galactic nova rate has remained elusive. Here, the overall Galactic nova rate is estimated by extrapolating the observed rate for novae reaching m ≤ 2 to the entire Galaxy using a two-component disk plus bulge model for the distribution of stars in the Milky Way. The present analysis improves on previous work by considering important corrections for incompleteness in the observed rate of bright novae and by employing a Monte Carlo analysis to better estimate the uncertainty in the derived nova rates. Several models are considered to account for differences in the assumed properties of bulge and disk nova populations and in the absolute magnitude distribution. The simplest models, which assume uniform properties between bulge and disk novae, predict Galactic nova rates of ~50 to in excess of 100 per year, depending on the assumed incompleteness at bright magnitudes. Models where the disk novae are assumed to be more luminous than bulge novae are explored, and predict nova rates up to 30% lower, in the range of ~35 to ~75 per year. An average of the most plausible models yields a rate of 50 (+31/−23) per year, which is arguably the best estimate currently available for the nova rate in the Galaxy. Virtually all models produce rates that represent significant increases over recent estimates, and bring the Galactic nova rate into better agreement with that expected based on comparison with the latest results from extragalactic surveys.
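The extrapolate-and-propagate logic described above can be sketched as a small Monte Carlo over an uncertain completeness correction, with all numbers illustrative rather than the paper's:

```python
import random

def galactic_rate_samples(n_bright_per_yr, frac_visible, completeness_lo,
                          completeness_hi, n_draws=10000, seed=1):
    """Monte Carlo sketch: the observed rate of bright novae is corrected
    for incompleteness (drawn from an assumed uniform range) and divided
    by the model fraction of all Galactic novae that would appear that
    bright, yielding a distribution of total-rate estimates."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_draws):
        completeness = rng.uniform(completeness_lo, completeness_hi)
        samples.append(n_bright_per_yr / completeness / frac_visible)
    return samples

# Illustrative inputs: 0.5 bright novae observed per year, 2% of all
# Galactic novae reaching that brightness, completeness 40-90%.
rates = galactic_rate_samples(0.5, 0.02, 0.4, 0.9)
median_rate = sorted(rates)[len(rates) // 2]
```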

  11. Estimating the effect of a rare time-dependent treatment on the recurrent event rate.

    PubMed

    Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E

    2018-05-30

    In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
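The matching step of the sequential stratification can be sketched as nearest-neighbor matching on prognostic score (illustrative data; the full method also restricts controls to patients still untreated and at risk at the index patient's treatment time):

```python
def match_controls(treated_scores, control_scores, n_controls=2):
    """For each treated patient, select the untreated patients with the
    closest prognostic score, forming the matched set that the final
    stratified model compares post-treatment event rates against."""
    matches = {}
    for pid, score in treated_scores.items():
        ranked = sorted(control_scores,
                        key=lambda c: abs(control_scores[c] - score))
        matches[pid] = ranked[:n_controls]
    return matches

# Hypothetical prognostic scores from the stage-one proportional rates model.
treated = {"t1": 0.8}
controls = {"c1": 0.1, "c2": 0.75, "c3": 0.82, "c4": 2.0}
sets = match_controls(treated, controls)
```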

  12. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence.

    PubMed

    Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates the inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
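The single-recording power-law calibration described above reduces to a linear fit in log-log space; a sketch on synthetic data (the study's envelope-extraction step is not reproduced here):

```python
import math

def fit_power_law(envelope, flow):
    """Fit flow = a * envelope**b by least-squares regression in log-log
    space, using one calibration recording's paired envelope and flow
    samples."""
    x = [math.log(e) for e in envelope]
    y = [math.log(f) for f in flow]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = math.exp(ybar - b * xbar)
    return a, b

def estimate_flow(envelope, a, b):
    """Apply the calibrated model to a new recording's acoustic envelope."""
    return [a * e ** b for e in envelope]

# Synthetic calibration recording with a true relationship flow = 2 * envelope^0.5.
env = [0.1, 0.2, 0.4, 0.8, 1.6]
flw = [2.0 * e ** 0.5 for e in env]
a, b = fit_power_law(env, flw)
```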

  14. A Practical Guide for Estimating Dietary Fat and Fiber Using Limited Food Frequency Data.

    ERIC Educational Resources Information Center

    Neale, Anne Victoria; And Others

    1992-01-01

    A methodology is presented for estimating daily intake of dietary fat and fiber based on limited food frequency data. The procedure, which relies on National Food Consumption Survey data and daily consumption rates, can provide baseline estimates of dietary patterns for health promotion policymakers. (SLD)

  15. 40 CFR 98.295 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... value shall be the best available estimate(s) of the parameter(s), based on all available process data or data used for accounting purposes. (c) For each missing value collected during the performance test (hourly CO2 concentration, stack gas volumetric flow rate, or average process vent flow from mine...

  16. How Metastrategic Considerations Influence the Selection of Frequency Estimation Strategies

    ERIC Educational Resources Information Center

    Brown, Norman R.

    2008-01-01

    Prior research indicates that enumeration-based frequency estimation strategies become increasingly common as memory for relevant event instances improves and that moderate levels of context memory are associated with moderate rates of enumeration [Brown, N. R. (1995). Estimation strategies and the judgment of event frequency. Journal of…

  17. Comparing UK, USA and Australian values for EQ-5D as a health utility measure of oral health.

    PubMed

    Brennan, D S; Teusner, D N

    2015-09-01

    Using generic measures to examine outcomes of oral disorders can add additional information relating to health utility. However, different algorithms are available to generate health states. The aim was to assess UK-, US- and Australian-based algorithms for the EuroQol (EQ-5D) in relation to their discriminative and convergent validity. Methods: Data were collected from adults in Australia aged 30-61 years by mailed survey in 2009-10, including the EQ-5D and a range of self-reported oral health variables, and self-rated oral and general health. Responses were collected from n=1,093 persons (response rate 39.1%). UK-based EQ-5D estimates were lower (0.85) than the USA and Australian estimates (0.91). EQ-5D was associated (p<0.01) with all seven oral health variables, with differences in utility scores ranging from 0.03 to 0.06 for the UK, from 0.04 to 0.07 for the USA, and from 0.05 to 0.08 for the Australian-based estimates. The effect sizes (ESs) of the associations with all seven oral health variables were similar for the UK (ES=0.26 to 0.49), USA (ES=0.31 to 0.48) and Australian-based (ES=0.31 to 0.46) estimates. EQ-5D was correlated with global dental health for the UK (rho=0.29), USA (rho=0.30) and Australian-based estimates (rho=0.30), and correlations with global general health were the same (rho=0.42) for the UK, USA and Australian-based estimates. EQ-5D exhibited equivalent discriminative validity and convergent validity in relation to oral health variables for the UK, USA and Australian-based estimates.

  18. Input-decomposition balance of heterotrophic processes in a warm-temperate mixed forest in Japan

    NASA Astrophysics Data System (ADS)

    Jomura, M.; Kominami, Y.; Ataka, M.; Makita, N.; Dannoura, M.; Miyama, T.; Tamai, K.; Goto, Y.; Sakurai, S.

    2010-12-01

    Carbon accumulation in forest ecosystems has been evaluated using three approaches. The first is net ecosystem exchange (NEE) estimated by tower flux measurement. The second is net ecosystem production (NEP) estimated by biometric measurements; NEP can be expressed as the difference between net primary production and heterotrophic respiration, or equivalently as the annual increment in the plant biomass (ΔW) plus soil (ΔS) carbon pools: NEP = ΔW + ΔS. The third approach requires evaluating the annual carbon increment in the soil compartment. The soil carbon accumulation rate cannot be measured directly over a short term because the annual accumulation is small, but it can be estimated by model calculation. The Rothamsted carbon (Roth-C) model is a soil organic carbon turnover model and a useful tool for estimating the rate of soil carbon accumulation. However, the model does not sufficiently capture variation in the decomposition processes of organic matter in forest ecosystems: organic matter pools have different turnover rates, creating temporal variation in the input-decomposition balance, and are also highly variable in their spatial distribution. Thus, to estimate the rate of soil carbon accumulation, temporal and spatial variation in the input-decomposition balance of heterotrophic processes should be incorporated into the model. In this study, we estimated the input-decomposition balance and the rate of soil carbon accumulation using a modified Roth-C model. We measured the respiration rates of many types of organic matter, such as leaf litter, fine root litter, twigs and coarse woody debris, using a chamber method, which allowed us to relate respiration rate to the diameter of the organic matter (leaf and fine root litter were assigned a diameter of zero). Small organic matter, such as leaf and fine root litter, showed high decomposition respiration. This may reflect differences in the structure of the organic matter: because coarse woody debris is roughly cylindrical, microbes decompose it from the surface, so its respiration rate is lower than that of leaf and fine root litter. Based on these results, we modified the Roth-C model and estimated the soil carbon accumulation rate in recent years. A soil survey showed that the forest soil stored 30 t C ha⁻¹ in the O and A horizons, which we used to evaluate the modified model. Because NEP can be expressed as the annual increment in the plant biomass plus soil carbon pools, estimating NEP with this approach allows NEP estimated by micrometeorological and ecological approaches to be cross-checked, reducing the uncertainty of the NEP estimation.

  19. Estimates of cancer burden in Abruzzo and Molise.

    PubMed

    Foschi, Roberto; Viviano, Lorena; Rossi, Silvia

    2013-01-01

    Abruzzo and Molise are two regions located in the south of Italy, currently without population-based cancer registries. The aim of this paper is to provide estimates of cancer incidence, mortality and prevalence for the Abruzzo and Molise regions combined. The MIAMOD method, a back-calculation approach to estimate and project the incidence of chronic diseases from mortality and patient survival, was used for the estimation of incidence and prevalence by calendar year (from 1970 to 2015) and age (from 0 to 99). The survival estimates are based on cancer registry data of southern Italy. The most frequently diagnosed cancers were those of the colon and rectum, breast and prostate, with 1,394, 1,341 and 698 new diagnosed cases, respectively, estimated in 2012. Incidence rates were estimated to increase constantly for female breast cancer, colorectal cancer in men and melanoma in both sexes. For prostate cancer and male lung cancer, the incidence rates increased, reaching a peak, and then decreased. In women the incidence of colorectal and lung cancer stabilized after an initial increase. For stomach and cervical cancers, the incidence rates showed a constant decrease. Prevalence was increasing for all the considered cancer sites with the exception of the cervix uteri. The highest prevalence values were estimated for breast and colorectal cancer with about 12,300 and over 8,200 cases in 2012, respectively. In the 2000s the mortality rates declined for all cancers except skin melanoma and female lung cancer, for which the mortality was almost stable. This paper provides a description of the burden of the major cancers in Abruzzo and Molise until 2015. The increase in cancer survival, added to population aging, will inflate the cancer prevalence. In order to better evaluate the cancer burden in the two regions, it would be important to implement cancer registration.

  20. Estimating Bat and Bird Mortality Occurring at Wind Energy Turbines from Covariates and Carcass Searches Using Mixture Models

    PubMed Central

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144
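A minimal sketch of the mixture idea, assuming found carcasses at each turbine are Poisson-distributed with mean proportional to acoustic activity times carcass detection probability (illustrative numbers and names, not the published model):

```python
def collision_rate_coefficient(carcasses, activity, detection_prob):
    """Pooled maximum-likelihood estimate of the per-activity collision
    coefficient c, assuming expected carcasses found at turbine i equal
    c * activity_i * detection_prob_i."""
    return sum(carcasses) / sum(a * p for a, p in zip(activity, detection_prob))

def predict_collisions(activity, c):
    """Predict the actual (not just found) collision rate from an
    acoustic activity index alone, once c has been estimated."""
    return [c * a for a in activity]

# Illustrative data: carcasses found, acoustic activity index, and the
# probability that a killed bat is found by the search protocol.
c_hat = collision_rate_coefficient([2, 0, 3], [40.0, 10.0, 50.0],
                                   [0.25, 0.25, 0.25])
```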

  2. Estimating mortality rates of adult fish from entrainment through the propellers of river towboats

    USGS Publications Warehouse

    Gutreuter, S.; Dettmers, J.M.; Wahl, David H.

    2003-01-01

    We developed a method to estimate mortality rates of adult fish caused by entrainment through the propellers of commercial towboats operating in river channels. The method combines trawling while following towboats (to recover a fraction of the kills) and application of a hydrodynamic model of diffusion (to estimate the fraction of the total kills collected in the trawls). The sampling problem is unusual and required quantifying relatively rare events. We first examined key statistical properties of the entrainment mortality rate estimators using Monte Carlo simulation, which demonstrated that a design-based estimator and a new ad hoc estimator are both unbiased and converge to the true value as the sample size becomes large. Next, we estimated the entrainment mortality rates of adult fishes in Pool 26 of the Mississippi River and the Alton Pool of the Illinois River, where we observed kills that we attributed to entrainment. Our estimates of entrainment mortality rates were 2.52 fish/km of towboat travel (80% confidence interval, 1.00-6.09 fish/km) for gizzard shad Dorosoma cepedianum, 0.13 fish/km (0.00-0.41) for skipjack herring Alosa chrysochloris, and 0.53 fish/km (0.00-1.33) for both shovelnose sturgeon Scaphirhynchus platorynchus and smallmouth buffalo Ictiobus bubalus. Our approach applies more broadly to commercial vessels operating in confined channels, including other large rivers and intracoastal waterways.
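The combined estimator amounts to scaling the trawl recoveries by the diffusion-model recovery fraction before dividing by the distance of towboat travel sampled; a sketch with illustrative numbers (not the study's data):

```python
def entrainment_rate(fish_recovered, recovery_fraction, km_followed):
    """Sketch of the combined estimator: trawling recovers only a
    fraction of the kill (estimated separately from a hydrodynamic
    diffusion model), so the total kill is scaled up before dividing
    by the distance of towboat travel sampled."""
    total_kill = fish_recovered / recovery_fraction
    return total_kill / km_followed

# Hypothetical inputs: 18 carcasses recovered, a 5% modeled recovery
# fraction, and 143 km of towboat travel followed.
rate = entrainment_rate(fish_recovered=18, recovery_fraction=0.05,
                        km_followed=143.0)
```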

  3. Combining Satellite Microwave Radiometer and Radar Observations to Estimate Atmospheric Latent Heating Profiles

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.; Shie, Chung-Lin; L'Ecuyer, Tristan S.; Tao, Wei-Kuo

    2009-01-01

    In this study, satellite passive microwave sensor observations from the TRMM Microwave Imager (TMI) are utilized to make estimates of latent + eddy sensible heating rates (Q1-QR) in regions of precipitation. The TMI heating algorithm (TRAIN) is calibrated, or "trained," using relatively accurate estimates of heating based upon spaceborne Precipitation Radar (PR) observations collocated with the TMI observations over a one-month period. The heating estimation technique is based upon a previously described Bayesian methodology, but with improvements in supporting cloud-resolving model simulations, an adjustment of precipitation echo tops to compensate for model biases, and a separate scaling of convective and stratiform heating components that leads to an approximate balance between estimated vertically-integrated condensation and surface precipitation. Estimates of Q1-QR from TMI compare favorably with the PR training estimates and show only modest sensitivity to the cloud-resolving model simulations of heating used to construct the training data. Moreover, the net condensation in the corresponding annual mean satellite latent heating profile is within a few percent of the annual mean surface precipitation rate over the tropical and subtropical oceans where the algorithm is applied. Comparisons of Q1 produced by combining TMI Q1-QR with independently derived estimates of QR show reasonable agreement with rawinsonde-based analyses of Q1 from two field campaigns, although the satellite estimates exhibit heating profile structure with sharper and more intense heating peaks than the rawinsonde estimates.

  4. Time cost of child rearing and its effect on women's uptake of free health checkups in Japan.

    PubMed

    Anezaki, Hisataka; Hashimoto, Hideki

    2018-05-01

    Women of child-rearing age have the lowest uptake rates for health checkups in several developed countries. The time cost incurred by conflicting child-rearing roles may contribute to this gap in access to health checkups. We estimated the time cost of child rearing empirically, and analyzed its potential impact on uptake of free health checkups based on a sample of 1606 women with a spouse/partner from the dataset of a population-based survey conducted in the greater Tokyo metropolitan area in 2010. We used a selection model to estimate the counterfactual wage of non-working mothers, and estimated the number of children using a simultaneous equation model to account for the endogeneity between job participation and child rearing. The time cost of child rearing was obtained based on the estimated effects of women's wages and number of children on job participation. We estimated the time cost to mothers of rearing a child aged 0-3 years as 16.9 USD per hour, and the cost for a child aged 4-5 years as 15.0 USD per hour. Based on this estimation, the predicted uptake rate of women who did not have a child was 61.7%, while the predicted uptake rates for women with a child aged 0-3 and 4-5 were 54.2% and 58.6%, respectively. These results suggest that, although Japanese central/local governments provide free health checkup services, this policy does not fully compensate for the time cost of child rearing. It is strongly recommended that policies should be developed to address the time cost of child rearing, with the aim of closing the gender gap and securing universal access to preventive healthcare services in Japan. Copyright © 2018. Published by Elsevier Ltd.

  5. Evaluating Exposure-Response Associations for Non-Hodgkin Lymphoma with Varying Methods of Assigning Cumulative Benzene Exposure in the Shanghai Women's Health Study.

    PubMed

    Friesen, Melissa C; Bassig, Bryan A; Vermeulen, Roel; Shu, Xiao-Ou; Purdue, Mark P; Stewart, Patricia A; Xiang, Yong-Bing; Chow, Wong-Ho; Ji, Bu-Tian; Yang, Gong; Linet, Martha S; Hu, Wei; Gao, Yu-Tang; Zheng, Wei; Rothman, Nathaniel; Lan, Qing

    2017-01-01

    To provide insight into the contributions of exposure measurements to job exposure matrices (JEMs), we examined the robustness of an association between occupational benzene exposure and non-Hodgkin lymphoma (NHL) to varying exposure assessment methods. NHL risk was examined in a prospective population-based cohort of 73087 women in Shanghai. A mixed-effects model that combined a benzene JEM with >60000 short-term, area benzene inspection measurements was used to derive two sets of measurement-based benzene estimates: 'job/industry-specific' estimates (our presumed best approach) were derived from the model's fixed effects (year, JEM intensity rating) and random effects (occupation, industry); 'calibrated JEM' estimates were derived using only the fixed effects. 'Uncalibrated JEM' (using the ordinal JEM ratings) and exposure duration estimates were also calculated. Cumulative exposure for each subject was calculated for each approach based on varying exposure definitions defined using the JEM's probability ratings. We examined the agreement between the cumulative metrics and evaluated changes in the benzene-NHL associations. For our primary exposure definition, the job/industry-specific estimates were moderately to highly correlated with all other approaches (Pearson correlation 0.61-0.89; Spearman correlation > 0.99). All these metrics resulted in statistically significant exposure-response associations for NHL, with negligible gain in model fit from using measurement-based estimates. Using more sensitive or specific exposure definitions resulted in elevated but non-significant associations. The robust associations observed here with varying benzene assessment methods provide support for a benzene-NHL association. While incorporating exposure measurements did not improve model fit, the measurements allowed us to derive quantitative exposure-response curves. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
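Each cumulative exposure metric compared above comes down to the same JEM bookkeeping, summed over a subject's job history (field names here are illustrative, not the study's variables):

```python
def cumulative_exposure(job_history):
    """Cumulative exposure as the sum over a subject's job spells of
    (years in job) * (estimated intensity for that job/industry/period).
    Swapping the intensity source -- ordinal JEM rating, calibrated JEM
    prediction, or job/industry-specific measurement-based estimate --
    yields the different metrics compared in the abstract."""
    return sum(spell["years"] * spell["intensity"] for spell in job_history)

# Hypothetical job history: 5 years at higher intensity, 10 at lower.
history = [{"years": 5, "intensity": 0.2}, {"years": 10, "intensity": 0.05}]
cum = cumulative_exposure(history)
```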

  6. Estimation of incidence and recovery rates of Plasmodium falciparum parasitaemia from longitudinal data

    PubMed Central

    Bekessy, A.; Molineaux, L.; Storey, J.

    1976-01-01

    A method is described of estimating the malaria incidence rate ĥ and the recovery rate r from longitudinal data. The method is based on the assumption that the phenomenon of patent parasitaemia can be represented by a reversible two-state catalytic model; it is applicable to all problems that can be represented by such a model. The method was applied to data on falciparum malaria from the West African savanna and the findings suggested that immunity increases the rate of recovery from patent parasitaemia by a factor of up to 10, and also reduces the number of episodes of patent parasitaemia resulting from one inoculation. Under the effect of propoxur, ĥ varies with the estimated man-biting rate of the vector while r̂ increases, possibly owing to reduced super-infection. PMID:800968
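    The reversible two-state catalytic model described above has a closed-form inverse: from the observed fractions of negatives that become positive (and positives that become negative) over an interval, the incidence rate h and recovery rate r can be recovered directly. The sketch below is illustrative of that inversion only, with made-up rates, not the paper's estimator or data.

```python
import math

def rates_from_transitions(p01, p10, dt):
    """Invert the reversible two-state ('catalytic') model.

    p01: observed fraction of negatives that are positive after dt
    p10: observed fraction of positives that are negative after dt
    Returns (h, r): incidence and recovery rates per unit time.
    For a two-state Markov chain, p01 + p10 = 1 - exp(-(h+r)*dt)
    and p01/p10 = h/r, which the two lines below solve.
    """
    s = -math.log(1.0 - p01 - p10) / dt      # s = h + r
    h = s * p01 / (p01 + p10)
    r = s * p10 / (p01 + p10)
    return h, r

# Round-trip check: simulate the forward model, then recover the rates.
h_true, r_true, dt = 2.0, 5.0, 0.1
s = h_true + r_true
p01 = (h_true / s) * (1 - math.exp(-s * dt))
p10 = (r_true / s) * (1 - math.exp(-s * dt))
h_est, r_est = rates_from_transitions(p01, p10, dt)
```

    In practice the observed transition fractions carry sampling noise, so the paper fits the model to longitudinal survey rounds rather than inverting a single pair of fractions.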

  7. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing: it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
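    The core statistical idea — smoothing noisy per-window error-rate estimates with a Gaussian process and extrapolating the current rate — can be sketched in a few lines. This is a bare-bones zero-mean GP regression with an RBF kernel and hand-picked hyperparameters, not the paper's protocol; all numbers are illustrative.

```python
import numpy as np

def gp_predict(t_train, y_train, t_test, length=5.0, sigma_f=0.1, sigma_n=0.02):
    """Minimal GP regression (RBF kernel): smooth noisy per-window
    error-rate estimates and predict the rate at new times."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(t_train, t_train) + sigma_n**2 * np.eye(len(t_train))
    Ks = k(t_test, t_train)
    alpha = np.linalg.solve(K, y_train)      # posterior mean weights
    return Ks @ alpha

rng = np.random.default_rng(0)
t = np.arange(20.0)                                  # window index (time)
true_rate = 0.05 + 0.01 * np.sin(t / 5.0)            # slowly drifting error rate
observed = true_rate + rng.normal(0, 0.005, t.size)  # noisy window estimates
pred = gp_predict(t, observed, np.array([20.0]))     # predict the next window
```

    A decoder could then weight syndrome information by `pred` instead of a stale, fixed error rate, which is the mechanism behind the reported reduction in logical failures.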

  8. Quadratic semiparametric Von Mises calculus

    PubMed Central

    Robins, James; Li, Lingling; Tchetgen, Eric

    2009-01-01

    We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect the method leads to a bias-variance trade-off, and results in estimators that converge at a slower than n^{-1/2} rate. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at an n^{-1/2} rate. PMID:23087487

  9. Leveraging Distant Relatedness to Quantify Human Mutation and Gene-Conversion Rates

    PubMed Central

    Palamara, Pier Francesco; Francioli, Laurent C.; Wilton, Peter R.; Genovese, Giulio; Gusev, Alexander; Finucane, Hilary K.; Sankararaman, Sriram; Sunyaev, Shamil R.; de Bakker, Paul I.W.; Wakeley, John; Pe’er, Itsik; Price, Alkes L.

    2015-01-01

    The rate at which human genomes mutate is a central biological parameter that has many implications for our ability to understand demographic and evolutionary phenomena. We present a method for inferring mutation and gene-conversion rates by using the number of sequence differences observed in identical-by-descent (IBD) segments together with a reconstructed model of recent population-size history. This approach is robust to, and can quantify, the presence of substantial genotyping error, as validated in coalescent simulations. We applied the method to 498 trio-phased sequenced Dutch individuals and inferred a point mutation rate of 1.66 × 10−8 per base per generation and a rate of 1.26 × 10−9 for <20 bp indels. By quantifying how estimates varied as a function of allele frequency, we inferred the probability that a site is involved in non-crossover gene conversion as 5.99 × 10−6. We found that recombination does not have observable mutagenic effects after gene conversion is accounted for and that local gene-conversion rates reflect recombination rates. We detected a strong enrichment of recent deleterious variation among mismatching variants found within IBD regions and observed summary statistics of local sharing of IBD segments to closely match previously proposed metrics of background selection; however, we found no significant effects of selection on our mutation-rate estimates. We detected no evidence of strong variation of mutation rates in a number of genomic annotations obtained from several recent studies. Our analysis suggests that a mutation-rate estimate higher than that reported by recent pedigree-based studies should be adopted in the context of DNA-based demographic reconstruction. PMID:26581902
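    The back-of-envelope logic behind IBD-based mutation-rate inference: mismatches on a segment accumulate along two lineages for the time back to the common ancestor, so the per-base, per-generation rate is mismatches / (2 × TMRCA × length). The numbers below are hypothetical, chosen only to land near the cited order of magnitude; the paper additionally models population-size history and genotyping error.

```python
def mutation_rate_from_ibd(n_mismatches, segment_bp, tmrca_generations):
    """Point estimate of the per-base, per-generation mutation rate from an
    IBD segment: mismatches accumulate on two lineages for tmrca generations."""
    return n_mismatches / (2.0 * tmrca_generations * segment_bp)

# Illustrative numbers (hypothetical, not from the study):
# a 5 Mb IBD segment, TMRCA of 30 generations, 5 observed mismatches.
mu = mutation_rate_from_ibd(5, 5_000_000, 30)
```

    Averaging such estimates over many segments, with TMRCA inferred from segment lengths, gives the genome-wide rate.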

  10. Novel health monitoring method using an RGB camera.

    PubMed

    Hassan, M A; Malik, A S; Fofi, D; Saad, N; Meriaudeau, F

    2017-11-01

    In this paper we present a novel health monitoring method that estimates the heart rate and respiratory rate using an RGB camera. The heart rate and the respiratory rate are estimated from the photoplethysmography (PPG) signal and the respiratory motion. The method mainly operates by using the green spectrum of the RGB camera to generate a multivariate PPG signal and performing multivariate de-noising on the video signal to extract the resultant PPG signal. A periodicity-based voting scheme (PVS) was used to measure the heart rate and respiratory rate from the estimated PPG signal. We evaluated our proposed method against state-of-the-art heart rate measuring methods for two scenarios using the MAHNOB-HCI database and a self-collected naturalistic-environment database. The methods were further evaluated in various naturalistic-environment scenarios, such as a motion-variance session and a skin-tone-variance session. Our proposed method operated robustly during the experiments and outperformed the state-of-the-art heart rate measuring methods by compensating for the effects of the naturalistic environment.
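    The periodicity extraction at the heart of such methods reduces to finding the dominant spectral peak of the PPG trace inside a physiologically plausible band. The sketch below does exactly that with an FFT on a synthetic trace; it is not the paper's full voting scheme (PVS), and the frame rate and signal are illustrative.

```python
import numpy as np

def heart_rate_bpm(ppg, fs, lo=0.7, hi=3.0):
    """Estimate heart rate as the dominant spectral peak of a PPG trace,
    restricted to a plausible band (42-180 bpm)."""
    ppg = ppg - np.mean(ppg)                       # remove DC offset
    freqs = np.fft.rfftfreq(len(ppg), 1.0 / fs)
    power = np.abs(np.fft.rfft(ppg))**2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

fs = 30.0                                          # typical RGB-camera frame rate
t = np.arange(0, 20, 1.0 / fs)                     # 20 s window
ppg = np.sin(2 * np.pi * 1.2 * t) \
    + 0.3 * np.random.default_rng(1).normal(size=t.size)  # 1.2 Hz pulse + noise
bpm = heart_rate_bpm(ppg, fs)
```

    A 20 s window gives 0.05 Hz (3 bpm) frequency resolution, which is why video HR methods trade window length against responsiveness.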

  11. Students Left Behind: Measuring 10th to 12th Grade Student Persistence Rates in Texas High Schools

    PubMed Central

    Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta

    2012-01-01

    The No Child Left Behind Act requires states to publish high school graduation rates for public schools and the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policy-makers about how to measure high school graduation rates. In this paper, we use longitudinal data tracking a cohort of students at 82 Texas public high schools to assess the accuracy and precision of three widely-used high school graduation rate measures: Texas’s official graduation rates, and two competing estimates based on publicly available enrollment data from the Common Core of Data. Our analyses show that these widely-used approaches yield inaccurate and highly imprecise estimates of high school graduation and persistence rates. We propose several guidelines for using existing graduation and persistence rate data and argue that a national effort to track students as they progress through high school is essential to reconcile conflicting estimates. PMID:23077375
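    The simplest of the enrollment-based estimates the paper critiques is a cohort ratio: 12th-grade enrollment divided by the same cohort's 10th-grade enrollment two years earlier. The sketch below uses made-up counts; its very simplicity (ignoring transfers and grade retention) is the source of the imprecision the authors document.

```python
def persistence_rate(enroll_grade10_y0, enroll_grade12_y2):
    """CCD-style persistence estimate: 12th-grade enrollment two years later
    divided by the 10th-grade cohort size (illustrative numbers only)."""
    return enroll_grade12_y2 / enroll_grade10_y0

rate = persistence_rate(500, 430)
```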

  12. Students Left Behind: Measuring 10(th) to 12(th) Grade Student Persistence Rates in Texas High Schools.

    PubMed

    Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta

    2010-06-01

    The No Child Left Behind Act requires states to publish high school graduation rates for public schools and the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policy-makers about how to measure high school graduation rates. In this paper, we use longitudinal data tracking a cohort of students at 82 Texas public high schools to assess the accuracy and precision of three widely-used high school graduation rate measures: Texas's official graduation rates, and two competing estimates based on publicly available enrollment data from the Common Core of Data. Our analyses show that these widely-used approaches yield inaccurate and highly imprecise estimates of high school graduation and persistence rates. We propose several guidelines for using existing graduation and persistence rate data and argue that a national effort to track students as they progress through high school is essential to reconcile conflicting estimates.

  13. Turbulent Reconnection Rates from Cluster Observations in the Magnetosheath

    NASA Technical Reports Server (NTRS)

    Wendel, Deirdre

    2011-01-01

    The role of turbulence in producing fast reconnection rates is an important unresolved question. Scant in situ analyses exist. We apply multiple spacecraft techniques to a case of nonlinear turbulent reconnection in the magnetosheath to test various theoretical results for turbulent reconnection rates. To date, in situ estimates of the contribution of turbulence to reconnection rates have been calculated from an effective electric field derived through linear wave theory. However, estimates of reconnection rates based on fully nonlinear turbulence theories and simulations exist that are amenable to multiple spacecraft analyses. Here we present the linear and nonlinear theories and apply some of the nonlinear rates to Cluster observations of reconnecting, turbulent current sheets in the magnetosheath. We compare the results to the net reconnection rate found from the inflow speed. Ultimately, we intend to test and compare linear and nonlinear estimates of the turbulent contribution to reconnection rates and to measure the relative contributions of turbulence and the Hall effect.
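    The "net reconnection rate found from the inflow speed" mentioned above is conventionally the dimensionless ratio of the plasma inflow speed to the Alfven speed. The one-liner below is a sketch of that normalization with illustrative magnetosheath-scale numbers, not values from the Cluster event.

```python
def reconnection_rate(v_inflow_km_s, v_alfven_km_s):
    """Dimensionless reconnection rate: inflow speed normalized by the
    Alfven speed (illustrative numbers, not the Cluster measurements)."""
    return v_inflow_km_s / v_alfven_km_s

rate = reconnection_rate(20.0, 200.0)   # a value near 0.1 is conventionally 'fast'
```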

  14. Turbulent Reconnection Rates from Cluster Observations in the Magnetosheath

    NASA Technical Reports Server (NTRS)

    Wendel, Deirdre

    2011-01-01

    The role of turbulence in producing fast reconnection rates is an important unresolved question. Scant in situ analyses exist. We apply multiple spacecraft techniques to a case of nonlinear turbulent reconnection in the magnetosheath to test various theoretical results for turbulent reconnection rates. To date, in situ estimates of the contribution of turbulence to reconnection rates have been calculated from an effective electric field derived through linear wave theory. However, estimates of reconnection rates based on fully nonlinear turbulence theories and simulations exist that are amenable to multiple spacecraft analyses. Here we present the linear and nonlinear theories and apply some of the nonlinear rates to Cluster observations of reconnecting, turbulent current sheets in the magnetosheath. We compare the results to the net reconnection rate found from the inflow speed. Ultimately, we intend to test and compare linear and nonlinear estimates of the turbulent contribution to reconnection rates and to measure the relative contributions of turbulence and the Hall effect.

  15. High-global warming potential F-gas emissions in California: comparison of ambient-based versus inventory-based emission estimates, and implications of refined estimates.

    PubMed

    Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy

    2014-01-21

    To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming-potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% underestimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement with ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.

  16. Estimation of dioxin and furan elimination rates with a pharmacokinetic model.

    PubMed

    Van der Molen, G W; Kooijman, B A; Wittsiepe, J; Schrey, P; Flesch-Janys, D; Slob, W

    2000-01-01

    Quantitative description of the pharmacokinetics of dioxins and furans in humans can be of great help for the assessment of health risks posed by these compounds. To that end, the elimination rates of sixteen 2,3,7,8-chlorinated dibenzodioxins and dibenzofurans are estimated from both a longitudinal and a cross-sectional data set using the model of Van der Molen et al. [Van der Molen G.W., Kooijman S.A.L.M., and Slob W. A generic toxicokinetic model for persistent lipophilic compounds in humans: an application to TCDD. Fundam Appl Toxicol 1996: 31: 83-94]. In this model the elimination rate is given by the (constant) specific elimination rate multiplied by the ratio between the lipid weight of the liver and total body lipid weight. Body composition, body weight and intake are assumed to depend on age. The elimination rate is, therefore, not constant. For 49-year-old males, the elimination rate estimates range between 0.03 per year for 1,2,3,6,7,8-hexaCDF and 1.0 per year for octaCDF. The elimination rates of the most toxic congeners, 2,3,7,8-tetraCDD, 1,2,3,7,8-pentaCDD, and 2,3,4,7,8-pentaCDF, were estimated at 0.09, 0.06, and 0.07 per year, respectively, based on the cross-sectional data, and 0.11, 0.09, and 0.09 based on the longitudinal data. The elimination rates of dioxins decrease with age by between 0.0011 per year for 1,2,3,6,7,8-hexaCDD and 0.0035 per year for 1,2,3,4,6,7,8-heptaCDD. For furans the average decrease is 0.0033 per year. The elimination rates were estimated both from a longitudinal and a cross-sectional data set, and the two agreed quite well with each other after taking account of historical changes in average intake levels.
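    The consequence of an age-declining elimination rate can be sketched with a first-order decay whose rate constant drops each year. The starting rate and annual decline below are round numbers in the range the abstract reports, not the paper's fitted values for any specific congener.

```python
import math

def body_burden(c0, years, k_elim_start, k_decline):
    """First-order elimination with a rate that declines linearly with age,
    as reported for dioxins (parameters illustrative, not fitted values)."""
    c, k = c0, k_elim_start
    for _ in range(years):
        c *= math.exp(-k)            # one year of first-order elimination
        k = max(k - k_decline, 0.0)  # elimination slows with age
    return c

burden = body_burden(1.0, 10, 0.09, 0.002)   # fraction remaining after 10 years
```

    After 10 years the cumulative exponent is the arithmetic sum of the yearly rates (here 0.81), so roughly 44% of the initial burden remains — noticeably more than the 41% a constant 0.09/yr rate would predict.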

  17. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data.

    PubMed

    Perez, S Ivan; Tejedor, Marcelo F; Novo, Nelson M; Aristide, Leandro

    2013-01-01

    The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular based approaches to date the initial divergence of the platyrrhine clade, Bayesian estimations under a relaxed-clock model and substitution rate plus generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27-31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21-29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well supported fossil information gives a more robust divergence time estimate of a clade.

  18. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data

    PubMed Central

    Perez, S. Ivan; Tejedor, Marcelo F.; Novo, Nelson M.; Aristide, Leandro

    2013-01-01

    The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular based approaches to date the initial divergence of the platyrrhine clade, Bayesian estimations under a relaxed-clock model and substitution rate plus generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27–31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21–29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well supported fossil information gives a more robust divergence time estimate of a clade. 
PMID:23826358

  19. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
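    A toy version of the duplicate-genotype idea: a pair of independent genotypings agrees only if both calls are correct, so the observed discordance d satisfies 1 - d = (1 - e)^2 for a per-genotype error rate e (ignoring compensating double errors). Pedant does this properly by maximum likelihood, per locus, split into allelic dropout and false alleles; the sketch below is only the moment-estimate intuition.

```python
import math

def per_genotype_error(discordance):
    """Moment estimate of the per-genotype error rate e from the observed
    discordance d between duplicate genotypings: 1 - d = (1 - e)^2."""
    return 1.0 - math.sqrt(1.0 - discordance)

e = per_genotype_error(0.0199)   # discordance of ~2% of duplicate pairs
```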

  20. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    PubMed Central

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
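    One concrete instance of estimation under missing-completely-at-random data is a pairwise-complete sample covariance followed by banding, which exploits the bandable structure. The sketch below is that generic two-step recipe, not the paper's exact (generalized, rate-optimal) estimator; the data are synthetic.

```python
import numpy as np

def banded_cov_incomplete(X, bandwidth):
    """Pairwise-complete sample covariance + banding for bandable matrices
    under MCAR missingness (NaN marks a missing entry)."""
    p = X.shape[1]
    S = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            ok = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])   # complete pairs only
            xi, xj = X[ok, i], X[ok, j]
            S[i, j] = np.mean((xi - xi.mean()) * (xj - xj.mean()))
    # Banding: zero out entries more than `bandwidth` off the diagonal.
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= bandwidth
    return S * mask

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))                # true covariance = identity
X[rng.random(X.shape) < 0.1] = np.nan        # 10% missing completely at random
Sb = banded_cov_incomplete(X, bandwidth=1)
```

    Because each entry uses a different number of complete pairs, the effective sample size varies across the matrix — one of the complications the paper's minimax analysis must handle.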

  1. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.

  2. Improving Video Based Heart Rate Monitoring.

    PubMed

    Lin, Jian; Rozado, David; Duenser, Andreas

    2015-01-01

    Non-contact measurements of cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from a mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented and several modifications were explored in order to determine which one could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates from the different methods on the extracted videos against readings from a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving the accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring in novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.

  3. Estimation of an optimal chemotherapy utilisation rate for cancer: setting an evidence-based benchmark for quality cancer care.

    PubMed

    Jacob, S A; Ng, W L; Do, V

    2015-02-01

    There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications. Copyright © 2014 The Royal College of Radiologists. 
Published by Elsevier Ltd. All rights reserved.
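    The modelling recipe in this abstract — merge guideline indications with epidemiological proportions — amounts to summing the population fractions of the attribute combinations for which chemotherapy is indicated. The decision-tree branches and proportions below are entirely hypothetical, illustrating the structure of the calculation only.

```python
def optimal_utilisation(indications):
    """Epidemiology-weighted sum over guideline indications: each entry is
    (population proportion with the attribute combination, chemo indicated?)."""
    return sum(p for p, indicated in indications if indicated)

# Hypothetical tumour site: chemo indicated for stage III/IV and for
# node-positive stage II, not otherwise (proportions invented).
tree = [
    (0.40, False),  # stage I
    (0.15, True),   # stage II, node-positive
    (0.10, False),  # stage II, node-negative
    (0.35, True),   # stage III/IV
]
rate = optimal_utilisation(tree)
```

    The study's Monte Carlo sensitivity analysis corresponds to re-running this sum with the proportions (and contested indications) drawn from their uncertainty ranges.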

  4. BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliadis, C.; Anderson, K. S.; Coc, A.

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
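    In the simplest Bayesian setting — a constant S-factor, Gaussian measurement errors, a flat positive prior — the posterior can be sampled with a few lines of Metropolis. This is a bare-bones sketch of the inference style, far simpler than the paper's hierarchical framework; the measurements are invented.

```python
import numpy as np

def metropolis_sfactor(data, sigma, n_iter=20000, step=0.05, seed=3):
    """Tiny Metropolis sampler for a constant S-factor S with Gaussian
    measurement errors and a flat positive prior."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    def loglike(x):
        return -0.5 * np.sum((data - x)**2) / sigma**2 if x > 0 else -np.inf
    s = data.mean()                  # start at the sample mean
    ll = loglike(s)
    samples = []
    for _ in range(n_iter):
        prop = s + rng.normal(0, step)
        llp = loglike(prop)
        if np.log(rng.random()) < llp - ll:   # Metropolis accept/reject
            s, ll = prop, llp
        samples.append(s)
    return np.array(samples[n_iter // 2:])    # drop burn-in

measurements = [0.21, 0.19, 0.22, 0.20, 0.18]   # hypothetical S-factor data
post = metropolis_sfactor(measurements, sigma=0.02)
```

    The posterior mean and spread of `post` then propagate directly into reaction-rate integrals, which is how non-Gaussian rate uncertainties arise naturally in the Bayesian approach.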

  5. A comparison of recharge rates in aquifers of the United States based on groundwater-age data

    USGS Publications Warehouse

    McMahon, P.B.; Plummer, Niel; Böhlke, J.K.; Shapiro, S.D.; Hinkle, S.R.

    2011-01-01

    An overview is presented of existing groundwater-age data and their implications for assessing rates and timescales of recharge in selected unconfined aquifer systems of the United States. Apparent age distributions in aquifers determined from chlorofluorocarbon, sulfur hexafluoride, tritium/helium-3, and radiocarbon measurements from 565 wells in 45 networks were used to calculate groundwater recharge rates. Timescales of recharge were defined by 1,873 distributed tritium measurements and 102 radiocarbon measurements from 27 well networks. Recharge rates ranged from < 10 to 1,200 mm/yr in selected aquifers on the basis of measured vertical age distributions and assuming exponential age gradients. On a regional basis, recharge rates based on tracers of young groundwater exhibited a significant inverse correlation with mean annual air temperature and a significant positive correlation with mean annual precipitation. Comparison of recharge derived from groundwater ages with recharge derived from stream base-flow evaluation showed similar overall patterns but substantial local differences. Results from this compilation demonstrate that age-based recharge estimates can provide useful insights into spatial and temporal variability in recharge at a national scale and factors controlling that variability. Local age-based recharge estimates provide empirical data and process information that are needed for testing and improving more spatially complete model-based methods.
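    The "exponential age gradient" assumption mentioned above has a standard closed form (the Vogel model): in an unconfined aquifer of thickness H and porosity n, the age at depth z is t(z) = (nH/R) ln(H/(H-z)), which can be inverted for the recharge rate R from a single tracer age. The values below are illustrative, not taken from the compilation.

```python
import math

def recharge_from_age(porosity, aquifer_thickness_m, depth_m, age_yr):
    """Vogel exponential-age model for an unconfined aquifer: invert a
    groundwater age measured at one depth for the recharge rate (m/yr)."""
    H, z = aquifer_thickness_m, depth_m
    return porosity * H * math.log(H / (H - z)) / age_yr

# Hypothetical example: 25-year-old water at mid-depth of a 30 m aquifer.
r_mm_per_yr = 1000 * recharge_from_age(0.3, 30.0, 15.0, 25.0)
```

    The result (~250 mm/yr) sits comfortably inside the <10 to 1,200 mm/yr range the compilation reports, which is why single-depth inversions like this are useful sanity checks on full vertical age profiles.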

  6. Dynamic rating curve assessment for hydrometric stations and computation of the associated uncertainties: Quality and station management indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan

    2014-09-01

    A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating the associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve, computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km²) is used as an example throughout the paper. Other stations are used to illustrate specific points.
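    A rating curve is typically the power law Q = a(h - h0)^b, which becomes linear in log space once the cease-to-flow stage h0 is known. The sketch below fits one curve to synthetic gaugings; it illustrates the stage-discharge relation only, not EDF's dynamic per-gauging method (h0 and all parameter values are assumed, and in practice h0 is fitted too).

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0=0.0):
    """Fit Q = a (h - h0)^b to gaugings by linear regression in log space
    (h0 assumed known here)."""
    x = np.log(np.asarray(stage) - h0)
    y = np.log(np.asarray(discharge))
    b, log_a = np.polyfit(x, y, 1)   # slope = exponent b, intercept = ln(a)
    return np.exp(log_a), b

# Synthetic gaugings from Q = 2.5 (h - 0.2)^1.6 (hypothetical parameters).
h = np.array([0.5, 0.8, 1.2, 1.7, 2.3])
q = 2.5 * (h - 0.2)**1.6
a, b = fit_rating_curve(h, q, h0=0.2)
```

    The dynamic method described in the abstract refits such a curve at every new gauging and attaches an uncertainty model whose variance grows with the curve's age.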

  7. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision-based methods for performing obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second, depending on the vehicle speed, and will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and a shared-memory parallel computer.

  8. Quantitative measurement of protein digestion in simulated gastric fluid.

    PubMed

    Herman, Rod A; Korjagin, Valerie A; Schafer, Barry W

    2005-04-01

    The digestibility of novel proteins in simulated gastric fluid is considered to be an indicator of reduced risk of allergenic potential in food, and estimates of digestibility for transgenic proteins expressed in crops are required for making a human-health risk assessment by regulatory authorities. The estimation of first-order rate constants for digestion under conditions of low substrate concentration was explored for two protein substrates (azocoll and DQ-ovalbumin). Data conformed to first-order kinetics, and half-lives were relatively insensitive to significant variations in both substrate and pepsin concentration when high purity pepsin preparations were used. Estimation of digestion efficiency using densitometric measurements of relative protein concentration based on SDS-PAGE corroborated digestion estimates based on measurements of dye or fluorescence release from the labeled substrates. The suitability of first-order rate constants for estimating the efficiency of the pepsin digestion of novel proteins is discussed. Results further support a kinetic approach as appropriate for comparing the digestibility of proteins in simulated gastric fluid.
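
    The first-order kinetics described here can be sketched as follows; the rate constant and time points are hypothetical, not the paper's measured values:

    ```python
    import numpy as np

    def first_order_fit(t, conc):
        """Estimate the first-order rate constant k from ln(C) = ln(C0) - k*t."""
        slope, _ = np.polyfit(np.asarray(t), np.log(np.asarray(conc)), 1)
        k = -slope
        half_life = np.log(2) / k
        return k, half_life

    # Hypothetical substrate time course with k = 0.5 per minute
    t = np.array([0, 1, 2, 4, 8, 16], dtype=float)
    c = 100.0 * np.exp(-0.5 * t)
    k, t_half = first_order_fit(t, c)
    ```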

  9. Augmenting Satellite Precipitation Estimation with Lightning Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahrooghy, Majid; Anantharaj, Valentine G; Younan, Nicolas H.

    2013-01-01

    We have used lightning information to augment the Precipitation Estimation from Remotely Sensed Imagery using an Artificial Neural Network - Cloud Classification System (PERSIANN-CCS). Co-located lightning data are used to segregate cloud patches, segmented from GOES-12 infrared data, into either electrified (EL) or non-electrified (NEL) patches. A set of features is extracted separately for the EL and NEL cloud patches. The features for the EL cloud patches include new features based on the lightning information. The cloud patches are classified and clustered using self-organizing maps (SOM). Then brightness temperature and rain rate (T-R) relationships are derived for the different clusters. Rain rates are estimated for the cloud patches based on their representative T-R relationship. The Equitable Threat Score (ETS) for daily precipitation estimates is improved by almost 12% for the winter season. In the summer, no significant improvements in ETS are noted.

  10. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
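
    The paper's localized model-fitting algorithm is not reproduced here; a much simpler spectral sketch of heart rate estimation from a synthetic decaying-cosine pulse signal, restricted to the physiological band, looks like:

    ```python
    import numpy as np

    fs = 100.0                      # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)    # 10 s analysis window
    f_true = 1.2                    # true pulse frequency: 72 BPM
    # Synthetic PPG-like signal: decaying cosine plus noise
    rng = np.random.default_rng(1)
    sig = np.cos(2 * np.pi * f_true * t) * np.exp(-0.05 * t) \
        + 0.1 * rng.standard_normal(t.size)

    # Pick the spectral peak within the physiological band (40-180 BPM)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    hr_bpm = 60 * freqs[band][np.argmax(spec[band])]
    ```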

  11. Tree mortality rates and tree population projections in Baltimore, Maryland, USA

    Treesearch

    David J. Nowak; Miki Kuroda; Daniel E. Crane

    2004-01-01

    Based on re-measurements (1999 and 2001) of randomly-distributed permanent plots within the city boundaries of Baltimore, Maryland, trees are estimated to have an annual mortality rate of 6.6%, with an overall annual net change in the number of live trees of -4.2%. Tree mortality rates were significantly different based on tree size, condition, species, and land use....
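
    A projection under a constant annual net change rate can be sketched as below; the starting tree count is hypothetical, and only the -4.2% rate comes from the abstract:

    ```python
    def project_tree_population(n0, net_change=-0.042, years=10):
        """Project a live-tree count under a constant annual net change rate."""
        return [round(n0 * (1 + net_change) ** y) for y in range(years + 1)]

    # Hypothetical starting population of 2.8 million live trees
    counts = project_tree_population(2_800_000, years=10)
    ```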

  12. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.

  13. A comparison of ground-based and aircraft-based methane emission flux estimates in a western oil and natural gas production basin

    NASA Astrophysics Data System (ADS)

    Snare, Dustin A.

    Recent increases in oil and gas production from unconventional reservoirs have brought with them an increase in methane emissions. Estimating methane emissions from oil and gas production is complex due to differences in equipment designs, maintenance, and variable product composition. Site access to oil and gas production equipment can be difficult and time-consuming, making remote assessment of emissions vital to understanding local point source emissions. This work presents measurements of methane leakage made from a new ground-based mobile laboratory and a research aircraft around oil and gas fields in the Upper Green River Basin (UGRB) of Wyoming in 2014. It was recently shown that the application of the Point Source Gaussian (PSG) method, utilizing atmospheric dispersion tables developed by the US EPA (Appendix B), is an effective way to accurately measure methane flux from a ground-based location downwind of a source without the use of a tracer (Brantley et al., 2014). Aircraft measurements of methane enhancement regions downwind of oil and natural gas production, together with Planetary Boundary Layer observations, are utilized to obtain a flux for the entire UGRB. Methane emissions are compared to volumes of natural gas produced to derive a leakage rate from production operations for individual production sites and basin-wide production. Ground-based flux estimates yield a leakage rate of 0.14-0.78% (95% confidence interval) per site, with a mass-weighted average (MWA) of 0.20% for all sites. Aircraft-based flux estimates yield a MWA leakage rate of 0.54-0.91% for the UGRB.
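
    The mass-weighted average (MWA) leakage rate reduces to total emitted methane over total produced methane; the per-site masses below are hypothetical:

    ```python
    def mass_weighted_leakage(emissions, productions):
        """MWA leakage: total methane emitted over total methane produced."""
        return sum(emissions) / sum(productions)

    # Hypothetical per-site methane emission and production masses (kg/h)
    emissions = [1.2, 0.4, 3.0]
    productions = [600.0, 400.0, 1000.0]
    mwa = mass_weighted_leakage(emissions, productions)  # 4.6 / 2000 = 0.23%
    ```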

  14. Rainfall Measurement with a Ground Based Dual Frequency Radar

    NASA Technical Reports Server (NTRS)

    Takahashi, Nobuhiro; Horie, Hiroaki; Meneghini, Robert

    1997-01-01

    Dual frequency methods are one of the most useful ways to estimate precise rainfall rates. However, there are some difficulties in applying this method to ground-based radars because of the existence of a blind zone and possible error in the radar calibration. Because of these problems, supplemental observations such as rain gauges or satellite-link estimates of path-integrated attenuation (PIA) are needed. This study shows how to estimate rainfall rate with a ground-based dual frequency radar using rain gauge and satellite-link data. Application of this method to stratiform rainfall is also shown, and the method is compared with a single-wavelength method. Data were obtained from a dual frequency (10 GHz and 35 GHz) multiparameter radar-radiometer built by the Communications Research Laboratory (CRL), Japan, and located at NASA/GSFC during the spring of 1997. Optical rain gauge (ORG) data and broadcasting satellite signal data near the radar location were also utilized for the calculation.

  15. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  16. Estimating Chikungunya prevalence in La Réunion Island outbreak by serosurveys: Two methods for two critical times of the epidemic

    PubMed Central

    Gérardin, Patrick; Guernier, Vanina; Perrau, Joëlle; Fianu, Adrian; Le Roux, Karin; Grivard, Philippe; Michault, Alain; de Lamballerie, Xavier; Flahault, Antoine; Favier, François

    2008-01-01

    Background Chikungunya virus (CHIKV) caused a major two-wave seventeen-month-long outbreak in La Réunion Island in 2005–2006. The aim of this study was to refine clinical estimates provided by a regional surveillance system using a two-stage serological assessment as gold standard. Methods Two serosurveys were implemented: first, a rapid survey using stored sera of pregnant women, in order to assess the attack rate at the epidemic upsurge (s1, February 2006; n = 888); second, a population-based survey among a random sample of the community, to assess the herd immunity in the post-epidemic era (s2, October 2006; n = 2442). Sera were screened for anti-CHIKV specific antibodies (IgM and IgG in s1, IgG only in s2) using enzyme-linked immunosorbent assays. Seroprevalence rates were compared to clinical estimates of attack rates. Results In s1, 18.2% of the pregnant women tested positive for CHIKV-specific antibodies (13.8% for both IgM and IgG, 4.3% for IgM, 0.1% for IgG only), which is congruent with the 16.5% attack rate calculated from the surveillance system. In s2, the seroprevalence in the community was estimated at 38.2% (95% CI, 35.9 to 40.6%). Extrapolation of the seroprevalence rates led to estimates of 143,000 and 300,000 (95% CI, 283,000 to 320,000) people infected in s1 and s2, respectively. In comparison, the surveillance system estimated 130,000 and 266,000 people infected for the same periods. Conclusion A rapid serosurvey in pregnant women can be helpful to assess the attack rate when large seroprevalence studies cannot be done. On the other hand, a population-based serosurvey is useful to refine the estimate when clinical diagnosis underestimates it. Our findings give valuable insights for assessing herd immunity along the course of epidemics. PMID:18662384
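
    The extrapolation from sample seroprevalence to population totals can be sketched with a Wald confidence interval; the positive count and island population figure below are hypothetical inputs chosen to resemble the s2 survey, not values taken from the study:

    ```python
    import math

    def seroprevalence_ci(positives, n, population, z=1.96):
        """Wald CI for a prevalence, extrapolated to a population total."""
        p = positives / n
        se = math.sqrt(p * (1 - p) / n)
        lo, hi = p - z * se, p + z * se
        return p, (lo, hi), (round(population * lo), round(population * hi))

    # Hypothetical: 933 of 2442 sera positive, island population of 785,000
    p, ci, infected = seroprevalence_ci(933, 2442, 785_000)
    ```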

  17. Vehicle Lateral State Estimation Based on Measured Tyre Forces

    PubMed Central

    Tuononen, Ari J.

    2009-01-01

    Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535

  18. Valence-Dependent Belief Updating: Computational Validation

    PubMed Central

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499
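
    A minimal reinforcement-learning sketch of valence-dependent updating, with illustrative (not fitted) learning rates:

    ```python
    def update_belief(prior_risk, base_rate, alpha_good=0.6, alpha_bad=0.3):
        """Valence-dependent update: a larger learning rate for good news.

        Good news = a base rate better (lower) than the prior self-risk
        estimate. The learning-rate values are illustrative only.
        """
        error = base_rate - prior_risk
        alpha = alpha_good if error < 0 else alpha_bad
        return prior_risk + alpha * error

    good = update_belief(40.0, 20.0)  # good news: 40 + 0.6 * (-20) = 28.0
    bad = update_belief(40.0, 60.0)   # bad news:  40 + 0.3 * (+20) = 46.0
    ```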

  19. Valence-Dependent Belief Updating: Computational Validation.

    PubMed

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments.

  20. Concurrent estimates of carbon export reveal physical biases in ΔO2/Ar-based net community production estimates in the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Haskell, William Z.; Fleming, John C.

    2018-07-01

    Net community production (NCP) represents the amount of biologically-produced organic carbon that is available to be exported out of the surface ocean and is typically estimated using measurements of the O2/Ar ratio in the surface mixed layer under the assumption of negligible vertical transport. However, physical processes can significantly bias NCP estimates based on this in-situ tracer. It is actively debated whether discrepancies between O2/Ar-based NCP and carbon export estimates are due to differences in the location of biological production and export, or the result of physical biases. In this study, we calculate export production across the euphotic depth during two months of upwelling in Southern California in 2014, based on an estimate of the consumption rate of dissolved organic carbon (DOC) and the dissolved: total organic carbon consumption ratio below the euphotic depth. This estimate equals the concurrent O2/Ar-based NCP estimates over the same period that are corrected for physical biases, but is significantly different than NCP estimated without a correction for vertical transport. This comparison demonstrates that concurrent physical transport estimates would significantly improve O2/Ar-based estimates of NCP, particularly in settings with vertical advection. Potential approaches to mitigate this bias are discussed.

  1. Use of Angiotensin-Converting Enzyme Inhibitors and Angiotensin Receptor Blockers for Geriatric Ischemic Stroke Patients: Are the Rates Right?

    PubMed

    Brooks, John M; Chapman, Cole G; Suneja, Manish; Schroeder, Mary C; Fravel, Michelle A; Schneider, Kathleen M; Wilwert, June; Li, Yi-Jhen; Chrischilles, Elizabeth A; Brenton, Douglas W; Brenton, Marian; Robinson, Jennifer

    2018-05-30

    Our objective is to estimate the effects associated with higher rates of renin-angiotensin system antagonists, angiotensin-converting enzyme inhibitors and angiotensin receptor blockers (ACEI/ARBs), in secondary prevention for geriatric (aged >65 years) patients with new ischemic strokes by chronic kidney disease (CKD) status. The effects of ACEI/ARBs on survival and renal risk were estimated by CKD status using an instrumental variable (IV) estimator. Instruments were based on local area variation in ACEI/ARB use. Data abstracted from charts were used to assess the assumptions underlying the instrumental estimator. ACEI/ARBs were used after stroke by 45.9% and 45.2% of CKD and non-CKD patients, respectively. ACEI/ARB rate differences across local areas grouped by practice styles were nearly identical for CKD and non-CKD patients. Higher ACEI/ARB use rates for non-CKD patients were associated with higher 2-year survival rates, whereas higher ACEI/ARB use rates for patients with CKD were associated with lower 2-year survival rates. While the negative survival estimates for patients with CKD were not statistically different from zero, they were statistically lower than the estimates for non-CKD patients. Confounders abstracted from charts were not associated with the instrumental variable used. Higher ACEI/ARB use rates had different survival implications for older ischemic stroke patients with and without CKD. ACEI/ARBs appear underused in ischemic stroke patients without CKD, as higher use rates were associated with higher 2-year survival rates. This conclusion is not generalizable to ischemic stroke patients with CKD, as higher ACEI/ARB use rates were associated with lower 2-year survival rates that were statistically lower than the estimates for non-CKD patients. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  2. Regional estimation of base recharge to ground water using water balance and a base-flow index.

    PubMed

    Szilagyi, Jozsef; Harvey, F Edwin; Ayers, Jerry F

    2003-01-01

    Naturally occurring long-term mean annual base recharge to ground water in Nebraska was estimated with the help of a water-balance approach and an objective automated technique for base-flow separation involving minimal parameter-optimization requirements. Base recharge is equal to total recharge minus the amount of evapotranspiration coming directly from ground water. The estimation of evapotranspiration in the water-balance equation avoids the need to specify a contributing drainage area for ground water, which in certain cases may be considerably different from the drainage area for surface runoff. Evapotranspiration was calculated by the WREVAP model at the Solar and Meteorological Surface Observation Network (SAMSON) sites. Long-term mean annual base recharge was derived by determining the product of estimated long-term mean annual runoff (the difference between precipitation and evapotranspiration) and the base-flow index (BFI). The BFI was calculated from discharge data obtained from the U.S. Geological Survey's gauging stations in Nebraska. Mapping was achieved by using geographic information systems (GIS) and geostatistics. This approach is best suited for regional-scale applications. It does not require complex hydrogeologic modeling nor detailed knowledge of soil characteristics, vegetation cover, or land-use practices. Long-term mean annual base recharge rates in excess of 110 mm/year resulted in the extreme eastern part of Nebraska. The western portion of the state expressed rates of only 15 to 20 mm annually, while the Sandhills region of north-central Nebraska was estimated to receive twice as much base recharge (40 to 50 mm/year) as areas south of it.
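
    The product underlying the method, base recharge = (P - ET) * BFI, can be sketched directly; the input values below are hypothetical, chosen only to fall in the range reported for eastern Nebraska:

    ```python
    def base_recharge(precip_mm, et_mm, bfi):
        """Long-term mean annual base recharge = (P - ET) * BFI (mm/year)."""
        return (precip_mm - et_mm) * bfi

    # Hypothetical values: P = 750 mm, ET = 600 mm, BFI = 0.75
    r = base_recharge(750.0, 600.0, 0.75)  # 112.5 mm/year
    ```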

  3. The evolutionary rate dynamically tracks changes in HIV-1 epidemics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maljkovic-Berry, Irina; Athreya, Gayathri; Daniels, Marcus

    Large-sequence datasets provide an opportunity to investigate the dynamics of pathogen epidemics. Thus, a fast method to estimate the evolutionary rate from large and numerous phylogenetic trees becomes necessary. Based on minimizing tip height variances, we optimize the root in a given phylogenetic tree to estimate the most homogenous evolutionary rate between samples from at least two different time points. Simulations showed that the method had no bias in the estimation of evolutionary rates and that it was robust to tree rooting and topological errors. We show that the evolutionary rates of HIV-1 subtype B and C epidemics have changed over time, with the rate of evolution inversely correlated to the rate of virus spread. For subtype B, the evolutionary rate slowed down and tracked the start of the HAART era in 1996. Subtype C in Ethiopia showed an increase in the evolutionary rate when the prevalence increase markedly slowed down in 1995. Thus, we show that the evolutionary rate of HIV-1 on the population level dynamically tracks epidemic events.
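
    The paper estimates rates after optimizing the root by minimizing tip-height variance; the related root-to-tip regression step (rate as the slope of root-to-tip distance against sampling time) can be sketched as follows, with hypothetical tip distances:

    ```python
    import numpy as np

    def root_to_tip_rate(sample_years, tip_distances):
        """Evolutionary rate: slope of root-to-tip distance vs. sampling time."""
        rate, _intercept = np.polyfit(sample_years, tip_distances, 1)
        return rate

    # Hypothetical tip heights (substitutions/site) for dated samples,
    # generated around a true rate of 0.002 subs/site/year
    years = np.array([1990.0, 1993.0, 1996.0, 1999.0, 2002.0])
    dist = 0.002 * (years - 1985.0) \
        + np.array([0.0002, -0.0001, 0.0, 0.0003, -0.0002])
    rate = root_to_tip_rate(years, dist)
    ```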

  4. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  5. Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield

    Treesearch

    Robert B. Thomas

    1986-01-01

    Abstract - SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes with respect to estimate variance. Sampling probabilities are based on a sediment rating function, which promotes greater sampling intensity during periods of high...

  6. 45 CFR 284.11 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... METHODOLOGY FOR DETERMINING WHETHER AN INCREASE IN A STATE OR TERRITORY'S CHILD POVERTY RATE IS THE RESULT OF... estimating the number and percentage of children in poverty in each State. These methods may include national estimates based on the Current Population Survey; the Small Area Income and Poverty Estimates; the annual...

  7. Agent-based mathematical modeling as a tool for estimating Trypanosoma cruzi vector-host contact rates.

    PubMed

    Yong, Kamuela E; Mubayi, Anuj; Kribs, Christopher M

    2015-11-01

    The parasite Trypanosoma cruzi, spread by triatomine vectors, affects over 100 mammalian species throughout the Americas, including humans, in whom it causes Chagas' disease. In the U.S., only a few autochthonous cases have been documented in humans, but prevalence is high in sylvatic hosts (primarily raccoons in the southeast and woodrats in Texas). Sylvatic transmission of T. cruzi is maintained by the vector species Triatoma sanguisuga and Triatoma gerstaeckeri biting their preferred hosts, creating multiple interacting vector-host cycles. The goal of this study is to quantify the rate of contacts between different host and vector species native to Texas using an agent-based model framework. The contact rates, which represent bites, are required to estimate transmission coefficients, which can be applied to models of infection dynamics. In addition to quantitative estimates, results confirm host irritability (in conjunction with host density) and vector starvation thresholds and dispersal as determining factors for vector density as well as host-vector contact rates. Copyright © 2015 Elsevier B.V. All rights reserved.
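
    A toy agent-based tally of vector-host contacts, with all parameters (vector count, bite probability, host preference) illustrative rather than the study's fitted values:

    ```python
    import random

    def simulate_contacts(n_vectors=200, nights=30, bite_prob=0.4, seed=42):
        """Toy agent-based tally of vector-host contacts (bites).

        Each night every vector bites with probability bite_prob and picks a
        host species according to a fixed preference; all parameters are
        illustrative, not the study's fitted values.
        """
        hosts = ("raccoon", "woodrat")
        preference = {"raccoon": 0.7, "woodrat": 0.3}
        rng = random.Random(seed)
        contacts = {h: 0 for h in hosts}
        for _ in range(n_vectors):
            for _ in range(nights):
                if rng.random() < bite_prob:
                    h = rng.choices(hosts, weights=[preference[x] for x in hosts])[0]
                    contacts[h] += 1
        # Per-vector contact rate (bites per vector per night) by host species
        rates = {h: c / (n_vectors * nights) for h, c in contacts.items()}
        return contacts, rates

    contacts, rates = simulate_contacts()
    ```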

  8. A user-friendly, low-cost turbidostat with versatile growth rate estimation based on an extended Kalman filter.

    PubMed

    Hoffmann, Stefan A; Wohltat, Christian; Müller, Kristian M; Arndt, Katja M

    2017-01-01

    For various experimental applications, microbial cultures at defined, constant densities are highly advantageous over simple batch cultures. Due to high costs, however, devices for continuous culture at freely defined densities still experience limited use. We have developed a small-scale turbidostat for research purposes, which is manufactured from inexpensive components and 3D-printed parts. A high degree of spatial system integration and a graphical user interface provide user-friendly operability. The optical-density feedback control allows for constant continuous culture at a wide range of densities and lets culture volume and dilution rate be varied without additional parametrization. Further, a recursive algorithm for on-line growth rate estimation has been implemented. The employed Kalman filtering approach, based on a very general state model, retains the flexibility of the used control type and can be easily adapted to other bioreactor designs. Within several minutes it can converge to robust, accurate growth rate estimates. This is particularly useful for directed evolution experiments or studies on metabolic challenges, as it allows direct monitoring of the population fitness.
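
    A simplified stand-in for the growth-rate estimator: in log space exponential growth is linear, so a plain Kalman filter with state [log(OD), growth rate] suffices for a sketch (the paper uses an extended Kalman filter with a more general state model; the noise settings below are assumptions):

    ```python
    import numpy as np

    def kf_growth_rate(od_series, dt=1.0, q_mu=1e-4, r=1e-3):
        """Linear Kalman filter on log(OD) with state [log_od, growth_rate]."""
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
        H = np.array([[1.0, 0.0]])                 # we observe log(OD)
        Q = np.diag([1e-6, q_mu])                  # process noise (assumed)
        x = np.array([np.log(od_series[0]), 0.0])  # initial state
        P = np.eye(2)
        for od in od_series[1:]:
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with measurement z = log(od)
            z = np.log(od)
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        return x[1]  # estimated growth rate per time step

    # Synthetic OD trace growing at mu = 0.05 per time step
    t = np.arange(0, 200)
    od = 0.1 * np.exp(0.05 * t)
    mu_hat = kf_growth_rate(od)
    ```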

  9. A trait-based approach for predicting species responses to environmental change from sparse data: how well might terrestrial mammals track climate change?

    PubMed

    Santini, Luca; Cornulier, Thomas; Bullock, James M; Palmer, Stephen C F; White, Steven M; Hodgson, Jenny A; Bocedi, Greta; Travis, Justin M J

    2016-07-01

    Estimating population spread rates across multiple species is vital for projecting biodiversity responses to climate change. A major challenge is to parameterise spread models for many species. We introduce an approach that addresses this challenge, coupling a trait-based analysis with spatial population modelling to project spread rates for 15 000 virtual mammals with life histories that reflect those seen in the real world. Covariances among life-history traits are estimated from an extensive terrestrial mammal data set using Bayesian inference. We elucidate the relative roles of different life-history traits in driving modelled spread rates, demonstrating that any one alone will be a poor predictor. We also estimate that around 30% of mammal species have potential spread rates slower than the global mean velocity of climate change. This novel trait-space-demographic modelling approach has broad applicability for tackling many key ecological questions for which we have the models but are hindered by data availability. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  10. The method for homography estimation between two planes based on lines and points

    NASA Astrophysics Data System (ADS)

    Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry

    2018-04-01

    The paper considers the problem of estimating the transform connecting two images of a planar object. A RANSAC-based method is proposed for calculating the parameters of a projective transform using point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The results show that the algorithm's convergence rate is significantly higher when actual lines are used instead of the intersection points of lines. When both lines and feature points are used, the convergence rate does not depend on the ratio of lines to feature points in the input dataset.
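A minimal point-only version of the RANSAC homography loop can be sketched as below; line correspondences, which transform as l' ~ H^-T l, would add further rows to the same linear system but are omitted here for brevity. This is a generic DLT-plus-RANSAC sketch, not the paper's exact algorithm.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)         # null vector = homography entries
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=500, tol=2.0, rng=None):
    rng = rng or np.random.default_rng(0)
    best_H, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = homography_dlt(src[idx], dst[idx])
        p = np.c_[src, np.ones(len(src))] @ H.T
        proj = p[:, :2] / p[:, 2:3]                    # reproject all points
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

# Noise-free synthetic check against a known projective transform
H_true = np.array([[1.0, 0.1, 5.0], [0.05, 1.0, -3.0], [1e-4, 0.0, 1.0]])
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (50, 2))
q = np.c_[src, np.ones(50)] @ H_true.T
dst = q[:, :2] / q[:, 2:3]
H_est, n_in = ransac_homography(src, dst)
print(n_in)  # all 50 correspondences are inliers on noise-free data
```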

  11. Population genetics of autopolyploids under a mixed mating model and the estimation of selfing rate.

    PubMed

    Hardy, Olivier J

    2016-01-01

    Nowadays, the population genetics analysis of autopolyploid species faces many difficulties due to (i) the limited development of population genetics tools under polysomic inheritance, (ii) difficulties in assessing allelic dosage when genotyping individuals and (iii) a form of inbreeding resulting from the mechanism of 'double reduction'. Consequently, few data analysis computer programs are applicable to autopolyploids. To help bridge this gap, this article first derives theoretical expectations for the inbreeding and identity disequilibrium coefficients under polysomic inheritance in a mixed mating model. Moment estimators of these coefficients are proposed when exact genotypes or just marker phenotypes (i.e. allelic dosage unknown) are available. This led to the development of estimators of the selfing rate based on adult genotypes or phenotypes and applicable to any even-ploidy level. Their statistical performance and robustness were assessed by numerical simulations. Contrary to inbreeding-based estimators, the identity disequilibrium-based estimator using phenotypes is robust (absolute bias generally < 0.05), even in the presence of double reduction, null alleles or biparental inbreeding due to isolation by distance. A fairly good precision of the selfing rate estimates (root mean squared error < 0.1) is already achievable using a sample of 30-50 individuals phenotyped at 10 loci bearing 5-10 alleles each, conditions reachable using microsatellite markers. Diallelic markers (e.g. SNPs) can also perform satisfactorily in diploids and tetraploids, but more polymorphic markers are necessary for higher ploidy levels. The method is implemented in the software SPAGeDi and should help reduce the lack of population genetics tools applicable to autopolyploids. © 2015 John Wiley & Sons Ltd.
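For intuition, the familiar diploid special case of an inbreeding-based selfing-rate estimator inverts the mixed-mating equilibrium relation F = s / (2 - s). The article's contribution is the generalization of such moment estimators to polysomic inheritance and phenotype data, which this small sketch does not attempt.

```python
def selfing_rate_from_F(F):
    """Diploid mixed-mating equilibrium: F = s / (2 - s)  =>  s = 2F / (1 + F)."""
    return 2 * F / (1 + F)

# An equilibrium inbreeding coefficient of 1/3 implies ~50% selfing
print(selfing_rate_from_F(1 / 3))  # ~0.5
```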

  12. Validation of Satellite-based Rainfall Estimates for Severe Storms (Hurricanes & Tornados)

    NASA Astrophysics Data System (ADS)

    Nourozi, N.; Mahani, S.; Khanbilvardi, R.

    2005-12-01

    Severe storms such as hurricanes and tornadoes cause devastating damage, almost every year, over large sections of the United States. More accurate forecasting of a heavy storm's intensity and track can help reduce, if not prevent, damage to lives, infrastructure, and the economy. Estimating accurate, high-resolution quantitative precipitation (QPE) for a hurricane, which is required to improve forecasting and warning capabilities, remains a challenging problem because of the storm's physical characteristics, even while it is still over the ocean. Satellite imagery is a valuable source of information for estimating and forecasting heavy precipitation and flash floods, particularly over the oceans, where traditional ground-based gauge and radar sources cannot provide any information. To improve the capability of a rainfall retrieval algorithm for estimating QPE of severe storms, its product is evaluated in this study. High-resolution (hourly, 4 km x 4 km) satellite infrared-based rainfall products from the NESDIS Hydro-Estimator (HE) and PERSIANN (Precipitation Estimation from Remotely Sensed Information using an Artificial Neural Network) algorithms were tested against NEXRAD Stage IV and rain gauge observations in this project. Three strong hurricanes that caused devastating damage over Florida in the summer of 2004 were investigated: Charley (category 4), Jeanne (category 3), and Ivan (category 3). Preliminary results demonstrate that HE tends to underestimate rain rates when NEXRAD shows heavy rain (rates greater than 25 mm/h) and to overestimate when NEXRAD gives low rainfall amounts, whereas PERSIANN tends to underestimate rain rates in general.

  13. Methods for Sexually Transmitted Disease Prevention Programs to Estimate the Health and Medical Cost Impact of Changes in Their Budget.

    PubMed

    Chesson, Harrell W; Ludovic, Jennifer A; Berruti, Andrés A; Gift, Thomas L

    2018-01-01

    The purpose of this article was to describe methods that sexually transmitted disease (STD) programs can use to estimate the potential effects of changes in their budgets in terms of disease burden and direct medical costs. We proposed 2 distinct approaches to estimate the potential effect of changes in funding on subsequent STD burden, one based on an analysis of state-level STD prevention funding and gonorrhea case rates and one based on analyses of the effect of Disease Intervention Specialist (DIS) activities on gonorrhea case rates. We also illustrated how programs can estimate the impact of budget changes on intermediate outcomes, such as partner services. Finally, we provided an example of the application of these methods for a hypothetical state STD prevention program. The methods we proposed can provide general approximations of how a change in STD prevention funding might affect the level of STD prevention services provided, STD incidence rates, and the direct medical cost burden of STDs. In applying these methods to a hypothetical state, a reduction in annual funding of US $200,000 was estimated to lead to subsequent increases in STDs of 1.6% to 3.6%. Over 10 years, the reduction in funding totaled US $2.0 million, whereas the cumulative, additional direct medical costs of the increase in STDs totaled US $3.7 to US $8.4 million. The methods we proposed, though subject to important limitations, can allow STD prevention personnel to calculate evidence-based estimates of the effects of changes in their budget.

  14. Genetic mapping of 15 human X chromosomal forensic short tandem repeat (STR) loci by means of multi-core parallelization.

    PubMed

    Diegoli, Toni Marie; Rohde, Heinrich; Borowski, Stefan; Krawczak, Michael; Coble, Michael D; Nothnagel, Michael

    2016-11-01

    Typing of X chromosomal short tandem repeat (X STR) markers has become a standard element of human forensic genetic analysis. Joint consideration of many X STR markers at a time increases their discriminatory power but, owing to physical linkage, requires inter-marker recombination rates to be accurately known. We estimated the recombination rates between 15 well established X STR markers using genotype data from 158 families (1041 individuals) and following a previously proposed likelihood-based approach that allows for single-step mutations. To meet the computational requirements of this family-based type of analysis, we modified a previous implementation so as to allow multi-core parallelization on a high-performance computing system. While we obtained recombination rate estimates larger than zero for all but one pair of adjacent markers within the four previously proposed linkage groups, none of the three X STR pairs defining the junctions of these groups yielded a recombination rate estimate of 0.50. Corroborating previous studies, our results therefore argue against a simple model of independent X chromosomal linkage groups. Moreover, the refined recombination fraction estimates obtained in our study will facilitate the appropriate joint consideration of all 15 investigated markers in forensic analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Population based screening for chronic kidney disease: cost effectiveness study.

    PubMed

    Manns, Braden; Hemmelgarn, Brenda; Tonelli, Marcello; Au, Flora; Chiasson, T Carter; Dong, James; Klarenbach, Scott

    2010-11-08

    To determine the cost effectiveness of one-off population based screening for chronic kidney disease based on estimated glomerular filtration rate. Cost utility analysis of screening with estimated glomerular filtration rate alone compared with no screening (with allowance for incidental finding of cases of chronic kidney disease). Analyses were stratified by age, diabetes, and the presence or absence of proteinuria. Scenario and sensitivity analyses, including probabilistic sensitivity analysis, were performed. Costs were estimated in all adults and in subgroups defined by age, diabetes, and hypertension. Publicly funded Canadian healthcare system. Large population based laboratory cohort used to estimate mortality rates and incidence of end stage renal disease for patients with chronic kidney disease over a five year follow-up period. Patients had not previously undergone assessment of glomerular filtration rate. Lifetime costs, end stage renal disease, quality adjusted life years (QALYs) gained, and incremental cost per QALY gained. Compared with no screening, population based screening for chronic kidney disease was associated with an incremental cost of $C463 (Canadian dollars in 2009; equivalent to about £275, €308, US $382) and a gain of 0.0044 QALYs per patient overall, representing a cost per QALY gained of $C104 900. In a cohort of 100 000 people, screening for chronic kidney disease would be expected to reduce the number of people who develop end stage renal disease over their lifetime from 675 to 657. In subgroups of people with and without diabetes, the cost per QALY gained was $C22 600 and $C572 000, respectively. In a cohort of 100 000 people with diabetes, screening would be expected to reduce the number of people who develop end stage renal disease over their lifetime from 1796 to 1741. In people without diabetes with and without hypertension, the cost per QALY gained was $C334 000 and $C1 411 100, respectively. 
Population based screening for chronic kidney disease with assessment of estimated glomerular filtration rate is not cost effective overall or in subgroups of people with hypertension or older people. Targeted screening of people with diabetes is associated with a cost per QALY that is similar to that accepted in other interventions funded by public healthcare systems.
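The headline incremental cost-effectiveness ratio follows directly from the reported per-patient figures, which is a useful sanity check on the abstract's numbers:

```python
incremental_cost = 463   # $C per patient screened
qaly_gain = 0.0044       # QALYs gained per patient

icer = incremental_cost / qaly_gain
print(round(icer))  # ~105,227, matching the reported $C104,900 per QALY after rounding
```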

  16. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between averages based on occasional satellite observations and the continuous-time average rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
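The "resampling by shifts" idea, comparing the continuous-time mean with the means of every phase-shifted subsample, can be sketched on a hypothetical intermittent rain series; the gamma-mixture series below is illustrative, not the WSI radar product used in the study.

```python
import numpy as np

def sampling_rms_error(rain, step):
    """RMS difference between the full-series mean and the means of
    every phase-shifted subsample taken once per `step` intervals."""
    true_mean = rain.mean()
    sub_means = np.array([rain[s::step].mean() for s in range(step)])
    return np.sqrt(((sub_means - true_mean) ** 2).mean())

# Hypothetical 30 days of 15 min rain rates (mm/h): mostly dry,
# with gamma-distributed intensities when it rains
rng = np.random.default_rng(1)
rain = rng.gamma(0.2, 5.0, size=30 * 96) * (rng.random(30 * 96) < 0.1)

err_3h = sampling_rms_error(rain, step=12)    # one overpass every 3 h
err_12h = sampling_rms_error(rain, step=48)   # one overpass every 12 h
print(err_3h, err_12h)  # sparser sampling gives the larger sampling error
```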

  17. Evaluation of HYDRUS-1D for Estimating Evapotranspiration of Bell Pepper Regulated by Cloud-based Fertigation System in Greenhouse

    NASA Astrophysics Data System (ADS)

    Ito, Y.; Honda, R.; Takesako, H.; Ozawa, K.; Kita, E.; Kanno, M.; Noborio, K.

    2017-12-01

    A fertile surface layer, contaminated with radiocesium as a result of the 2011 accident at the Fukushima Daiichi Nuclear Power Plant, was removed and replaced with non-fertile soil in Fukushima farmlands. In a greenhouse, we used a commercially available cloud-based fertigation system (CBFS) to regulate the application rate of liquid fertilizer to bell pepper grown in the non-fertile soil. Although the CBFS regulates the application rate based on a weekly trend of volumetric water content (Θw) remotely measured at the soil surface with a soil moisture sensor, it is not known whether all of the applied water is consumed by the plants in the greenhouse. Evapotranspiration of bell pepper grown with the CBFS was therefore estimated with HYDRUS-1D. Greenhouse experiments were conducted in Fukushima, Japan, from September 1st to October 31st, 2016. Bell pepper plants had been transplanted at the beginning of June 2016. The Penman-Monteith equation was used to estimate evapotranspiration, which represented transpiration alone since the soil surface was covered with plastic mulch. Time domain reflectometry (TDR) probes were installed horizontally to monitor changes in Θw at depths of 5, 10, 20, and 30 cm below the soil surface. The van Genuchten-Mualem hydraulic model for water and heat flow in soil was used in HYDRUS-1D. The irrigation rate was given as the precipitation rate for the upper boundary condition. We assumed a constant wind speed of 0.6 m s-1 for the Penman-Monteith equation. The amount of evapotranspiration estimated with the Penman-Monteith equation agreed well with the measured amount of irrigated water, and the evapotranspiration simulated with HYDRUS-1D agreed well with that estimated with the Penman-Monteith equation. However, Θw at all depths was underestimated by HYDRUS-1D by approximately 0.05 m3 m-3, and the differences between measured Θw and that estimated with HYDRUS-1D became larger at deeper soil depths. This might be attributed to larger water flow occurring because free drainage was used as the lower boundary condition. Although transpiration should be measured directly to properly evaluate the irrigation rate regulated by the CBFS, HYDRUS-1D was found to estimate evapotranspiration with sufficient accuracy. We will further evaluate the applicability of HYDRUS-1D for estimating evapotranspiration throughout a growing period.
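As a reference for the evapotranspiration step, here is a minimal sketch of the FAO-56 reference form of the Penman-Monteith equation. The study's exact parametrization (surface resistance, greenhouse radiation terms, mulch effects) may well differ, and the input values below are illustrative only.

```python
import math

def fao56_reference_et(T, rh, u2, rn, g=0.0, gamma=0.0674):
    """Daily reference evapotranspiration (mm/day), FAO-56 form.

    T: mean air temperature (degC), rh: relative humidity (0-1),
    u2: wind speed at 2 m (m/s), rn: net radiation (MJ m-2 day-1),
    g: soil heat flux (MJ m-2 day-1), gamma: psychrometric constant (kPa/degC).
    """
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))    # saturation vapour pressure
    ea = rh * es                                       # actual vapour pressure
    delta = 4098.0 * es / (T + 237.3) ** 2             # slope of the es curve
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Greenhouse-like conditions, with the 0.6 m/s wind speed assumed in the study
print(fao56_reference_et(T=25.0, rh=0.6, u2=0.6, rn=12.0))  # mm/day
```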

  18. Heterogeneous Rates of Molecular Evolution and Diversification Could Explain the Triassic Age Estimate for Angiosperms.

    PubMed

    Beaulieu, Jeremy M; O'Meara, Brian C; Crane, Peter; Donoghue, Michael J

    2015-09-01

    Dating analyses based on molecular data imply that crown angiosperms existed in the Triassic, long before their undisputed appearance in the fossil record in the Early Cretaceous. Following a re-analysis of the age of angiosperms using updated sequences and fossil calibrations, we use a series of simulations to explore the possibility that the older age estimates are a consequence of (i) major shifts in the rate of sequence evolution near the base of the angiosperms and/or (ii) the representative taxon sampling strategy employed in such studies. We show that both of these factors do tend to yield substantially older age estimates. These analyses do not prove that younger age estimates based on the fossil record are correct, but they do suggest caution in accepting the older age estimates obtained using current relaxed-clock methods. Although we have focused here on the angiosperms, we suspect that these results will shed light on dating discrepancies in other major clades. ©The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
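The core "stump" step, a least-squares step function with a single breakpoint fitted to interval-wise p values, can be sketched as follows. The construction of the p values, the bias correction, and the bootstrap intervals are omitted, and the data below are synthetic.

```python
import numpy as np

def fit_stump(x, y):
    """Least-squares step function with one breakpoint: returns the
    breakpoint and the two fitted levels (left mean, right mean)."""
    best = (None, np.inf, None, None)
    for i in range(1, len(x)):
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = (x[i], sse, left.mean(), right.mean())
    return best[0], best[2], best[3]

# Synthetic interval-wise p values: small before the true change point at
# t = 40 (hazard differs from the plateau), roughly uniform afterwards
rng = np.random.default_rng(3)
t = np.linspace(0, 100, 50)
p = np.where(t < 40, rng.uniform(0, 0.1, 50), rng.uniform(0, 1, 50))

cp, level_before, level_after = fit_stump(t, p)
print(cp)  # estimated change point, near t = 40
```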

  20. Estimating Children's Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho.

    PubMed

    von Lindern, Ian; Spalinger, Susan; Stifelman, Marc L; Stanek, Lindsay Wichers; Bartrem, Casey

    2016-09-01

    Soil/dust ingestion rates are important variables in assessing children's health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose-response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children's blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/dust ingestion rates are 86-94 mg/day for 6-month- to 2-year-old children and 51-67 mg/day for 2- to 9-year-old children. Soil/dust ingestion rate estimates for 1- to 9-year-old children at the BHSS are lower than those commonly used in human health risk assessment. A substantial component of children's exposure comes from sources beyond the immediate home environment. von Lindern I, Spalinger S, Stifelman ML, Stanek LW, Bartrem C. 2016. 
Estimating children's soil/dust ingestion rates through retrospective analyses of blood lead biomonitoring from the Bunker Hill Superfund Site in Idaho. Environ Health Perspect 124:1462-1470; http://dx.doi.org/10.1289/ehp.1510144.

  1. TOPICAL REVIEW: Predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors

    NASA Astrophysics Data System (ADS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amador Ceron, E.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Aoudia, S.; Arain, M. A.; Araya, M.; Aronsson, M.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Atkinson, D. E.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barnum, S.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Beker, M. G.; Belczynski, K.; Benacquista, M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bigotta, S.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birindelli, S.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Blomberg, A.; Boccara, C.; Bock, O.; Bodiya, T. P.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Boyle, M.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Breyer, J.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Budzyński, R.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Burmeister, O.; Buskulic, D.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campagna, E.; Campsie, P.; Cannizzo, J.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C.; Carbognani, F.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande Mottin, E.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R.; Corda, C.; Cornish, N.; Corsi, A.; Costa, C. A.; Coulon, J. 
P.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Dari, A.; Das, K.; Dattilo, V.; Daudert, B.; Davier, M.; Davies, G.; Davis, A.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Degallaix, J.; del Prete, M.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Devanka, P.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Emilio, M. Di Paolo; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Dorsher, S.; Douglas, E. S. D.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Dueck, J.; Dumas, J. C.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Engel, R.; Etzel, T.; Evans, M.; Evans, T.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flanigan, M.; Flasch, K.; Foley, S.; Forrest, C.; Forsi, E.; Fotopoulos, N.; Fournier, J. D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gammaitoni, L.; Garofoli, J. A.; Garufi, F.; Gemme, G.; Genin, E.; Gennai, A.; Gholami, I.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gill, C.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hall, P.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Haughian, K.; Hayama, K.; Heefner, J.; Heitmann, H.; Hello, P.; Heng, I. S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. 
J.; Hough, J.; Howell, E.; Hoyland, D.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, H.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kowalska, I.; Kozak, D.; Krause, T.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Królak, A.; Kuehn, G.; Kullman, J.; Kumar, R.; Kwee, P.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lazzarini, A.; Leaci, P.; Leong, J.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Lin, H.; Lindquist, P. E.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Luan, J.; Lubiński, M.; Lucianetti, A.; Lück, H.; Lundgren, A.; Machenschalk, B.; MacInnis, M.; Mackowski, J. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Mak, C.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIvor, G.; McKechan, D. J. A.; Meadors, G.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Merill, L.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mino, Y.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. 
P.; Moraru, D.; Moreau, J.; Moreno, G.; Morgado, N.; Morgia, A.; Morioka, T.; Mors, K.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; MowLowry, C.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Nash, T.; Nawrodt, R.; Nelson, J.; Neri, I.; Newton, G.; Nishizawa, A.; Nocera, F.; Nolting, D.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; Oldenburg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Pardi, S.; Pareja, M.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Postiglione, F.; Prato, M.; Predoi, V.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabaste, O.; Rabeling, D. S.; Radke, T.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Röver, C.; Rogstad, S.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sakata, S.; Sakosky, M.; Salemi, F.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santostasi, G.; Saraf, S.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. 
S.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Stein, A. J.; Stein, L. C.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Trummer, J.; Tseng, K.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vaishnav, B.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Veggel, A. A.; Vass, S.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yeaton-Massey, D.; Yoshida, S.; Yu, P. P.; Yvert, M.; Zanolin, M.; Zhang, L.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2010-09-01

    We present an up-to-date, comprehensive summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo. Astrophysical estimates for compact-binary coalescence rates depend on a number of assumptions and unknown model parameters and are still uncertain. The most confident among these estimates are the rate predictions for coalescing binary neutron stars which are based on extrapolations from observed binary pulsars in our galaxy. These yield a likely coalescence rate of 100 Myr-1 per Milky Way Equivalent Galaxy (MWEG), although the rate could plausibly range from 1 Myr-1 MWEG-1 to 1000 Myr-1 MWEG-1 (Kalogera et al 2004 Astrophys. J. 601 L179; Kalogera et al 2004 Astrophys. J. 614 L137 (erratum)). We convert coalescence rates into detection rates based on data from the LIGO S5 and Virgo VSR2 science runs and projected sensitivities for our advanced detectors. Using the detector sensitivities derived from these data, we find a likely detection rate of 0.02 per year for Initial LIGO-Virgo interferometers, with a plausible range between 2 × 10-4 and 0.2 per year. The likely binary neutron-star detection rate for the Advanced LIGO-Virgo network increases to 40 events per year, with a range between 0.4 and 400 per year.
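The quoted rates imply an effective number of Milky Way Equivalent Galaxies surveyed by the advanced network, which is a useful sanity check; the ~4 x 10^5 MWEG figure is inferred here from the abstract's own numbers, not quoted from the paper.

```python
rate_per_mweg_per_myr = 100   # likely BNS coalescence rate (per Myr per MWEG)
rate_per_mweg_per_yr = rate_per_mweg_per_myr * 1e-6

advanced_detections_per_yr = 40   # likely Advanced LIGO-Virgo detection rate

# Effective number of MWEGs the advanced network must survey
n_mweg_advanced = advanced_detections_per_yr / rate_per_mweg_per_yr
print(f"{n_mweg_advanced:.0e}")  # 4e+05 MWEG effectively surveyed
```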

  2. Estimating 1 min rain rate distributions from numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Paulson, Kevin S.

    2017-01-01

    Internationally recognized prognostic models of rain fade on terrestrial and Earth-space EHF links rely fundamentally on distributions of 1 min rain rates. Currently, in Rec. ITU-R P.837-6, these distributions are generated using the Salonen-Poiares Baptista method where 1 min rain rate distributions are estimated from long-term average annual accumulations provided by numerical weather prediction (NWP). This paper investigates an alternative to this method based on the distribution of 6 h accumulations available from the same NWPs. Rain rate fields covering the UK, produced by the Nimrod network of radars, are integrated to estimate the accumulations provided by NWP, and these are linked to distributions of fine-scale rain rates. The proposed method makes better use of the available data. It is verified on 15 NWP regions spanning the UK, and the extension to other regions is discussed.

  3. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries.

    PubMed

    Cunningham, Marc; Bock, Ariella; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-09-01

    Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. 
With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. © Cunningham et al.
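    The direct CYP-based model described in this record can be sketched as follows. The conversion factors shown are the commonly cited standard values (120 condoms, 15 pill cycles, or 4 three-month injectable doses per couple-year of protection) and are assumptions here, not figures taken from the study:

```python
# Assumed standard CYP conversion factors: commodity units per one
# couple-year of protection (CYP).
CYP_FACTORS = {
    "male_condom": 120,     # condoms per CYP
    "oral_pill": 15,        # pill cycles per CYP
    "injectable_3mo": 4,    # doses per CYP (3-month injectable)
}

def cyp_prevalence(units_distributed, method, women_reproductive_age):
    """Direct CYP-based estimate of public-sector prevalence, in percent."""
    cyp = units_distributed / CYP_FACTORS[method]
    return 100.0 * cyp / women_reproductive_age

# Hypothetical example: 1.2 million pill cycles distributed in a country
# with 2 million women of reproductive age.
print(round(cyp_prevalence(1_200_000, "oral_pill", 2_000_000), 1))  # 4.0
```

    The regression models in the study replace this fixed conversion with coefficients fitted against DHS referent values, which is why they were found to be more accurate than the plain CYP calculation.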

  4. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries

    PubMed Central

    Cunningham, Marc; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-01-01

    Background: Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Methods: Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. Results: For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. 
With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Conclusions: Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. PMID:26374805

  5. Methods for estimating comparable prevalence rates of food insecurity experienced by adults in 147 countries and areas

    NASA Astrophysics Data System (ADS)

    Nord, Mark; Cafiero, Carlo; Viviani, Sara

    2016-11-01

    Statistical methods based on item response theory are applied to experiential food insecurity survey data from 147 countries, areas, and territories to assess data quality and develop methods to estimate national prevalence rates of moderate and severe food insecurity at equal levels of severity across countries. Data were collected from nationally representative samples of 1,000 adults in each country. A Rasch-model-based scale was estimated for each country, and data were assessed for consistency with model assumptions. A global reference scale was calculated based on item parameters from all countries. Each country's scale was adjusted to the global standard, allowing for up to 3 of the 8 scale items to be considered unique in that country if their deviance from the global standard exceeded a set tolerance. With very few exceptions, data from all countries were sufficiently consistent with model assumptions to constitute reasonably reliable measures of food insecurity and were adjustable to the global standard with fair confidence. National prevalence rates of moderate-or-severe food insecurity assessed over a 12-month recall period ranged from 3 percent to 92 percent. The correlations of national prevalence rates with national income, health, and well-being indicators provide external validation of the food security measure.
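    The Rasch model underlying this measurement assigns each respondent a severity parameter and each scale item a severity parameter on the same logit scale; the probability of affirming an item depends only on their difference. A minimal sketch (the parameter values are hypothetical):

```python
import math

def rasch_prob(theta, b):
    """Probability that a respondent with severity theta affirms an
    item with severity b, under the one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A respondent whose severity equals an item's severity affirms it with
# probability 0.5; more severe items are affirmed less often.
print(rasch_prob(0.0, 0.0))                            # 0.5
print(rasch_prob(0.0, 2.0) < rasch_prob(0.0, -2.0))    # True
```

    Cross-country comparability in the study comes from fitting item severities per country and then adjusting each country's scale to a global reference set of item parameters, tolerating up to 3 of the 8 items as country-unique.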

  6. Using rates of oxygen and nitrate reduction to map the subsurface distribution of groundwater denitrification

    NASA Astrophysics Data System (ADS)

    Kolbe, T.; De Dreuzy, J. R.; Abbott, B. W.; Aquilina, L.; Babey, T.; Green, C. T.; Fleckenstein, J. H.; Labasque, T.; Laverman, A.; Marçais, J.; Peiffer, S.; Thomas, Z.; Pinay, G.

    2017-12-01

    Widespread fertilizer application over the last 70 years has caused serious ecological and socioeconomic problems in aquatic and estuarine ecosystems. When surplus nitrogen leaches as nitrate (a major groundwater pollutant) to the aquifer, complex flow dynamics and naturally occurring degradation processes control its transport. Under conditions of depleted oxygen and abundant electron donors, microorganisms reduce NO3- to N2 (denitrification). Denitrification rates vary over orders of magnitude among sites within the same aquifer, complicating estimation of denitrification capacity at the catchment scale. Because it is impractical or impossible to access the subsurface to directly quantify denitrification rates, reactivity is often assumed to occur continuously along flowlines, potentially resulting in substantial over- or underestimation of denitrification. Here we investigated denitrification in an unconfined crystalline aquifer in western France using a combination of common tracers (chlorofluorocarbons, O2, NO3-, and N2) measured in 16 wells to inform a time-based modeling approach. We found that spatially variable denitrification rates arise from the intersection of nitrate-rich water with reactive zones defined by the abundance of electron donors (primarily pyrite). Furthermore, based on observed reaction rates of the sequential reduction of oxygen and nitrate, we present a general framework to estimate the location and intensity of the reactive zone in aquifers. Accounting for the vertical distribution of reaction rates yields net denitrification estimates that differ substantially from those obtained by assuming homogeneous reactivity. This new framework provides a tractable approach for quantifying catchment and regional groundwater denitrification rates that could be used to improve estimation of groundwater resilience to nitrate pollution and develop more realistic management strategies.

  7. Generalisability and Cost-Impact of Antibiotic-Impregnated Central Venous Catheters for Reducing Risk of Bloodstream Infection in Paediatric Intensive Care Units in England.

    PubMed

    Harron, Katie; Mok, Quen; Hughes, Dyfrig; Muller-Pebody, Berit; Parslow, Roger; Ramnarayan, Padmanabhan; Gilbert, Ruth

    2016-01-01

    We determined the generalisability and cost-impact of adopting antibiotic-impregnated CVCs in all paediatric intensive care units (PICUs) in England, based on results from a large randomised controlled trial (the CATCH trial; ISRCTN34884569). BSI rates using standard CVCs were estimated through linkage of national PICU audit data (PICANet) with laboratory surveillance data. We estimated the number of BSI averted if PICUs switched from standard to antibiotic-impregnated CVCs by applying the CATCH trial rate-ratio (0.40; 95% CI 0.17,0.97) to the BSI rate using standard CVCs. The value of healthcare resources made available by averting one BSI as estimated from the trial economic analysis was £10,975; 95% CI -£2,801,£24,751. The BSI rate using standard CVCs was 4.58 (95% CI 4.42,4.74) per 1000 CVC-days in 2012. Applying the rate-ratio gave 232 BSI averted using antibiotic CVCs. The additional cost of purchasing antibiotic-impregnated compared with standard CVCs was £36 for each child, corresponding to additional costs of £317,916 for an estimated 8831 CVCs required in PICUs in 2012. Based on 2012 BSI rates, management of BSI in PICUs cost £2.5 million annually (95% uncertainty interval: -£160,986, £5,603,005). The additional cost of antibiotic CVCs would be less than the value of resources associated with managing BSI in PICUs with standard BSI rates >1.2 per 1000 CVC-days. The cost of introducing antibiotic-impregnated CVCs is less than the cost associated with managing BSIs occurring with standard CVCs. The long-term benefits of preventing BSI could mean that antibiotic CVCs are cost-effective even in PICUs with extremely low BSI rates.
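    The core arithmetic of this analysis, applying the trial rate-ratio to the observed BSI rate and netting the freed resources against the extra catheter cost, can be sketched as below. The ~84,000 total CVC-days is a back-of-envelope assumption consistent with the quoted figures, not a number stated in the abstract:

```python
def bsi_averted(rate_per_1000_days, cvc_days, rate_ratio):
    """BSIs averted by switching catheter type: baseline count times (1 - rate-ratio)."""
    baseline = rate_per_1000_days * cvc_days / 1000.0
    return baseline * (1.0 - rate_ratio)

def net_saving(averted, value_per_bsi, n_cvcs, extra_cost_per_cvc):
    """Value of healthcare resources freed minus the extra purchase cost of the CVCs."""
    return averted * value_per_bsi - n_cvcs * extra_cost_per_cvc

# Trial rate 4.58 per 1000 CVC-days, rate-ratio 0.40, assumed ~84,000 CVC-days:
averted = bsi_averted(4.58, 84_000, 0.40)
print(round(averted))  # 231
print(round(net_saving(averted, 10_975, 8_831, 36)) > 0)  # True
```

    The break-even condition quoted in the abstract (standard BSI rates above ~1.2 per 1000 CVC-days) follows from setting this net saving to zero under the trial's full cost accounting.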

  8. A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations

    NASA Astrophysics Data System (ADS)

    Cooper, Steven J.; Wood, Norman B.; L'Ecuyer, Tristan S.

    2017-07-01

    Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100-200 % for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a -18 % difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36 % for the individual events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -64 to +122 % for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. More accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.
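    Optimal estimation of the kind used in the 2C-SNOW-PROFILE algorithm combines an a priori state (here informed by MASC microphysics) with observations, weighted by their respective covariances. A minimal linear-Gaussian sketch (the scalar toy values are illustrative, not retrieval quantities from the paper):

```python
import numpy as np

def linear_oe(y, K, x_a, S_a, S_e):
    """Posterior-mean retrieval for a linear forward model y = K x + noise:
    x_hat = x_a + S_a K^T (K S_a K^T + S_e)^-1 (y - K x_a)."""
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
    return x_a + G @ (y - K @ x_a)

# Scalar toy: with equal prior and observation variances, the retrieval
# lands halfway between the prior (0) and the observation (2).
y = np.array([2.0]); K = np.array([[1.0]])
x_a = np.array([0.0]); S_a = np.array([[1.0]]); S_e = np.array([[1.0]])
print(linear_oe(y, K, x_a, S_a, S_e))  # [1.]
```

    Tightening the a priori variance (smaller S_a), as the MASC constraints effectively do, pulls the retrieval toward the in situ prior and shrinks the retrieval uncertainty.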

  9. Estimating diabetes prevalence by small area in England.

    PubMed

    Congdon, Peter

    2006-03-01

    Diabetes risk is linked to both deprivation and ethnicity, and so prevalence will vary considerably between areas. Prevalence differences may partly account for geographic variation in health performance indicators for diabetes, which are based on age standardized hospitalization or operation rates. A positive correlation between prevalence and health outcomes indicates that the latter are not measuring only performance. A regression analysis of prevalence rates according to age, sex and ethnicity from the Health Survey for England (HSE) is undertaken and used (together with census data) to estimate diabetes prevalence for 354 English local authorities and 8000 smaller areas (electoral wards). An adjustment for social factors is based on a prevalence gradient over area-deprivation quintiles. A Bayesian estimation approach is used allowing simple inclusion of evidence on prevalence from other or historical sources. The estimated prevalent population in England is 1.5 million (188 000 type 1 and 1.341 million type 2). At strategic health authority (StHA) level, prevalence varies from 2.4 per cent (Thames Valley) to 4 per cent (North East London). The prevalence estimates are used to assess variations between local authorities in adverse hospitalization indicators for diabetics and to assess the relationship between diabetes-related mortality and prevalence. In particular, rates of diabetic ketoacidosis (DKA) and coma are positively correlated with prevalence, while diabetic amputation rates are not. The methodology developed is applicable to developing small-area-prevalence estimates for a range of chronic diseases, when health surveys assess prevalence by demographic categories. In the application to diabetes prevalence, there is evidence that performance indicators as currently calculated are not corrected for prevalence.

  10. Creatinine Clearance Is Not Equal to Glomerular Filtration Rate and Cockcroft-Gault Equation Is Not Equal to CKD-EPI Collaboration Equation.

    PubMed

    Fernandez-Prado, Raul; Castillo-Rodriguez, Esmeralda; Velez-Arribas, Fernando Javier; Gracia-Iguacel, Carolina; Ortiz, Alberto

    2016-12-01

    Direct oral anticoagulants (DOACs) may require dose reduction or avoidance when glomerular filtration rate is low. However, glomerular filtration rate is not usually measured in routine clinical practice. Rather, equations that incorporate different variables use serum creatinine to estimate either creatinine clearance in mL/min or glomerular filtration rate in mL/min/1.73 m². The Cockcroft-Gault equation estimates creatinine clearance and incorporates weight into the equation. By contrast, the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations estimate glomerular filtration rate and incorporate ethnicity but not weight. As a result, an individual patient may have very different renal function estimates, depending on the equation used. We now highlight these differences and discuss the impact on routine clinical care for anticoagulation to prevent embolization in atrial fibrillation. Pivotal DOAC clinical trials used creatinine clearance as a criterion for patient enrollment, and dose adjustment and Food and Drug Administration (FDA) recommendations are based on creatinine clearance. However, clinical biochemistry laboratories provide CKD-EPI glomerular filtration rate estimations, resulting in discrepancies between clinical trial and routine use of the drugs. Copyright © 2016 Elsevier Inc. All rights reserved.
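    The discrepancy the authors highlight is easy to reproduce from the two published formulas, Cockcroft-Gault (creatinine clearance, mL/min) and the 2009 CKD-EPI creatinine equation (eGFR, mL/min/1.73 m²). A sketch with a hypothetical patient:

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Creatinine clearance (mL/min), Cockcroft-Gault."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def ckd_epi_2009(age, scr_mg_dl, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2), 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Same hypothetical patient, two different numbers: a 70-year-old,
# 60 kg woman with serum creatinine 1.0 mg/dL.
print(round(cockcroft_gault(70, 60, 1.0, True), 1))  # 49.6
print(round(ckd_epi_2009(70, 1.0, True), 1))         # 57.0
```

    For this patient the two estimates straddle the 50 mL/min threshold used for some DOAC dose reductions, which is exactly the routine-care ambiguity the article describes.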

  11. HIV, HCV, HBV, and syphilis among transgender women from Brazil: Assessing different methods to adjust infection rates of a hard-to-reach, sparse population.

    PubMed

    Bastos, Francisco I; Bastos, Leonardo Soares; Coutinho, Carolina; Toledo, Lidiane; Mota, Jurema Corrêa; Velasco-de-Castro, Carlos Augusto; Sperandei, Sandro; Brignol, Sandra; Travassos, Tamiris Severino; Dos Santos, Camila Mattos; Malta, Monica Siqueira

    2018-05-01

    Different sampling strategies, analytic alternatives, and estimators have been proposed to better assess the characteristics of different hard-to-reach populations and their respective infection rates (as well as their sociodemographic characteristics, associated harms, and needs) in the context of studies based on respondent-driven sampling (RDS). Despite several methodological advances and hundreds of empirical studies implemented worldwide, some inchoate findings and methodological challenges remain. The in-depth assessment of the local structure of networks and the performance of the available estimators are particularly relevant when the target populations are sparse and highly stigmatized. In such populations, bottlenecks as well as other sources of biases (for instance, due to homophily and/or too sparse or fragmented groups of individuals) may be frequent, affecting the estimates. In the present study, data were derived from a cross-sectional, multicity RDS study, carried out in 12 Brazilian cities with transgender women (TGW). Overall, infection rates for HIV and syphilis were very high, with some variation between different cities. Notwithstanding, findings are of great concern, considering the fact that female TGW are not only very hard-to-reach but also face deeply-entrenched prejudice and have been out of the reach of most therapeutic and preventive programs and projects. We cross-compared findings adjusted using 2 estimators (the classic estimator usually known as estimator II, originally proposed by Volz and Heckathorn) and a brand new strategy to adjust data generated by RDS, partially based on Bayesian statistics, called, for the purposes of this paper, the RDS-B estimator. Adjusted prevalence was cross-compared with estimates generated by non-weighted analyses, using what we have called a naïve estimator, or rough estimates.
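    The Volz-Heckathorn (RDS-II) estimator mentioned in this record weights each respondent by the inverse of their reported network degree, since larger networks are oversampled by chain referral. A minimal sketch with hypothetical data:

```python
def rds_ii(outcomes, degrees):
    """Volz-Heckathorn (RDS-II) prevalence estimate: inverse-degree-weighted mean."""
    num = sum(y / d for y, d in zip(outcomes, degrees))
    den = sum(1.0 / d for d in degrees)
    return num / den

# Hypothetical data: positives tend to report larger networks, so the
# naive (unweighted) rate overstates prevalence.
y = [1, 1, 1, 0, 0, 0]
d = [20, 20, 10, 5, 5, 5]
naive = sum(y) / len(y)
print(naive, round(rds_ii(y, d), 3))  # 0.5 0.25
```

    When degree reporting is unreliable or the network is fragmented, the bottleneck biases described above can persist even after this weighting, which motivates the Bayesian RDS-B alternative compared in the study.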

  12. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  13. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known melanoma dataset with the model and the inferential method developed here.

  14. Assembly-line Simulation Program

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Zendejas, Silvino; Malhotra, Shan

    1987-01-01

    Costs and profits estimated for models based on user inputs. Standard Assembly-line Manufacturing Industry Simulation (SAMIS) program generalized so useful for production-line manufacturing companies. Provides accurate and reliable means of comparing alternative manufacturing processes. Used to assess impact of changes in financial parameters as cost of resources and services, inflation rates, interest rates, tax policies, and required rate of return of equity. Most important capability is ability to estimate prices manufacturer would have to receive for its products to recover all of costs of production and make specified profit. Written in TURBO PASCAL.

  15. Sampling effort and estimates of species richness based on prepositioned area electrofisher samples

    USGS Publications Warehouse

    Bowen, Z.H.; Freeman, Mary C.

    1998-01-01

    Estimates of species richness based on electrofishing data are commonly used to describe the structure of fish communities. One electrofishing method for sampling riverine fishes that has become popular in the last decade is the prepositioned area electrofisher (PAE). We investigated the relationship between sampling effort and fish species richness at seven sites in the Tallapoosa River system, USA, based on 1,400 PAE samples collected during 1994 and 1995. First, we estimated species richness at each site using the first-order jackknife and compared observed values for species richness and jackknife estimates of species richness to estimates based on historical collection data. Second, we used a permutation procedure and nonlinear regression to examine rates of species accumulation. Third, we used regression to predict the number of PAE samples required to collect the jackknife estimate of species richness at each site during 1994 and 1995. We found that jackknife estimates of species richness generally were less than or equal to estimates based on historical collection data. The relationship between PAE electrofishing effort and species richness in the Tallapoosa River was described by a positive asymptotic curve as found in other studies using different electrofishing gears in wadable streams. Results from nonlinear regression analyses indicated that rates of species accumulation were variable among sites and between years. Across sites and years, predictions of sampling effort required to collect jackknife estimates of species richness suggested that doubling sampling effort (to 200 PAEs) would typically increase observed species richness by not more than six species. However, sampling effort beyond about 60 PAE samples typically increased observed species richness by < 10%. We recommend using historical collection data in conjunction with a preliminary sample size of at least 70 PAE samples to evaluate estimates of species richness in medium-sized rivers. Seventy PAE samples should provide enough information to describe the relationship between sampling effort and species richness and thus facilitate evaluation of sampling effort.

  16. CV-22 Human Vibration Evaluation

    DTIC Science & Technology

    2008-04-01

    for comfort and health risk based on the standard guidelines. The overall rating for comfort was marginal. The overall rating for health risk was... guidelines. Data were collected on one day during two sorties. At all three stations, the majority of flight test conditions were estimated to be... "fairly uncomfortable" according to the ISO 2631-1:1997 guidelines, which are independent of time. Based on this assessment, the overall rating for

  17. 77 FR 73662 - Agency Information Collection Activities; Submission for Office of Management and Budget Review...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-11

    ... received by the Agency under the Pre-IDE program over the past 10 years. Based on FDA's experience with the... rate and reach a steady state of approximately 2,544 submissions per year. FDA estimates from past... annual estimate of 2,544 submissions is based on experienced trends over the past several years. FDA's...

  18. Need-Based Aid and College Persistence: The Effects of the Ohio College Opportunity Grant

    ERIC Educational Resources Information Center

    Bettinger, Eric

    2015-01-01

    This article exploits a natural experiment to estimate the effects of need-based aid policies on first-year college persistence rates. In fall 2006, Ohio abruptly adopted a new state financial aid policy that was significantly more generous than the previous plan. Using student-level data and very narrowly defined sets of students, I estimate a…

  19. Estimating the greenhouse gas benefits of forestry projects: A Costa Rican Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, Christopher; Sathaye, Jayant; Sanchez Azofeifa, G. Arturo

    If the Clean Development Mechanism proposed under the Kyoto Protocol is to serve as an effective means for combating global climate change, it will depend upon reliable estimates of greenhouse gas benefits. This paper sketches the theoretical basis for estimating the greenhouse gas benefits of forestry projects and suggests lessons learned based on a case study of Costa Rica's Protected Areas Project, which is a 500,000 hectare effort to reduce deforestation and enhance reforestation. The Protected Areas Project in many senses advances the state of the art for Clean Development Mechanism-type forestry projects, as does the third-party verification work of SGS International Certification Services on the project. Nonetheless, sensitivity analysis shows that carbon benefit estimates for the project vary widely based on the imputed deforestation rate in the baseline scenario, e.g. the deforestation rate expected if the project were not implemented. This, along with a newly available national dataset that confirms other research showing a slower rate of deforestation in Costa Rica, suggests that the use of the 1979--1992 forest cover data as the original basis for estimating carbon savings should be reconsidered. When the newly available data is substituted, carbon savings amount to 8.9 Mt (million tonnes) of carbon, down from the original estimate of 15.7 Mt. The primary general conclusion is that project developers should give more attention to forecasting the land-use and land-cover change scenarios underlying estimates of greenhouse gas benefits.

  20. Utility of Equations to Estimate Peak Oxygen Uptake and Work Rate From a 6-Minute Walk Test in Patients With COPD in a Clinical Setting.

    PubMed

    Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G

    2015-01-01

    To determine the utility of equations that use 6-minute walk test (6MWT) results to estimate peak oxygen uptake (V̇o2) and peak work rate in patients with chronic obstructive pulmonary disease (COPD) in a clinical setting. This study included a systematic review to identify published equations estimating peak V̇o2 and peak work rate in watts in COPD patients and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak V̇o2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Estimated peak V̇o2 and peak work rate were derived from 6MWT distance using the published equations. The error associated with using estimated peak V̇o2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak V̇o2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak V̇o2 and peak work rate and estimations from equations that use 6MWT distance, and the use of the estimated values for prescription of aerobic exercise intensity would result in large error. Equations estimating peak V̇o2 and peak work rate are of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.

  1. Agreement and Reliability of Tinnitus Loudness Matching and Pitch Likeness Rating

    PubMed Central

    Hoare, Derek J.; Edmondson-Jones, Mark; Gander, Phillip E.; Hall, Deborah A.

    2014-01-01

    The ability to reproducibly match tinnitus loudness and pitch is important to research and clinical management. Here we examine agreement and reliability of tinnitus loudness matching and pitch likeness ratings when using a computer-based method to measure the tinnitus spectrum and estimate a dominant tinnitus pitch, using tonal or narrowband sounds. Group level data indicated a significant effect of time between test sessions 1 and 2 for loudness matching, likely reflecting procedural or perceptual learning, which needs to be accounted for in study design. Pitch likeness rating across multiple frequencies appeared inherently more variable, with no systematic effect of time. Dominant pitch estimates reached a level of clinical acceptability when sessions were spaced two weeks apart. However, when dominant tinnitus pitch assessments were separated by three months, acceptable agreement was achieved only for group mean data, not for individual estimates. This has implications for prescription of some sound-based interventions that rely on accurate measures of individual dominant tinnitus pitch. PMID:25478690

  2. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Yu, Binbing; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is then extended to population-based grouped survival data to estimate the personal cure rate. Using colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race, and American Joint Committee on Cancer stage.
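
    As a minimal illustration of the competing-risk setting, suppose both cause-specific hazards are constant (exponential). The crude probability of ever dying from the diagnosed cancer then reduces to a ratio of hazards, and the personal cure rate is its complement. The sketch below uses hypothetical hazard values, not the SEER-based estimates from the article:

```python
# Sketch: "personal cure" under constant competing cause-specific hazards.
# With exponential hazards lam_cancer and lam_other acting simultaneously,
# the crude probability of ultimately dying from the cancer is
# lam_cancer / (lam_cancer + lam_other); the personal cure rate is
# its complement, lam_other / (lam_cancer + lam_other).

def personal_cure_rate(lam_cancer: float, lam_other: float) -> float:
    """Probability of never dying from the diagnosed cancer."""
    return lam_other / (lam_cancer + lam_other)

# Hypothetical hazards (per year): illustration only, not fitted values.
cure = personal_cure_rate(0.04, 0.06)
print(f"personal cure rate: {cure:.2f}")  # 0.06/0.10 = 0.60
```

The mixture model in the article generalizes this idea: it fits time-varying cause-specific components to grouped survival data rather than assuming constant hazards.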

  3. Geochemical Evidence for Calcification from the Drake Passage Time-series

    NASA Astrophysics Data System (ADS)

    Munro, D. R.; Lovenduski, N. S.; Takahashi, T.; Stephens, B. B.; Newberger, T.; Dierssen, H. M.; Randolph, K. L.; Freeman, N. M.; Bushinsky, S. M.; Key, R. M.; Sarmiento, J. L.; Sweeney, C.

    2016-12-01

    Satellite imagery suggests high particulate inorganic carbon within a circumpolar region north of the Antarctic Polar Front (APF), but in situ evidence for calcification in this region is sparse. Given the geochemical relationship between calcification and total alkalinity (TA), seasonal changes in surface concentrations of potential alkalinity (PA), which accounts for changes in TA due to variability in salinity and nitrate, can be used as a means to evaluate satellite-based calcification algorithms. Here, we use surface carbonate system measurements collected from 2002 to 2016 for the Drake Passage Time-series (DPT) to quantify rates of calcification across the Antarctic Circumpolar Current. We also use vertical PA profiles collected during two cruises across the Drake Passage in March 2006 and September 2009 to estimate the calcium carbonate to organic carbon export ratio. We find geochemical evidence for calcification both north and south of the APF with the highest rates observed north of the APF. Calcification estimates from the DPT are compared to satellite-based estimates and estimates based on hydrographic data from other regions around the Southern Ocean.

  4. Comparing two survey methods for estimating maternal and perinatal mortality in rural Cambodia.

    PubMed

    Chandy, Hoeuy; Heng, Yang Van; Samol, Ha; Husum, Hans

    2008-03-01

    We need solid estimates of maternal mortality rates (MMR) to monitor the impact of maternal care programs. Cambodian health authorities and WHO report the MMR in Cambodia at 450 per 100,000 live births. The figure is drawn from surveys in which information is obtained by interviewing respondents about the survival of all their adult sisters (the sisterhood method). The estimate is statistically imprecise, with 95% confidence intervals ranging from 260 to 620/100,000. The MMR estimate is also uncertain due to under-reporting; where 80-90% of women deliver at home, maternal fatalities may go undetected, especially where mortality is highest, in remote rural areas. The aim of this study was to attain more reliable MMR estimates by using survey methods other than the sisterhood method prior to an intervention targeting rural obstetric emergencies. The study was carried out in rural Northwestern Cambodia, where access to health services is poor and poverty, endemic disease, and land mines are widespread. Two survey methods were applied in two separate sectors: a community-based survey gathering data from public sources and a household survey gathering data directly from primary sources. There was no statistically significant difference between the two survey results for maternal deaths; both types of survey reported mortality rates around the official figure. The household survey reported a significantly higher perinatal mortality rate than the community-based survey, 8.6% versus 5.0%. The household survey also yielded qualitative data important for a better understanding of the many problems faced by mothers giving birth in remote villages. There are detection failures in both surveys; the failure rate may be as high as 30-40%. PRINCIPAL CONCLUSION: Both survey methods are inaccurate and therefore inappropriate for evaluating short-term changes in mortality rates.
Surveys based on primary informants yield qualitative information about mothers' hardships important for the design of future maternal care interventions.

  5. The use of a robust capture-recapture design in small mammal population studies: A field example with Microtus pennsylvanicus

    USGS Publications Warehouse

    Nichols, James D.; Pollock, Kenneth H.; Hines, James E.

    1984-01-01

    The robust design of Pollock (1982) was used to estimate parameters of a Maryland M. pennsylvanicus population. Closed model tests provided strong evidence of heterogeneity of capture probability, and model Mh (Otis et al., 1978) was selected as the most appropriate model for estimating population size. The Jolly-Seber model goodness-of-fit test indicated rejection of the model for this data set, and the Mh estimates of population size were all higher than the Jolly-Seber estimates. Both of these results are consistent with the evidence of heterogeneous capture probabilities. The authors thus used Mh estimates of population size, Jolly-Seber estimates of survival rate, and estimates of birth-immigration based on a combination of the population size and survival rate estimates. Advantages of the robust design estimates for certain inference procedures are discussed, and the design is recommended for future small mammal capture-recapture studies directed at estimation.
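
    For readers unfamiliar with closed-population estimation, the sketch below shows the simplest two-occasion estimator, Chapman's bias-corrected Lincoln-Petersen, rather than the heterogeneity-model (Otis et al., 1978) estimator actually used in the study; the trapping numbers are hypothetical:

```python
def chapman_estimate(n1: int, n2: int, m2: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen estimate of closed
    population size.

    n1: animals caught and marked on occasion 1
    n2: animals caught on occasion 2
    m2: marked animals among the occasion-2 catch
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical trapping data, for illustration only.
print(chapman_estimate(50, 40, 19))  # (51 * 41) / 20 - 1 = 103.55
```

Heterogeneous capture probabilities of the kind detected in this study bias such simple estimators downward, which is why a heterogeneity model was preferred.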

  6. Outcome-based ventilation: A framework for assessing performance, health, and energy impacts to inform office building ventilation decisions.

    PubMed

    Rackes, A; Ben-David, T; Waring, M S

    2018-07-01

    This article presents an outcome-based ventilation (OBV) framework, which combines competing ventilation impacts into a monetized loss function ($/occ/h) used to inform ventilation rate decisions. The OBV framework, developed for U.S. offices, considers six outcomes of increasing ventilation: profitable outcomes realized from improvements in occupant work performance and sick leave absenteeism; health outcomes from occupant exposure to outdoor fine particles and ozone; and energy outcomes from electricity and natural gas usage. We used the literature to set low, medium, and high reference values for OBV loss function parameters, and evaluated the framework and outcome-based ventilation rates using a simulated U.S. office stock dataset and a case study in New York City. With parameters for all outcomes set at medium values derived from literature-based central estimates, higher ventilation rates' profitable benefits dominated negative health and energy impacts, and the OBV framework suggested ventilation should be ≥45 L/s/occ, much higher than the baseline ~8.5 L/s/occ rate prescribed by ASHRAE 62.1. Only when combining very low parameter estimates for profitable impacts with very high ones for health and energy impacts were all outcomes on the same order. Even then, however, outcome-based ventilation rates were often twice the baseline rate or more. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
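
    A toy version of such a monetized loss function can be sketched as follows. The coefficients and the saturating benefit form are hypothetical placeholders, not the paper's fitted relationships, but they reproduce the qualitative behaviour in which a large performance benefit pushes the loss-minimizing rate well above the baseline:

```python
import math

def obv_loss(v, b_max=0.50, v0=15.0, c_health=0.001, c_energy=0.0007):
    """Monetized loss in $/occ/h (lower is better) at ventilation rate v
    (L/s/occ), with a saturating performance/absenteeism benefit and
    linear health and energy costs. All values are hypothetical."""
    benefit = b_max * (1.0 - math.exp(-v / v0))
    return (c_health + c_energy) * v - benefit

# Scan candidate rates; the optimum sits where the marginal benefit of
# extra outdoor air falls to the marginal health-plus-energy cost.
rates = [i * 0.5 for i in range(1, 201)]  # 0.5 to 100 L/s/occ
v_star = min(rates, key=obv_loss)
print(f"loss-minimizing rate: {v_star:.1f} L/s/occ")
```

With these placeholder coefficients the minimizer lands in the vicinity of 45 L/s/occ, echoing the paper's finding that medium parameter values favour rates far above the ~8.5 L/s/occ baseline.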

  7. Post-licensure rapid immunization safety monitoring program (PRISM) data characterization.

    PubMed

    Baker, Meghan A; Nguyen, Michael; Cole, David V; Lee, Grace M; Lieu, Tracy A

    2013-12-30

    The Post-Licensure Rapid Immunization Safety Monitoring (PRISM) program is the immunization safety monitoring component of FDA's Mini-Sentinel project, a program to actively monitor the safety of medical products using electronic health information. FDA sought to assess the surveillance capabilities of this large claims-based distributed database for vaccine safety surveillance by characterizing the underlying data. We characterized the data available on vaccine exposures in PRISM, estimated how much additional data were gained by matching with select state and local immunization registries, and compared vaccination coverage estimates based on PRISM data with other available data sources. We generated rates of computerized codes representing potential health outcomes relevant to vaccine safety monitoring. Standardized algorithms including ICD-9 codes, the number of codes required, exclusion criteria, and the location of the encounter were used to obtain the background rates. The majority of the vaccines routinely administered to infants, children, adolescents, and adults were well captured by claims data. Immunization registry data in up to seven states comprised between 5% and 9% of data for all vaccine categories, with the exception of 10% for hepatitis B, and 3% and 4% for rotavirus and zoster, respectively. Vaccination coverage estimates based on PRISM's computerized data were similar to but lower than coverage estimates from the National Immunization Survey and the Healthcare Effectiveness Data and Information Set. For the 25 health outcomes of interest studied, the rates of potential outcomes based on ICD-9 codes were generally higher than rates described in the literature, which are typically clinically confirmed cases. The PRISM program's data on vaccine exposures and health outcomes appear complete enough to support robust safety monitoring. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Unit costs of medium and heavy truck crashes.

    DOT National Transportation Integrated Search

    2008-03-01

    This study provides the latest estimates of unit costs for highway crashes involving medium/heavy trucks by severity. Based on the latest data available, the estimated cost of police-reported crashes involving trucks with a gross weight rating of mor...

  9. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain, it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the NumPy, scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates.
We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
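
    The unconstrained Monte Carlo step can be sketched in pure Python as a stand-in for the NumPy/scipy.stats/pyDOE implementation described above. The budget terms, their uniform ranges, and the residual sign convention are hypothetical illustrations:

```python
import random

def lhs(n_samples, bounds, seed=0):
    """Simple Latin Hypercube sampler: one uniform draw per stratum per
    variable, with the strata independently shuffled across variables."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        cols.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*cols))  # rows = samples

# Monthly water-budget terms (hypothetical uniform ranges, 1e6 m^3):
# rainfall P, evapotranspiration ET, net canal outflow Q, storage change dS.
bounds = [(8.0, 12.0), (4.0, 7.0), (1.0, 3.0), (-1.0, 1.0)]
samples = lhs(5000, bounds)

# Residual water budget: dS = P - ET - Q + SWGW  =>  SWGW = dS - P + ET + Q
swgw = [ds - p + et + q for p, et, q, ds in samples]
mean = sum(swgw) / len(swgw)
var = sum((x - mean) ** 2 for x in swgw) / (len(swgw) - 1)
print(f"SWGW exchange: mean {mean:.2f}, variance {var:.2f}")
```

Because every input enters the residual linearly here, the output variance is just the sum of the input variances; the value of Latin Hypercube sampling shows up when the budget terms (e.g., stage-area-volume relations) enter nonlinearly.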

  10. Burden of typhoid fever in low-income and middle-income countries: a systematic, literature-based update with risk-factor adjustment.

    PubMed

    Mogasale, Vittal; Maskery, Brian; Ochiai, R Leon; Lee, Jung Seok; Mogasale, Vijayalaxmi V; Ramani, Enusa; Kim, Young Eun; Park, Jin Kyung; Wierzba, Thomas F

    2014-10-01

    Lack of access to safe water is an important risk factor for typhoid fever, yet risk-level heterogeneity is unaccounted for in previous global burden estimates. Since WHO has recommended risk-based use of typhoid polysaccharide vaccine, we revisited the burden of typhoid fever in low-income and middle-income countries (LMICs) after adjusting for water-related risk. We estimated the typhoid disease burden from studies done in LMICs based on blood-culture-confirmed incidence rates applied to the 2010 population, after correcting for operational issues related to surveillance, limitations of diagnostic tests, and water-related risk. We derived incidence estimates, correction factors, and mortality estimates from systematic literature reviews. We did scenario analyses for risk factors, diagnostic sensitivity, and case fatality rates, accounting for the uncertainty in these estimates, and compared them with previous disease burden estimates. The estimated number of typhoid fever cases in LMICs in 2010 after adjusting for water-related risk was 11.9 million (95% CI 9.9-14.7) cases with 129,000 (75,000-208,000) deaths. By comparison, the estimated risk-unadjusted burden was 20.6 million (17.5-24.2) cases and 223,000 (131,000-344,000) deaths. Scenario analyses indicated that the risk-factor adjustment and the updated diagnostic test correction factor derived from systematic literature reviews were the drivers of differences between the current estimate and past estimates. The risk-adjusted typhoid fever burden estimate was more conservative than previous estimates. However, by distinguishing the risk differences, it will allow assessment of the effect at the population level and will facilitate cost-effectiveness calculations for risk-based vaccination strategies for a future typhoid conjugate vaccine. Copyright © 2014 Mogasale et al. Open Access article distributed under the terms of CC BY-NC-SA.

  11. Smoking rate and periodontal disease prevalence: 40-year trends in Sweden 1970-2010.

    PubMed

    Bergstrom, Jan

    2014-10-01

    To investigate the relationship between smoking rate and periodontal disease prevalence in Sweden. National smoking rates were found from Swedish National Statistics on smoking habits. Based on smoking rates for the years 1970-2010, periodontal disease prevalence estimates were calculated for the age bracket 40-70 years and smoking-associated relative risks between 2.0 and 20.0. The impact of smoking on the population was estimated according to the concept of population attributable fraction. The age-standardized smoking rate in Sweden declined from 44% in 1970 to 15% in 2010. In parallel with the smoking decline the calculated prevalence estimate of periodontal disease dropped from 26% to 12% assuming a 10-fold smoking-associated relative risk. Even at more moderate magnitudes of the relative risk, e.g. 2-fold or 5-fold, the prevalence decrease was quite tangible, suggesting that the current prevalence in Sweden is about 20-50% of the level 40 years ago. The population attributable fraction, estimating the portion of the disease that would have been avoided in the absence of smoking, was 80% in 1970 and 58% in 2010 at a ten-fold relative risk. Calculated estimates of periodontal disease prevalence are closely related to real changes in smoking rate. As smoking rate drops periodontal disease prevalence will drop. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
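
    The population attributable fraction used above follows Levin's formula, PAF = p(RR - 1) / (1 + p(RR - 1)), where p is smoking prevalence and RR the smoking-associated relative risk. A quick check reproduces the abstract's figures:

```python
def paf(p: float, rr: float) -> float:
    """Population attributable fraction (Levin's formula):
    PAF = p*(RR - 1) / (1 + p*(RR - 1)), for exposure prevalence p
    and exposure-associated relative risk RR."""
    excess = p * (rr - 1.0)
    return excess / (1.0 + excess)

# Smoking prevalences from the abstract, 10-fold relative risk:
print(round(paf(0.44, 10.0), 2))  # 1970: 0.80
print(round(paf(0.15, 10.0), 2))  # 2010: 0.57 (~58% as reported)
```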

  12. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments

    USDA-ARS?s Scientific Manuscript database

    Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...

  13. ESTIMATION OF PHOSPHATE ESTER HYDROLYSIS RATE CONSTANTS. II. ACID AND GENERAL BASE CATALYZED HYDROLYSIS

    EPA Science Inventory

    SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to calculate acid and neutral hydrolysis rate constants of phosphate esters in water. The rate is calculated from the energy difference between the initial and transition states of a ...

  14. Three-dimensional dominant frequency mapping using autoregressive spectral analysis of atrial electrograms of patients in persistent atrial fibrillation.

    PubMed

    Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S

    2016-03-08

    Areas with high-frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy for eliminating the DF gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature is limited on alternative spectral estimation techniques that can offer better frequency resolution than FFT-based spectral estimation. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of an appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478-s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean 90.23%). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz).
We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences to the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
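
    A generic version of AR-based dominant frequency estimation, a Yule-Walker fit followed by locating the peak of the AR power spectrum, can be sketched as below. This is an illustrative pipeline applied to a synthetic signal, not the study's exact implementation or parameter choices:

```python
import numpy as np

def ar_dominant_frequency(x, fs, order=12, freqs=None):
    """Dominant frequency via Yule-Walker AR spectral estimation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Yule-Walker equations: R a = r[1:], R Toeplitz in r[0..order-1]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    if freqs is None:
        freqs = np.linspace(0.5, fs / 2.0, 2000)
    # AR power spectrum ~ 1 / |1 - sum_k a_k exp(-i 2 pi f k / fs)|^2,
    # so the spectral peak is where the denominator is smallest.
    k = np.arange(1, order + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k) / fs) @ a) ** 2
    return freqs[np.argmin(denom)]

# Synthetic "electrogram": 6 Hz dominant activity plus noise, fs = 37.5 Hz
fs = 37.5
t = np.arange(0, 20.478, 1.0 / fs)
sig = np.sin(2 * np.pi * 6.0 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
df = ar_dominant_frequency(sig, fs)
print(f"dominant frequency: {df:.2f} Hz")
```

The appeal over a raw FFT on a short downsampled segment is that the AR spectrum is smooth and evaluable on an arbitrarily fine frequency grid, at the cost of having to choose the model order.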

  15. On the rate and causes of twentieth century sea-level rise.

    PubMed

    Miller, Laury; Douglas, Bruce C

    2006-04-15

    Both the rate and causes of twentieth century global sea-level rise (GSLR) have been controversial. Estimates from tide-gauges range from less than one, to more than two millimetre yr(-1). In contrast, values based on the processes mostly responsible for GSLR-mass increase (from mountain glaciers and the great high latitude ice masses) and volume increase (expansion due to ocean warming)-fall below this range. Either the gauge estimates are too high, or one (or both) of the component estimates is too low. Gauge estimates of GSLR have been in dispute for several decades because of vertical land movements, especially due to glacial isostatic adjustment (GIA). More recently, the possibility has been raised that coastal tide-gauges measure exaggerated rates of sea-level rise because of localized ocean warming. Presented here are two approaches to a resolution of these problems. The first is morphological, based on the limiting values of observed trends of twentieth century relative sea-level rise as a function of distance from the centres of the ice loads at last glacial maximum. This observational approach, which does not depend on a geophysical model of GIA, supports values of GSLR near 2 mm yr(-1). The second approach involves an analysis of long records of tide-gauge and hydrographic (in situ temperature and salinity) observations in the Pacific and Atlantic Oceans. It was found that sea-level trends from tide-gauges, which reflect both mass and volume change, are 2-3 times higher than rates based on hydrographic data which reveal only volume change. These results support those studies that put the twentieth century rate near 2 mm yr(-1), thereby indicating that mass increase plays a much larger role than ocean warming in twentieth century GSLR.

  16. A fuel-based approach for emission factor development for highway paving construction equipment in China.

    PubMed

    Li, Zhen; Zhang, Kaishan; Pang, Kaili; Di, Baofeng

    2016-12-01

    The objective of this paper is to develop and demonstrate a fuel-based approach for emission factor estimation for highway paving construction equipment in China for better accuracy. A highway construction site in Chengdu was selected for this study, with NO emissions being characterized and demonstrated. Four commonly used pieces of paving equipment, i.e., three rollers and one paver, were selected. A portable emission measurement system (PEMS) was developed and used for emission measurements of the selected equipment during real-world highway construction duties. Three duty modes were defined to characterize the NO emissions: idling, moving, and working. In order to develop a representative emission factor for this highway construction equipment, composite emission factors were estimated using modal emission rates and the corresponding modal durations over typical construction duties. Depending on duty mode and equipment type, the NO emission rate ranged from 2.6 to 63.7 mg/s and from 6.0 to 55.6 g/kg-fuel, with fuel consumption ranging from 0.31 to 4.52 g/s. The NO composite emission factor was estimated to be 9-41 mg/s, with the single-drum roller being the highest and the double-drum roller the lowest, and 6-30 g/kg-fuel, with the pneumatic tire roller being the highest and the double-drum roller the lowest. For the paver, both the time-based and fuel-based NO composite emission rates are higher than those of all the rollers, at 56 mg/s and 30 g/kg-fuel, respectively. In terms of the time-based quantity, the working mode contributes more than the other modes, with idling contributing the least for both emissions and fuel consumption. In contrast, the fuel-based emission rate appears to have less variability. Thus, to estimate emission factors for emission inventory development, the fuel-based emission factor may be selected for better accuracy.
The fuel-based composite emissions factors will be less variable and more accurate than time-based emission factors. As a consequence, emissions inventory developed using this approach will be more accurate and practical.

  17. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island were processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. These were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distributions (edfs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set the parameters. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.

  18. Estimating mother-to-child HIV transmission rates in Cameroon in 2011: a computer simulation approach.

    PubMed

    Nguefack, Hermine L Nguena; Gwet, Henri; Desmonde, Sophie; Oukem-Boyer, Odile Ouwe Missi; Nkenfou, Céline; Téjiokem, Mathurin; Tchendjou, Patrice; Domkam, Irénée; Leroy, Valériane; Alioum, Ahmadou

    2016-01-12

    Despite progress in the Prevention of Mother-to-Child Transmission of HIV (PMTCT), the paediatric HIV epidemic remains worrying in Cameroon. The HIV prevalence rate among pregnant women in Cameroon was 7.6% in 2010. The extent of the paediatric HIV epidemic is needed to inform policymakers. We developed a stochastic simulation model to estimate the number of new paediatric HIV infections through MTCT based on the observed uptake of services during the different steps of the PMTCT cascade in Cameroon in 2011. Different levels of PMTCT uptake were also assessed. A discrete-event computer simulation approach with stochastic structure was proposed to generate a cohort of pregnant women followed up until 6 weeks post-partum, and optionally until complete breastfeeding cessation, in both prevalent and incident lactating HIV-infected women. The different parameters of the simulation model were fixed using data sources available from the 2011 national registry surveys and from external cohorts in Cameroon. Different PMTCT coverages were simulated to assess their impact on MTCT. Available data show a low coverage of PMTCT services in Cameroon in 2011. Based on a simulation of a population of 995,533 pregnant women, the overall residual MTCT rate in 2011 was estimated to be 22.1% (95% CI: 18.6%-25.2%); the 6-week perinatal MTCT rate among prevalent HIV-infected mothers at delivery was estimated at 12.1% (95% CI: 8.1%-15.1%), with an additional postnatal MTCT rate estimated at 13.3% (95% CI: 9.3%-17.8%). The MTCT rate among children whose mothers seroconverted during breastfeeding was estimated at 20.8% (95% CI: 14.1%-26.9%). Overall, we estimated the number of new HIV infections in children in Cameroon in 2011 to be 10,403 (95% CI: 9,054-13,345). When PMTCT uptake was fixed at 100%, 90%, and 80%, the global MTCT rate fell to 0.9% (95% CI: 0.5%-1.7%), 2.0% (95% CI: 0.9%-3.2%), and 4.3% (95% CI: 2.4%-6.7%), respectively.
This model is useful for providing MTCT estimates to guide national HIV policy in Cameroon. Increasing the supply and uptake of PMTCT services among prevalent HIV-infected pregnant women, as well as HIV-prevention interventions including the offer and acceptance of HIV testing and counselling in lactating women, could significantly reduce residual HIV MTCT in Cameroon. A public health effort should be made to encourage health care workers and pregnant women to use PMTCT services until complete breastfeeding cessation.
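
    A heavily simplified cascade simulation in this spirit can be sketched as follows; the coverage and transmission probabilities are placeholders, not the model's fitted parameters:

```python
import random

def simulate_mtct(n_mothers, p_hiv, p_cascade, p_trans_treated,
                  p_trans_untreated, seed=0):
    """Toy stochastic sketch of an MTCT cascade: each HIV-positive mother
    either completes the PMTCT cascade (lower transmission risk) or does
    not. Returns the simulated MTCT rate among HIV-exposed infants."""
    rng = random.Random(seed)
    infected = exposed = 0
    for _ in range(n_mothers):
        if rng.random() >= p_hiv:
            continue  # mother not HIV-infected
        exposed += 1
        on_pmtct = rng.random() < p_cascade
        risk = p_trans_treated if on_pmtct else p_trans_untreated
        infected += rng.random() < risk
    return infected / exposed

# Hypothetical inputs: 7.6% maternal prevalence (as above); the cascade
# coverage and transmission risks are illustrative placeholders.
rate = simulate_mtct(200_000, 0.076, 0.35, 0.05, 0.30)
print(f"simulated MTCT rate among exposed infants: {rate:.1%}")
```

The full model adds what this sketch omits: the individual steps of the cascade, incident infections during breastfeeding, and a postnatal transmission period whose length depends on breastfeeding duration.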

  19. Empirical Bayes estimation of proportions with application to cowbird parasitism rates

    USGS Publications Warehouse

    Link, W.A.; Hahn, D.C.

    1996-01-01

    Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. 
Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
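
    A minimal beta-binomial version of this shrinkage can be sketched as follows. The beta prior is fitted by crude moment matching on the raw proportions (a full empirical Bayes fit would subtract the binomial sampling noise from the between-species variance), and the nest counts are hypothetical:

```python
def eb_shrunk_proportions(counts):
    """Crude empirical Bayes (beta-binomial) shrinkage of proportions.

    counts: list of (parasitized, total) pairs, one per species. A beta
    prior is fitted by moment matching on the raw proportions, then each
    proportion is replaced by its posterior mean, which pulls estimates
    based on few nests strongly toward the overall mean."""
    props = [y / n for y, n in counts]
    m = sum(props) / len(props)
    v = sum((p - m) ** 2 for p in props) / (len(props) - 1)
    strength = max(m * (1 - m) / v - 1.0, 1.0)  # alpha + beta
    alpha, beta = m * strength, (1 - m) * strength
    return [(y + alpha) / (n + alpha + beta) for y, n in counts]

# Hypothetical nest counts (parasitized, total) for four species:
data = [(1, 2), (10, 40), (3, 30), (24, 30)]
shrunk = eb_shrunk_proportions(data)
# The raw 1/2 = 0.50 estimate moves toward the overall mean, while
# 24/30 = 0.80, based on many more nests, moves much less.
print([round(p, 3) for p in shrunk])
```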

  20. Pros and cons of estimating the reproduction number from early epidemic growth rate of influenza A (H1N1) 2009.

    PubMed

    Nishiura, Hiroshi; Chowell, Gerardo; Safan, Muntaser; Castillo-Chavez, Carlos

    2010-01-07

    In many parts of the world, the exponential growth rate of infections during the initial epidemic phase has been used to make statistical inferences on the reproduction number, R, a summary measure of the transmission potential for the novel influenza A (H1N1) 2009. The growth rate at the initial stage of the epidemic in Japan led to estimates for R in the range 2.0 to 2.6, capturing the intensity of the initial outbreak among school-age children in May 2009. An updated estimate of R that takes into account the epidemic data from 29 May to 14 July is provided. An age-structured renewal process is employed to capture the age-dependent transmission dynamics, jointly estimating the reproduction number, the age-dependent susceptibility and the relative contribution of imported cases to secondary transmission. Pitfalls in estimating epidemic growth rates are identified and used for scrutinizing and re-assessing the results of our earlier estimate of R. Maximum likelihood estimates of R using the data from 29 May to 14 July ranged from 1.21 to 1.35. The next-generation matrix, based on our age-structured model, predicts that only 17.5% of the population will experience infection by the end of the first pandemic wave. Our earlier estimate of R did not fully capture the population-wide epidemic in quantifying the next-generation matrix from the estimated growth rate during the initial stage of the pandemic in Japan. In order to quantify R from the growth rate of cases, it is essential that the selected model captures the underlying transmission dynamics embedded in the data. Exploring additional epidemiological information will be useful for assessing the temporal dynamics. 
Although the simple concept of R is more easily grasped by the general public than that of the next-generation matrix, the matrix incorporating detailed information (e.g., age-specificity) is essential for reducing the levels of uncertainty in predictions and for assisting public health policymaking. Model-based prediction and policymaking are best described by sharing fundamental notions of heterogeneous risks of infection and death with non-experts to avoid potential confusion and/or possible misuse of modelling results.
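
The paper's estimator is an age-structured renewal model; as a simplified sketch of the basic growth-rate-to-R step it scrutinizes, the scalar Wallinga-Lipsitch relation R = 1/M(-r), with M the moment generating function of the generation interval, can be written for a gamma-distributed interval with mean Tg and shape k (all numbers below are illustrative, not the paper's estimates):

```python
# Simplified scalar sketch (not the paper's age-structured model): for a
# gamma-distributed generation interval with mean mean_gi and shape k,
# R = 1 / M(-r) = (1 + r * mean_gi / k) ** k.

def r_from_growth_rate(r, mean_gi, shape):
    """Reproduction number implied by exponential growth rate r (per day)."""
    return (1.0 + r * mean_gi / shape) ** shape

# Illustrative values: growth rate 0.12/day, mean generation interval 2.8 days
R = r_from_growth_rate(0.12, mean_gi=2.8, shape=2.0)
```

Note that the same growth rate implies a very different R under different generation-interval assumptions, which is one of the pitfalls the abstract refers to.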

  1. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
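
The reward computation described above can be sketched as follows; the two-state chain, holding times, and rewards are illustrative stand-ins, not the measured model:

```python
import numpy as np

# Illustrative semi-Markov reward sketch: the long-run occupancy of state i
# weights the embedded chain's stationary probability pi_i by the mean holding
# time tau_i; the expected reward rate is the occupancy-weighted state reward.

P = np.array([[0.0, 1.0],      # embedded transition matrix: normal <-> error
              [1.0, 0.0]])
tau = np.array([100.0, 5.0])   # mean holding times (need not be exponential)
reward = np.array([1.0, 0.2])  # per-state reward from service and error rates

# Stationary distribution of the embedded chain (left eigenvector, eigenvalue 1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

occupancy = pi * tau / (pi * tau).sum()
expected_reward = float(occupancy @ reward)  # time-average reward rate
```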

  2. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1987-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  3. Estimating HIV Incidence Using a Cross-Sectional Survey: Comparison of Three Approaches in a Hyperendemic Setting, Ndhiwa Subcounty, Kenya, 2012.

    PubMed

    Blaizot, Stéphanie; Kim, Andrea A; Zeh, Clement; Riche, Benjamin; Maman, David; De Cock, Kevin M; Etard, Jean-François; Ecochard, René

    2017-05-01

    Estimating HIV incidence is critical for identifying groups at risk for HIV infection, planning and targeting interventions, and evaluating these interventions over time. The use of reliable estimation methods for HIV incidence is thus of high importance. The aim of this study was to compare methods for estimating HIV incidence in a population-based cross-sectional survey. The incidence estimation methods evaluated included assay-derived methods, a testing history-derived method, and a probability-based method applied to data from the Ndhiwa HIV Impact in Population Survey (NHIPS). Incidence rates by sex and age and cumulative incidence as a function of age were presented. HIV incidence ranged from 1.38 [95% confidence interval (CI) 0.67-2.09] to 3.30 [95% CI 2.78-3.82] per 100 person-years overall; 0.59 [95% CI 0.00-1.34] to 2.89 [95% CI 0.86-6.45] per 100 person-years in men; and 1.62 [95% CI 0.16-6.04] to 4.03 [95% CI 3.30-4.77] per 100 person-years in women. Women had higher incidence rates than men for all methods. Incidence rates were highest among women aged 15-24 and 25-34 years and highest among men aged 25-34 years. Comparison of the different methods showed variations in incidence estimates, but all agreed in identifying the most-at-risk groups. The use and comparison of several distinct approaches for estimating incidence are important to provide the best-supported estimate of HIV incidence in the population.

  5. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    2016-04-22

    A web-based residential energy rating tool with APIs that runs on the LBNL website: Provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  6. Piecewise SALT sampling for estimating suspended sediment yields

    Treesearch

    Robert B. Thomas

    1989-01-01

    A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
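
SALT's unbiasedness comes from probability-proportional-to-size selection: each sampled load is expanded by its selection probability, Hansen-Hurwitz style. A minimal sketch with made-up numbers (the actual method derives the selection probabilities from the rating curve at list time):

```python
# Hedged sketch of the expansion estimator underlying SALT-type sampling.
# Each sampled period's measured load y_i is divided by its selection
# probability p_i; averaging the expanded values estimates the total yield.

def hansen_hurwitz(loads, probs):
    """Unbiased total-yield estimate from PPS-with-replacement samples."""
    return sum(y / p for y, p in zip(loads, probs)) / len(loads)

# Three hypothetical sampled periods: loads (tonnes) and selection probabilities
estimate = hansen_hurwitz([12.0, 3.0, 40.0], [0.02, 0.005, 0.08])
```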

  7. Variable input observer for state estimation of high-rate dynamics

    NASA Astrophysics Data System (ADS)

    Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob

    2017-04-01

    High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimates than using a traditional fixed input space strategy.

  8. Microprocessor realizations of range rate filters

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The performance of five digital range rate filters is evaluated. A range rate filter receives an input of range data from a radar unit and produces an output of smoothed range data and its estimated derivative range rate. The filters are compared through simulation on an IBM 370. Two of the filter designs are implemented on a 6800 microprocessor-based system. Comparisons are made on the bases of noise variance reduction ratios and convergence times of the filters in response to simulated range signals.
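
A classic filter of the kind evaluated here is the alpha-beta tracker, which smooths range and maintains a range-rate estimate; the gains and data below are illustrative, not those of the 1979 study:

```python
# Hedged sketch of an alpha-beta range/range-rate filter. Each step predicts
# the range forward, then corrects range (alpha) and range rate (beta) from
# the measurement residual.

def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    x, v = measurements[0], 0.0           # initial range and range rate
    out = []
    for z in measurements[1:]:
        x_pred = x + v * dt               # predict range forward one step
        residual = z - x_pred
        x = x_pred + alpha * residual     # correct range
        v = v + (beta / dt) * residual    # correct range rate
        out.append((x, v))
    return out

# Noise-free constant-velocity input (true rate 2.0 per step): the rate
# estimate climbs toward the true value.
track = alpha_beta_track([0.0, 2.0, 4.0, 6.0, 8.0, 10.0], dt=1.0)
```

The choice of alpha and beta trades noise variance reduction against convergence time, the two criteria on which the report compares its filters.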

  9. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  10. Evaluation of X-band polarimetric radar estimation of rainfall and rain drop size distribution parameters in West Africa

    NASA Astrophysics Data System (ADS)

    Koffi, A. K.; Gosset, M.; Zahiri, E.-P.; Ochou, A. D.; Kacou, M.; Cazenave, F.; Assamoi, P.

    2014-06-01

    As part of the African Monsoon Multidisciplinary Analysis (AMMA) field campaign an X-band dual-polarization Doppler radar was deployed in Benin, West Africa, in 2006 and 2007, together with a reinforced rain gauge network and several optical disdrometers. Based on this data set, a comparative study of several rainfall estimators that use X-band polarimetric radar data is presented. In tropical convective systems as encountered in Benin, microwave attenuation by rain is significant and quantitative precipitation estimation (QPE) at X-band is a challenge. Here, several algorithms based on the combined use of reflectivity, differential reflectivity and differential phase shift are evaluated against rain gauges and disdrometers. Four rainfall estimators were tested on twelve rainy events: the use of attenuation-corrected reflectivity only (estimator R(ZH)), the use of the specific phase shift only R(KDP), the combination of specific phase shift and differential reflectivity R(KDP,ZDR) and an estimator that uses three radar parameters R(ZH,ZDR,KDP). The coefficients of the power law relationships between rain rate and radar variables were adjusted either based on disdrometer data and simulation, or on radar-gauge observations. The three polarimetric-based algorithms with coefficients predetermined on observations outperform the R(ZH) estimator for rain rates above 10 mm/h, which explain most of the rainfall in the studied region. For the highest rain rates (above 30 mm/h) R(KDP) shows even better scores and, given its performance and simplicity of implementation, is recommended. The radar-based retrieval of two parameters of the rain drop size distribution, the normalized intercept parameter NW and the volumetric median diameter Dm, was evaluated on four rainy days thanks to disdrometers. The frequency distributions of the two parameters retrieved by the radar are very close to those observed with the disdrometer. 
NW retrieval based on a combination of ZH-KDP-ZDR works well regardless of the a priori assumption made about drop shape. Dm retrieval based on ZDR alone performs well, but if satisfactory ZDR measurements are not available, the combination ZH-KDP provides satisfactory results for both Dm and NW if an appropriate a priori assumption about drop shape is made.
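
Each estimator above is a power law in one or more radar variables; for R(KDP) the form is R = a * KDP^b. The coefficients below are hypothetical placeholders (the study fits its own from disdrometer simulations or radar-gauge observations):

```python
# Hedged sketch of an R(KDP) power-law rain-rate estimator. The coefficients
# a and b are illustrative placeholders, not the paper's fitted values.

def rain_rate_kdp(kdp_deg_per_km, a=17.0, b=0.8):
    """Rain rate (mm/h) from specific differential phase KDP (deg/km)."""
    return a * kdp_deg_per_km ** b

r1 = rain_rate_kdp(1.0)  # with these placeholder coefficients, 17.0 mm/h
r2 = rain_rate_kdp(2.0)  # higher KDP implies a higher rain rate
```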

  11. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
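
The kernel step described above can be sketched with a Nadaraya-Watson smoother; the data, kernel, and bandwidth below are illustrative, not the paper's asymptotic setting:

```python
import math

# Hedged sketch: a Nadaraya-Watson estimate of the conditional probability
# that the censoring indicator is observed (non-missing) given the observed
# time, later used as an inverse-probability weight.

def nw_prob(t0, times, observed, bandwidth):
    """Kernel-smoothed P(indicator non-missing | time = t0), Gaussian kernel."""
    weights = [math.exp(-0.5 * ((t - t0) / bandwidth) ** 2) for t in times]
    return sum(w * o for w, o in zip(weights, observed)) / sum(weights)

times = [1.0, 2.0, 3.0, 4.0]
nonmissing = [1, 1, 0, 1]  # 1 if the censoring indicator was observed
p = nw_prob(2.0, times, nonmissing, bandwidth=1.0)
ipw = 1.0 / p              # inverse-probability-of-non-missingness weight
```

The bandwidth governs the bias-variance trade-off that the article's mean-squared-error expression makes explicit.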

  12. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  13. Mass detection, localization and estimation for wind turbine blades based on statistical pattern recognition

    NASA Astrophysics Data System (ADS)

    Colone, L.; Hovgaard, M. K.; Glavind, L.; Brincker, R.

    2018-07-01

    A method for mass change detection on wind turbine blades using natural frequencies is presented. The approach is based on two statistical tests. The first test decides if there is a significant mass change and the second test is a statistical group classification based on Linear Discriminant Analysis. The frequencies are identified by means of Operational Modal Analysis using natural excitation. Based on the assumption of Gaussianity of the frequencies, a multi-class statistical model is developed by combining finite element model sensitivities in 10 classes of change location on the blade, the smallest area being 1/5 of the span. The method is experimentally validated for a full scale wind turbine blade in a test setup and loaded by natural wind. Mass change from natural causes was imitated with sand bags and the algorithm was observed to perform well with an experimental detection rate of 1, localization rate of 0.88 and mass estimation rate of 0.72.

  14. Estimation of Untracked Geosynchronous Population from Short-Arc Angles-Only Observations

    NASA Technical Reports Server (NTRS)

    Healy, Liam; Matney, Mark

    2017-01-01

    Telescope observations of the geosynchronous regime will observe two basic types of objects: objects related to geosynchronous earth orbit (GEO) satellites, and objects in highly elliptical geosynchronous transfer orbits (GTO). Because telescopes only measure angular rates, GTO objects can occasionally mimic the motion of GEO objects over short arcs. A GEO census based solely on short-arc telescope observations may be affected by these "interlopers". A census that includes multiple angular rates can get an accurate statistical estimate of the GTO population, which can then be used to correct the estimate of the geosynchronous earth orbit population.

  15. Validation and application of Acoustic Mapping Velocimetry

    NASA Astrophysics Data System (ADS)

    Baranya, Sandor; Muste, Marian

    2016-04-01

    The goal of this paper is to introduce a novel methodology to estimate bedload transport in rivers based on an improved bedform tracking procedure. The measurement technique combines components and processing protocols from two contemporary nonintrusive instruments: acoustic and image-based. The bedform mapping is conducted with acoustic surveys while the estimation of the velocity of the bedforms is obtained with processing techniques pertaining to image-based velocimetry. The technique is therefore called Acoustic Mapping Velocimetry (AMV). The implementation of this technique produces a whole-field velocity map associated with the multi-directional bedform movement. Based on the calculated two-dimensional bedform migration velocity field, the bedload transport estimation is done using the Exner equation. A proof-of-concept experiment was performed to validate the AMV-based bedload estimation in a laboratory flume at IIHR-Hydroscience & Engineering (IIHR). The bedform migration was analysed at three different flow discharges. Repeated bed geometry mapping, using a multiple transducer array (MTA), provided acoustic maps, which were post-processed with a particle image velocimetry (PIV) method. Bedload transport rates were calculated along longitudinal sections using the streamwise components of the bedform velocity vectors and the measured bedform heights. The bulk transport rates were compared with the results from concurrent direct physical samplings and acceptable agreement was found. As a first field implementation of the AMV, an attempt was made to estimate bedload transport for a section of the Ohio River in the United States, where bed geometry maps, produced by repeated multibeam echo sounder (MBES) surveys, served as input data. 
Cross-sectional distributions of bedload transport rates from the AMV based method were compared with the ones obtained from another non-intrusive technique (due to the lack of direct samplings), ISSDOTv2, developed by the US Army Corps of Engineers. The good agreement between the results from the two different methods is encouraging and suggests further field tests in varying hydro-morphological situations.
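
The Exner-based step can be sketched with the common dune-tracking simplification: per unit width, transport equals (1 - porosity) times a shape factor times bedform height times migration celerity. The shape factor (about 0.5 for triangular bedforms) and all numbers below are illustrative assumptions, not the paper's calibration:

```python
# Hedged sketch of an Exner-based bedload estimate from bedform geometry and
# migration speed, per unit channel width. All values are illustrative.

def bedload_per_unit_width(height_m, celerity_m_per_s, porosity=0.4, shape=0.5):
    """Volumetric bedload transport rate (m^2/s) from bedform kinematics."""
    return (1.0 - porosity) * shape * height_m * celerity_m_per_s

# A 0.3 m high bedform migrating at 1e-4 m/s:
qb = bedload_per_unit_width(height_m=0.3, celerity_m_per_s=1e-4)
```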

  16. Estimation of Rainfall Rates from Passive Microwave Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Sharma, Awdhesh Kumar

    Rainfall rates have been estimated using the passive microwave and visible/infrared remote sensing techniques. Data of September 14, 1978 from the Scanning Multichannel Microwave Radiometer (SMMR) on board SEASAT-A and the Visible and Infrared Spin Scan Radiometer (VISSR) on board GOES-W (Geostationary Operational Environmental Satellite - West) were obtained and analyzed for rainfall rate retrieval. Microwave brightness temperatures (MBT) are simulated, using the microwave radiative transfer model (MRTM) and atmospheric scattering models. These MBT were computed as a function of rates of rainfall from precipitating clouds which are in a combined phase of ice and water. Microwave extinction due to ice and liquid water is calculated using Mie theory and Gamma drop size distributions. Microwave absorption due to oxygen and water vapor is based on the schemes given by Rosenkranz, and Barrett and Chung. The scattering phase matrix involved in the MRTM is found using Eddington's two-stream approximation. The surface effects due to winds and foam are included through the ocean surface emissivity model. Rainfall rates are then inverted from MBT using the optimization technique "Leaps and Bounds" and multiple linear regression, leading to a relationship between the rainfall rates and MBT. This relationship has been used to infer the oceanic rainfall rates from SMMR data. The VISSR data has been inverted for the rainfall rates using Griffith's scheme. This scheme provides an independent means of estimating rainfall rates for cross-checking SMMR estimates. The inferred rainfall rates from both techniques have been plotted on a world map for comparison. A reasonably good correlation has been obtained between the two estimates.

  17. Performance of the chronic kidney disease-epidemiology study equations for estimating glomerular filtration rate before and after nephrectomy.

    PubMed

    Lane, Brian R; Demirjian, Sevag; Weight, Christopher J; Larson, Benjamin T; Poggio, Emilio D; Campbell, Steven C

    2010-03-01

    Accurate renal function determination before and after nephrectomy is essential for proper prevention and management of chronic kidney disease due to nephron loss and ischemic injury. We compared the estimated glomerular filtration rate using several serum creatinine-based formulas against the measured rate based on ¹²⁵I-iothalamate clearance to determine which most accurately reflects the rate in this setting. Of 7,611 patients treated at our institution since 1975, the measured glomerular filtration rate was selectively determined before and after nephrectomy in 268 and 157, respectively. Performance of the Cockcroft-Gault, Modification of Diet in Renal Disease Study, re-expressed Modification of Diet in Renal Disease Study and Chronic Kidney Disease-Epidemiology Study equations, each of which estimates the glomerular filtration rate, was determined using serum creatinine, age, gender, weight and body surface area. The performance of serum creatinine, reciprocal serum creatinine and the 4 formulas was compared with the measured rate using Pearson's correlation, Lin's concordance coefficient and residual plots. Median serum creatinine was 1.4 mg/dl and the median measured glomerular filtration rate was 50 ml per minute per 1.73 m². The correlation between serum creatinine and the measured rate was poor (-0.66) compared with that of reciprocal serum creatinine (0.78) and the 4 equations (0.82 to 0.86). The Chronic Kidney Disease-Epidemiology Study equation performed with greatest precision and accuracy, and least bias of all equations. Stage 3 or greater chronic kidney disease (¹²⁵I-iothalamate glomerular filtration rate 60 ml per minute per 1.73 m² or less) was present in 44% of patients with normal serum creatinine (1.4 mg/dl or less) postoperatively. Such missed diagnoses of chronic kidney disease decreased 42% using the Chronic Kidney Disease-Epidemiology Study equation. 
Glomerular filtration rate estimation equations outperform serum creatinine and better identify patients with perinephrectomy-compromised renal function. The newly developed, serum creatinine-based Chronic Kidney Disease-Epidemiology Study equation has sufficient accuracy to render direct glomerular filtration rate measurement unnecessary before and after nephrectomy for cause in most circumstances.
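
For reference, the CKD-EPI (2009) creatinine equation discussed above can be sketched as follows; the coefficients should be verified against the original publication before any clinical use:

```python
# Sketch of the CKD-EPI (2009) creatinine equation, eGFR in mL/min/1.73 m^2.
# kappa and alpha depend on sex; separate multipliers apply for female and
# black patients. Verify coefficients against the original source before use.

def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    gfr = (141.0
           * min(scr_mg_dl / kappa, 1.0) ** alpha
           * max(scr_mg_dl / kappa, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159
    return gfr

# The cohort's median creatinine (1.4 mg/dl) for a hypothetical 60-year-old
# non-black man:
egfr = ckd_epi_2009(scr_mg_dl=1.4, age=60, female=False)
```

For these inputs the estimate lands in the mid-50s, roughly consistent with the median measured rate of 50 ml per minute per 1.73 m² reported above.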

  18. The reliability and validity of flight task workload ratings

    NASA Technical Reports Server (NTRS)

    Childress, M. E.; Hart, S. G.; Bortolussi, M. R.

    1982-01-01

    Twelve instrument-rated general aviation pilots each flew two scenarios in a motion-base simulator. During each flight, the pilots verbally estimated their workload every three minutes. Following each flight, they again estimated workload for each flight segment and also rated their overall workload, perceived performance, and 13 specific factors on a bipolar scale. The results indicate that time (a priori, inflight, or postflight) of eliciting ratings, period to be covered by the ratings (a specific moment in time or a longer period), type of rating scale, and rating method (verbal, written, or other) may be important variables. Overall workload ratings appear to be predicted by different specific scales depending upon the situation, with activity level the best predictor. Perceived performance seems to bear little relationship to observer-rated performance when pilots rate their overall performance and an observer rates specific behaviors. Perceived workload and performance also seem unrelated.

  19. Quantifying the risks and benefits of efavirenz use in HIV-infected women of childbearing age in the United States

    PubMed Central

    Hsu, HE; Rydzak, CE; Cotich, KL; Wang, B; Sax, PE; Losina, E; Freedberg, KA; Goldie, SJ; Lu, Z; Walensky, RP

    2010-01-01

    Objectives We quantified the benefits (life expectancy gains) and harms (efavirenz-related teratogenicity) associated with using efavirenz in HIV-infected women of childbearing age in the United States. Methods We used data from the Women’s Interagency HIV Study in an HIV disease simulation model to estimate life expectancy in women who receive an efavirenz-based initial antiretroviral regimen compared with those who delay efavirenz use and receive a boosted protease inhibitor-based initial regimen. To estimate excess risk of teratogenic events with and without efavirenz exposure per 100,000 women, we incorporated literature-based rates of pregnancy, live births, and teratogenic events into a decision analytic model. We assumed a teratogenicity risk of 2.90 events/100 live births in women exposed to efavirenz during pregnancy and 2.68/100 live births in unexposed women. Results Survival for HIV-infected women who received an efavirenz-based initial antiretroviral therapy regimen was 0.89 years greater than for women receiving non-efavirenz-based initial therapy (28.91 vs. 28.02 years). The rate of teratogenic events was 77.26/100,000 exposed women, compared with 72.46/100,000 unexposed women. Survival estimates were sensitive to variations in treatment efficacy and AIDS-related mortality. Estimates of excess teratogenic events were most sensitive to pregnancy rates and number of teratogenic events/100 live births in efavirenz-exposed women. Conclusions Use of non-efavirenz-based initial antiretroviral therapy in HIV-infected women of childbearing age may reduce life expectancy gains from antiretroviral treatment, but may also prevent teratogenic events. Decision-making regarding efavirenz use presents a tradeoff between these two risks; this study can inform discussions between patients and health care providers. PMID:20561082

  20. Quantifying the risks and benefits of efavirenz use in HIV-infected women of childbearing age in the USA.

    PubMed

    Hsu, H E; Rydzak, C E; Cotich, K L; Wang, B; Sax, P E; Losina, E; Freedberg, K A; Goldie, S J; Lu, Z; Walensky, R P

    2011-02-01

    The aim of the study was to quantify the benefits (life expectancy gains) and risks (efavirenz-related teratogenicity) associated with using efavirenz in HIV-infected women of childbearing age in the USA. We used data from the Women's Interagency HIV Study in an HIV disease simulation model to estimate life expectancy in women who receive an efavirenz-based initial antiretroviral regimen compared with those who delay efavirenz use and receive a boosted protease inhibitor-based initial regimen. To estimate excess risk of teratogenic events with and without efavirenz exposure per 100,000 women, we incorporated literature-based rates of pregnancy, live births, and teratogenic events into a decision analytic model. We assumed a teratogenicity risk of 2.90 events/100 live births in women exposed to efavirenz during pregnancy and 2.68/100 live births in unexposed women. Survival for HIV-infected women who received an efavirenz-based initial antiretroviral therapy (ART) regimen was 0.89 years greater than for women receiving non-efavirenz-based initial therapy (28.91 vs. 28.02 years). The rate of teratogenic events was 77.26/100,000 exposed women, compared with 72.46/100,000 unexposed women. Survival estimates were sensitive to variations in treatment efficacy and AIDS-related mortality. Estimates of excess teratogenic events were most sensitive to pregnancy rates and number of teratogenic events/100 live births in efavirenz-exposed women. Use of non-efavirenz-based initial ART in HIV-infected women of childbearing age may reduce life expectancy gains from antiretroviral treatment, but may also prevent teratogenic events. Decision-making regarding efavirenz use presents a trade-off between these two risks; this study can inform discussions between patients and health care providers.

  1. A 3-D model analysis of the slowdown and interannual variability in the methane growth rate from 1988 to 1997

    NASA Astrophysics Data System (ADS)

    Wang, James S.; Logan, Jennifer A.; McElroy, Michael B.; Duncan, Bryan N.; Megretskaia, Inna A.; Yantosca, Robert M.

    2004-09-01

    Methane has exhibited significant interannual variability with a slowdown in its growth rate beginning in the 1980s. We use a 3-D chemical transport model accounting for interannually varying emissions, transport, and sinks to analyze trends in CH4 from 1988 to 1997. Variations in CH4 sources were based on meteorological and country-level socioeconomic data. An inverse method was used to optimize the strengths of sources and sinks for a base year, 1994. We present a best-guess budget along with sensitivity tests. The analysis suggests that the sum of emissions from animals, fossil fuels, landfills, and wastewater estimated using Intergovernmental Panel on Climate Change default methodology is too high. Recent bottom-up estimates of the source from rice paddies appear to be too low. Previous top-down estimates of emissions from wetlands may be a factor of 2 higher than bottom-up estimates because of possible overestimates of OH. The model captures the general decrease in the CH4 growth rate observed from 1988 to 1997 and the anomalously low growth rates during 1992-1993. The slowdown in the growth rate is attributed to a combination of slower growth of sources and increases in OH. The economic downturn in the former Soviet Union and Eastern Europe made a significant contribution to the decrease in the growth rate of emissions. The 1992-1993 anomaly can be explained by fluctuations in wetland emissions and OH after the eruption of Mount Pinatubo. The results suggest that the recent slowdown of CH4 may be temporary.

  2. How do marital status, work effort, and wage rates interact?

    PubMed

    Ahituv, Avner; Lerman, Robert I

    2007-08-01

    How marital status interacts with men's earnings is an important analytic and policy issue, especially in the context of debates in the United States over programs that encourage healthy marriage. This paper generates new findings about the earnings-marriage relationship by estimating the linkages among flows into and out of marriage, work effort, and wage rates. The estimates are based on National Longitudinal Survey of Youth panel data, covering 23 years of marital and labor market outcomes, and control for unobserved heterogeneity. We estimate marriage effects on hours worked (our proxy for work effort) and on wage rates for all men and for black and low-skilled men separately. The estimates reveal that entering marriage raises hours worked quickly and substantially but that marriage's effect on wage rates takes place more slowly while men continue in marriage. Together, the stimulus to hours worked and wage rates generates an 18%-19% increase in earnings, with about one-third to one-half of the marriage earnings premium attributable to higher work effort. At the same time, higher wage rates and hours worked encourage men to marry and to stay married. Thus, being married and having high earnings reinforce each other over time.

  3. Concentrations and Potential Health Risks of Metals in Lip Products

    PubMed Central

    Liu, Sa; Rojas-Cheatham, Ann

    2013-01-01

    Background: Metal content in lip products has been an issue of concern. Objectives: We measured lead and eight other metals in a convenience sample of 32 lip products used by young Asian women in Oakland, California, and assessed potential health risks related to estimated intakes of these metals. Methods: We analyzed lip products by inductively coupled plasma optical emission spectrometry and used previous estimates of lip product usage rates to determine daily oral intakes. We derived acceptable daily intakes (ADIs) based on information used to determine public health goals for exposure, and compared ADIs with estimated intakes to assess potential risks. Results: Most of the tested lip products contained high concentrations of titanium and aluminum. All examined products had detectable manganese. Lead was detected in 24 products (75%), with an average concentration of 0.36 ± 0.39 ppm, including one sample with 1.32 ppm. When used at the estimated average daily rate, estimated intakes were > 20% of ADIs derived for aluminum, cadmium, chromium, and manganese. In addition, average daily use of 10 products tested would result in chromium intake exceeding our estimated ADI for chromium. For high rates of product use (above the 95th percentile), the percentages of samples with estimated metal intakes exceeding ADIs were 3% for aluminum, 68% for chromium, and 22% for manganese. Estimated intakes of lead were < 20% of ADIs for average and high use. Conclusions: Cosmetics safety should be assessed not only by the presence of hazardous contents, but also by comparing estimated exposures with health-based standards. In addition to lead, metals such as aluminum, cadmium, chromium, and manganese require further investigation. PMID:23674482
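
The screening arithmetic is simple: daily intake equals concentration (ppm, i.e., µg per g of product) times daily usage (g/day), compared against an ADI. The usage rate and ADI below are hypothetical placeholders; only the 0.36 ppm average lead concentration comes from the abstract:

```python
# Hedged sketch of the intake-vs-ADI screen described above. Intake (ug/day)
# is concentration (ppm = ug per g of product) times usage rate (g/day).

def percent_of_adi(conc_ppm, usage_g_per_day, adi_ug_per_day):
    """Estimated daily intake as a percentage of the acceptable daily intake."""
    intake_ug = conc_ppm * usage_g_per_day
    return 100.0 * intake_ug / adi_ug_per_day

# The study's average lead concentration (0.36 ppm) at a hypothetical usage
# rate of 0.024 g/day against a hypothetical ADI of 0.5 ug/day:
share = percent_of_adi(0.36, 0.024, 0.5)
```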

  4. A comparison of foetal and infant mortality in the United States and Canada.

    PubMed

    Ananth, Cande V; Liu, Shiliang; Joseph, K S; Kramer, Michael S

    2009-04-01

    Infant mortality rates are higher in the United States than in Canada. We explored this difference by comparing gestational age distributions and gestational age-specific mortality rates in the two countries. Stillbirth and infant mortality rates were compared for singleton births at ≥22 weeks and newborns weighing ≥500 g in the United States and Canada (1996-2000). Since menstrual-based gestational age appears to misclassify gestational duration and overestimate both preterm and postterm birth rates, and because a clinical estimate of gestation is the only available measure of gestational age in Canada, all comparisons were based on the clinical estimate. Data for California were excluded because they lacked a clinical estimate. Gestational age-specific comparisons were based on the foetuses-at-risk approach. The overall stillbirth rate in the United States (37.9 per 10,000 births) was similar to that in Canada (38.2 per 10,000 births), while the overall infant mortality rate was 23% (95% CI 19-26%) higher (50.8 vs 41.4 per 10,000 births, respectively). The gestational age distribution was left-shifted in the United States relative to Canada; consequently, preterm birth rates were 8.0 and 6.0%, respectively. Stillbirth and early neonatal mortality rates in the United States were lower at term gestation only. However, gestational age-specific late neonatal, post-neonatal and infant mortality rates were higher in the United States at virtually every gestation. The overall stillbirth rates (per 10,000 foetuses at risk) among Blacks and Whites in the United States, and in Canada were 59.6, 35.0 and 38.3, respectively, whereas the corresponding infant mortality rates were 85.6, 49.7 and 42.2, respectively. 
Differences in gestational age distributions and in gestational age-specific stillbirth and infant mortality in the United States and Canada underscore substantial differences in healthcare services, population health status and health policy between the two neighbouring countries.
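
The "23% higher" figure above is a simple rate ratio of the two overall infant mortality rates (both per 10,000 births), as reported in the abstract:

```python
# Rate ratio behind the abstract's "23% higher" infant mortality figure.
us_rate, canada_rate = 50.8, 41.4  # infant deaths per 10,000 births
excess = us_rate / canada_rate - 1
print(f"US infant mortality is {excess:.0%} higher")  # 23%
```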

  5. U.S. ENVIRONMENTAL PROTECTION AGENCY'S LANDFILL GAS EMISSION MODEL (LANDGEM)

    EPA Science Inventory

    The paper discusses EPA's available software for estimating landfill gas emissions. This software is based on a first-order decomposition rate equation using empirical data from U.S. landfills. The software provides a relatively simple approach to estimating landfill gas emissi...

  6. Revised costs of large truck-and bus-involved crashes.

    DOT National Transportation Integrated Search

    2002-11-01

    This study provides the latest estimates of the costs of highway crashes involving large trucks and buses by severity. Based on the latest data available, the estimated cost of police-reported crashes involving trucks with a gross weight rating of mo...

  7. ESTIMATION OF CONSTANT AND TIME-VARYING DYNAMIC PARAMETERS OF HIV INFECTION IN A NONLINEAR DIFFERENTIAL EQUATION MODEL.

    PubMed

    Liang, Hua; Miao, Hongyu; Wu, Hulin

    2010-03-01

    Modeling viral dynamics in HIV/AIDS studies has resulted in a deep understanding of the pathogenesis of HIV infection, from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only the parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we combine two newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. To the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate, which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. 
We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
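
The abstract does not give the model's equations, but the parameters it names (target-cell proliferation and death rates, virions per infected cell, time-varying infection rate) match the standard target-cell-limited HIV model. The sketch below integrates that standard model with forward Euler and entirely hypothetical parameter values, with the infection rate held constant for simplicity:

```python
# Minimal sketch of a standard target-cell-limited HIV model (the paper's
# exact equations and parameter values are not in the abstract; everything
# here is illustrative). T: uninfected target cells, I: infected cells,
# V: free virus.
#   dT/dt = lam - rho*T - k*T*V   (proliferation, death, infection)
#   dI/dt = k*T*V - delta*I
#   dV/dt = N*delta*I - c*V       (N virions produced per infected cell)

def simulate(lam=10.0, rho=0.01, k=1e-5, delta=0.5, N=200.0, c=3.0,
             T0=1000.0, I0=0.0, V0=1e-3, dt=0.01, days=30):
    """Forward-Euler integration with hypothetical parameters."""
    T, I, V = T0, I0, V0
    for _ in range(int(days / dt)):
        dT = lam - rho * T - k * T * V
        dI = k * T * V - delta * I
        dV = N * delta * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
    return T, I, V

T, I, V = simulate()
print(f"after 30 days: T={T:.1f}, I={I:.4f}, V={V:.4f}")
```

With these assumed values the basic reproductive number is below 1, so the simulated infection dies out; the estimation problem the paper addresses is recovering such parameters (including a time-varying k) from noisy clinical measurements of V.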

  8. Iraq War mortality estimates: a systematic review.

    PubMed

    Tapp, Christine; Burkle, Frederick M; Wilson, Kumanan; Takaro, Tim; Guyatt, Gordon H; Amad, Hani; Mills, Edward J

    2008-03-07

    In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review the studies, reports and counts on Iraqi deaths since the start of the war and assessed their methodological quality and results. We performed a systematic search of 15 electronic databases from inception to January 2008. In addition, we conducted a non-structured search of 3 other databases, reviewed study reference lists and contacted subject matter experts. We included studies that provided estimates of Iraqi deaths based on primary research over a reported period of time since the invasion. We excluded studies that summarized mortality estimates and combined non-fatal injuries and also studies of specific sub-populations, e.g. under-5 mortality. We calculated crude and cause-specific mortality rates attributable to violence and average deaths per day for each study, where not already provided. Thirteen studies met the eligibility criteria. The studies used a wide range of methodologies, varying from sentinel-data collection to population-based surveys. Studies assessed as the highest quality, those using population-based methods, yielded the highest estimates. Average deaths per day ranged from 48 to 759. The cause-specific mortality rates attributable to violence ranged from 0.64 to 10.25 per 1,000 per year. Our review indicates that, despite varying estimates, the mortality burden of the war and its sequelae on Iraq is large. The use of established epidemiological methods is rare. 
This review illustrates the pressing need to promote sound epidemiologic approaches to determining mortality estimates and to establish guidelines for policy-makers, the media and the public on how to interpret these estimates.

  9. Observer-based sliding mode control of Markov jump systems with random sensor delays and partly unknown transition rates

    NASA Astrophysics Data System (ADS)

    Yao, Deyin; Lu, Renquan; Xu, Yong; Ren, Hongru

    2017-10-01

    In this paper, the sliding mode control problem of Markov jump systems (MJSs) with unmeasured states, partly unknown transition rates, and random sensor delays is investigated. In practical engineering control, exact information on the transition rates is hard to obtain, and the measurement channel is assumed to be subject to random sensor delays. A Luenberger observer is designed to estimate the unmeasured system state, and an integral sliding mode surface is constructed to ensure the exponential stability of the MJSs. A sliding mode controller based on the estimator is proposed to drive the system state onto the sliding mode surface and render the sliding mode dynamics exponentially mean-square stable with an H∞ performance index. Finally, simulation results are provided to illustrate the effectiveness of the proposed results.

  10. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
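
The sampling-variance effect described above can be sketched in a few lines: estimate vital rates from small binomial samples, build a projection matrix from them, and compare the mean of the resulting lambda estimates to the true lambda. The 2x2 matrix and rate values below are hypothetical, not the study's plant demography data; lambda is the dominant eigenvalue, a nonlinear function of the rates, which is where Jensen's inequality bites.

```python
import numpy as np

# Hypothetical 2-stage model: stage 1 survives (s) and either stays or
# grows into stage 2 (g); stage 2 survives with rate s and reproduces.
rng = np.random.default_rng(0)
true_survival, true_growth, fecundity = 0.5, 0.3, 2.0

def lam(s, g):
    # Dominant eigenvalue of the projection matrix [[s*(1-g), f], [s*g, s]]
    A = np.array([[s * (1 - g), fecundity], [s * g, s]])
    return max(abs(np.linalg.eigvals(A)))

true_lambda = lam(true_survival, true_growth)

for n in (10, 50, 1000):  # individuals sampled per vital rate
    est = []
    for _ in range(2000):
        s_hat = rng.binomial(n, true_survival) / n
        g_hat = rng.binomial(n, true_growth) / n
        est.append(lam(s_hat, g_hat))
    print(f"n={n:4d}  mean bias in lambda = {np.mean(est) - true_lambda:+.4f}")
```

The bias shrinks as n grows, mirroring the paper's conclusion that the effect becomes negligible at realistic sample sizes.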

  11. Human papillomavirus (HPV) vaccination coverage in young Australian women is higher than previously estimated: independent estimates from a nationally representative mobile phone survey.

    PubMed

    Brotherton, Julia M L; Liu, Bette; Donovan, Basil; Kaldor, John M; Saville, Marion

    2014-01-23

    Accurate estimates of coverage are essential for estimating the population effectiveness of human papillomavirus (HPV) vaccination. Australia has a purpose-built National HPV Vaccination Program Register for monitoring coverage; however, notification of doses administered to young women in the community during the national catch-up program (2007-2009) was not compulsory. In 2011, we undertook a population-based mobile phone survey of young women to independently estimate HPV vaccination coverage. Randomly generated mobile phone numbers were dialed to recruit women aged 22-30 (age-eligible for HPV vaccination) to complete a computer-assisted telephone interview. Consent was sought to validate self-reported HPV vaccination status against the national register. Coverage rates were calculated based on self-report and weighted to the age and state of residence structure of the Australian female population. These were compared with coverage estimates from the register using Australian Bureau of Statistics estimated resident populations as the denominator. Among the 1379 participants, the national estimate for self-reported HPV vaccination coverage for doses 1/2/3, respectively, weighted for age and state of residence, was 64/59/53%. This compares with coverage of 55/45/32% and 49/40/28% based on register records, using 2007 and 2011 population data as the denominators respectively. Some significant differences in coverage between the states were identified. 20% (223) of women returned a consent form allowing validation of doses against the register and provider records: among these women 85.6% (538) of self-reported doses were confirmed. We confirmed that coverage rates for young women vaccinated in the community (at age 18-26 years) are underestimated by the national register and that under-notification is greater for second and third doses. 
Using 2011 population estimates, rather than estimates contemporaneous with the program rollout, reduces register-based coverage estimates further because of large population increases due to immigration since the program. Copyright © 2013 Elsevier Ltd. All rights reserved.
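
The denominator effect described above is mechanical: the register's recorded doses stay fixed, so dividing by a later, larger population gives a lower coverage figure. The numbers below are hypothetical, chosen only to reproduce the abstract's dose-1 pattern (55% with the 2007 denominator vs 49% with the 2011 denominator):

```python
# Hypothetical illustration of the denominator effect on register-based
# coverage; the dose count and populations are assumed, not study data.
doses_recorded = 55_000
pop_2007 = 100_000
pop_2011 = 112_000   # assumed ~12% growth, largely from immigration

coverage_2007 = doses_recorded / pop_2007
coverage_2011 = doses_recorded / pop_2011
print(f"2007 denominator: {coverage_2007:.0%}")  # 55%
print(f"2011 denominator: {coverage_2011:.0%}")  # 49%
```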

  12. Domestic animal hosts strongly influence human-feeding rates of the Chagas disease vector Triatoma infestans in Argentina.

    PubMed

    Gürtler, Ricardo E; Cecere, María C; Vázquez-Prokopec, Gonzalo M; Ceballos, Leonardo A; Gurevitz, Juan M; Fernández, María Del Pilar; Kitron, Uriel; Cohen, Joel E

    2014-01-01

    The host species composition in a household and their relative availability affect the host-feeding choices of blood-sucking insects and parasite transmission risks. We investigated four hypotheses regarding factors that affect blood-feeding rates, proportion of human-fed bugs (human blood index), and daily human-feeding rates of Triatoma infestans, the main vector of Chagas disease. A cross-sectional survey collected triatomines in human sleeping quarters (domiciles) of 49 of 270 rural houses in northwestern Argentina. We developed an improved way of estimating the human-feeding rate of domestic T. infestans populations. We fitted generalized linear mixed-effects models to a global model with six explanatory variables (chicken blood index, dog blood index, bug stage, numbers of human residents, bug abundance, and maximum temperature during the night preceding bug catch) and three response variables (daily blood-feeding rate, human blood index, and daily human-feeding rate). Coefficients were estimated via multimodel inference with model averaging. Median blood-feeding intervals per late-stage bug were 4.1 days, with large variations among households. The main bloodmeal sources were humans (68%), chickens (22%), and dogs (9%). Blood-feeding rates decreased with increases in the chicken blood index. Both the human blood index and daily human-feeding rate decreased substantially with increasing proportions of chicken- or dog-fed bugs, or the presence of chickens indoors. Improved calculations estimated the mean daily human-feeding rate per late-stage bug at 0.231 (95% confidence interval, 0.157-0.305). Based on the changing availability of chickens in domiciles during spring-summer and the much larger infectivity of dogs compared with humans, we infer that the net effects of chickens in the presence of transmission-competent hosts may be more adequately described by zoopotentiation than by zooprophylaxis. 
Domestic animals in domiciles profoundly affect the host-feeding choices, human-vector contact rates and parasite transmission predicted by a model based on these estimates.

  13. Curriculum-Based Measurement of Oral Reading: Evaluation of Growth Estimates Derived with Pre-Post Assessment Methods

    ERIC Educational Resources Information Center

    Christ, Theodore J.; Monaghen, Barbara D.; Zopluoglu, Cengiz; Van Norman, Ethan R.

    2013-01-01

    Curriculum-based measurement of oral reading (CBM-R) is used to index the level and rate of student growth across the academic year. The method is frequently used to set student goals and monitor student progress. This study examined the diagnostic accuracy and quality of growth estimates derived from pre-post measurement using CBM-R data. A…

  14. Screening for prostate cancer: estimating the magnitude of overdetection

    PubMed Central

    McGregor, M; Hanley, J A; Boivin, J F; McLean, R G

    1998-01-01

    BACKGROUND: No randomized controlled trial of prostate cancer screening has been reported and none is likely to be completed in the near future. In the absence of direct evidence, the decision to screen must therefore be based on estimates of benefits and risks. The main risk of screening is overdetection--the detection of cancer that, if left untreated, would not cause death. In this study the authors estimate the level of overdetection that might result from annual screening of men aged 50-70. METHODS: The annual rate of lethal screen-detectable cancer (detectable cancer that would prove fatal before age 85 if left untreated) was calculated from the observed prostate cancer mortality rate in Quebec; the annual rate of all cases of screen-detectable prostate cancer was calculated from 2 recent screening studies. RESULTS: The annual rate of lethal screen-detectable prostate cancer was estimated to be 1.3 per 1000 men. The annual rate of all cases of screen-detectable prostate cancer was estimated to be 8.0 per 1000 men. The estimated case-fatality rate among men up to 85 years of age was 16% (1.3/8.0) (sensitivity analysis 13% to 22%). INTERPRETATION: Of every 100 men with screen-detected prostate cancer, only 16 on average (13 to 22) could have their lives extended by surgery, since the prostate cancer would not cause death before age 85 in the remaining 84 (78 to 87). PMID:9861205
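
The case-fatality figure above is the ratio of the two estimated annual rates, both expressed per 1000 men:

```python
# Ratio behind the abstract's 16% case-fatality estimate.
lethal_rate_per_1000 = 1.3  # screen-detectable cancers that would prove fatal
all_rate_per_1000 = 8.0     # all screen-detectable cancers
case_fatality = lethal_rate_per_1000 / all_rate_per_1000
print(f"case-fatality: {case_fatality:.0%}")  # 16%, as reported
```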

  15. Basin-scale estimates of pelagic and coral reef calcification in the Red Sea and Western Indian Ocean.

    PubMed

    Steiner, Zvi; Erez, Jonathan; Shemesh, Aldo; Yam, Ruth; Katz, Amitai; Lazar, Boaz

    2014-11-18

    Basin-scale calcification rates are highly important in assessments of the global oceanic carbon cycle. Traditionally, such estimates were based on rates of sedimentation measured with sediment traps or in deep sea cores. Here we estimated CaCO3 precipitation rates in the surface water of the Red Sea from total alkalinity depletion along their axial flow using the water flux in the straits of Bab el Mandeb. The relative contribution of coral reefs and open sea plankton were calculated by fitting a Rayleigh distillation model to the increase in the strontium to calcium ratio. We estimate the net amount of CaCO3 precipitated in the Red Sea to be 7.3 ± 0.4 × 10¹⁰ kg·y⁻¹ of which 80 ± 5% is by pelagic calcareous plankton and 20 ± 5% is by the flourishing coastal coral reefs. This estimate for pelagic calcification rate is up to 40% higher than published sedimentary CaCO3 accumulation rates for the region. The calcification rate of the Gulf of Aden was estimated by the Rayleigh model to be ∼1/2 of the Red Sea, and in the northwestern Indian Ocean, it was smaller than our detection limit. The results of this study suggest that variations of major ions on a basin scale may potentially help in assessing long-term effects of ocean acidification on carbonate deposition by marine organisms.

  16. Basin-scale estimates of pelagic and coral reef calcification in the Red Sea and Western Indian Ocean

    PubMed Central

    Steiner, Zvi; Erez, Jonathan; Shemesh, Aldo; Yam, Ruth; Katz, Amitai; Lazar, Boaz

    2014-01-01

    Basin-scale calcification rates are highly important in assessments of the global oceanic carbon cycle. Traditionally, such estimates were based on rates of sedimentation measured with sediment traps or in deep sea cores. Here we estimated CaCO3 precipitation rates in the surface water of the Red Sea from total alkalinity depletion along their axial flow using the water flux in the straits of Bab el Mandeb. The relative contribution of coral reefs and open sea plankton were calculated by fitting a Rayleigh distillation model to the increase in the strontium to calcium ratio. We estimate the net amount of CaCO3 precipitated in the Red Sea to be 7.3 ± 0.4 × 10¹⁰ kg·y⁻¹ of which 80 ± 5% is by pelagic calcareous plankton and 20 ± 5% is by the flourishing coastal coral reefs. This estimate for pelagic calcification rate is up to 40% higher than published sedimentary CaCO3 accumulation rates for the region. The calcification rate of the Gulf of Aden was estimated by the Rayleigh model to be ∼1/2 of the Red Sea, and in the northwestern Indian Ocean, it was smaller than our detection limit. The results of this study suggest that variations of major ions on a basin scale may potentially help in assessing long-term effects of ocean acidification on carbonate deposition by marine organisms. PMID:25368148

  17. Advantages and limitations of web-based surveys: evidence from a child mental health survey.

    PubMed

    Heiervang, Einar; Goodman, Robert

    2011-01-01

    Web-based surveys may have advantages related to the speed and cost of data collection, as well as data quality. However, they may be biased by low and selective participation. We predicted that such biases would distort point estimates, such as average symptom level or prevalence, but not patterns of associations with putative risk factors. A structured psychiatric interview was administered to parents in two successive surveys of child mental health. In 2003, parents were interviewed face-to-face, whereas in 2006 they completed the interview online. In both surveys, interviews were preceded by paper questionnaires covering child and family characteristics. The rate of parents logging onto the web site was comparable to the response rate for face-to-face interviews, but the rate of full response (completing all sections of the interview) was much lower for web-based interviews. Full response was less frequent for non-traditional families, immigrant parents, and less educated parents. Participation bias affected point estimates of psychopathology but had little effect on associations with putative risk factors. The time and cost of full web-based interviews were only a quarter of those for face-to-face interviews. Web-based surveys may be performed faster and at lower cost than more traditional approaches with personal interviews. Selective participation seems a particular threat to point estimates of psychopathology, while patterns of associations are more robust.

  18. Mountain goat abundance and population trends in the Olympic Mountains, northwestern Washington, 2016

    USGS Publications Warehouse

    Jenkins, Kurt J.; Happe, Patricia J.; Beirne, Katherine F.; Baccus, William T.

    2016-11-30

    Executive Summary: We estimated abundance and trends of non-native mountain goats (Oreamnos americanus) in the Olympic Mountains of northwestern Washington, based on aerial surveys conducted during July 13–24, 2016. The surveys produced the seventh population estimate since the first formal aerial surveys were conducted in 1983. This was the second population estimate since we adjusted survey area boundaries and adopted new estimation procedures in 2011. Before 2011, surveys encompassed all areas free of glacial ice at elevations above 1,520 meters (m), but in 2011 we expanded survey unit boundaries to include suitable mountain goat habitats at elevations between 1,425 and 1,520 m. In 2011, we also began applying a sightability correction model allowing us to estimate undercounting bias associated with aerial surveys and to adjust survey results accordingly. The 2016 surveys were carried out by National Park Service (NPS) personnel in Olympic National Park and by Washington Department of Fish and Wildlife (WDFW) biologists in Olympic National Forest and in the southeastern part of Olympic National Park. We surveyed a total of 59 survey units, comprising 55 percent of the 60,218-hectare survey area. We estimated a mountain goat population of 623 ±43 (standard error, SE). Based on this level of estimation uncertainty, the 95-percent confidence interval ranged from 561 to 741 mountain goats at the time of the survey. We examined the rate of increase of the mountain goat population by comparing the current population estimate to previous estimates from 2004 and 2011. Because aerial survey boundaries changed between 2004 and 2016, we recomputed population estimates for 2011 and 2016 surveys based on the revised survey boundaries as well as the previously defined boundaries so that estimates were directly comparable across years. 
Additionally, because the Mount Washington survey unit was not surveyed in 2011, we used results from an independent survey of the Mount Washington unit conducted by WDFW biologists in 2012 and combined it with the 2011 survey results to produce a complete survey conducted over 2 years. The revised estimates of mountain goat abundance occurring at elevations above 1,520 m were 230 ±19 (SE) in 2004, 350 ±41 (SE) in 2011, and 584 ±39 (SE) in 2016. The difference between the overall 2016 population estimate (623 ±43 [SE]) and the smaller estimate (584 ±39 [SE]) reflected the number of mountain goats counted in the expanded survey areas added in 2011. Based on comparisons within the standardized survey boundary, the mountain goat population in the Olympic Mountains increased at an average finite rate of 6 percent annually from 2004 to 2011, 11 percent annually from 2011 to 2016, and 8 percent annually over the combined period. We caution that the population may have been underestimated in 2011 because of record heavy snows persisting into the survey season. Therefore, the rate of population increase from 2011 to 2016 may be overestimated. The rate of increase measured over the combined period (2004–16) may be more representative of the recent population growth. We conclude that the abundance of mountain goats has increased for more than a decade, and if the recent average rate of population growth were sustained, the population would increase by 45 percent over the next 5 years.
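
The average finite (annual) rates of increase quoted above follow directly from the revised estimates via lambda = (N_t / N_0)^(1/years) - 1:

```python
# Annual growth rates implied by the revised estimates in the abstract
# (230 in 2004, 350 in 2011, 584 in 2016).
def annual_rate(n0, nt, years):
    return (nt / n0) ** (1 / years) - 1

print(f"2004-2011: {annual_rate(230, 350, 7):.0%}")   # ~6% per year
print(f"2011-2016: {annual_rate(350, 584, 5):.0%}")   # ~11% per year
print(f"2004-2016: {annual_rate(230, 584, 12):.0%}")  # ~8% per year
```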

  19. Using community-based reporting of vital events to monitor child mortality: Lessons from rural Ghana.

    PubMed

    Helleringer, Stephane; Arhinful, Daniel; Abuaku, Benjamin; Humes, Michael; Wilson, Emily; Marsh, Andrew; Clermont, Adrienne; Black, Robert E; Bryce, Jennifer; Amouzou, Agbessi

    2018-01-01

    Reducing neonatal and child mortality is a key component of the health-related sustainable development goal (SDG), but most low- and middle-income countries lack data to monitor child mortality on an annual basis. We tested a mortality monitoring system based on the continuous recording of pregnancies, births and deaths by trained community-based volunteers (CBVs). This project was implemented in 96 clusters located in three districts of the Northern Region of Ghana. CBVs were selected from these clusters and were trained in recording all pregnancies, births, and deaths among children under 5 in their catchment areas. Data collection lasted from January 2012 through September 2013. All CBVs transmitted tallies of recorded births and deaths to the Ghana Births and Deaths Registry each month, except in one of the study districts (approximately 80% reporting). Some events were reported only several months after they had occurred. We assessed the completeness and accuracy of CBV data by comparing them to retrospective full pregnancy histories (FPH) collected during a census of the same clusters conducted in October-December 2013. We conducted all analyses separately by district, as well as for the combined sample of all districts. During the 21-month implementation period, the CBVs reported a total of 2,819 births and 137 under-five deaths. Among the latter, there were 84 infant deaths (55 neonatal deaths and 29 post-neonatal deaths). Comparison of the CBV data with FPH data suggested that CBVs significantly underestimated child mortality: the estimated under-5 mortality rate according to CBV data was only 2/3 of the rate estimated from FPH data (95% Confidence Interval for the ratio of the two rates = 51.7 to 81.4). The discrepancies between the CBV and FPH estimates of infant and neonatal mortality were more limited, but varied significantly across districts. 
In northern Ghana, a community-based data collection system relying on volunteers did not yield accurate estimates of child mortality rates. Additional implementation research is needed to improve the timeliness, completeness and accuracy of such systems. Enhancing pregnancy monitoring, in particular, may be an essential step to improve the measurement of neonatal mortality.

  20. Facial motion parameter estimation and error criteria in model-based image coding

    NASA Astrophysics Data System (ADS)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit rates. However, estimating object motion parameters remains a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and the error function of contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  1. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites must estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate their angular rate from blurred star images obtained by a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization under the strict constraints faced by small satellites. The research studied the relationship between estimation accuracy and the parameters required to achieve an attitude rate estimation with a precision better than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors that use optical systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.
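
A back-of-the-envelope version of the idea: during an exposure, a star streaks across the detector by roughly (angular rate × exposure time) worth of sky, so the blur extent yields a rate estimate. All parameter values below are hypothetical, and the paper's actual estimator assesses image quality rather than just streak length:

```python
import math

# Rough rate-from-blur sketch with assumed instrument parameters.
def angular_rate_from_blur(blur_px, plate_scale_rad_per_px, exposure_s):
    """omega [rad/s] ~ blur extent [px] * plate scale [rad/px] / exposure [s]."""
    return blur_px * plate_scale_rad_per_px / exposure_s

# e.g. a 2-pixel blur, a 1 arcsec/pixel plate scale, a 10 s exposure:
arcsec = math.radians(1 / 3600)
omega = angular_rate_from_blur(2.0, arcsec, 10.0)
print(f"{omega:.2e} rad/s")  # on the order of the 1e-6 rad/s precision cited
```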

  2. Local Spatial Obesity Analysis and Estimation Using Online Social Network Sensors.

    PubMed

    Sun, Qindong; Wang, Nan; Li, Shancang; Zhou, Hongyi

    2018-03-15

    Recently, online social networks (OSNs) have received considerable attention as a revolutionary platform offering massive social interaction that enables users to be more involved in their own healthcare. The OSNs have also promoted increasing interest in the generation of analytical data models in health informatics. This paper aims at developing an obesity identification, analysis, and estimation model in which each individual user is regarded as an online social network 'sensor' that can provide valuable health information. The OSN-based obesity analytic model requires each sensor node in an OSN to provide associated features, including dietary habits, physical activity, integral/incidental emotions, and self-consciousness. Based on detailed measurements of the correlation between obesity and the proposed features, the OSN obesity analytic model is able to estimate the obesity rate in certain urban areas, and the experimental results demonstrate a high estimation success rate. The measurement and estimation findings produced by the proposed obesity analytic model show that online social networks can be used to analyze local spatial obesity problems effectively. Copyright © 2018. Published by Elsevier Inc.

  3. Generation of a new cystatin C-based estimating equation for glomerular filtration rate by use of 7 assays standardized to the international calibrator.

    PubMed

    Grubb, Anders; Horio, Masaru; Hansson, Lars-Olof; Björk, Jonas; Nyman, Ulf; Flodin, Mats; Larsson, Anders; Bökenkamp, Arend; Yasuda, Yoshinari; Blufpand, Hester; Lindström, Veronica; Zegers, Ingrid; Althaus, Harald; Blirup-Jensen, Søren; Itoh, Yoshi; Sjöström, Per; Nordin, Gunnar; Christensson, Anders; Klima, Horst; Sunde, Kathrin; Hjort-Christensen, Per; Armbruster, David; Ferrero, Carlo

    2014-07-01

    Many different cystatin C-based equations exist for estimating glomerular filtration rate. Major reasons for this are the previous lack of an international cystatin C calibrator and the nonequivalence of results from different cystatin C assays. Use of the recently introduced certified reference material, ERM-DA471/IFCC, and further work to achieve high agreement and equivalence of 7 commercially available cystatin C assays allowed a substantial decrease of the CV of the assays, as defined by their performance in an external quality assessment for clinical laboratory investigations. By use of 2 of these assays and a population of 4690 subjects, with large subpopulations of children and Asian and Caucasian adults, with their GFR determined by either renal or plasma inulin clearance or plasma iohexol clearance, we attempted to produce a virtually assay-independent simple cystatin C-based equation for estimation of GFR. We developed a simple cystatin C-based equation for estimation of GFR comprising only 2 variables, cystatin C concentration and age. No terms for race and sex are required for optimal diagnostic performance. The equation, [Formula: see text] is also biologically oriented, with 1 term for the theoretical renal clearance of small molecules and 1 constant for extrarenal clearance of cystatin C. A virtually assay-independent simple cystatin C-based and biologically oriented equation for estimation of GFR, without terms for sex and race, was produced. © 2014 The American Association for Clinical Chemistry.

  4. Assessing fire emissions from tropical savanna and forests of central Brazil

    NASA Technical Reports Server (NTRS)

    Riggan, Philip J.; Brass, James A.; Lockwood, Robert N.

    1993-01-01

    Wildfires in tropical forest and savanna are a strong source of trace gas and particulate emissions to the atmosphere, but estimates of the continental-scale impacts are limited by large uncertainties in the rates of fire occurrence and biomass combustion. Satellite-based remote sensing offers promise for characterizing fire physical properties and impacts on the environment, but currently available sensors saturate over high-radiance targets and provide only indications of regions and times at which fires are extensive and their areal rates of growth. Here we describe an approach combining satellite- and aircraft-based remote sensing with in situ measurements of smoke to estimate emissions from central Brazil. These estimates will improve global accounting of radiation-absorbing gases and particulates that may be contributing to climate change and will provide strategic data for fire management.

  5. Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter

    PubMed Central

    Liu, Xiaozhen; Frey, H. Christopher

    2012-01-01

    A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of near-road PM2.5 concentration and an in-vehicle mass balance. Near-road PM2.5 concentration is estimated using a dispersion model and fixed site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, windspeed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementation of the alternative approach. PMID:23101000
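    The in-vehicle mass-balance idea can be illustrated with a steady-state sketch; this is not the SHEDS-PM implementation, and all parameter names and values are assumptions:

```python
def in_vehicle_pm25(c_near_road, aer_per_h, filter_eff, k_dep_per_h):
    """Steady-state mass-balance estimate of in-cabin PM2.5 (ug/m3).

    c_near_road : near-road ambient concentration (ug/m3)
    aer_per_h   : cabin air exchange rate (1/h), varies with ventilation setting
    filter_eff  : ventilation filter removal efficiency (0..1)
    k_dep_per_h : in-cabin particle deposition rate (1/h)

    At steady state, inflow of filtered outdoor air balances air exchange
    plus deposition losses.
    """
    penetration = 1.0 - filter_eff
    return aer_per_h * penetration * c_near_road / (aer_per_h + k_dep_per_h)

# Hypothetical scenario: 30 ug/m3 near-road, high air exchange, 50% filter
c_in = in_vehicle_pm25(c_near_road=30.0, aer_per_h=20.0, filter_eff=0.5, k_dep_per_h=1.0)
```

    The sketch captures why in-cabin concentration depends on air exchange rate and filter efficiency rather than tracking area-wide ambient PM2.5 linearly.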

  6. Estimation of body temperature rhythm based on heart activity parameters in daily life.

    PubMed

    Sooyoung Sim; Heenam Yoon; Hosuk Ryou; Kwangsuk Park

    2014-01-01

    Body temperature contains valuable health-related information such as circadian rhythm and the menstrual cycle. Previous studies have also shown that the body temperature rhythm in daily life is related to sleep disorders and cognitive performance. However, monitoring body temperature during daily life with existing devices is not easy because they are invasive, intrusive, or expensive. Therefore, technology that can accurately and nonintrusively monitor body temperature is required. In this study, we developed a body temperature estimation model based on heart rate and heart rate variability parameters. Although this work was inspired by previous research, we originally identified that the model can be applied to body temperature monitoring in daily life. We also found that normalized mean heart rate (nMHR) and frequency-domain parameters of heart rate variability performed better than other parameters. Although the model should be validated with a larger number of subjects, and additional algorithms should be considered to decrease the accumulated estimation error, we verified the usefulness of this approach. Through this study, we expect to be able to monitor core body temperature and circadian rhythm from a simple heart rate monitor, and thereby obtain various health-related information derived from the daily body temperature rhythm.
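    A minimal sketch of the kind of regression model described, fitted on synthetic data; the paper's actual features, coefficients, and model form may differ:

```python
import numpy as np

# Synthetic data: temperature modeled as a linear function of nMHR and one
# HRV frequency-domain feature (LF/HF ratio), plus noise. Coefficients are
# invented for illustration only.
rng = np.random.default_rng(0)
n = 200
nmhr = rng.normal(1.0, 0.1, n)    # normalized mean heart rate
lf_hf = rng.normal(1.5, 0.3, n)   # an HRV frequency-domain feature
temp = 36.5 + 0.8 * nmhr - 0.2 * lf_hf + rng.normal(0.0, 0.05, n)

# Ordinary least squares: T ~ w0 + w1*nMHR + w2*LF/HF
X = np.column_stack([np.ones(n), nmhr, lf_hf])
w, *_ = np.linalg.lstsq(X, temp, rcond=None)
pred = X @ w
rmse = float(np.sqrt(np.mean((pred - temp) ** 2)))
```

    On this synthetic data the fit recovers the generating coefficients closely; real heart-rate data would add drift and motion artifacts that the accumulated-error caveat in the abstract refers to.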

  7. Estimating tissue-specific discrimination factors and turnover rates of stable isotopes of nitrogen and carbon in the smallnose fanskate Sympterygia bonapartii (Rajidae).

    PubMed

    Galván, D E; Jañez, J; Irigoyen, A J

    2016-08-01

    This study aimed to estimate trophic discrimination factors (TDFs) and metabolic turnover rates of nitrogen and carbon stable isotopes in blood and muscle of the smallnose fanskate Sympterygia bonapartii by feeding six adult individuals, maintained in captivity, a constant diet for 365 days. TDFs were estimated as the difference between δ13C or δ15N values of the food and the tissues of S. bonapartii after they had reached equilibrium with their diet. The duration of the experiment was enough to reach the equilibrium condition in blood for both elements (estimated time to reach 95% of turnover: C t95%blood = 150 days, N t95%blood = 290 days), whilst turnover rates could not be estimated for muscle because of variation among samples. Estimates of Δ13C and Δ15N values in blood and muscle using all individuals were Δ13Cblood = 1.7‰, Δ13Cmuscle = 1.3‰, Δ15Nblood = 2.5‰ and Δ15Nmuscle = 1.5‰, but there was evidence of differences of c. 0.4‰ in the Δ13C values between sexes. The present values for TDFs and turnover rates constitute the first evidence for dietary switching in batoids based on long-term controlled feeding experiments. Overall, the results showed that S. bonapartii has relatively low turnover rates and isotopic measurements would not track seasonal movements adequately. The estimated Δ13C values in S. bonapartii blood and muscle were similar to previous estimations for elasmobranchs and to generally accepted values in bony fishes (Δ13C = 1.5‰). For Δ15N, the results were similar to published reports for blood but smaller than reports for muscle and notably smaller than the typical values used to estimate trophic position (Δ15N c. 3.4‰). Thus, trophic position estimations for elasmobranchs based on typical Δ15N values could lead to underestimates of actual trophic positions. Finally, the evidence of differences in TDFs between sexes reveals a need for more targeted research. © 2016 The Fisheries Society of the British Isles.

  8. Phylogenetic estimates of diversification rate are affected by molecular rate variation.

    PubMed

    Duchêne, D A; Hua, X; Bromham, L

    2017-10-01

    Molecular phylogenies are increasingly being used to investigate the patterns and mechanisms of macroevolution. In particular, node heights in a phylogeny can be used to detect changes in rates of diversification over time. Such analyses rest on the assumption that node heights in a phylogeny represent the timing of diversification events, which in turn rests on the assumption that evolutionary time can be accurately predicted from DNA sequence divergence. But there are many influences on the rate of molecular evolution, which might also influence node heights in molecular phylogenies, and thus affect estimates of diversification rate. In particular, a growing number of studies have revealed an association between the net diversification rate estimated from phylogenies and the rate of molecular evolution. Such an association might, by influencing the relative position of node heights, systematically bias estimates of diversification time. We simulated the evolution of DNA sequences under several scenarios where rates of diversification and molecular evolution vary through time, including models where diversification and molecular evolutionary rates are linked. We show that commonly used methods, including metric-based, likelihood and Bayesian approaches, can have low power to identify changes in diversification rate when molecular substitution rates vary. Furthermore, the association between speciation rates and molecular evolutionary rates can cause the signature of a slowdown or speedup in speciation rates to be lost or misidentified. These results suggest that the multiple sources of variation in molecular evolutionary rates need to be considered when inferring macroevolutionary processes from phylogenies. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  9. Person centered prediction of survival in population based screening program by an intelligent clinical decision support system.

    PubMed

    Safdari, Reza; Maserat, Elham; Asadzadeh Aghdaei, Hamid; Javan Amoli, Amir Hossein; Mohaghegh Shalmani, Hamid

    2017-01-01

    This study surveys person-centered survival rates in a population-based screening program using an intelligent clinical decision support system. Colorectal cancer is the most common malignancy and a major cause of morbidity and mortality throughout the world; it is the sixth leading cause of cancer death in Iran. In this survey, we used cosine similarity as a data mining technique in an intelligent system for estimating the survival of at-risk groups in the screening plan. In the first step, we determined a minimum data set (MDS), approved by experts and a literature review. In the second step, the MDS was coded in Python and matched with the cosine similarity formula. Finally, the survival rate, as a percentage, was displayed in the user interface of the national intelligent system. The national intelligent system was designed in the PyCharm environment. The main data elements of the intelligent system consist of demographic information, age, referral type, risk group, recommendation and survival rate. The minimum data set related to survival comprises clinical status, past medical history and socio-demographic information. Information on the covered population, as a comprehensive database, was connected to the intelligent system, and the survival rate was estimated for each patient. The mean survival rates of HNPCC and FAP patients were 77.7% and 75.1%, respectively. The mean survival rate and other calculations are updated in real time as new patients enter the CRC registry. The national intelligent system monitors the entire at-risk group and reports survival rates through electronic guidelines and data mining techniques, operating according to the clinical process. This web-based software has a critical role in survival rate estimation for health care planning.
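    The similarity measure the abstract names can be sketched as follows; the feature coding shown is hypothetical, not the system's actual MDS encoding:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two coded patient-feature vectors:
    dot(a, b) / (|a| * |b|), in [-1, 1] (in [0, 1] for non-negative codes)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical coded vectors: [age band, risk group, referral type, history flags...]
new_patient = [3, 1, 2, 0, 1]
registry_patient = [3, 1, 1, 0, 1]
sim = cosine_similarity(new_patient, registry_patient)
```

    A new patient's survival estimate can then be taken from the most similar registry patients, which matches the abstract's description of matching coded MDS records against the cosine similarity formula.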

  10. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
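    A toy simulation of the encounter-rate idea, with walkers on a torus grid; parameters are illustrative, and this sketch does not reproduce the paper's analysis or bounds:

```python
import random
from collections import Counter

def estimate_density(n_agents, side, steps, seed=0):
    """Random walkers on a side x side torus estimate density (agents/cell)
    from their per-agent, per-step encounter rate: with uniformly placed
    independent walkers, each agent co-locates with each other agent with
    probability 1/side**2 per step, so the rate approaches (n-1)/side**2."""
    rng = random.Random(seed)
    pos = [(rng.randrange(side), rng.randrange(side)) for _ in range(n_agents)]
    encounters = 0
    for _ in range(steps):
        new_pos = []
        for (x, y) in pos:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            new_pos.append(((x + dx) % side, (y + dy) % side))
        pos = new_pos
        counts = Counter(pos)
        # each co-located pair is counted once per participating agent
        encounters += sum(c * (c - 1) for c in counts.values())
    return encounters / (n_agents * steps)

est = estimate_density(n_agents=200, side=20, steps=500)
true_density = 200 / (20 * 20)  # 0.5 agents per cell
```

    The dependencies the abstract highlights are visible here too: co-located agents tend to collide again on subsequent steps, which inflates the variance of the estimate without biasing its mean.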

  11. Recharge and groundwater models: An overview

    USGS Publications Warehouse

    Sanford, W.

    2002-01-01

    Recharge is a fundamental component of groundwater systems, and in groundwater-modeling exercises recharge is either measured and specified or estimated during model calibration. The most appropriate way to represent recharge in a groundwater model depends upon both physical factors and study objectives. Where the water table is close to the land surface, as in humid climates or regions with low topographic relief, a constant-head boundary condition is used. Conversely, where the water table is relatively deep, as in drier climates or regions with high relief, a specified-flux boundary condition is used. In most modeling applications, mixed-type conditions are more effective, or a combination of the different types can be used. The relative distribution of recharge can be estimated from water-level data only, but flux observations must be incorporated in order to estimate rates of recharge. Flux measurements are based on either Darcian velocities (e.g., stream base-flow) or seepage velocities (e.g., groundwater age). In order to estimate the effective porosity independently, both types of flux measurements must be available. Recharge is often estimated more efficiently when automated inverse techniques are used. Other important applications are the delineation of areas contributing recharge to wells and the estimation of paleorecharge rates using carbon-14.
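    The independent porosity estimate mentioned above follows from combining the two flux types; a minimal sketch with hypothetical values:

```python
def effective_porosity(darcian_flux, seepage_velocity):
    """n_e = q / v: a Darcian flux q (e.g. from stream base-flow analysis)
    divided by a seepage velocity v (e.g. from a groundwater-age gradient
    along a flow path) yields the effective porosity, since v = q / n_e."""
    return darcian_flux / seepage_velocity

# Hypothetical values, both in m/yr
n_e = effective_porosity(darcian_flux=0.3, seepage_velocity=1.5)  # -> 0.2
```

    This is why the abstract states that both types of flux measurements must be available to estimate effective porosity independently: either one alone leaves the ratio undetermined.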

  12. A digital clock recovery algorithm based on chromatic dispersion and polarization mode dispersion feedback dual phase detection for coherent optical transmission systems

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Wang, Fu; Zhang, Qi

    2018-02-01

    A new feedback symbol timing recovery technique using timing estimation joint equalization is proposed for digital receivers with a sampling rate of two samples/symbol or higher. Unlike traditional methods, the clock recovery algorithm in this paper adopts an additional algorithm that distinguishes the phases of adjacent symbols, so as to accurately estimate the timing offset based on adjacent signals with the same phase. The addition of a module for eliminating phase modulation interference before timing estimation further reduces the variance, resulting in a smoother timing estimate. The mean square error (MSE) and bit error rate (BER) of the resulting timing estimate are simulated, showing satisfactory estimation performance. The obtained clock tone performance is satisfactory for MQAM modulation formats with a roll-off factor (ROF) close to 0. In the back-to-back system, when ROF = 0, the maximum MSE obtained with the proposed approach reaches 0.0125. After 100-km fiber transmission, the BER decreases to 10^-3 with ROF = 0 and OSNR = 11 dB. As the ROF increases, the MSE and BER performance improves.
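    The paper's specific phase-detection algorithm is not reproduced here; as a generic stand-in illustrating feedback timing-error detection at two samples/symbol, a classic Gardner detector can be sketched as:

```python
import numpy as np

def gardner_ted(samples):
    """Gardner timing-error detector for a baseband signal sampled at
    2 samples/symbol: e[k] = (y[2k] - y[2k-2]) * y[2k-1]. The sign of the
    averaged error tells a feedback loop whether the sampling phase is
    early or late; it needs no carrier-phase knowledge."""
    y = np.asarray(samples, dtype=float)
    return (y[2::2] - y[:-2:2]) * y[1:-1:2]

# Perfectly timed alternating symbols: symbol peaks at even samples and
# zero crossings at odd samples, so every timing-error estimate is zero.
errors = gardner_ted([1, 0, -1, 0, 1, 0, -1, 0, 1])
```

    A mistimed sampling phase shifts the mid-symbol sample off the zero crossing, producing a nonzero error that the loop drives back to zero.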

  13. Rain attenuation studies from radiometric and rain DSD measurements at two tropical locations

    NASA Astrophysics Data System (ADS)

    Halder, Tuhina; Adhikari, Arpita; Maitra, Animesh

    2018-05-01

    Efficient use of satellite communication in tropical regions demands proper characterization of rain attenuation, particularly in view of the available popular propagation models, which are mostly based on temperate climatic data. Thus rain attenuations at frequencies of 22.234, 23.834 and 31.4/30 GHz over two tropical locations, Kolkata (22.57°N, 88.36°E, India) and Belem (1.45°S, 48.49°W, Brazil), have been estimated for the years 2010 and 2011, respectively. The estimation has been done utilizing ground-based disdrometer observations and radiometric measurements over an Earth-space path. The results show that rain attenuation estimates from radiometric data are reliable only at low rain rates (<30 mm/h). However, the rain attenuation estimates from disdrometer measurements show good agreement with the ITU-R model, even at high rain rates (up to 100 mm/h). Despite significant variability in terms of drop size distribution (DSD), the attenuation values calculated from DSD data (disdrometer measurements) at Kolkata and Belem differ little for rain rates below 30 mm/h. However, the attenuation values obtained from radiometric measurements at the two places show significant deviations, ranging from 0.54 dB to 3.2 dB up to a rain rate of 30 mm/h, on account of different rain heights, mean atmospheric temperatures and climatology of the two locations.
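    Attenuation from rain-rate data is commonly computed with an ITU-R-style power law; the coefficient values below are placeholders, not the recommendation's tabulated frequency- and polarization-specific values:

```python
def specific_attenuation_db_per_km(rain_rate_mm_h, k, alpha):
    """ITU-R P.838-style power law: gamma = k * R**alpha (dB/km).
    k and alpha depend on frequency and polarization; the values passed
    below are illustrative placeholders only."""
    return k * rain_rate_mm_h ** alpha

# Hypothetical coefficients for some Ka-band-like link
gamma = specific_attenuation_db_per_km(30.0, k=0.1, alpha=1.0)  # dB/km at 30 mm/h
path_att_db = gamma * 2.5  # assumed 2.5 km effective slant path through rain
```

    Integrating such a power law over the measured DSD-derived rain rates and an effective path length is the usual route from disdrometer data to the path attenuation values compared against the ITU-R model in the abstract.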

  14. Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.

    PubMed

    Das, Anup; Pradhapan, Paruthi; Groenendaal, Willemijn; Adiraju, Prathyusha; Rajan, Raj Thilak; Catthoor, Francky; Schaafsma, Siebren; Krichmar, Jeffrey L; Dutt, Nikil; Van Hoof, Chris

    2018-03-01

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, leading to extended battery life of wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations

    DOE PAGES

    Cooper, Steven J.; Wood, Norman B.; L'Ecuyer, Tristan S.

    2017-07-20

    Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100–200% for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a -18% difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36% for the individual events. The use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -64 to +122% for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. Furthermore, accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.
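    The non-uniqueness described in the opening sentences can be illustrated with two hypothetical reflectivity-snowfall power laws; the coefficients are invented for illustration and are not the study's retrieval:

```python
def snowfall_from_reflectivity(ze_mm6_m3, a, b):
    """Invert a Ze = a * S**b power law for snowfall rate S (mm/h).
    Different microphysical assumptions (habit, fall speed, PSD) correspond
    to different (a, b) pairs."""
    return (ze_mm6_m3 / a) ** (1.0 / b)

ze = 100.0  # one fixed "measured" reflectivity (mm^6/m^3)
s1 = snowfall_from_reflectivity(ze, a=56.0, b=1.2)    # one microphysics assumption
s2 = snowfall_from_reflectivity(ze, a=313.0, b=1.85)  # a different assumption
spread = abs(s1 - s2) / min(s1, s2)  # relative spread between retrievals
```

    The same reflectivity yields snowfall rates differing by well over 100% here, which is the scale of ambiguity that the MASC fall-speed, PSD, and habit observations are used to constrain.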

  17. Use of short-term breath measures to estimate daily methane production by cattle.

    PubMed

    Velazco, J I; Mayer, D G; Zimmerman, S; Hegarty, R S

    2016-01-01

    Methods to measure enteric methane (CH4) emissions from individual ruminants in their production environment are required to validate emission inventories and verify mitigation claims. Estimates of daily methane production (DMP) based on consolidated short-term emission measurements are developing, but method verification is required. Two cattle experiments were undertaken to test the hypothesis that DMP estimated by averaging multiple short-term breath measures of methane emission rate did not differ from DMP measured in respiration chambers (RC). Short-term emission rates were obtained from a GreenFeed Emissions Monitoring (GEM) unit, which measured emission rate while cattle consumed a dispensed supplement. In experiment 1 (Expt. 1), four non-lactating cattle (LW=518 kg) were adapted for 18 days then measured for six consecutive periods. Each period consisted of 2 days of ad libitum intake and GEM emission measurement followed by 1 day in the RC. A prototype GEM unit releasing water as an attractant (GEM water) was also evaluated in Expt. 1. Experiment 2 (Expt. 2) was a larger study of similar design with 10 cattle (LW=365 kg), adapted for 21 days, and GEM measurement was extended to 3 days in each of the six periods. In Expt. 1, there was no difference in DMP estimated by the GEM unit relative to the RC (209.7 v. 215.1 g CH4/day) and no difference between these methods in methane yield (MY, 22.7 v. 23.7 g CH4/kg of dry matter intake, DMI). In Expt. 2, the correlations between GEM and RC measures of DMP and MY were assessed using 95% confidence intervals, with no difference in DMP or MY between methods and high correlations between GEM and RC measures for DMP (r=0.85; 215 v. 198 g CH4/day, SEM=3.0) and for MY (r=0.60; 23.8 v. 22.1 g CH4/kg DMI, SEM=0.42). When data from both experiments were combined, neither DMP nor MY differed between GEM- and RC-based measures (P>0.05). GEM water-based estimates of DMP and MY were lower than RC and GEM (P<0.05). Cattle accessed the GEM water unit with similar frequency to the GEM unit (2.8 v. 3.5 times/day, respectively), but eructation frequency was reduced from 1.31 times/min (GEM) to once every 2.6 min (GEM water). These studies confirm the hypothesis that DMP estimated by averaging multiple short-term breath measures of methane emission rate using GEM does not differ from measures of DMP obtained from RCs. Further, combining many short-term measures of methane production rate during supplement consumption provides an estimate of DMP which can be usefully applied in estimating MY.
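    The averaging approach tested here amounts to a simple calculation; the spot rates and intake below are hypothetical values, not data from the experiments:

```python
def daily_methane_production(spot_rates_g_per_day):
    """Estimate DMP (g CH4/day) as the mean of many short-term spot
    emission rates, each already expressed on a per-day basis."""
    return sum(spot_rates_g_per_day) / len(spot_rates_g_per_day)

def methane_yield(dmp_g_per_day, dmi_kg_per_day):
    """Methane yield MY (g CH4/kg DMI) = DMP / dry matter intake."""
    return dmp_g_per_day / dmi_kg_per_day

# Hypothetical short-term GEM-style spot measurements across visits
spots = [190.0, 220.0, 205.0, 230.0, 210.0]
dmp = daily_methane_production(spots)          # g CH4/day
my = methane_yield(dmp, dmi_kg_per_day=9.0)    # g CH4/kg DMI
```

    The experiments' point is that this simple average over enough supplement-visit measurements converges to the respiration-chamber DMP, making the cheap field measurement a valid substitute.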

  18. Bias in Estimation of Hippocampal Atrophy using Deformation-Based Morphometry Arises from Asymmetric Global Normalization: An Illustration in ADNI 3 Tesla MRI Data

    PubMed Central

    Yushkevich, Paul A.; Avants, Brian B.; Das, Sandhitsu R.; Pluta, John; Altinay, Murat; Craige, Caryne

    2009-01-01

    Measurement of brain change due to neurodegenerative disease and treatment is one of the fundamental tasks of neuroimaging. Deformation-based morphometry (DBM) has been long recognized as an effective and sensitive tool for estimating the change in the volume of brain regions over time. This paper demonstrates that a straightforward application of DBM to estimate the change in the volume of the hippocampus can result in substantial bias, i.e., an overestimation of the rate of change in hippocampal volume. In ADNI data, this bias is manifested as a non-zero intercept of the regression line fitted to the 6 and 12 month rates of hippocampal atrophy. The bias is further confirmed by applying DBM to repeat scans of subjects acquired on the same day. This bias appears to be the result of asymmetry in the interpolation of baseline and followup images during longitudinal image registration. Correcting this asymmetry leads to bias-free atrophy estimation. PMID:20005963

  19. Estimation of the Thickness and Emulsion Rate of Oil Spilled at Sea Using Hyperspectral Remote Sensing Imagery in the SWIR Domain

    NASA Astrophysics Data System (ADS)

    Sicot, G.; Lennon, M.; Miegebielle, V.; Dubucq, D.

    2015-08-01

The thickness and the emulsion rate of an oil spill are two key parameters for designing a tailored response to an oil discharge. If estimated on a per-pixel basis at high spatial resolution, the oil thickness allows the volume of pollutant to be estimated; that volume is needed to evaluate the magnitude of the pollution and to determine the most appropriate recovery means. The estimated spatial distribution of thicknesses also allows recovery means to be guided at sea. The emulsion rate can guide the strategy for dealing with an offshore oil spill: the efficiency of dispersants, for example, differs between pure oil and an emulsion. Moreover, thickness and emulsion rate together allow the amount of oil that has been discharged to be estimated. The shape of the reflectance spectrum of oil in the SWIR range (1000-2500 nm) varies with both the emulsion rate and the layer thickness. That shape still varies when the oil layer reaches a few millimetres, which is not the case in the visible range (400-700 nm), where the spectral variation saturates around 200 μm (the upper limit of the Bonn Agreement oil appearance code). In that context, hyperspectral imagery in the SWIR range shows high potential for describing and characterizing oil spills. Previous methods for estimating these two parameters rely on a spectral library. In this paper, we present a method based on the inversion of a simple radiative transfer model of the oil layer. We show that the proposed method is robust against another parameter that affects the reflectance spectrum: the size of water droplets in the emulsion. On laboratory measurements, the method yields results equivalent to those obtained with spectral-library-based methods. 
The method has the advantage of removing the need for a spectral library and of providing per-pixel maps of thickness and emulsion rate. The resulting maps are not composed of regions of thickness ranges, such as those obtained using discretized measurement levels in a spectral library or maps made from visual observations following the Bonn Agreement oil appearance code.
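The inversion described above can be illustrated with a toy example: a hypothetical forward model maps thickness and emulsion rate to a SWIR reflectance spectrum, and a grid search recovers the pair minimizing the spectral residual. The forward model below is purely illustrative (it is not the paper's radiative transfer model), as are all parameter values.

```python
import numpy as np

def forward_reflectance(thickness_mm, emulsion_rate, wl_nm):
    """Illustrative forward model: optical depth mixes an 'oil' and a 'water'
    absorption spectrum according to the emulsion rate, scaled by thickness."""
    abs_oil = 0.3 + 0.2 * np.sin(wl_nm / 300.0)
    abs_water = 0.8 + 0.3 * np.cos(wl_nm / 500.0)
    depth = thickness_mm * ((1 - emulsion_rate) * abs_oil + emulsion_rate * abs_water)
    return 0.6 * np.exp(-depth) + 0.05        # residual background reflectance

def invert(spectrum, wl_nm):
    """Grid-search inversion for (thickness, emulsion rate)."""
    best, best_err = None, np.inf
    for t in np.linspace(0.1, 5.0, 50):       # thickness, mm
        for e in np.linspace(0.0, 0.9, 46):   # emulsion rate (water fraction)
            err = np.sum((forward_reflectance(t, e, wl_nm) - spectrum) ** 2)
            if err < best_err:
                best, best_err = (t, e), err
    return best

wl = np.linspace(1000, 2500, 151)             # SWIR band, nm
t_hat, e_hat = invert(forward_reflectance(2.0, 0.3, wl), wl)
```

Because the two absorption shapes are linearly independent, thickness and emulsion rate are jointly identifiable here; a real inversion would replace the grid search with a proper optimizer.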

  20. Estimation of Blood Flow Rates in Large Microvascular Networks

    PubMed Central

    Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.

    2012-01-01

    Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
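The indeterminacy-resolving step can be sketched as an equality-constrained quadratic program: the flows must satisfy conservation and any known boundary conditions exactly while deviating as little as possible from target values. The study minimizes deviations of pressures and wall shear stresses; for brevity, the sketch below penalizes deviation of the flows themselves, and the tiny network, targets, and weights are hypothetical.

```python
import numpy as np

def estimate_flows(A, b, x_target, w=None):
    """Solve min 0.5*(x - x_target)' W (x - x_target)  s.t.  A x = b
    via the KKT system; A encodes mass conservation and known boundary flows."""
    n = A.shape[1]
    W = np.diag(w if w is not None else np.ones(n))
    K = np.block([[W, A.T], [A, np.zeros((A.shape[0], A.shape[0]))]])
    rhs = np.concatenate([W @ x_target, b])
    return np.linalg.lstsq(K, rhs, rcond=None)[0][:n]

# Tiny network: one known inflow (segment 0) splitting into two unknown outflows.
A = np.array([[1.0, -1.0, -1.0],    # conservation at the bifurcation node
              [1.0,  0.0,  0.0]])   # known boundary inflow
b = np.array([0.0, 2.0])
flows = estimate_flows(A, b, x_target=np.zeros(3))
```

With zero targets this reduces to the minimum-norm flow solution: the inflow of 2 splits evenly into the two outflow segments.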

  1. Quantifying periglacial erosion: Insights on a glacial sediment budget, Matanuska Glacier, Alaska

    USGS Publications Warehouse

    O'Farrell, C. R.; Heimsath, A.M.; Lawson, D.E.; Jorgensen, L.M.; Evenson, E.B.; Larson, G.; Denner, J.

    2009-01-01

    Glacial erosion rates are estimated to be among the highest in the world. Few studies have attempted, however, to quantify the flux of sediment from the periglacial landscape to a glacier. Here, erosion rates from the nonglacial landscape above the Matanuska Glacier, Alaska are presented and compare with an 8-yr record of proglacial suspended sediment yield. Non-glacial lowering rates range from 1??8 ?? 0??5 mm yr-1 to 8??5 ?? 3??4 mm yr-1 from estimates of rock fall and debris-flow fan volumes. An average erosion rate of 0??08 ?? 0??04 mm yr-1 from eight convex-up ridge crests was determined using in situ produced cosmogenic 10Be. Extrapolating these rates, based on landscape morphometry, to the Matanuska basin (58% ice-cover), it was found that nonglacial processes account for an annual sediment flux of 2??3 ?? 1??0 ?? 106 t. Suspended sediment data for 8 years and an assumed bedload to estimate the annual sediment yield at the Matanuska terminus to be 2??9 ?? 1??0 ?? 106 t, corresponding to an erosion rate of 1??8 ?? 0??6 mm yr-1: nonglacial sources therefore account for 80 ?? 45% of the proglacial yield. A similar set of analyses were used for a small tributary sub-basin (32% ice-cover) to determine an erosion rate of 12??1 ?? 6??9 mm yr-1, based on proglacial sediment yield, with the nonglacial sediment flux equal to 10 ?? 7% of the proglacial yield. It is suggested that erosion rates by nonglacial processes are similar to inferred subglacial rates, such that the ice-free regions of a glaciated landscape contribute significantly to the glacial sediment budget. The similar magnitude of nonglacial and glacial rates implies that partially glaciated landscapes will respond rapidly to changes in climate and base level through a rapid nonglacial response to glacially driven incision. ?? 2009 John Wiley & Sons, Ltd.

  2. Angular-Rate Estimation Using Delayed Quaternion Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, I. Y.; Harman, R. R.

    1999-01-01

This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: one uses differentiated quaternion measurements to yield coarse rate measurements, which are then fed into two different estimators; in the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear part of the rotational dynamics equation of a rigid body into a product of an angular-rate-dependent matrix and the angular-rate vector itself. This non-unique decomposition enables the treatment of the nonlinear spacecraft (SC) dynamics model as a linear one and, thus, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the solution of the State Dependent Algebraic Riccati Equation (SDARE) to compute the gain matrix, and thus eliminates the need to compute the filter covariance matrix recursively. The replacement of the rotational dynamics by a simple Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data are used to test these algorithms, and results are presented.
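The decomposition underlying PSELIKA can be checked numerically for the torque-free rigid-body case: Euler's equation ω̇ = J⁻¹(−ω × Jω) can be rewritten as ω̇ = F(ω)ω with, for instance, F(ω) = −J⁻¹[ω]×J. This is one of many valid choices, since the decomposition is not unique; the inertia and rate values below are illustrative.

```python
import numpy as np

def skew(w):
    """Cross-product matrix [w]x such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

J = np.diag([10.0, 12.0, 15.0])          # illustrative inertia matrix, kg m^2
Jinv = np.linalg.inv(J)
w = np.array([0.02, -0.01, 0.03])        # angular rate, rad/s

# Nonlinear torque-free Euler dynamics: w_dot = J^-1 (-w x (J w))
w_dot_nonlinear = Jinv @ (-np.cross(w, J @ w))

# One (non-unique) pseudo-linear decomposition: w_dot = F(w) w
F = -Jinv @ skew(w) @ J
w_dot_pseudolinear = F @ w
```

Treating F(ω) as a known matrix at each filter step is what lets a linear Kalman gain be applied to the nonlinear dynamics.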

  3. Parent ratings of working memory are uniquely related to performance-based measures of secondary memory but not primary memory.

    PubMed

    Ralph, Kathryn J; Gibson, Bradley S; Gondoli, Dawn M

    2018-03-06

Existing evidence suggests that performance- and rating-based measures of working memory (WM) correlate poorly. Although some researchers have interpreted this evidence as suggesting that these measures may be assessing distinct cognitive constructs, another possibility is that rating-based measures are related to some but not all theoretically motivated performance-based measures. The current study distinguished between performance-based measures of primary memory (PM) and secondary memory (SM), and examined the relation between each of these components of WM and parent-ratings on the WM subscale of the Behavior Rating Inventory of Executive Function (BRIEF-WM). Because SM and BRIEF-WM scores have both been associated with group differences in attention-deficit/hyperactivity disorder (ADHD), it was hypothesized that SM scores would be uniquely related to parent-rated BRIEF-WM scores. Participants were a sample of 77 adolescents with and without an ADHD diagnosis, aged 11 to 15 years, from a midwestern school district. Participant scores on verbal and spatial immediate free recall tasks were used to estimate both PM and SM capacities. Partial correlation analyses were used to evaluate the extent to which estimates of PM and SM were uniquely related to parent-rated BRIEF-WM scores. Both verbal and spatial SM scores were significantly related to parent-rated BRIEF-WM scores, when corresponding PM scores were controlled. Higher verbal and spatial SM scores were associated with less frequent parent-report of WM-related failures in their child's everyday life. However, neither verbal nor spatial PM scores were significantly related to parent-rated BRIEF-WM scores, when corresponding SM scores were controlled. The current study suggested that previously observed low correlations between performance- and rating-based measures of WM may result from use of performance-based WM measures that do not capture the unique contributions of PM and SM components of WM.

  4. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    PubMed Central

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza–associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and use increased from <10% during 2003–2008 to ≈70% during 2009–2013. Observed hospitalization rates per 100,000 persons varied by season: 7.3–50.5 for children <18 years of age, 3.0–30.3 for adults 18–64 years, and 13.6–181.8 for adults >65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children <18 years, ≈20% higher for adults 18–64 years, and ≈55% higher for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
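The adjustment has a simple core: if only a fraction of true cases test positive, the observed rate is divided by the effective sensitivity of the test mix. A sketch with made-up test shares and sensitivities (the study derived its multipliers per age group and season):

```python
# Hypothetical test mix and sensitivities (illustrative values, not the paper's)
test_share  = {"RT-PCR": 0.70, "rapid_antigen": 0.25, "culture": 0.05}
sensitivity = {"RT-PCR": 0.95, "rapid_antigen": 0.60, "culture": 0.80}

observed_rate = 30.0   # laboratory-confirmed hospitalizations per 100,000

# Effective sensitivity of the test mix, then inflate the observed rate
eff_sens = sum(test_share[t] * sensitivity[t] for t in test_share)
adjusted_rate = observed_rate / eff_sens
```

With these illustrative numbers the effective sensitivity is 0.855, inflating an observed 30 per 100,000 to about 35 per 100,000.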

  5. Estimating the incidence of breast cancer in Africa: a systematic review and meta-analysis.

    PubMed

    Adeloye, Davies; Sowunmi, Olaperi Y; Jacobs, Wura; David, Rotimi A; Adeosun, Adeyemi A; Amuta, Ann O; Misra, Sanjay; Gadanya, Muktar; Auta, Asa; Harhay, Michael O; Chan, Kit Yee

    2018-06-01

    Breast cancer is estimated to be the most common cancer worldwide. We sought to assemble publicly available data from Africa to provide estimates of the incidence of breast cancer on the continent. A systematic search of Medline, EMBASE, Global Health and African Journals Online (AJOL) was conducted. We included population- or hospital-based registry studies on breast cancer conducted in Africa, and providing estimates of the crude incidence of breast cancer among women. A random effects meta-analysis was employed to determine the pooled incidence of breast cancer across studies. The literature search returned 4648 records, with 41 studies conducted across 54 study sites in 22 African countries selected. We observed important variations in reported cancer incidence between population- and hospital-based cancer registries. The overall pooled crude incidence of breast cancer from population-based registries was 24.5 per 100 000 person years (95% confidence interval (CI) 20.1-28.9). The incidence in North Africa was higher at 29.3 per 100 000 (95% CI 20.0-38.7) than Sub-Saharan Africa (SSA) at 22.4 per 100 000 (95% CI 17.2-28.0). In hospital-based registries, the overall pooled crude incidence rate was estimated at 23.6 per 100 000 (95% CI 18.5-28.7). SSA and Northern Africa had relatively comparable rates at 24.0 per 100 000 (95% CI 17.5-30.4) and 23.2 per 100 000 (95% CI 6.6-39.7), respectively. Across both registries, incidence rates increased considerably between 2000 and 2015. The available evidence suggests a growing incidence of breast cancer in Africa. The representativeness of these estimates is uncertain due to the paucity of data in several countries and calendar years, as well as inconsistency in data collation and quality across existing cancer registries.
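The random-effects pooling step can be sketched with the standard DerSimonian-Laird estimator; this is the generic method, and the study's exact weighting scheme is not reproduced here. Inputs are per-registry incidence estimates with their variances.

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate with DerSimonian-Laird
    between-study variance; returns (pooled, 95% CI)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)   # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

With equal within-study variances the random-effects pooled estimate reduces to the simple mean of the study estimates, while the heterogeneity term widens the confidence interval.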

  6. Estimating the incidence of breast cancer in Africa: a systematic review and meta-analysis

    PubMed Central

    Adeloye, Davies; Sowunmi, Olaperi Y.; Jacobs, Wura; David, Rotimi A; Adeosun, Adeyemi A; Amuta, Ann O.; Misra, Sanjay; Gadanya, Muktar; Auta, Asa; Harhay, Michael O; Chan, Kit Yee

    2018-01-01

    Background Breast cancer is estimated to be the most common cancer worldwide. We sought to assemble publicly available data from Africa to provide estimates of the incidence of breast cancer on the continent. Methods A systematic search of Medline, EMBASE, Global Health and African Journals Online (AJOL) was conducted. We included population- or hospital-based registry studies on breast cancer conducted in Africa, and providing estimates of the crude incidence of breast cancer among women. A random effects meta-analysis was employed to determine the pooled incidence of breast cancer across studies. Results The literature search returned 4648 records, with 41 studies conducted across 54 study sites in 22 African countries selected. We observed important variations in reported cancer incidence between population- and hospital-based cancer registries. The overall pooled crude incidence of breast cancer from population-based registries was 24.5 per 100 000 person years (95% confidence interval (CI) 20.1-28.9). The incidence in North Africa was higher at 29.3 per 100 000 (95% CI 20.0-38.7) than Sub-Saharan Africa (SSA) at 22.4 per 100 000 (95% CI 17.2-28.0). In hospital-based registries, the overall pooled crude incidence rate was estimated at 23.6 per 100 000 (95% CI 18.5-28.7). SSA and Northern Africa had relatively comparable rates at 24.0 per 100 000 (95% CI 17.5-30.4) and 23.2 per 100 000 (95% CI 6.6-39.7), respectively. Across both registries, incidence rates increased considerably between 2000 and 2015. Conclusions The available evidence suggests a growing incidence of breast cancer in Africa. The representativeness of these estimates is uncertain due to the paucity of data in several countries and calendar years, as well as inconsistency in data collation and quality across existing cancer registries. PMID:29740502

  7. Mass and volume contributions to twentieth-century global sea level rise.

    PubMed

    Miller, Laury; Douglas, Bruce C

    2004-03-25

    The rate of twentieth-century global sea level rise and its causes are the subjects of intense controversy. Most direct estimates from tide gauges give 1.5-2.0 mm yr(-1), whereas indirect estimates based on the two processes responsible for global sea level rise, namely mass and volume change, fall far below this range. Estimates of the volume increase due to ocean warming give a rate of about 0.5 mm yr(-1) (ref. 8) and the rate due to mass increase, primarily from the melting of continental ice, is thought to be even smaller. Therefore, either the tide gauge estimates are too high, as has been suggested recently, or one (or both) of the mass and volume estimates is too low. Here we present an analysis of sea level measurements at tide gauges combined with observations of temperature and salinity in the Pacific and Atlantic oceans close to the gauges. We find that gauge-determined rates of sea level rise, which encompass both mass and volume changes, are two to three times higher than the rates due to volume change derived from temperature and salinity data. Our analysis supports earlier studies that put the twentieth-century rate in the 1.5-2.0 mm yr(-1) range, but more importantly it suggests that mass increase plays a larger role than ocean warming in twentieth-century global sea level rise.

  8. Angular-Rate Estimation Using Star Tracker Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, I.; Deutschmann, Julie K.; Harman, Richard R.

    1999-01-01

This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared, one that uses differentiated quaternion measurements to yield coarse rate measurements which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first order Markov model is also examined. In this paper a special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results of these tests are presented.

  9. Angular-Rate Estimation using Star Tracker Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, Itzhack Y.; Deutschmann, Julie K.; Harman, Richard R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared, one that uses differentiated quaternion measurements to yield coarse rate measurements which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first order Markov model is also examined. In this paper a special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results of these tests are presented.

  10. An analysis of I/O efficient order-statistic-based techniques for noise power estimation in the HRMS sky survey's operational system

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Olsen, E. T.

    1992-01-01

    Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
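The two estimator families compared above can be sketched for exponentially distributed noise-power samples: an order-statistic estimator scales a sample quantile by a known constant, while a single-pass threshold-and-count estimator inverts the noise CDF at a fixed threshold using only the count of samples below it. The exponential noise model and all parameter values are illustrative, not the HRMS system's.

```python
import numpy as np

rng = np.random.default_rng(0)
true_power = 2.0
x = rng.exponential(true_power, 100_000)   # spectral power samples (exponential noise)

# Order-statistic estimator: for exponential noise the median equals P*ln(2),
# so scaling the sample median by 1/ln(2) estimates the noise power P
p_median = np.median(x) / np.log(2.0)

# Single-pass threshold-and-count estimator: the fraction below threshold T
# satisfies f = 1 - exp(-T/P), so P = -T / ln(1 - f)
T = 1.0
frac_below = np.count_nonzero(x < T) / x.size
p_count = -T / np.log(1.0 - frac_below)
```

The threshold-and-count form needs only a comparator and a counter per threshold, which is why several such devices in parallel can cover a wide dynamic range cheaply.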

  11. Mourning dove hunting regulation strategy based on annual harvest statistics and banding data

    USGS Publications Warehouse

    Otis, D.L.

    2006-01-01

Although managers should strive to base game bird harvest management strategies on mechanistic population models, the monitoring programs required to build and continuously update these models may not be in place. Alternatively, if estimates of total harvest and harvest rates are available, then population estimates derived from these harvest data can serve as the basis for making hunting regulation decisions based on the resulting population growth rates. I present a statistically rigorous approach for regulation decision-making using a hypothesis-testing framework and an assumed framework of 3 hunting regulation alternatives. I illustrate and evaluate the technique with historical data on the mid-continent mallard (Anas platyrhynchos) population. I evaluate the statistical properties of the hypothesis-testing framework using the best available data on mourning doves (Zenaida macroura). I use these results to discuss practical implementation of the technique as an interim harvest strategy for mourning doves until reliable mechanistic population models and associated monitoring programs are developed.
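The core bookkeeping is straightforward: total harvest divided by the band-recovery harvest rate gives an abundance estimate, and the ratio of successive estimates gives the growth rate on which regulation decisions would be based. All numbers below are made up for illustration; they are not the paper's data.

```python
# Hypothetical harvest statistics and band-recovery harvest rates
total_harvest = {2004: 18_500_000, 2005: 17_100_000}   # birds harvested
harvest_rate  = {2004: 0.11, 2005: 0.10}               # fraction of population harvested

# Abundance estimate: N = total harvest / harvest rate
n_hat = {yr: total_harvest[yr] / harvest_rate[yr] for yr in total_harvest}

# Finite population growth rate between years, the quantity that would be
# tested against regulation-change thresholds
growth_rate = n_hat[2005] / n_hat[2004]
```

The paper then embeds this growth rate in a formal hypothesis test, since both harvest and harvest-rate estimates carry sampling error that a point ratio ignores.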

  12. [Comparison of Flu Outbreak Reporting Standards Based on Transmission Dynamics Model].

    PubMed

    Yang, Guo-jing; Yi, Qing-jie; Li, Qin; Zeng, Qing

    2016-05-01

To compare the two current flu outbreak reporting standards for the purpose of better prevention and control of flu outbreaks, a susceptible-exposed-infectious/asymptomatic-removed (SEIAR) model without interventions was set up first, followed by a model with interventions based on the real situation. Simulated interventions were developed based on the two reporting standards and evaluated by the estimated duration of outbreaks, cumulative new cases, cumulative morbidity rates, percentage decline in morbidity rates, and cumulative secondary cases. The basic reproductive number of the outbreak was estimated as 8.2. The simulation produced results similar to the real situation. The effect of interventions based on reporting standard one (10 accumulated new cases in a week) was better than that of interventions based on reporting standard two (30 accumulated new cases in a week). Reporting standard one is therefore more effective for prevention and control of flu outbreaks.
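An SEIAR model of the kind described can be sketched as a forward-Euler simulation. The compartment flows follow the standard SEIAR structure, but all parameter values below are illustrative assumptions, not those fitted in the study.

```python
def seiar_step(state, beta, sigma, p, gamma, kappa, dt=0.1):
    """One Euler step of a simple SEIAR model: S->E at rate beta*S*(I+kappa*A)/N;
    E leaves at rate sigma, a fraction p becoming symptomatic (I) and 1-p
    asymptomatic (A); I and A are removed at rate gamma."""
    s, e, i, a, r = state
    n = s + e + i + a + r
    new_inf = beta * s * (i + kappa * a) / n
    ds = -new_inf
    de = new_inf - sigma * e
    di = p * sigma * e - gamma * i
    da = (1 - p) * sigma * e - gamma * a
    dr = gamma * (i + a)
    return [x + dt * d for x, d in zip(state, (ds, de, di, da, dr))]

# Simulate a school outbreak of 1000 people seeded with one symptomatic case
state = [999.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(600):                       # 60 days at dt = 0.1
    state = seiar_step(state, beta=1.2, sigma=0.5, p=0.7, gamma=0.3, kappa=0.5)
cumulative_cases = 1000.0 - state[0]       # everyone who has left S
```

Intervention scenarios triggered by the two reporting standards would be modeled by reducing beta from the day the respective case-count threshold is reached.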

  13. Comparison of Algorithm-based Estimates of Occupational Diesel Exhaust Exposure to Those of Multiple Independent Raters in a Population-based Case–Control Study

    PubMed Central

    Friesen, Melissa C.

    2013-01-01

    Objectives: Algorithm-based exposure assessments based on patterns in questionnaire responses and professional judgment can readily apply transparent exposure decision rules to thousands of jobs quickly. However, we need to better understand how algorithms compare to a one-by-one job review by an exposure assessor. We compared algorithm-based estimates of diesel exhaust exposure to those of three independent raters within the New England Bladder Cancer Study, a population-based case–control study, and identified conditions under which disparities occurred in the assessments of the algorithm and the raters. Methods: Occupational diesel exhaust exposure was assessed previously using an algorithm and a single rater for all 14 983 jobs reported by 2631 study participants during personal interviews conducted from 2001 to 2004. Two additional raters independently assessed a random subset of 324 jobs that were selected based on strata defined by the cross-tabulations of the algorithm and the first rater’s probability assessments for each job, oversampling their disagreements. The algorithm and each rater assessed the probability, intensity and frequency of occupational diesel exhaust exposure, as well as a confidence rating for each metric. Agreement among the raters, their aggregate rating (average of the three raters’ ratings) and the algorithm were evaluated using proportion of agreement, kappa and weighted kappa (κw). Agreement analyses on the subset used inverse probability weighting to extrapolate the subset to estimate agreement for all jobs. Classification and Regression Tree (CART) models were used to identify patterns in questionnaire responses that predicted disparities in exposure status (i.e., unexposed versus exposed) between the first rater and the algorithm-based estimates. 
Results: For the probability, intensity and frequency exposure metrics, moderate to moderately high agreement was observed among raters (κw = 0.50–0.76) and between the algorithm and the individual raters (κw = 0.58–0.81). For these metrics, the algorithm estimates had consistently higher agreement with the aggregate rating (κw = 0.82) than with the individual raters. For all metrics, the agreement between the algorithm and the aggregate ratings was highest for the unexposed category (90–93%) and was poor to moderate for the exposed categories (9–64%). Lower agreement was observed for jobs with a start year <1965 versus ≥1965. For the confidence metrics, the agreement was poor to moderate among raters (κw = 0.17–0.45) and between the algorithm and the individual raters (κw = 0.24–0.61). CART models identified patterns in the questionnaire responses that predicted a fair-to-moderate (33–89%) proportion of the disagreements between the raters’ and the algorithm estimates. Discussion: The agreement between any two raters was similar to the agreement between an algorithm-based approach and individual raters, providing additional support for using the more efficient and transparent algorithm-based approach. CART models identified some patterns in disagreements between the first rater and the algorithm. Given the absence of a gold standard for estimating exposure, these patterns can be reviewed by a team of exposure assessors to determine whether the algorithm should be revised for future studies. PMID:23184256
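The agreement statistic used above can be computed directly from two raters' ordinal ratings. This is the standard weighted-kappa formula, not code from the study:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Weighted kappa between two raters' ordinal ratings in 0..n_cat-1."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance agreement
    d = np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat)))
    w = d / (n_cat - 1) if weights == "linear" else (d / (n_cat - 1)) ** 2
    return 1 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement gives kappa of 1, and statistically independent ratings give kappa near 0; the weighting means near-miss disagreements between adjacent exposure categories are penalized less than distant ones.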

  14. Evaluation of underreporting of salmonellosis and shigellosis hospitalised cases in Greece, 2011: results of a capture-recapture study and a hospital registry review

    PubMed Central

    2013-01-01

Background Salmonellosis and shigellosis are mandatorily notifiable diseases in Greece. Underreporting of both diseases has been postulated but there has not been any national study to quantify it. The objective of this study was to: a) estimate underreporting of hospitalised cases at public Greek hospitals in 2011 with a capture-recapture (C-RC) study, b) evaluate the accuracy of this estimation, c) investigate the possible impact of specific factors on notification rates, and d) estimate community incidence of both diseases. Methods The mandatory notification system database and the database of the National Reference Laboratory for Salmonella and Shigella (NRLSS) were used in the C-RC study. The estimated total number of cases was compared with the actual number found by using the hospital records of the microbiological laboratories. Underreporting was also estimated by patients’ age-group, sex, type of hospital, region and month of notification. Assessment of the community incidence was based on the extrapolation of the hospitalisation rate of the diseases in Europe. Results The estimated underreporting of salmonellosis and shigellosis cases through the C-RC study was 47.7% and 52.0%, respectively. The reporting rate of salmonellosis varied significantly between the thirteen regions of the country, from 8.3% to 95.6% (median: 28.4%). Age and sex were not related to the probability of reporting. The notification rate did not significantly differ between urban and rural areas; however, large university hospitals had a higher underreporting rate than district hospitals (p-value < 0.001). The actual underreporting, based on the hospital records review, was close to that estimated via the C-RC study: 52.8% for salmonellosis and 58.4% for shigellosis. The predicted community incidence of salmonellosis ranged from 312 to 936 and of shigellosis from 35 to 104 cases per 100,000 population. 
Conclusions Underreporting was higher than that reported by other countries and factors associated with underreporting should be further explored. C-RC analysis seems to be a useful tool for the assessment of the underreporting of hospitalised cases. National data on underreporting and under-ascertainment rate are needed for assessing the accuracy of the estimation of the community burden of the diseases. PMID:24060206
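The two-source estimation step can be sketched with the Chapman-corrected Lincoln-Petersen estimator, a standard capture-recapture choice (whether the study used this exact variant is not stated here). The counts below are hypothetical.

```python
def chapman_estimate(n1, n2, m):
    """Chapman two-source capture-recapture estimate of the total case count.
    n1: cases in source 1 (notifications), n2: cases in source 2 (laboratory),
    m: cases appearing in both sources."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts (illustrative, not the study's data)
notified, lab, both = 400, 600, 300
total_hat = chapman_estimate(notified, lab, both)
underreporting = 1 - notified / total_hat
```

With these illustrative counts the estimated total is about 800 cases, so the notification system captured roughly half of them, an underreporting of about 50%.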

  15. Smartphone-Based Cardiac Rehabilitation Program: Feasibility Study.

    PubMed

    Chung, Heewon; Ko, Hoon; Thap, Tharoeun; Jeong, Changwon; Noh, Se-Eung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

We introduce a cardiac rehabilitation program (CRP) that utilizes only a smartphone, with no external devices. As an efficient guide for cardiac rehabilitation exercise, we developed an application to automatically indicate the exercise intensity by comparing the estimated heart rate (HR) with the target heart rate zone (THZ). The HR is estimated using video images of a fingertip taken by the smartphone's built-in camera. The introduced CRP app includes pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as THZ, exercise type, exercise stage order, and duration of each stage are set up. In the exercise with intensity guidance, the app estimates HR from the pulse obtained using the smartphone's built-in camera and compares the estimated HR with the THZ. Based on this comparison, the app adjusts the exercise intensity to shift the patient's HR to the THZ during exercise. In the post-exercise period, the app manages the ratio of the estimated HR to the THZ and provides a questionnaire on factors such as chest pain, shortness of breath, and leg pain during exercise, as objective and subjective evaluation indicators. As a key issue, HR estimation upon signal corruption due to motion artifacts is also considered. Through the smartphone-based CRP, we estimated the HR accuracy as mean absolute error and root mean squared error of 6.16 and 4.30 bpm, respectively, with signal corruption due to motion artifacts being detected by combining the turning point ratio and kurtosis.
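Two ingredients of such a pipeline can be sketched: heart-rate estimation from detected pulse-peak times, and the turning point ratio used (together with kurtosis, omitted here) to flag motion-corrupted windows. The implementation details below are assumptions for illustration, not the app's code.

```python
import numpy as np

def turning_point_ratio(x):
    """Fraction of interior samples that are local extrema. For white noise the
    expected value is 2/3, so values far from it suggest a corrupted window."""
    x = np.asarray(x, dtype=float)
    mid, left, right = x[1:-1], x[:-2], x[2:]
    turning = ((mid > left) & (mid > right)) | ((mid < left) & (mid < right))
    return turning.mean()

def estimate_hr(peak_times_s):
    """Heart rate in bpm from detected pulse-peak times (seconds)."""
    rr = np.diff(peak_times_s)               # inter-beat intervals
    return 60.0 / rr.mean()

# Peaks every 0.8 s correspond to 75 bpm
hr = estimate_hr(np.arange(0, 8, 0.8))
```

In practice the peak times would come from the frame-averaged fingertip brightness signal, and windows whose turning point ratio or kurtosis falls outside acceptance bounds would be discarded before the HR-versus-THZ comparison.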

  16. Smartphone-Based Cardiac Rehabilitation Program: Feasibility Study

    PubMed Central

    Chung, Heewon; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

We introduce a cardiac rehabilitation program (CRP) that utilizes only a smartphone, with no external devices. As an efficient guide for cardiac rehabilitation exercise, we developed an application to automatically indicate the exercise intensity by comparing the estimated heart rate (HR) with the target heart rate zone (THZ). The HR is estimated using video images of a fingertip taken by the smartphone’s built-in camera. The introduced CRP app includes pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as THZ, exercise type, exercise stage order, and duration of each stage are set up. In the exercise with intensity guidance, the app estimates HR from the pulse obtained using the smartphone’s built-in camera and compares the estimated HR with the THZ. Based on this comparison, the app adjusts the exercise intensity to shift the patient’s HR to the THZ during exercise. In the post-exercise period, the app manages the ratio of the estimated HR to the THZ and provides a questionnaire on factors such as chest pain, shortness of breath, and leg pain during exercise, as objective and subjective evaluation indicators. As a key issue, HR estimation upon signal corruption due to motion artifacts is also considered. Through the smartphone-based CRP, we estimated the HR accuracy as mean absolute error and root mean squared error of 6.16 and 4.30 bpm, respectively, with signal corruption due to motion artifacts being detected by combining the turning point ratio and kurtosis. PMID:27551969

  17. Address-based versus random-digit-dial surveys: comparison of key health and risk indicators.

    PubMed

    Link, Michael W; Battaglia, Michael P; Frankel, Martin R; Osborn, Larry; Mokdad, Ali H

    2006-11-15

    Use of random-digit dialing (RDD) for conducting health surveys is increasingly problematic because of declining participation rates and eroding frame coverage. Alternative survey modes and sampling frames may improve response rates and increase the validity of survey estimates. In a 2005 pilot study conducted in six states as part of the Behavioral Risk Factor Surveillance System, the authors administered a mail survey to selected household members sampled from addresses in a US Postal Service database. The authors compared estimates based on data from the completed mail surveys (n = 3,010) with those from the Behavioral Risk Factor Surveillance System telephone surveys (n = 18,780). The mail survey data appeared reasonably complete, and estimates based on data from the two survey modes were largely equivalent. Differences found, such as differences in the estimated prevalences of binge drinking (mail = 20.3%, telephone = 13.1%) or behaviors linked to human immunodeficiency virus transmission (mail = 7.1%, telephone = 4.2%), were consistent with previous research showing that, for questions about sensitive behaviors, self-administered surveys generally produce higher estimates than interviewer-administered surveys. The mail survey also provided access to cell-phone-only households and households without telephones, which cannot be reached by means of standard RDD surveys.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Min; Kollias, Pavlos; Feng, Zhe

    The motivation for this research is to develop a precipitation classification and rain rate estimation method using cloud radar-only measurements for Atmospheric Radiation Measurement (ARM) long-term cloud observation analysis, which is crucial and unique for studying cloud lifecycle and precipitation features under different weather and climate regimes. Based on simultaneous and collocated observations of the Ka-band ARM zenith radar (KAZR), two precipitation radars (NCAR S-PolKa and Texas A&M University SMART-R), and surface precipitation during the DYNAMO/AMIE field campaign, a new cloud radar-only precipitation classification and rain rate estimation method has been developed and evaluated. The resulting precipitation classification is consistent with the collocated SMART-R and S-PolKa observations. Both cloud and precipitation radars detected about 5% precipitation occurrence during this period. The convective (stratiform) precipitation fraction is about 18% (82%). The 2-day collocated disdrometer observations show an increased number concentration of large raindrops in convective rain, compared to the dominant concentration of small raindrops in stratiform rain. The composite distributions of KAZR reflectivity and Doppler velocity also show two distinct structures for convective and stratiform rain. These indicate that the method produces physically consistent results for the two types of rain. The cloud radar-only rainfall estimation is developed based on the gradient of accumulated radar reflectivity below 1 km, near-surface Ze, and collocated surface rainfall (R) measurements. The parameterization is compared with the Z-R exponential relation. The relative difference between estimated and surface-measured rainfall rates shows that the two-parameter relation improves rainfall estimation.
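    The conventional Z-R benchmark that the authors compare their two-parameter relation against can be sketched as a power-law inversion. The coefficients below are the classic Marshall-Palmer textbook defaults, not the campaign-fitted values from this study.

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the power law Z = a * R**b for rain rate R (mm/h).
    dbz is reflectivity in dBZ; a and b default to the classic
    Marshall-Palmer coefficients (site-specific fits replace them)."""
    z_linear = 10.0 ** (dbz / 10.0)  # reflectivity in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)
```

    For example, 23 dBZ (Z near 200 mm^6 m^-3) maps to roughly 1 mm/h with these defaults; the study's approach instead fits a relation to collocated surface rainfall measurements.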

  19. [Proposed method to estimate underreporting of induced abortion in Spain].

    PubMed

    Rodríguez Blas, C; Sendra Gutiérrez, J M; Regidor Poyatos, E; Gutiérrez Fisac, J L; Iñigo Martínez, J

    1994-01-01

    In Spain, from 1987 to 1990, the rate of legal abortion reported to the health authorities doubled; nevertheless, the observed geographical differences suggest underreporting of the number of voluntary pregnancy terminations. Based on information on several sociodemographic, economic, and cultural characteristics, contraceptive use, availability of abortion services, fertility indices, and maternal and child health status, five homogeneous groups of autonomous regions were identified using factor and cluster analysis techniques. To estimate the level of underreporting, we assumed that all the regions forming a cluster ought to have the same abortion rate as the region with the highest rate in that group. We estimate that about 18,463 abortions (33.2%) were not reported during 1990. The proposed method can be used to assess notification completeness, since it allows identification of geographical areas where very similar rates of legal abortion are expected.
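    The cluster-based correction can be illustrated with toy numbers: every region is assumed to have the highest reported rate in its cluster, and the shortfall is the estimated underreport. The region names, rates, and populations below are hypothetical, not the Spanish data.

```python
def underreporting(regions):
    """regions: (name, cluster, reported rate per 1,000 women, women).
    Assumes each region's true rate equals the maximum reported rate
    within its cluster; returns (missed events, missed share)."""
    peak = {}
    for _, cluster, rate, _ in regions:
        peak[cluster] = max(peak.get(cluster, 0.0), rate)
    reported = sum(r * w / 1000.0 for _, _, r, w in regions)
    missed = sum((peak[c] - r) * w / 1000.0 for _, c, r, w in regions)
    return missed, missed / (missed + reported)
```

    With two clusters of hypothetical regions, the regions below their cluster's peak rate contribute the estimated unreported abortions.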

  20. Cost savings from reduced catheter-related bloodstream infection after simulation-based education for residents in a medical intensive care unit.

    PubMed

    Cohen, Elaine R; Feinglass, Joe; Barsuk, Jeffrey H; Barnard, Cynthia; O'Donnell, Anna; McGaghie, William C; Wayne, Diane B

    2010-04-01

    Interventions to reduce preventable complications such as catheter-related bloodstream infections (CRBSI) can also decrease hospital costs. However, little is known about the cost-effectiveness of simulation-based education. The aim of this study was to estimate hospital cost savings related to a reduction in CRBSI after simulation training for residents. This was an intervention evaluation study estimating cost savings related to a simulation-based intervention in central venous catheter (CVC) insertion in the Medical Intensive Care Unit (MICU) at an urban teaching hospital. After residents completed a simulation-based mastery learning program in CVC insertion, CRBSI rates declined sharply. Case-control and regression analysis methods were used to estimate savings by comparing CRBSI rates in the year before and after the intervention. Annual savings from reduced CRBSIs were compared with the annual cost of simulation training. Approximately 9.95 CRBSIs were prevented among MICU patients with CVCs in the year after the intervention. Incremental costs attributed to each CRBSI were approximately $82,000 in 2008 dollars and 14 additional hospital days (including 12 MICU days). The annual cost of the simulation-based education was approximately $112,000. Net annual savings were thus greater than $700,000, a 7 to 1 rate of return on the simulation training intervention. A simulation-based educational intervention in CVC insertion was highly cost-effective. These results suggest that investment in simulation training can produce significant medical care cost savings.
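    The return-on-investment arithmetic in the abstract works out as follows, using only the figures stated there (the additional hospital-day costs are folded into the per-CRBSI figure):

```python
infections_prevented = 9.95   # CRBSIs prevented in the year after training
cost_per_crbsi = 82_000       # incremental cost per CRBSI, 2008 dollars
training_cost = 112_000       # annual cost of the simulation program

gross_savings = infections_prevented * cost_per_crbsi  # about $815,900
net_savings = gross_savings - training_cost            # about $703,900
roi = gross_savings / training_cost                    # about 7.3, the "7 to 1" return
```

    Net annual savings exceed $700,000, matching the abstract's 7-to-1 return on the training investment.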

  1. Cuff-Free Blood Pressure Estimation Using Pulse Transit Time and Heart Rate.

    PubMed

    Wang, Ruiping; Jia, Wenyan; Mao, Zhi-Hong; Sclabassi, Robert J; Sun, Mingui

    2014-10-01

    It has been reported that the pulse transit time (PTT), the interval between the R-wave peak of the electrocardiogram (ECG) and the corresponding pulse peak of the fingertip photoplethysmogram (PPG), is related to arterial stiffness and can be used to estimate systolic blood pressure (SBP) and diastolic blood pressure (DBP). This phenomenon has been used as the basis for portable systems for continuous, cuff-less blood pressure measurement, benefiting numerous people with heart conditions. However, PTT-based blood pressure estimation may not be sufficiently accurate because the regulation of blood pressure within the human body is a complex, multivariate physiological process. Considering the negative feedback mechanism in blood pressure control, we introduce the heart rate (HR) and the blood pressure estimate from the previous step to obtain the current estimate. We validate this method using a clinical database. Our results show that using the PTT, HR, and previous estimate reduces the estimation error significantly compared to the conventional PTT-only approach (p<0.05).
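    A minimal sketch of the augmented estimator: SBP as a linear combination of PTT, HR, and the previous estimate. The coefficients below are hypothetical placeholders; in practice they would be fitted per subject against a reference cuff, and the paper's actual model may differ in form.

```python
def estimate_sbp(ptt_ms, hr_bpm, prev_sbp,
                 w_ptt=-0.15, w_hr=0.2, w_prev=0.5, bias=130.0):
    """One step of a recursive SBP estimate (coefficients hypothetical).
    Shorter PTT (stiffer arteries, faster pulse wave) raises the
    estimate; blending in the previous estimate smooths the output."""
    return (w_prev * prev_sbp
            + (1.0 - w_prev) * (bias + w_ptt * ptt_ms + w_hr * hr_bpm))
```

    The key qualitative behavior is that a shorter transit time yields a higher estimated pressure, while the previous-estimate term exploits the slow dynamics of blood pressure regulation.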

  2. Effect of time discretization of the imaging process on the accuracy of trajectory estimation in fluorescence microscopy

    PubMed Central

    Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.

    2014-01-01

    In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248

  3. Using dynamic flux chambers to estimate the natural attenuation rates in the subsurface at petroleum contaminated sites.

    PubMed

    Verginelli, Iason; Pecoraro, Roberto; Baciocchi, Renato

    2018-04-01

    In this work, we introduce a screening method for the evaluation of natural attenuation rates in the subsurface at sites contaminated by petroleum hydrocarbons. The method is based on combining data from standard source characterization with dynamic flux chamber measurements. The natural attenuation rates are calculated as the difference between the flux of contaminants estimated with a non-reactive diffusive model, starting from the contaminant concentrations detected in the source (soil and/or groundwater), and the effective emission rate of the contaminants measured using dynamic flux chambers installed at ground level. The reliability of this approach was tested in a contaminated site characterized by the presence of BTEX in soil and groundwater. Specifically, the BTEX emission rates from the subsurface were measured in 4 seasonal campaigns using dynamic flux chambers installed at 14 sampling points. The comparison of measured fluxes with those predicted using a non-reactive diffusive model, starting from the source concentrations, showed that, in line with other recent studies, the modelling approach can overestimate the expected outdoor concentration of petroleum hydrocarbons by up to 4 orders of magnitude. On the other hand, by coupling the measured data with the fluxes estimated with the non-reactive diffusive model, it was possible to perform a mass balance to evaluate the natural attenuation loss rates of petroleum hydrocarbons during migration from the source to ground level. Based on this comparison, the estimated BTEX loss rates at the test site were up to almost 0.5 kg/m² per year. These rates are in line with the values reported in the recent literature for natural source zone depletion. In short, the method presented in this work represents an easy-to-use and cost-effective option that can provide a further line of evidence for the natural attenuation rates expected at contaminated sites. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Modeling the Declining Positivity Rates for Human Immunodeficiency Virus Testing in New York State.

    PubMed

    Martin, Erika G; MacDonald, Roderick H; Smith, Lou C; Gordon, Daniel E; Lu, Tao; OʼConnell, Daniel A

    2015-01-01

    New York health care providers have experienced declining percentages of positive human immunodeficiency virus (HIV) tests among patients. Furthermore, observed positivity rates are lower than expected on the basis of the national estimate that one-fifth of HIV-infected residents are unaware of their infection. We used mathematical modeling to evaluate whether this decline could be a result of a declining number of HIV-infected persons who are unaware of their infection, a quantity that cannot be measured directly. A stock-and-flow mathematical model of HIV incidence, testing, and diagnosis was developed. The model includes stocks for uninfected, infected and unaware (in 4 disease stages), and diagnosed individuals. Inputs came from published literature and time series (2006-2009) for estimated new infections, newly diagnosed HIV cases, living diagnosed cases, mortality, and diagnosis rates in New York. Primary model outcomes were the percentage of HIV-infected persons unaware of their infection and the percentage of HIV tests with a positive result (HIV positivity rate). In the base case, the estimated percentage of unaware HIV-infected persons declined from 14.2% in 2006 (range, 11.9%-16.5%) to 11.8% in 2010 (range, 9.9%-13.1%). The HIV positivity rate, assuming testing occurred independently of risk, was 0.12% in 2006 (range, 0.11%-0.15%) and 0.11% in 2010 (range, 0.10%-0.13%). The observed HIV positivity rate was more than 4 times the expected positivity rate based on the model. HIV test positivity is a readily available indicator, but it cannot distinguish the causes of underlying changes. Findings suggest that the percentage of unaware HIV-infected New Yorkers is lower than the national estimate and that the observed HIV test positivity rate is greater than would be expected if infected and uninfected individuals tested at the same rate, indicating that testing efforts are appropriately targeting undiagnosed cases.
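    The stock-and-flow structure can be sketched with a single aggregate "unaware" stock (the study tracked four disease stages). All numeric inputs below are illustrative placeholders, not the New York time series.

```python
def simulate(unaware, diagnosed, years,
             new_infections=4_000, diagnosis_rate=0.5, mortality=0.02):
    """Discrete yearly update of the unaware/diagnosed stocks.
    Returns the yearly fraction of infected persons who are unaware."""
    history = []
    for _ in range(years):
        newly_diagnosed = diagnosis_rate * unaware
        unaware += new_infections - newly_diagnosed - mortality * unaware
        diagnosed += newly_diagnosed - mortality * diagnosed
        history.append(unaware / (unaware + diagnosed))
    return history
```

    When the diagnosis outflow exceeds the inflow of new infections, the unaware share falls over time, which is the mechanism the study proposes for the declining observed positivity rates.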

  5. Shoreline development and degradation of coastal fish reproduction habitats.

    PubMed

    Sundblad, Göran; Bergström, Ulf

    2014-12-01

    Coastal development has severely affected habitats and biodiversity during the last century, but quantitative estimates of the impacts are usually lacking. We utilize predictive habitat modeling and mapping of human pressures to estimate the cumulative long-term effects of coastal development on fish habitats. Based on aerial photographs taken since the 1960s, shoreline development rates were estimated in the Stockholm archipelago in the Baltic Sea. By combining shoreline development rates with spatial predictions of fish reproduction habitats, we estimated annual habitat degradation rates for three of the most common coastal fish species: northern pike (Esox lucius), Eurasian perch (Perca fluviatilis), and roach (Rutilus rutilus). The results showed that shoreline constructions were concentrated in the reproduction habitats of these species. The estimated degradation rates, where a degraded habitat was defined as having ≥3 constructions per 100 m of shoreline, were on average 0.5% of available habitats per year, and about 1% in areas close to larger population centers. Approximately 40% of available habitats were already degraded in 2005. These results provide an example of how many small construction projects may, over time, have a substantial cumulative impact on coastal fish populations.

  6. A novel case-control design to estimate the extent of over-diagnosis of breast cancer due to organised population-based mammography screening.

    PubMed

    Beckmann, Kerri R; Lynch, John W; Hiller, Janet E; Farshid, Gelareh; Houssami, Nehmat; Duffy, Stephen W; Roder, David M

    2015-03-15

    Debate about the extent of breast cancer over-diagnosis due to mammography screening has continued for over a decade, without consensus. Estimates range from 0 to 54%, but many studies have been criticized for flawed methodology. In this study we used a novel study design to estimate over-diagnosis due to organised mammography screening in South Australia (SA). To estimate breast cancer incidence at and following screening, we used a population-based, age-matched case-control design involving 4,931 breast cancer cases and 22,914 controls to obtain odds ratios (ORs) for yearly time intervals since women's last screening mammogram. The level of over-diagnosis was estimated by comparing the cumulative breast cancer incidence with and without screening. The former was derived by applying ORs for each time window to incidence rates in the absence of screening, and the latter by projecting pre-screening incidence rates. Sensitivity analyses were undertaken to assess potential biases. Over-diagnosis was estimated to be 8% (95%CI 2-14%) and 14% (95%CI 8-19%) among SA women aged 45 to 85 years from 2006-2010, for invasive breast cancer and all breast cancer, respectively. These estimates were robust under various sensitivity analyses, except for adjustment for potential confounding assuming higher risk among screened than non-screened women, which reduced levels of over-diagnosis to 1% (95%CI -5 to 7%) and 8% (95%CI 2-14%), respectively, when incidence rates for screening participants were adjusted by 10%. Our results indicate that the level of over-diagnosis due to mammography screening is modest and considerably lower than many previous estimates, including others for Australia. © 2014 UICC.

  7. Estimation of Pre-industrial Nitrous Oxide Emission from the Terrestrial Biosphere

    NASA Astrophysics Data System (ADS)

    Xu, R.; Tian, H.; Lu, C.; Zhang, B.; Pan, S.; Yang, J.

    2015-12-01

    Nitrous oxide (N2O) is currently the third most important greenhouse gas (GHG) after methane (CH4) and carbon dioxide (CO2). Global N2O emissions have increased substantially, primarily due to reactive nitrogen (N) enrichment through fossil fuel combustion, fertilizer production, and legume crop cultivation. In order to understand how the climate system is perturbed by anthropogenic N2O emissions from the terrestrial biosphere, it is necessary to better estimate pre-industrial N2O emissions. Previous estimates of natural N2O emissions from the terrestrial biosphere range from 3.3 to 9.0 Tg N2O-N yr-1. This large uncertainty may be caused by uncertainty associated with key parameters such as maximum nitrification and denitrification rates, half-saturation coefficients of soil ammonium and nitrate, the N fixation rate, and the maximum N uptake rate. In addition to the large estimation range, previous studies did not provide estimates of pre-industrial N2O emissions at regional and biome levels. In this study, we applied a process-based coupled biogeochemical model to estimate the magnitude and spatial patterns of pre-industrial N2O fluxes at biome and continental scales, as driven by multiple input data, including pre-industrial climate data, atmospheric CO2 concentration, N deposition, N fixation, and land cover types and distributions. Uncertainty associated with key parameters is also evaluated. Finally, we generate sector-based estimates of pre-industrial N2O emissions, which provide a reference for assessing the climate forcing of anthropogenic N2O emissions from the land biosphere.

  8. COMPARISON OF IN VIVO DERIVED AND SCALED IN VITRO METABOLIC RATE CONSTANTS FOR SOME VOLATILE ORGANIC COMPOUNDS (VOCS)

    EPA Science Inventory

    The reliability of physiologically based pharmacokinetic (PBPK) models is directly related to the accuracy of the metabolic rate parameters used as model inputs. When metabolic rate parameters derived from in vivo experiments are unavailable, they can be estimated from in vitro d...

  9. A nonlinear estimator for reconstructing the angular velocity of a spacecraft without rate gyros

    NASA Technical Reports Server (NTRS)

    Polites, M. E.; Lightsey, W. D.

    1991-01-01

    A scheme for estimating the angular velocity of a spacecraft without rate gyros is presented. It is based upon a nonlinear estimator whose inputs are measured inertial vectors and their calculated time derivatives relative to vehicle axes. It works for all spacecraft attitudes and requires no knowledge of attitude. It can use measurements from a variety of onboard sensors, such as Sun sensors, star trackers, or magnetometers, alone or in combination. It can also use look-angle measurements from onboard antennas tracking the Tracking and Data Relay Satellites or Global Positioning System satellites. In this paper, it is applied to a Sun-point scheme on the Hubble Space Telescope, assuming all or most of its onboard rate gyros have failed. Simulation results are presented for verification.
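    The underlying geometry can be sketched under simplified assumptions: for an inertially fixed direction v expressed in body axes, dv/dt = -w x v, so one vector determines the component of w perpendicular to it and a second, non-parallel vector supplies the remaining component. This is an illustration of the principle only, not the paper's nonlinear estimator, which handles noisy sensor data.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def angular_rate(v1, dv1, v2, dv2):
    """Recover body angular velocity w from two inertially fixed vectors
    and their body-frame time derivatives (dv = -w x v).
    v1 and v2 must not be parallel."""
    # component of w perpendicular to v1: (dv1 x v1) / |v1|^2
    p = tuple(c / dot(v1, v1) for c in cross(dv1, v1))
    # remaining component lam * v1, solved from the v2 equation
    n = cross(v1, v2)
    pxv2 = cross(p, v2)
    rhs = tuple(-dv2[i] - pxv2[i] for i in range(3))
    lam = dot(rhs, n) / dot(n, n)
    return tuple(p[i] + lam * v1[i] for i in range(3))
```

    In the noise-free case this recovers the true rate exactly; the estimator in the paper plays the same role while filtering measurement noise.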

  10. Estimation of homogeneous nucleation flux via a kinetic model

    NASA Technical Reports Server (NTRS)

    Wilcox, C. F.; Bauer, S. H.

    1991-01-01

    The proposed kinetic model for condensation under homogeneous conditions, and for the onset of unidirectional cluster growth in supersaturated gases, does not suffer from the conceptual flaws that characterize classical nucleation theory. When a full set of simultaneous rate equations is solved, a characteristic time emerges for each cluster size, at which the production rate of that cluster size and its rate of conversion to the next size (n + 1) are equal. Procedures for estimating the essential parameters are proposed, and steady-state condensation fluxes J_kin^ss are evaluated. Since there are practical limits to the cluster size that can be incorporated in the set of simultaneous first-order differential equations, a code was developed for computing an approximate J_th^ss based on estimates of a 'constrained equilibrium' distribution and identification of its minimum.

  11. Are Plant Species Able to Keep Pace with the Rapidly Changing Climate?

    PubMed Central

    Cunze, Sarah; Heydel, Felix; Tackenberg, Oliver

    2013-01-01

    Future climate change is predicted to advance faster than the postglacial warming. Migration may therefore become a key driver for future development of biodiversity and ecosystem functioning. For 140 European plant species we computed past range shifts since the last glacial maximum and future range shifts for a variety of Intergovernmental Panel on Climate Change (IPCC) scenarios and global circulation models (GCMs). Range shift rates were estimated by means of species distribution modelling (SDM). With process-based seed dispersal models we estimated species-specific migration rates for 27 dispersal modes addressing dispersal by wind (anemochory) for different wind conditions, as well as dispersal by mammals (dispersal on animal's coat – epizoochory and dispersal by animals after feeding and digestion – endozoochory) considering different animal species. Our process-based modelled migration rates generally exceeded the postglacial range shift rates indicating that the process-based models we used are capable of predicting migration rates that are in accordance with realized past migration. For most of the considered species, the modelled migration rates were considerably lower than the expected future climate change induced range shift rates. This implies that most plant species will not entirely be able to follow future climate-change-induced range shifts due to dispersal limitation. Animals with large day- and home-ranges are highly important for achieving high migration rates for many plant species, whereas anemochory is relevant for only few species. PMID:23894290

  12. Impact of stillbirths on international comparisons of preterm birth rates: a secondary analysis of the WHO multi-country survey of Maternal and Newborn Health.

    PubMed

    Morisaki, N; Ganchimeg, T; Vogel, J P; Zeitlin, J; Cecatti, J G; Souza, J P; Pileggi Castro, C; Torloni, M R; Ota, E; Mori, R; Dolan, S M; Tough, S; Mittal, S; Bataglia, V; Yadamsuren, B; Kramer, M S

    2017-08-01

    To evaluate the extent to which stillbirths affect international comparisons of preterm birth rates in low- and middle-income countries. Secondary analysis of a multi-country cross-sectional study. 29 countries participating in the World Health Organization Multicountry Survey on Maternal and Newborn Health. 258,215 singleton deliveries in 286 hospitals. We describe how the inclusion or exclusion of stillbirths affects rates of preterm birth in 29 countries. Preterm delivery. In all countries, preterm birth rates were substantially lower when based on live births only than when based on total births. However, the increase in preterm birth rates with inclusion of stillbirths was substantially higher in low Human Development Index (HDI) countries [median 18.2%, interquartile range 17.2-34.6%] compared with medium (4.3%, 3.0-6.7%) and high-HDI countries (4.8%, 4.4-5.5%). Inclusion of stillbirths leads to higher estimates of the preterm birth rate in all countries, with a disproportionately large effect in low-HDI countries. Preterm birth rates based on live births alone do not accurately reflect international disparities in perinatal health; thus, improved registration and reporting of stillbirths are necessary. Inclusion of stillbirths increases preterm birth rate estimates, especially in low-HDI countries. © 2017 World Health Organization, licensed by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.
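    The denominator effect can be made concrete with hypothetical counts (stillbirths are disproportionately preterm, so including them raises the numerator more than the denominator):

```python
def preterm_rates(preterm_live, term_live, preterm_still, term_still):
    """Preterm birth rate on live births only vs. on total births,
    plus the relative increase from including stillbirths."""
    live_only = preterm_live / (preterm_live + term_live)
    all_births = preterm_live + term_live + preterm_still + term_still
    total = (preterm_live + preterm_still) / all_births
    return live_only, total, (total - live_only) / live_only
```

    With, say, 90 preterm among 1,000 live births plus 20 stillbirths of which 15 are preterm, the rate rises from 9.0% to about 10.3%, a relative increase of roughly 14% — the kind of shift the study reports for low-HDI countries.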

  13. Estimating Children’s Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho

    PubMed Central

    von Lindern, Ian; Spalinger, Susan; Stifelman, Marc L.; Stanek, Lindsay Wichers; Bartrem, Casey

    2016-01-01

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/dust ingestion rates are 86–94 mg/day for 6-month- to 2-year-old children and 51–67 mg/day for 2- to 9-year-old children. Conclusions: Soil/dust ingestion rate estimates for 1- to 9-year-old children at the BHSS are lower than those commonly used in human health risk assessment. A substantial component of children’s exposure comes from sources beyond the immediate home environment. Citation: von Lindern I, Spalinger S, Stifelman ML, Stanek LW, Bartrem C. 2016. 
Estimating children’s soil/dust ingestion rates through retrospective analyses of blood lead biomonitoring from the Bunker Hill Superfund Site in Idaho. Environ Health Perspect 124:1462–1470; http://dx.doi.org/10.1289/ehp.1510144 PMID:26745545

  14. Completeness and underestimation of cancer mortality rate in Iran: a report from Fars Province in southern Iran.

    PubMed

    Marzban, Maryam; Haghdoost, Ali-Akbar; Dortaj, Eshagh; Bahrampour, Abbas; Zendehdel, Kazem

    2015-03-01

    The incidence and mortality rates of cancer are increasing worldwide, particularly in the developing countries. Valid data are needed for measuring the cancer burden and making appropriate decisions toward cancer control. We evaluated the completeness of the death registry with regard to cancer deaths in Fars Province, I. R. of Iran. We used data from three sources in Fars Province: the national death registry (source 1), follow-up data from the pathology-based cancer registry (source 2), and hospital-based records (source 3), covering 2004 - 2006. We used the capture-recapture method to estimate the underestimation and the true age-standardized mortality rate (ASMR) for cancer, with log-linear (LL) modeling for statistical analysis. We observed 1941, 480, and 355 cancer deaths in sources 1, 2 and 3, respectively. After data linkage, we estimated that the mortality registry underestimated cancer deaths by about 40%. After adjustment for this underestimation, the ASMR for all cancer types in Fars Province increased from 44.8 per 100,000 (95% CI: 42.8 - 46.7) to 76.3 per 100,000 (95% CI: 73.3 - 78.9), accounting for 3309 (95% CI: 3151 - 3293) cancer deaths annually. The mortality rate of cancer is considerably higher than the rates reported by the routine registry in Iran. Improvement in the validity and completeness of the mortality registry is needed to estimate the true mortality rate caused by cancer in Iran.
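    The study's three-source log-linear analysis builds on the basic two-source capture-recapture idea, which can be sketched with Chapman's estimator. The counts below are hypothetical, not the Fars Province data.

```python
def chapman(n1, n2, overlap):
    """Chapman's nearly unbiased two-source capture-recapture estimate
    of the total number of events, given list sizes n1 and n2 and the
    number of events found on both lists."""
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

def completeness(n1, n2, overlap):
    """Share of the estimated total captured by source 1."""
    return n1 / chapman(n1, n2, overlap)
```

    With three sources, as in the study, log-linear models additionally allow for dependence between lists, which the simple two-source estimator assumes away.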

  15. Population ecology of the mallard VIII: Winter distribution patterns and survival rates of winter-banded mallards

    USGS Publications Warehouse

    Nichols, James D.; Hines, James E.

    1987-01-01

    In the present report we address questions about winter distribution patterns and survival rates of North American mallards Anas platyrhynchos. Inferences are based on analyses of banding and recovery data from both the winter and preseason banding periods. The primary wintering range of the mallard was divided into 45 minor reference areas and 15 major reference areas, which were used to summarize winter banding data. Descriptive tables and figures on the recovery distributions of winter-banded mallards are presented. Using winter recoveries of preseason-banded mallards, we found apparent differences between the recovery distributions of young versus adult birds from the same breeding ground reference areas. However, we found no sex-specific differences in winter recovery distribution patterns. Winter recovery distributions of preseason-banded birds also provided evidence that mallards exhibited some degree of year-to-year variation in wintering ground location. The age- and sex-specificity of such variation was tested using winter recoveries of winter-banded birds, and results indicated that subadult (first year) birds were less likely to return to the same wintering grounds the following year than adults. Winter recovery distributions of preseason-banded mallards during 1950-58 differed from distributions in 1966-76. These differences could have resulted from either true distributional shifts or geographic changes in hunting pressure. Survival and recovery rates were estimated from winter banding data. We found no evidence of differences in survival or recovery rates between subadult and adult mallards. Thus, the substantial difference between survival rates of preseason-banded young and adult mallards must result almost entirely from higher mortality of young birds during the approximate period August-January. Male mallards showed higher survival than females, corroborating inferences based on preseason data. 
    Tests with winter banding and band recovery data indicated some degree of year-to-year variation in both survival and recovery rates, a result again consistent with inferences from preseason data. There was some evidence of geographic variation in survival rates; however, there were no consistent directional differences between survival rates of mallards from adjacent northern versus southern areas, or eastern versus western areas. In some comparisons, Central Flyway mallards exhibited slightly higher survival rates than mallards from other flyways. Weighted mean estimates of continental survival rates were computed for the period 1960-77 from both winter banding data and preseason banding of adults. The resulting estimates differed significantly for males, but not for females, and the magnitude of the difference between point estimates was relatively small, even for males. The direction of the difference between these estimates was predicted correctly from previous work on the effects of heterogeneous survival and recovery rates on band recovery model estimates. The similarity of survival estimates from these two independent data sets supports the belief that biases in these estimates are relatively small.

  16. Evaluation of the return rate of volunteer blood donors

    PubMed Central

    Lourençon, Adriana de Fátima; Almeida, Rodrigo Guimarães dos Santos; Ferreira, Oranice; Martinez, Edson Zangiacomi

    2011-01-01

    Background Converting first-time blood donors into regular volunteer donors is a challenge for transfusion services. Objectives This study aims to estimate the return rate of first-time donors at the Ribeirão Preto Blood Center and at other blood centers in its coverage region. Methods The histories of 115,553 volunteer donors between 1996 and 2005 were analyzed. Statistical analysis was based on a parametric long-term survival model that allows estimation of the proportion of donors who never return for further donations. Results Only 40% of individuals return within one year after the first donation, and 53% return within two years. It is estimated that 30% never return to donate. Higher return rates were observed among Black donors. No significant difference was found in non-return rates regarding gender, blood type, Rh blood group, or blood collection unit. Conclusions The low percentage of first-time donors who return for further blood donation reinforces the need for marketing actions and strategies aimed at increasing the return rates. PMID:23049294

  17. Estimation of sulphur dioxide emission rate from a power plant based on the remote sensing measurement with an imaging-DOAS instrument

    NASA Astrophysics Data System (ADS)

    Chong, Jihyo; Kim, Young J.; Baek, Jongho; Lee, Hanlim

    2016-10-01

    Major anthropogenic sources of sulphur dioxide in the troposphere include point sources such as power plants and combustion-derived industrial sources. Spatially resolved remote sensing of atmospheric trace gases is desirable for better estimation and validation of emission from those sources. It has been reported that Imaging Differential Optical Absorption Spectroscopy (I-DOAS) technique can provide the spatially resolved two-dimensional distribution measurement of atmospheric trace gases. This study presents the results of I-DOAS observations of SO2 from a large power plant. The stack plume from the Taean coal-fired power plant was remotely sensed with an I-DOAS instrument. The slant column density (SCD) of SO2 was derived by data analysis of the absorption spectra of the scattered sunlight measured by an I-DOAS over the power plant stacks. Two-dimensional distribution of SO2 SCD was obtained over the viewing window of the I-DOAS instrument. The measured SCDs were converted to mixing ratios in order to estimate the rate of SO2 emission from each stack. The maximum mixing ratio of SO2 was measured to be 28.1 ppm with a SCD value of 4.15×1017 molecules/cm2. Based on the exit velocity of the plume from the stack, the emission rate of SO2 was estimated to be 22.54 g/s. Remote sensing of SO2 with an I-DOAS instrument can be very useful for independent estimation and validation of the emission rates from major point sources as well as area sources.
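    The conversion from column densities to an emission rate follows the usual flux method: integrate the SCD along a transect perpendicular to plume transport, then multiply by the plume speed and the molecular mass. A sketch with hypothetical transect values (not the paper's data):

```python
AVOGADRO = 6.022e23   # molecules/mol
M_SO2 = 64.07         # g/mol

def emission_rate_g_per_s(scd_molec_cm2, pixel_width_cm, plume_speed_cm_s):
    """Flux-method emission estimate from a column-density transect.

    scd_molec_cm2: SCD samples across the plume (molecules/cm^2)
    pixel_width_cm: spatial extent of each sample along the transect
    plume_speed_cm_s: plume transport (exit) velocity
    """
    line_integral = sum(scd_molec_cm2) * pixel_width_cm  # molecules/cm
    molecules_per_s = line_integral * plume_speed_cm_s   # molecules/s
    return molecules_per_s * M_SO2 / AVOGADRO            # g/s

# Hypothetical 2 m pixels, peak SCD ~4e17 molecules/cm^2, 5 m/s exit velocity
transect = [0.5e17, 1e17, 2e17, 3e17, 4e17, 4e17, 3e17, 2e17, 1e17, 0.5e17]
rate = emission_rate_g_per_s(transect, 200.0, 500.0)
```

    These illustrative inputs yield an emission rate of a few tens of g/s, the same order of magnitude as the 22.54 g/s reported in the abstract.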

  18. Time to clinical response: an outcome of antibiotic therapy of febrile neutropenia with implications for quality and cost of care.

    PubMed

    Elting, L S; Rubenstein, E B; Rolston, K; Cantor, S B; Martin, C G; Kurtin, D; Rodriguez, S; Lam, T; Kanesan, K; Bodey, G

    2000-11-01

    To determine whether antibiotic regimens with similar rates of response differ significantly in the speed of response and to estimate the impact of this difference on the cost of febrile neutropenia. The time point of clinical response was defined by comparing the sensitivity, specificity, and predictive values of alternative objective and subjective definitions. Data from 488 episodes of febrile neutropenia, treated with either of two commonly used antibiotics (coded A or B) during six clinical trials, were pooled to compare the median time to clinical response, days of antibiotic therapy and hospitalization, and estimated costs. Response rates were similar; however, the median time to clinical response was significantly shorter with A-based regimens (5 days) compared with B-based regimens (7 days; P =.003). After 72 hours of therapy, 33% of patients who received A but only 18% of those who received B had responded (P =.01). These differences resulted in fewer days of antibiotic therapy and hospitalization with A-based regimens (7 and 9 days) compared with B-based regimens (9 and 12 days, respectively; P <.04) and in significantly lower estimated median costs ($8,491 v $11,133 per episode; P =.03). Early discharge at the time of clinical response should reduce the median cost from $10,752 to $8,162 (P <.001). Despite virtually identical rates of response, time to clinical response and estimated cost of care varied significantly among regimens. An early discharge strategy based on our definition of the time point of clinical response may further reduce the cost of treating non-low-risk patients with febrile neutropenia.

  19. Comparison between a serum creatinine-and a cystatin C-based glomerular filtration rate equation in patients receiving amphotericin B.

    PubMed

    Karimzadeh, Iman; Khalili, Hossein

    2016-06-06

    Serum cystatin C (Cys C) has a number of advantages over serum creatinine in the evaluation of kidney function. Apart from the Cys C level itself, several formulas have been introduced in different clinical settings for the estimation of glomerular filtration rate (GFR) based upon serum Cys C level. The aim of the present study was to compare a serum Cys C-based equation with the Cockcroft-Gault serum creatinine-based formula, both used in the calculation of GFR, in patients receiving amphotericin B. Fifty-four adult patients with no history of acute or chronic kidney injury who were scheduled to receive conventional amphotericin B for an anticipated duration of at least 1 week for any indication were recruited. At three time points during amphotericin B treatment (days 0, 7, and 14), serum cystatin C and creatinine levels were measured. GFR at these time points was estimated by both the creatinine-based (Cockcroft-Gault) and the serum Cys C-based equations. There was significant correlation between creatinine-based and Cys C-based GFR values at days 0 (R = 0.606, P = 0.001) and 7 (R = 0.714, P < 0.001). In contrast to GFR estimated by the Cockcroft-Gault equation, the mean (95% confidence interval) Cys C-based GFR values at the studied time points were comparable within as well as between patients with and without amphotericin B nephrotoxicity. Our results suggested that the Gentian Cys C-based GFR equation correlated significantly with the Cockcroft-Gault formula, at least in the early period of treatment with amphotericin B.
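    The creatinine side of the comparison, the Cockcroft-Gault formula, is standard and simple to state; the Cys C equations (e.g. the Gentian calibration) vary by assay and are not reproduced here. A quick sketch:

```python
def cockcroft_gault(age_years, weight_kg, serum_cr_mg_dl, female):
    """Cockcroft-Gault estimated creatinine clearance (mL/min):
    (140 - age) * weight / (72 * SCr), times 0.85 for women."""
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_cr_mg_dl)
    return 0.85 * crcl if female else crcl

# Example: 40-year-old, 70 kg man with serum creatinine 1.0 mg/dL
crcl = cockcroft_gault(40, 70, 1.0, female=False)  # ~97 mL/min
```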

  20. Stated time preferences for health: a systematic review and meta analysis of private and social discount rates.

    PubMed

    Mahboub-Ahari, Alireza; Pourreza, Abolghasem; Sari, Ali Akbari; Rahimi Foroushani, Abbas; Heydari, Hassan

    2014-01-01

    The present study aimed to provide better insight into methodological issues related to time preference studies and to estimate private and social discount rates using a rigorous systematic review and meta-analysis. We searched the PubMed, EMBASE and ProQuest databases in June 2013. All studies that estimated private and social time preference rates for health outcomes through a stated preference approach were eligible for inclusion. We conducted both fixed- and random-effects meta-analyses using the mean discount rate and standard deviation of the included studies. The I-squared statistic was used to test heterogeneity across studies. Private and social discount rates were estimated separately using Stata 11. Out of 44 screened full texts, 8 population-based empirical studies were included in the qualitative synthesis. Reported time preference rates ranged from 0.036 to 0.07 for own health and from 0.04 to 0.2 for social health. Private and social discount rates were estimated at 0.056 (95% CI: 0.038, 0.074) and 0.066 (95% CI: 0.064, 0.068), respectively. Considering the impact of time preference on healthy behaviors and because of timing issues, individuals' time preference, as a key determinant of policy making, should be taken into account. Direct translation of elicited discount rates into official discount rates remains questionable. Decisions about the proper discount rate for the health context may need a cross-party consensus among health economists and policy makers.
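    The fixed- and random-effects pooling described here can be sketched with the standard DerSimonian-Laird estimator and the I-squared heterogeneity statistic (the study-level rates and variances below are hypothetical, not the review's data):

```python
def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) with 95% CI and I^2."""
    k = len(estimates)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0   # heterogeneity share
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical study-level private discount rates and within-study variances
rates = [0.036, 0.05, 0.06, 0.07]
variances = [0.0001, 0.0002, 0.00015, 0.0001]
pooled, ci, i2 = dersimonian_laird(rates, variances)
```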

  1. Testing of Gyroless Estimation Algorithms for the Fuse Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, R.; Thienel, J.; Oshman, Yaakov

    2004-01-01

    This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for the Far Ultraviolet Spectroscopic Explorer (FUSE). The results of two approaches are presented: one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudolinear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the failure of two of the reaction wheels. The question of closed-loop stability is addressed, and the ability of the controller to meet the science slew requirements without the gyros is analyzed.

  2. Reconstruction and analysis of 137Cs fallout deposition patterns in the Marshall Islands.

    PubMed

    Whitcomb, Robert C

    2002-03-01

    Deposition of 137Cs caused by fallout originating from nuclear weapons testing has been estimated for several locations in the Marshall Islands. These retrospective estimates are based primarily on historical exposure rate and gummed film measurements. The methods used to reconstruct these deposition estimates are similar to those used in the National Cancer Institute study for reconstructing 131I deposition from the Nevada Test Site. Reconstructed cumulative deposition estimates are validated against contemporary measurements of 137Cs concentration in soil, with account taken of estimated global fallout contributions. These validations show that the overall geometric bias in predicted-to-observed (P:O) ratios is 1.0 (indicating excellent agreement). The 5th to 95th percentile range of this distribution is 0.35-2.95. The P:O ratios for estimates based on historical gummed film measurements tend to overpredict slightly more than estimates based on exposure rate measurements. The deposition estimate methods, supported by the agreement between estimates and measurements, suggest that these methods can be used with confidence for other weapons testing fallout radionuclides.

  3. Predicting boundary shear stress and sediment transport over bed forms

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    To estimate bed-load sediment transport rates in flows over bed forms such as ripples and dunes, spatially averaged velocity profiles are frequently used to predict mean boundary shear stress. However, such averaging obscures the complex, nonlinear interaction of wake decay, boundary-layer development, and topographically induced acceleration downstream of flow separation and often leads to inaccurate estimates of boundary stress, particularly skin friction, which is critically important in predicting bed-load transport rates. This paper presents an alternative methodology for predicting skin friction over 2D bed forms. The approach is based on combining the equations describing the mechanics of the internal boundary layer with semiempirical structure functions to predict the velocity at the crest of a bed form, where the flow is most similar to a uniform boundary layer. Significantly, the methodology is directed toward making specific predictions only at the bed-form crest, and as a result it avoids the difficulty and questionable validity of spatial averaging. The model provides an accurate estimate of the skin friction at the crest where transport rates are highest. Simple geometric constraints can be used to derive the mean transport rates as long as bed load is dominant.

  4. Estimating the mutual information of an EEG-based Brain-Computer Interface.

    PubMed

    Schlögl, A; Neuper, C; Pfurtscheller, G

    2002-01-01

    An EEG-based Brain-Computer Interface (BCI) could be used as an additional communication channel between human thoughts and the environment. The efficacy of such a BCI depends mainly on the transmitted information rate. Shannon's communication theory was used to quantify the information rate of BCI data. For this purpose, experimental EEG data from four BCI experiments were analyzed off-line. Subjects imagined left and right hand movements during EEG recording from the sensorimotor area. Adaptive autoregressive (AAR) parameters were used as features of single-trial EEG and classified with linear discriminant analysis. The intra-trial variation as well as the inter-trial variability, the signal-to-noise ratio, the entropy of information, and the information rate were estimated. The entropy difference was used as a measure of the separability of two classes of EEG patterns.
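    A common shorthand for the information rate of a BCI, and a simplification of Shannon channel capacity (not necessarily the exact computation used in this paper), is the Wolpaw bits-per-trial formula for an N-class classifier with symmetric errors:

```python
import math

def wolpaw_itr(n_classes, accuracy):
    """Bits per trial for an N-class BCI assuming symmetric errors
    (the widely used Wolpaw simplification of Shannon capacity)."""
    if accuracy == 1.0:
        return math.log2(n_classes)
    assert 0.0 < accuracy < 1.0, "accuracy must be in (0, 1]"
    p, n = accuracy, n_classes
    return (math.log2(n) + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))

# Two-class motor imagery (left vs right hand) at 85% accuracy, 8 s trials
bits_per_trial = wolpaw_itr(2, 0.85)
bits_per_min = bits_per_trial * 60.0 / 8.0
```

    At chance level (50% for two classes) the formula correctly gives zero bits per trial; the class count, accuracy, and trial duration above are illustrative only.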

  5. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets.

    PubMed

    Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong

    2017-01-01

    In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset with information on over 40 million mobile advertisements over a period of 10 days, our experiments show that our new model improves estimation accuracy over the classic Logistic Regression (LR) model by 5.57% and over the Support Vector Regression (SVR) model by 5.80%.

  6. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets

    PubMed Central

    Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong

    2017-01-01

    In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset with information on over 40 million mobile advertisements over a period of 10 days, our experiments show that our new model improves estimation accuracy over the classic Logistic Regression (LR) model by 5.57% and over the Support Vector Regression (SVR) model by 5.80%. PMID:29209363

  7. Gpm Level 1 Science Requirements: Science and Performance Viewed from the Ground

    NASA Technical Reports Server (NTRS)

    Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.

    2016-01-01

    GPM meets Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to L1 requirements. L1 FOV snow detection is largely verified, but at an unknown SWE rate threshold (likely < 0.5–1 mm/hr liquid equivalent). Work is ongoing to improve SWE rate estimation for both satellite and GV remote sensing.

  8. Using strain rates to forecast seismic hazards

    USGS Publications Warehouse

    Evans, Eileen

    2017-01-01

    One essential component in forecasting seismic hazards is observing the gradual accumulation of tectonic strain along faults before this strain is suddenly released as earthquakes. Typically, seismic hazard models are based on geologic estimates of slip rates along faults and historical records of seismic activity, neither of which records actively accumulating strain. But this strain can be estimated by geodesy: the precise measurement of tiny position changes of Earth’s surface, obtained from GPS, interferometric synthetic aperture radar (InSAR), or a variety of other instruments.

  9. Integration of manatee life-history data and population modeling

    USGS Publications Warehouse

    Eberhardt, L.L.; O'Shea, Thomas J.; O'Shea, Thomas J.; Ackerman, B.B.; Percival, H. Franklin

    1995-01-01

    Aerial counts and the number of deaths have been a major focus of attention in attempts to understand the population status of the Florida manatee (Trichechus manatus latirostris). Uncertainties associated with these data have made interpretation difficult. However, knowledge of manatee life-history attributes has increased and now permits the development of a population model. We describe a provisional model based on the classical approach of Lotka. Parameters in the model are based on data from other papers in this volume and draw primarily on observations from the Crystal River, Blue Spring, and Atlantic Coast areas. The model estimates λ (the finite rate of increase) for each study area, and application of the delta method provides estimates of variance components and partial derivatives of λ with respect to key input parameters (reproduction, adult survival, and early survival). In some study areas, only approximations of some parameters are available. Estimates of λ and coefficients of variation (in parentheses) of manatees were 1.07 (0.009) in the Crystal River, 1.06 (0.012) at Blue Spring, and 1.01 (0.012) on the Atlantic Coast. Changing adult survival has a major effect on λ; early-age survival has the smallest effect. Bootstrap comparisons of population growth estimates from trend counts in the Crystal River and at Blue Spring and the reproduction and survival data suggest that the higher, observed rates from counts are probably not due to chance. Bootstrapping for variance estimates based on reproduction and survival data from manatees at Blue Spring and in the Crystal River provided estimates of λ, adult survival, and rates of reproduction that were similar to those obtained by other methods. Our estimates are preliminary and suggest improvements for future data collection and analysis.
However, results support efforts to reduce mortality as the most effective means to promote the increased growth necessary for the eventual recovery of the Florida manatee population.
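    The classical Lotka approach solves the discrete Euler-Lotka equation, the sum over ages of l_x m_x λ^(-x) = 1, for the finite rate of increase. A sketch with a hypothetical long-lived-mammal schedule (not the manatee data):

```python
def lotka_lambda(survivorship, fecundity, lo=0.5, hi=2.0, tol=1e-10):
    """Solve sum_x l_x * m_x * lambda**(-x) = 1 for lambda by bisection.

    Survivorship l_x and fecundity m_x are indexed from age 0; the
    left-hand side is decreasing in lambda, so a simple bracket works.
    """
    def f(lam):
        return sum(l * m * lam ** (-x)
                   for x, (l, m) in enumerate(zip(survivorship, fecundity))) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical schedule: first-year survival 0.7, adult survival 0.95,
# maturity at age 4, 0.3 female offspring per female per year, ages 0-20
l_x = [1.0]
for age in range(1, 21):
    l_x.append(l_x[-1] * (0.7 if age == 1 else 0.95))
m_x = [0.0 if age < 4 else 0.3 for age in range(21)]
lam = lotka_lambda(l_x, m_x)
```

    Sensitivities of λ to adult survival versus early survival (the delta-method step in the paper) can then be approximated by finite differences on these schedules.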

  10. Determining Source Strength of Semivolatile Organic Compounds using Measured Concentrations in Indoor Dust

    PubMed Central

    Shin, Hyeong-Moo; McKone, Thomas E.; Nishioka, Marcia G.; Fallin, M. Daniele; Croen, Lisa A.; Hertz-Picciotto, Irva; Newschaffer, Craig J.; Bennett, Deborah H.

    2014-01-01

    Consumer products and building materials emit a number of semivolatile organic compounds (SVOCs) in the indoor environment. Because indoor SVOCs accumulate in dust, we explore the use of dust to determine source strength and report here on analysis of dust samples collected in 30 U.S. homes for six phthalates, four personal care product ingredients, and five flame retardants. We then use a fugacity-based indoor mass-balance model to estimate the whole house emission rates of SVOCs that would account for the measured dust concentrations. Di-2-ethylhexyl phthalate (DEHP) and di-iso-nonyl phthalate (DiNP) were the most abundant compounds in these dust samples. On the other hand, the estimated emission rate of diethyl phthalate (DEP) is the largest among phthalates, although its dust concentration is over two orders of magnitude smaller than DEHP and DiNP. The magnitude of the estimated emission rate that corresponds to the measured dust concentration is found to be inversely correlated with the vapor pressure of the compound, indicating that dust concentrations alone cannot be used to determine which compounds have the greatest emission rates. The combined dust-assay modeling approach shows promise for estimating indoor emission rates for SVOCs. PMID:24118221

  11. The burden of road traffic crashes, injuries and deaths in Africa: a systematic review and meta-analysis

    PubMed Central

    Thompson, Jacqueline Y; Akanbi, Moses A; Azuh, Dominic; Samuel, Victoria; Omoregbe, Nicholas; Ayo, Charles K

    2016-01-01

    Objective To estimate the burden of road traffic injuries and deaths for all road users and among different road user groups in Africa. Methods We searched MEDLINE, EMBASE, Global Health, Google Scholar, websites of African road safety agencies and organizations for registry- and population-based studies and reports on road traffic injury and death estimates in Africa, published between 1980 and 2015. Available data for all road users and by road user group were extracted and analysed. We conducted a random-effects meta-analysis and estimated pooled rates of road traffic injuries and deaths. Findings We identified 39 studies from 15 African countries. The estimated pooled rate for road traffic injury was 65.2 per 100 000 population (95% confidence interval, CI: 60.8–69.5) and the death rate was 16.6 per 100 000 population (95% CI: 15.2–18.0). Road traffic injury rates increased from 40.7 per 100 000 population in the 1990s to 92.9 per 100 000 population between 2010 and 2015, while death rates decreased from 19.9 per 100 000 population in the 1990s to 9.3 per 100 000 population between 2010 and 2015. The highest road traffic death rate was among motorized four-wheeler occupants at 5.9 per 100 000 population (95% CI: 4.4–7.4), closely followed by pedestrians at 3.4 per 100 000 population (95% CI: 2.5–4.2). Conclusion The burden of road traffic injury and death is high in Africa. Since registry-based reports underestimate the burden, a systematic collation of road traffic injury and death data is needed to determine the true burden. PMID:27429490

  12. Long term monitoring of jaguars in the Cockscomb Basin Wildlife Sanctuary, Belize; Implications for camera trap studies of carnivores.

    PubMed

    Harmsen, Bart J; Foster, Rebecca J; Sanchez, Emma; Gutierrez-González, Carmina E; Silver, Scott C; Ostro, Linde E T; Kelly, Marcella J; Kay, Elma; Quigley, Howard

    2017-01-01

    In this study, we estimate life history parameters and abundance for a protected jaguar population using camera-trap data from a 14-year monitoring program (2002-2015) in Belize, Central America. We investigated the dynamics of this jaguar population using 3,075 detection events of 105 individual adult jaguars. Using robust design open population models, we estimated apparent survival and temporary emigration and investigated individual heterogeneity in detection rates across years. Survival probability was high and constant among the years for both sexes (φ = 0.78), and the maximum (conservative) age recorded was 14 years. Temporary emigration rate for the population was random, but constant through time at 0.20 per year. Detection probability varied between sexes, and among years and individuals. Heterogeneity in detection took the form of a dichotomy for males: those with consistently high detection rates, and those with low, sporadic detection rates, suggesting a relatively stable population of 'residents' consistently present and a fluctuating layer of 'transients'. Female detection was always low and sporadic. On average, twice as many males than females were detected per survey, and individual detection rates were significantly higher for males. We attribute sex-based differences in detection to biases resulting from social variation in trail-walking behaviour. The number of individual females detected increased when the survey period was extended from 3 months to a full year. Due to the low detection rates of females and the variable 'transient' male subpopulation, annual abundance estimates based on 3-month surveys had low precision. To estimate survival and monitor population changes in elusive, wide-ranging, low-density species, we recommend repeated surveys over multiple years; and suggest that continuous monitoring over multiple years yields even further insight into population dynamics of elusive predator populations.

  13. Interferon-based anti-viral therapy for hepatitis C virus infection after renal transplantation: an updated meta-analysis.

    PubMed

    Wei, Fang; Liu, Junying; Liu, Fen; Hu, Huaidong; Ren, Hong; Hu, Peng

    2014-01-01

    Hepatitis C virus (HCV) infection is highly prevalent in renal transplant (RT) recipients. Currently, interferon-based (IFN-based) antiviral therapies are the standard approach to controlling HCV infection. In a post-transplantation setting, however, IFN-based therapies appear to have limited efficacy and their use remains controversial. The present study aimed to evaluate the efficacy and safety of IFN-based therapies for HCV infection after RT. We searched PubMed, Embase, Web of Knowledge, and The Cochrane Library (1997-2013) for clinical trials in which transplant patients were given interferon (IFN), pegylated interferon (PEG), interferon plus ribavirin (IFN-RIB), or pegylated interferon plus ribavirin (PEG-RIB). The sustained virological response (SVR) and/or drop-out rates were the primary outcomes. Summary estimates were calculated using the random-effects model of DerSimonian and Laird, with heterogeneity and sensitivity analyses. We identified 12 clinical trials (140 patients in total). The summary estimates for the SVR rate, drop-out rate, and graft rejection rate were 26.6% (95% CI: 15.0-38.1%), 21.1% (95% CI: 10.9-31.2%), and 4% (95% CI: 0.8-7.1%), respectively. The overall SVR rate was 40.6% (24/59) with PEG-based therapy and 20.9% (17/81) with standard IFN-based therapy. The most frequent side effect requiring discontinuation of treatment was graft dysfunction (14 cases, 45.1%). Meta-regression analysis showed that the included covariates contribute to the heterogeneity in the SVR logit rate, but not in the drop-out logit rate. Sensitivity analyses using the random-effects model yielded very similar results to the fixed-effects model. IFN-based therapy for HCV infection after RT has poor efficacy and limited safety. PEG-based therapy is a more effective approach for treating HCV infection post-RT than standard IFN-based therapy.
Future research is required to develop novel strategies to improve therapeutic efficacy and tolerability, and reduce the liver-related morbidity and mortality in this important patient population.

  14. Lost in Translation: Public Policies, Evidence-Based Practice, and Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Dillenburger, Karola; McKerr, Lyn; Jordan, Julie-Ann

    2014-01-01

    Prevalence rates of autism spectrum disorder have risen dramatically over the past few decades (now estimated at 1:50 children). The estimated total annual cost to the public purse in the United States is US$137 billion, with an individual lifetime cost in the United Kingdom estimated at between £0.8 million and £1.23 million depending on the…

  15. Validating Microwave-Based Satellite Rain Rate Retrievals Over TRMM Ground Validation Sites

    NASA Astrophysics Data System (ADS)

    Fisher, B. L.; Wolff, D. B.

    2008-12-01

    Multi-channel, passive microwave instruments are commonly used today to probe the structure of rain systems and to estimate surface rainfall from space. Until the advent of meteorological satellites and the development of remote sensing techniques for measuring precipitation from space, there was no observational system capable of providing accurate estimates of surface precipitation on global scales. Since the early 1970s, microwave measurements from satellites have provided quantitative estimates of surface rainfall by observing the emission and scattering processes due to the existence of clouds and precipitation in the atmosphere. This study assesses the relative performance of microwave precipitation estimates from seven polar-orbiting satellites and the TRMM TMI using four years (2003-2006) of instantaneous radar rain estimates obtained from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The seven polar orbiters include three different sensor types: SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), and AMSR-E. The TMI aboard the TRMM satellite flies in a non-sun-synchronous orbit between 35 S and 35 N latitudes. The rain information from these satellites is combined and used to generate several multi-satellite rain products, namely the Goddard TRMM Multi-satellite Precipitation Analysis (TMPA), NOAA's CPC Morphing Technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). Instantaneous rain rates derived from each sensor were matched to the GV estimates in time and space at a resolution of 0.25 degrees. The study evaluates the measurement and error characteristics of the various satellite estimates through inter-comparisons with GV radar estimates. 
The GV rain observations provided an empirical ground-based reference for assessing the relative performance of each sensor and sensor class. Because the relative performance of the rain algorithms depends on the underlying surface terrain, the data for MELB was further stratified into ocean, land and coast categories using a 0.25 terrain mask. Relative to GV, AMSR-E and the TMI exhibited the highest correlation and skill over the full dynamic range of observed rain rates at both validation sites. The AMSU sensors, on the other hand, exhibited the lowest correlation and skill, though all sensors performed reasonably well compared to GV. The general tendency was for the microwave sensors to overestimate rain rates below 1 mm/hr where the sampling was highest and to underestimate the high rain rates above 10 mm/hr where the sampling was lowest. Underestimation of the low rain rate regime is attributed to difficulties of detecting and measuring low rain rates, while overestimation over the oceans was attributed largely to saturation of the brightness temperatures at high rain rates. Overall biases depended on the relative differences in the total rainfall at the extremes and the performance of each sensor at the nominal rain rates.

  16. The growth and population dynamics of seagrass Thalassia hemprichii in Suli Waters, Ambon Island

    NASA Astrophysics Data System (ADS)

    Tupan, C. I.; Uneputty, Pr A.

    2017-10-01

    The objectives of the research were to determine the rhizome growth, age structure, recruitment rate, and mortality rate of Thalassia hemprichii. Data were collected using a reconstruction technique in which measurements were based on past growth history. The age of the seagrass was based on the plastochrone interval. The recruitment rate was estimated from the age structure of living shoots, while the mortality rate was estimated from the age structure of dead shoots. The research was conducted in the coastal waters of Suli, divided into two stations with different substrates: a mixed substrate of sand and mud (S1) and a mixed substrate of sand and coral fragments (S2). The growth rate of the horizontal rhizome ranged from 4.15 to 8.68 cm year-1, whereas the growth rate of the vertical rhizome was 1.11-1.16 cm year-1. The average age of T. hemprichii varied between 3.22 and 4.15 years. The youngest shoots were 0.38 years old and the oldest 7.82 years. The age distribution was polymodal, reflecting cohorts. The recruitment rate ranged from 0.23 to 0.54 year-1, while the mortality rate ranged from 0.21 to 0.26 year-1. The seagrass population of T. hemprichii in Suli Waters was in an increasing condition, as shown by the recruitment rate exceeding the mortality rate.
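
Estimating a mortality rate from an age structure, as done here for dead shoots, typically assumes an exponential decline in shoot numbers with age, so that the slope of log-count versus age gives the instantaneous rate. A minimal sketch with hypothetical age-frequency data (the counts below are invented for illustration, not the Suli observations):

```python
import numpy as np

# Hypothetical age-frequency data reconstructed from plastochrone
# intervals: age-class midpoints (years) and shoot counts per class.
ages = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5])
counts = np.array([120, 95, 76, 60, 48, 38, 30])

def mortality_rate(ages, counts):
    """Instantaneous mortality rate M (year^-1) from the slope of the
    log-linear decline in shoot numbers with age, assuming steady
    recruitment and age-independent mortality."""
    slope, _ = np.polyfit(ages, np.log(counts), 1)
    return -slope

M = mortality_rate(ages, counts)
print(f"estimated mortality rate: {M:.2f} per year")
```

The same regression applied to living shoots yields a gross recruitment-adjusted decline; comparing the two rates indicates whether the population is growing, as the abstract concludes for Suli Waters.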

  17. Bank Regulation: Analysis of the Failure of Superior Bank, FSB, Hinsdale, Illinois

    DTIC Science & Technology

    2002-02-07

    statement of financial position based on the fair value. The best evidence of fair value is a quoted market price in an active market, but if there is no...market price, the value must be estimated. In estimating the fair value of retained interests, valuation techniques include estimating the present...about interest rates, default, prepayment, and volatility. In 1999, FASB explained that when estimating the fair value for FAS No. 140: Accounting for

  18. Measurement of acoustic velocity components in a turbulent flow using LDV and high-repetition rate PIV

    NASA Astrophysics Data System (ADS)

    Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank

    2017-06-01

    The present study provides theoretical details and experimental validation for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring the amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each velocity component induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results show that the proposed turbulence rejection method, based on estimating cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, yields estimators that are asymptotically efficient in the number of samples. Furthermore, it is shown that the estimator uncertainties can be estimated simply, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted to validate the acoustic velocity estimation approach and the derived uncertainty estimates. While previous studies obtained estimates using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition-rate particle image velocimetry (PIV) can also be employed successfully. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
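
The turbulence-rejection idea (correlating the velocity signal with a phase-locked reference so that uncorrelated turbulence averages out of the cross power spectrum) can be sketched as follows. This is a simplified single-tone illustration on synthetic signals, with invented tone and noise parameters, not the authors' processing chain:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 10_000.0                  # sampling rate, Hz
f0 = 500.0                     # acoustic tone frequency, Hz
A_true, phi_true = 0.30, 0.8   # acoustic velocity amplitude (m/s) and phase
n_seg, n_pts = 200, 1000       # averaging segments x samples per segment

t = np.arange(n_pts) / fs

def tone_coeff(x, t, f0):
    """Complex Fourier coefficient of x at frequency f0 (demodulation)."""
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f0 * t))

num = 0.0 + 0.0j
den = 0.0
for _ in range(n_seg):
    ref = np.cos(2 * np.pi * f0 * t)                    # reference signal
    u = A_true * np.cos(2 * np.pi * f0 * t + phi_true)  # acoustic velocity
    u = u + 0.5 * rng.standard_normal(n_pts)            # turbulence "noise"
    U, R = tone_coeff(u, t, f0), tone_coeff(ref, t, f0)
    num += U * np.conj(R)   # averaged cross power at the tone
    den += np.abs(R) ** 2   # averaged reference auto power

H = num / den               # complex acoustic velocity re. the reference
print(f"amplitude ~= {abs(H):.3f} m/s, phase ~= {np.angle(H):.3f} rad")
```

Because the turbulence is uncorrelated with the reference, its contribution to the averaged cross power shrinks as the number of segments grows, which is the asymptotic-efficiency behaviour the abstract describes.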

  19. Spacecraft Angular Rates Estimation with Gyrowheel Based on Extended High Gain Observer.

    PubMed

    Liu, Xiaokun; Yao, Yu; Ma, Kemao; Zhao, Hui; He, Fenghua

    2016-04-14

    A gyrowheel (GW) is an electromechanical servo system that can serve a spacecraft attitude control system (ACS) as both an actuator and a sensor simultaneously. To solve the problem of sensing two-dimensional spacecraft angular rates while the GW outputs three-dimensional control torque, this paper proposes an extended high-gain observer (EHGO), built on the derived GW mathematical model, to estimate the spacecraft angular rates when the GW rotor operates at large tilt angles. To this end, the GW dynamic equation is first derived with the Lagrange equations of the second kind, and the relationship between the measurable and unmeasurable variables is established. The EHGO is then designed to estimate the spacecraft angular rates from the GW, and its stability is proven with a Lyapunov function. Moreover, with engineering application in mind, the effect of measurement noise in the tilt angle sensors on the estimation accuracy of the EHGO is analyzed. Finally, numerical simulation illustrates the validity of the proposed method.
