Estimation of rates-across-sites distributions in phylogenetic substitution models.
Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J
2003-10-01
Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
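As a hedged illustration of the equal-probability discrete gamma discussed above (not code from the paper), the sketch below computes each category rate as the mean of one of k equal-probability bins of a mean-1 gamma; increasing k gives the flexible many-category approximation the article exploits. The shape value is illustrative; SciPy is assumed.

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def discrete_gamma_rates(alpha, k):
    """Mean rate in each of k equal-probability bins of a Gamma(alpha)
    distribution scaled to mean 1 (shape = rate = alpha)."""
    # Interior bin boundaries at the i/k quantiles.
    edges = gamma.ppf(np.arange(1, k) / k, a=alpha, scale=1.0 / alpha)
    # Partial expectation of the mean-1 gamma up to t: E[X; X <= t] = P(alpha+1, alpha*t).
    cum = np.concatenate(([0.0], gammainc(alpha + 1.0, alpha * edges), [1.0]))
    return k * np.diff(cum)  # each bin has probability 1/k

for k in (4, 16):
    r = discrete_gamma_rates(alpha=0.5, k=k)
    print(k, r.round(3), "mean:", round(r.mean(), 6))  # mean is 1 by construction
```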
Hansson, S.; Rudstam, L. G.; Kitchell, J.F.; Hilden, M.; Johnson, B.L.; Peppard, P.E.
1996-01-01
We compared four different methods for estimating predation rates by North Sea cod (Gadus morhua). Three estimates, based on gastric evacuation rates, came from an ICES multispecies working group and the fourth from a bioenergetics model. The bioenergetics model was developed from a review of literature on cod physiology. The three gastric evacuation rate models produced very different prey consumption estimates for small (2 kg) fish. For most size and age classes, the bioenergetics model predicted food consumption rates intermediate to those predicted by the gastric evacuation models. Using the standard ICES model and the average population abundance and age structure for 1974-1989, annual prey consumption by the North Sea cod population (age ≥ 1) was 840 kilotons. The other two evacuation rate models produced estimates of 1020 and 1640 kilotons, respectively. The bioenergetics model estimate was 1420 kilotons. The major differences between models were due to consumption rate estimates for younger age groups of cod.
Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise
2017-08-25
The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine to that of controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, and sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower, by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24), compared with non-fixed-intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed-intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
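A minimal sketch of the comparison at the heart of this paper, on synthetic data: fitting log-linear growth with the intercept free versus fixed to the (log10) inoculum size. All values are illustrative, not from the IBSM studies.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1.0, 9.0)                       # days post-inoculation
inoc = 2.0                                    # log10 inoculum-derived parasitaemia
true_rate = 0.75                              # log10 parasites per mL per day
y = (inoc + 0.4) + true_rate * t + rng.normal(0, 0.3, t.size)  # observed log10 values

slope_free, intercept_free = np.polyfit(t, y, 1)        # intercept estimated
slope_fixed = np.sum(t * (y - inoc)) / np.sum(t * t)    # intercept forced to inoc

print(f"free-intercept rate:  {slope_free:.2f} log10/day")
print(f"fixed-intercept rate: {slope_fixed:.2f} log10/day")  # differs when true intercept != inoc
```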
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates.
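For background, the conjugate building block underlying such models, as a minimal sketch: a single homogeneous-Poisson event rate with a gamma prior. (The paper treats correlated rates with a multivariate gamma prior; this univariate update is just the base case, with illustrative numbers.)

```python
# Prior: lambda ~ Gamma(a, b) with shape a and rate b; data: n events in exposure T.
# Conjugacy gives the posterior lambda | data ~ Gamma(a + n, b + T).
a, b = 2.0, 4.0     # prior mean a/b = 0.5 events per unit time
n, T = 7, 10.0      # observed: 7 events over 10 time units

a_post, b_post = a + n, b + T
print("posterior mean:", a_post / b_post)             # (2+7)/(4+10) ~= 0.643
print("posterior sd:", (a_post / b_post**2) ** 0.5)
```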
Malaria transmission rates estimated from serological data.
Burattini, M. N.; Massad, E.; Coutinho, F. A.
1993-01-01
A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The estimated transmission rates were then applied to a simple compartmental model in order to mimic malaria transmission. The model reproduced both serological and parasite prevalence data well.
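A minimal sketch of the classic catalytic idea such models build on: with an age-dependent force of infection λ(a), expected seroprevalence at age a is p(a) = 1 − exp(−∫₀ᵃ λ(u) du). The piecewise-constant λ values below are illustrative, not the paper's estimates.

```python
import numpy as np

# Catalytic model: p(a) = 1 - exp(-cumulative force of infection up to age a).
# Illustrative piecewise-constant force of infection (per year) by age band.
bands = [(0, 5, 0.02), (5, 15, 0.05), (15, 80, 0.03)]  # (from_age, to_age, lambda)

def seroprevalence(age):
    cum = 0.0
    for lo, hi, lam in bands:
        cum += lam * max(0.0, min(age, hi) - lo)  # exposure time within the band
    return 1.0 - np.exp(-cum)

for a in (1, 10, 30):
    print(a, round(seroprevalence(a), 3))
```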
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and a dynamic model for estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this diminished over time.
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing.
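The simplest of the three methods compared above is easy to show as a sketch on illustrative numbers: regress root-to-tip distance on sampling date; the slope estimates the substitution rate and the x-intercept the root date. The distances and dates below are hypothetical.

```python
import numpy as np

# Hypothetical root-to-tip distances (substitutions/site) and sampling dates
dates = np.array([2000.1, 2003.5, 2007.9, 2010.2, 2014.7, 2016.0])
dist = np.array([0.012, 0.016, 0.021, 0.023, 0.029, 0.030])

rate, intercept = np.polyfit(dates, dist, 1)   # slope = substitutions/site/year
root_date = -intercept / rate                  # x-intercept = inferred root age
print(f"rate ~ {rate:.2e} subs/site/year, root ~ {root_date:.0f}")
```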
Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D
2014-04-01
The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, whether caused by hunter selectivity or by changes in a marked animal's behavior.
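The correction this joint model formalizes can be seen in a moment-style sketch (an illustration of the idea only, not the authors' joint known-fate/Brownie likelihood): telemetry estimates tagging-to-season survival, which rescales the naive tag-recovery harvest rate. Sample sizes are illustrative.

```python
# Telemetry estimates survival from tagging to the hunting season (s0);
# tag recoveries then estimate the harvest rate among the survivors.
n_radio, alive_at_season = 25, 22          # known-fate telemetry sample
n_tagged, recovered = 100, 18              # reward-tagged sample, tags returned

s0 = alive_at_season / n_radio             # tagging-to-season survival
h_naive = recovered / n_tagged             # biased low whenever s0 < 1
h_joint = recovered / (n_tagged * s0)      # corrected harvest rate

print(f"s0 = {s0:.2f}, naive h = {h_naive:.2f}, corrected h = {h_joint:.2f}")
```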
Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.
2011-01-01
Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
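A minimal single-season occupancy likelihood in the spirit of the simple model described above (constant ψ and p), with illustrative detection counts; maximizing it separates occupancy from detection, which the naive proportion of sites with detections conflates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# y[i] = number of detections at site i over J repeat visits (illustrative data)
J = 4
y = np.array([0, 0, 2, 1, 0, 3, 0, 0, 1, 0, 0, 2])

def nll(params):
    psi, p = 1.0 / (1.0 + np.exp(-params))    # logit -> probability
    like = np.where(
        y > 0,
        psi * binom.pmf(y, J, p),             # detected: site must be occupied
        psi * (1 - p) ** J + (1 - psi),       # never detected: occupied-but-missed or empty
    )
    return -np.sum(np.log(like))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"naive occupancy {np.mean(y > 0):.2f}, psi-hat {psi_hat:.2f}, p-hat {p_hat:.2f}")
```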
Keall, Michael D; Newstead, Stuart
2016-01-01
Vehicle safety rating systems aim firstly to inform consumers about safe vehicle choices and, secondly, to encourage vehicle manufacturers to aspire to safer levels of vehicle performance. Primary rating systems (that measure the ability of a vehicle to assist the driver in avoiding crashes) have not been developed for a variety of reasons, mainly associated with the difficult task of disassociating driver behavior and vehicle exposure characteristics from the estimation of crash involvement risk specific to a given vehicle. The aim of the current study was to explore different approaches to primary safety estimation, identifying which approaches (if any) may be most valid and most practical, given typical data that may be available for producing ratings. Data analyzed consisted of crash data and motor vehicle registration data for the period 2003 to 2012: 21,643,864 observations (representing vehicle-years) and 135,578 crashed vehicles. Various logistic models were tested as a means to estimate primary safety: Conditional models (conditioning on the vehicle owner over all vehicles owned); full models not conditioned on the owner, with all available owner and vehicle data; reduced models with few variables; induced exposure models; and models that synthesised elements from the latter two models. It was found that excluding young drivers (aged 25 and under) from all primary safety estimates attenuated some high risks estimated for make/model combinations favored by young people. The conditional model had clear biases that made it unsuitable. Estimates from a reduced model based just on crash rates per year (but including an owner location variable) produced estimates that were generally similar to the full model, although there was more spread in the estimates. The best replication of the full model estimates was generated by a synthesis of the reduced model and an induced exposure model. This study compared approaches to estimating primary safety that could mimic an analysis based on a very rich data set, using variables that are commonly available when registered fleet data are linked to crash data. This exploratory study has highlighted promising avenues for developing primary safety rating systems for vehicle makes and models.
Evaluation of earthquake potential in China
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a Gutenberg-Richter magnitude distribution with modifications at higher magnitudes. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimates, I adopted the seismic source zones used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. Taking the smoothed seismicity model as the null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
The Galactic Nova Rate Revisited
Shafter, A. W.
2017-01-01
Despite its fundamental importance, a reliable estimate of the Galactic nova rate has remained elusive. Here, the overall Galactic nova rate is estimated by extrapolating the observed rate for novae reaching m ≤ 2 to include the entire Galaxy, using a two-component disk-plus-bulge model for the distribution of stars in the Milky Way. The present analysis improves on previous work by considering important corrections for incompleteness in the observed rate of bright novae and by employing a Monte Carlo analysis to better estimate the uncertainty in the derived nova rates. Several models are considered to account for differences in the assumed properties of bulge and disk nova populations and in the absolute magnitude distribution. The simplest models, which assume uniform properties between bulge and disk novae, predict Galactic nova rates of ~50 to in excess of 100 per year, depending on the assumed incompleteness at bright magnitudes. Models where the disk novae are assumed to be more luminous than bulge novae are explored, and predict nova rates up to 30% lower, in the range of ~35 to ~75 per year. An average of the most plausible models yields a rate of 50 (+31/−23) yr⁻¹, which is arguably the best estimate currently available for the nova rate in the Galaxy. Virtually all models produce rates that represent significant increases over recent estimates, and bring the Galactic nova rate into better agreement with that expected based on comparison with the latest results from extragalactic surveys.
Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma
2014-01-01
A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. This contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although ordinary AUC could not be estimated. Additionally, we introduced the bias-correction method of imputation-based AUCs and found that the bias-corrected estimate successfully compensated the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data.
Nilsen, Erlend B; Strand, Olav
2018-01-01
We developed a model for estimating demographic rates and population abundance based on multiple data sets revealing information about population age- and sex structure. Such models have previously been described in the literature as change-in-ratio models, but we extend their applicability by i) using time series data that allow the full temporal dynamics to be modelled, ii) casting the model in an explicit hierarchical modelling framework, and iii) estimating parameters by Bayesian inference. Based on sensitivity analyses, we conclude that the approach developed here can estimate demographic rates with high precision whenever unbiased data on population structure are available. Our simulations revealed that this was true even when data on population abundance were not available or not included in the modelling framework. Nevertheless, when data on population structure are biased due to different observability of different age- and sex categories, this will affect estimates of all demographic rates. Estimates of population size are particularly sensitive to such biases, whereas demographic rates can be estimated relatively precisely even with biased observation data, as long as the bias is not severe. We then used the models to estimate demographic rates and population abundance for two Norwegian reindeer (Rangifer tarandus) populations where age-sex data were available for all harvested animals, where population structure surveys were carried out in early summer (after calving) and late fall (after the hunting season), and where population size was counted in winter. We found that demographic rates were similar regardless of whether we included population count data in the modelling, but that the estimated population size was affected by this decision. This suggests that monitoring programs that focus on population age- and sex structure will benefit from collecting additional data that allow estimation of observability for different age- and sex classes. In addition, our sensitivity analysis suggests that monitoring changes in demographic rates might be more feasible than monitoring abundance in many situations where data on population age- and sex structure can be collected.
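For orientation, the classic two-sample change-in-ratio estimator that this hierarchical model generalizes (Paulik-Robson form; all numbers illustrative): abundance follows from the shift in sex ratio induced by a known sex-biased removal.

```python
# Two-sample change-in-ratio estimator: N1 = (Rx - p2 * R) / (p1 - p2),
# where p1, p2 are pre- and post-harvest proportions of class x in structure
# surveys, Rx is the class-x removal, and R the total removal.
p1 = 0.45                     # proportion of males before the hunting season
p2 = 0.30                     # proportion of males after the hunting season
R_male, R_total = 220, 400    # harvested males and total harvest

N1 = (R_male - p2 * R_total) / (p1 - p2)   # pre-harvest abundance
print(f"estimated pre-harvest abundance: {N1:.0f}")  # (220 - 120) / 0.15 ~= 667
```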
Estimation in a discrete tail rate family of recapture sampling models
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling designs for debugging experiments, the problem of estimating the error, or hitting, rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed-sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
Dornburg, Alex; Brandley, Matthew C; McGowen, Michael R; Near, Thomas J
2012-02-01
Various nucleotide substitution models have been developed to accommodate among lineage rate heterogeneity, thereby relaxing the assumptions of the strict molecular clock. Recently developed "uncorrelated relaxed clock" and "random local clock" (RLC) models allow decoupling of nucleotide substitution rates between descendant lineages and are thus predicted to perform better in the presence of lineage-specific rate heterogeneity. However, it is uncertain how these models perform in the presence of punctuated shifts in substitution rate, especially between closely related clades. Using cetaceans (whales and dolphins) as a case study, we test the performance of these two substitution models in estimating both molecular rates and divergence times in the presence of substantial lineage-specific rate heterogeneity. Our RLC analyses of whole mitochondrial genome alignments find evidence for up to ten clade-specific nucleotide substitution rate shifts in cetaceans. We provide evidence that in the uncorrelated relaxed clock framework, a punctuated shift in the rate of molecular evolution within a subclade results in posterior rate estimates that are either misled or intermediate between the disparate rate classes present in baleen and toothed whales. Using simulations, we demonstrate abrupt changes in rate isolated to one or a few lineages in the phylogeny can mislead rate and age estimation, even when the node of interest is calibrated. We further demonstrate how increasing prior age uncertainty can bias rate and age estimates, even while the 95% highest posterior density around age estimates decreases; in other words, increased precision for an inaccurate estimate. We interpret the use of external calibrations in divergence time studies in light of these results, suggesting that rate shifts at deep time scales may mislead inferences of absolute molecular rates and ages.
Optimal firing rate estimation
Paulin, M. G.; Hoffman, L. F.
2001-01-01
We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (I_s), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum-I_s criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise-timing and average-rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit on the order of 1 bit of stimulus-related information per spike.
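A sketch of Gaussian-kernel firing-rate estimation using the paper's bandwidth heuristic, read here as a kernel width near one mean interspike interval (an interpretive assumption); spike times are simulated, not from the bullfrog data.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rate = 10.0, 20.0                            # seconds, spikes/s
spikes = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))

sigma = 1.0 / rate                              # kernel width ~ one mean ISI (assumption)
t = np.linspace(0, T, 2000)
# Sum a unit-area Gaussian kernel centered on each spike time.
rate_est = np.sum(
    np.exp(-0.5 * ((t[:, None] - spikes[None, :]) / sigma) ** 2), axis=1
) / (sigma * np.sqrt(2 * np.pi))

print(f"mean of estimate: {rate_est.mean():.1f} spikes/s (true {rate}; edges truncate slightly)")
```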
Estimating Divergence Dates and Substitution Rates in the Drosophila Phylogeny
Obbard, Darren J.; Maclennan, John; Kim, Kang-Wook; Rambaut, Andrew; O’Grady, Patrick M.; Jiggins, Francis M.
2012-01-01
An absolute timescale for evolution is essential if we are to associate evolutionary phenomena, such as adaptation or speciation, with potential causes, such as geological activity or climatic change. Timescales in most phylogenetic studies use geologically dated fossils or phylogeographic events as calibration points, but more recently, it has also become possible to use experimentally derived estimates of the mutation rate as a proxy for substitution rates. The large radiation of drosophilid taxa endemic to the Hawaiian islands has provided multiple calibration points for the Drosophila phylogeny, thanks to the "conveyor belt" process by which this archipelago forms and is colonized by species. However, published date estimates for key nodes in the Drosophila phylogeny vary widely, and many are based on simplistic models of colonization and coalescence or on estimates of island age that are not current. In this study, we use new sequence data from seven species of Hawaiian Drosophila to examine a range of explicit coalescent models and estimate substitution rates. We use these rates, along with a published experimentally determined mutation rate, to date key events in drosophilid evolution. Surprisingly, our estimate for the date for the most recent common ancestor of the genus Drosophila based on mutation rate (25–40 Ma) is closer to being compatible with independent fossil-derived dates (20–50 Ma) than are most of the Hawaiian-calibration models and also has smaller uncertainty. We find that Hawaiian-calibrated dates are extremely sensitive to model choice and give rise to point estimates that range between 26 and 192 Ma, depending on the details of the model. Potential problems with the Hawaiian calibration may arise from systematic variation in the molecular clock due to the long generation time of Hawaiian Drosophila compared with other Drosophila and/or uncertainty in linking island formation dates with colonization dates. As either source of error will bias estimates of divergence time, we suggest mutation rate estimates be used until better models are available.
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
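A hedged sketch of the bias-corrected, transformed-linear rating curve on synthetic data: fit log C = a + b log Q, then correct the back-transformation with either the parametric exp(s²/2) factor or Duan's nonparametric smearing estimator. The coefficients and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
Q = rng.lognormal(mean=3.0, sigma=0.8, size=200)        # discharge
C = 0.05 * Q**1.4 * rng.lognormal(0, 0.5, size=200)     # sediment concentration

# Fit the log-linear rating curve: log C = a + b log Q
b, a = np.polyfit(np.log(Q), np.log(C), 1)
resid = np.log(C) - (a + b * np.log(Q))

# Naive back-transform underestimates the mean; two standard corrections:
cf_parametric = np.exp(resid.var(ddof=2) / 2)   # lognormal-theory factor
cf_smearing = np.mean(np.exp(resid))            # Duan's smearing estimator

C_hat = np.exp(a + b * np.log(Q))
print("naive mean-concentration ratio:", round(C_hat.mean() / C.mean(), 3))
print("corrected:", round((cf_smearing * C_hat).mean() / C.mean(), 3))
```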
Improving estimates of tree mortality probability using potential growth rate
Das, Adrian J.; Stephenson, Nathan L.
2015-01-01
Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.
Estimating Infiltration Rates for a Loessal Silt Loam Using Soil Properties
M. Dean Knighton
1978-01-01
Soil properties were related to infiltration rates as measured by single-ring, steady-head infiltrometers. The properties showing strong simple correlations were identified. Regression models were developed to estimate infiltration rate from several soil properties. The best model gave fair agreement with measured rates at another location.
A projection of lesser prairie chicken (Tympanuchus pallidicinctus) populations range-wide
Cummings, Jonathan W.; Converse, Sarah J.; Moore, Clinton T.; Smith, David R.; Nichols, Clay T.; Allan, Nathan L.; O'Meilia, Chris M.
2017-08-09
We built a population viability analysis (PVA) model to predict future population status of the lesser prairie-chicken (Tympanuchus pallidicinctus, LEPC) in four ecoregions across the species’ range. The model results will be used in the U.S. Fish and Wildlife Service's (FWS) Species Status Assessment (SSA) for the LEPC. Our stochastic projection model combined demographic rate estimates from previously published literature with demographic rate estimates that integrate the influence of climate conditions. This LEPC PVA projects declining populations with estimated population growth rates well below 1 in each ecoregion regardless of habitat or climate change. These results are consistent with estimates of LEPC population growth rates derived from other demographic process models. Although the absolute magnitude of the decline is unlikely to be as low as modeling tools indicate, several different lines of evidence suggest LEPC populations are declining.
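A toy stage-structured projection illustrating how demographic rates combine into a growth rate (the matrix below is hypothetical, not the LEPC PVA's rates): the dominant eigenvalue is the asymptotic growth rate λ, and λ < 1 implies projected decline.

```python
import numpy as np

# Hypothetical two-stage (yearling, adult) female-based projection matrix:
# fecundities on the top row, survival transitions on the bottom row.
A = np.array([
    [0.30, 0.55],   # female recruits per yearling / per adult female
    [0.45, 0.60],   # yearling->adult and adult->adult survival
])

lam = np.max(np.real(np.linalg.eigvals(A)))
print(f"asymptotic growth rate lambda = {lam:.3f}")  # ~0.97 < 1 => projected decline
```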
Incorporating harvest rates into the sex-age-kill model for white-tailed deer
Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.
2013-01-01
Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, the precision and reliability of these models when tested against empirical measures of variability and bias largely is untested. We were able to statistically evaluate the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured using data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters or doubling the number of deer checked by personnel in the field reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates for adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. Because obtaining appropriate sample sizes, by management unit, to estimate harvest rate parameters each year may be too expensive, assumptions of constant annual harvest rates may be necessary. However, if changes in harvest regulations or hunter behavior influence subadult male harvest rates, the PASAK model could provide an unreliable index to population changes.
Green, Christopher T.; Jurgens, Bryant; Zhang, Yong; Starn, Jeffrey; Singleton, Michael J.; Esser, Bradley K.
2016-01-01
Rates of oxygen and nitrate reduction are key factors in determining the chemical evolution of groundwater. Little is known about how these rates vary and covary in regional groundwater settings, as few studies have focused on regional datasets with multiple tracers and methods of analysis that account for effects of mixed residence times on apparent reaction rates. This study provides insight into the characteristics of residence times and rates of O2 reduction and denitrification (NO3− reduction) by comparing reaction rates using multi-model analytical residence time distributions (RTDs) applied to a data set of atmospheric tracers of groundwater age and geochemical data from 141 well samples in the Central Eastern San Joaquin Valley, CA. The RTD approach accounts for mixtures of residence times in a single sample to provide estimates of in-situ rates. Tracers included SF6, CFCs, 3H, He from 3H (tritiogenic He), 14C, and terrigenic He. Parameter estimation and multi-model averaging were used to establish RTDs with lower error variances than those produced by individual RTD models. The set of multi-model RTDs was used in combination with NO3− and dissolved gas data to estimate zero-order and first-order rates of O2 reduction and denitrification. Results indicated that O2 reduction and denitrification rates followed approximately log-normal distributions. Rates of O2 and NO3− reduction were correlated and, on an electron milliequivalent basis, denitrification rates tended to exceed O2 reduction rates. Estimated historical NO3− trends were similar to historical measurements. Results show that the multi-model approach can improve estimation of age distributions, and that relatively easily measured O2 rates can provide information about trends in denitrification rates, which are more difficult to estimate.
Cunningham, Marc; Bock, Ariella; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana
2015-09-01
Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed.
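The direct CYP-based model is simple enough to show in full as a hedged sketch; the CYP factor used is the commonly published value for 3-month injectables (4 doses per couple-year of protection), and the other numbers are illustrative.

```python
# Direct CYP-based estimate of public-sector prevalence for one method.
doses_distributed = 1_200_000   # injectable doses issued through the public sector in a year
cyp_per_dose = 1 / 4            # 4 doses protect one couple for one year (published factor)
wra = 5_000_000                 # women of reproductive age (15-49) in the country

cyp = doses_distributed * cyp_per_dose
prevalence = cyp / wra * 100
print(f"estimated public-sector injectable prevalence: {prevalence:.1f}%")  # 6.0%
```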
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
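A minimal sketch of the adjustment described: if deaths arrive in incidents of size Y, the total count is overdispersed relative to Poisson by the factor E[Y²]/E[Y], which widens the rate interval accordingly. The incident-size data below are illustrative.

```python
import numpy as np

# Deaths per incident; multiple-fatality incidents inflate the variance of the
# total count: Var(total) = E[total] * E[Y^2]/E[Y] under a compound Poisson model.
sizes = np.array([1] * 180 + [2] * 12 + [3] * 3 + [5] * 1)
deaths, pop = sizes.sum(), 2_500_000

vif = np.mean(sizes**2) / np.mean(sizes)     # equals 1 for the simple Poisson model
rate = deaths / pop * 100_000
se = np.sqrt(deaths * vif) / pop * 100_000   # Poisson SE scaled by sqrt(VIF)
print(f"rate {rate:.2f} per 100k, 95% CI +/- {1.96 * se:.2f} (VIF {vif:.2f})")
```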
Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.
Robinson, John D; Hall, David W; Wares, John P
2013-05-01
Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography.
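A toy ABC rejection sampler showing the mechanics used in such studies (the paper's simulations were metapopulation-coalescent models; this binomial stand-in only illustrates the accept/reject logic, with illustrative numbers).

```python
import numpy as np

rng = np.random.default_rng(7)

# Estimate an extinction rate e from an observed count of extinctions across
# patches, keeping prior draws whose simulated summary lands near the observation.
n_patches, observed_extinctions = 40, 11

prior = rng.uniform(0, 1, 100_000)                         # e ~ Uniform(0, 1)
sim = rng.binomial(n_patches, prior)                       # simulate data per draw
accepted = prior[np.abs(sim - observed_extinctions) <= 1]  # tolerance = 1

print(f"posterior mean {accepted.mean():.3f}, 95% interval "
      f"({np.quantile(accepted, 0.025):.3f}, {np.quantile(accepted, 0.975):.3f})")
```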
Grant, Evan H. Campbell; Zipkin, Elise; Sillett, T. Scott; Chandler, Richard; Royle, J. Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and spatial variation in abundance, from count data alone. We extend recently developed multistate, open population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection efforts (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales.
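For intuition, a minimal closed, single-state N-mixture likelihood is sketched below; the study's multistate open-population model with unassigned states is considerably richer, and all values here are simulated.

```python
import numpy as np
from scipy import optimize, stats

def nmix_nll(params, counts, n_max=200):
    """Negative log-likelihood of a basic closed N-mixture model:
    counts[i, t] ~ Binomial(N_i, p) with N_i ~ Poisson(lam).
    A single-state, closed-population sketch, far simpler than the
    multistate open model used in the study."""
    lam = np.exp(params[0])                    # abundance mean, kept positive
    p = 1.0 / (1.0 + np.exp(-params[1]))       # detection probability in (0, 1)
    nvals = np.arange(n_max + 1)
    prior = stats.poisson.pmf(nvals, lam)
    nll = 0.0
    for site in counts:
        like_n = np.ones_like(prior)
        for y in site:                         # marginalize over latent N
            like_n = like_n * stats.binom.pmf(y, nvals, p)
        nll -= np.log(np.sum(like_n * prior) + 1e-300)
    return nll

rng = np.random.default_rng(0)
true_N = rng.poisson(20, size=50)                          # latent abundances
counts = rng.binomial(true_N[:, None], 0.4, size=(50, 3))  # repeated counts
fit = optimize.minimize(nmix_nll, x0=[np.log(10.0), 0.0], args=(counts,))
print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))   # ~20 and ~0.4
```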
Hisano, Mizue; Connolly, Sean R; Robbins, William D
2011-01-01
Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.
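The bootstrapping step can be illustrated with a simple percentile bootstrap over hypothetical per-method growth-rate estimates; the authors' implementation resamples the underlying demographic quantities rather than final point estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_ci(estimates, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for a demographic rate; a sketch of the
    uncertainty-quantification step described above, not the authors' scheme."""
    samples = rng.choice(estimates, size=(n_boot, len(estimates)), replace=True)
    boot_means = samples.mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return estimates.mean(), (lo, hi)

# Hypothetical per-method estimates of annual population growth rate
growth = np.array([0.91, 0.88, 0.93, 0.90, 0.89])
print(bootstrap_ci(growth))
```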
Petrović, Jelena; Dragović, Snežana; Dragović, Ranko; Đorđević, Milan; Đokić, Mrđan; Zlatković, Bojan; Walling, Desmond
2016-07-01
The need for reliable assessments of soil erosion rates in Serbia has directed attention to the potential for using (137)Cs measurements to derive estimates of soil redistribution rates. Since, to date, this approach has not been applied in southeastern Serbia, a reconnaissance study was undertaken to confirm its viability. The need to take account of the occurrence of substantial Chernobyl fallout was seen as a potential problem. Samples for (137)Cs measurement were collected from a zone of uncultivated soils in the watersheds of the Pčinja and South Morava Rivers, an area with known high soil erosion rates. Two theoretical conversion models, the profile distribution (PD) model and the diffusion and migration (D&M) model, were used to derive estimates of soil erosion and deposition rates from the (137)Cs measurements. The estimates of soil redistribution rates derived using the PD and D&M models were found to differ substantially, and this difference was ascribed to the assumptions of the simpler PD model that cause it to overestimate rates of soil loss. The results provided by the D&M model were judged to be more reliable. Copyright © 2016 Elsevier Ltd. All rights reserved.
Inferring invasive species abundance using removal data from management actions
Davis, Amy J.; Hooten, Mevin B.; Miller, Ryan S.; Farnsworth, Matthew L.; Lewis, Jesse S.; Moxcey, Michael; Pepin, Kim M.
2016-01-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric for evaluating population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480–19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50) the effective removal rate needed to accurately estimate abundance was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates.
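A minimal constant-effort removal estimator conveys the core idea; the paper's Bayesian hierarchical model additionally models varying effort and partial availability. The removal counts below are hypothetical.

```python
import numpy as np
from scipy import optimize, stats

def removal_nll(params, removals):
    """Negative log-likelihood for a constant-effort removal model: at each
    pass, each remaining animal is removed with probability p. A minimal
    fixed-effort sketch, far simpler than the study's hierarchical model."""
    N, p = params
    nll, remaining = 0.0, int(N)
    for r in removals:
        nll -= stats.binom.logpmf(r, remaining, p)
        remaining -= r
    return nll

removals = [120, 70, 45]  # hypothetical counts from three aerial-gunning passes
best = None
for N in range(sum(removals), 2000):  # profile likelihood over abundance N
    res = optimize.minimize_scalar(
        lambda p: removal_nll((N, p), removals),
        bounds=(1e-6, 1 - 1e-6), method="bounded")
    if best is None or res.fun < best[2]:
        best = (N, res.x, res.fun)
print(best[:2])  # ML estimates of abundance and per-pass removal probability
```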
Incorporating movement patterns to improve survival estimates for juvenile bull trout
Bowerman, Tracy; Budy, Phaedra
2012-01-01
Populations of many fish species are sensitive to changes in vital rates during early life stages, but our understanding of the factors affecting growth, survival, and movement patterns is often extremely limited for juvenile fish. These critical information gaps are particularly evident for bull trout Salvelinus confluentus, a threatened Pacific Northwest char. We combined several active and passive mark–recapture and resight techniques to assess migration rates and estimate survival for juvenile bull trout (70–170 mm total length). We evaluated the relative performance of multiple survival estimation techniques by comparing results from a common Cormack–Jolly–Seber (CJS) model, the less widely used Barker model, and a simple return rate (an index of survival). Juvenile bull trout of all sizes emigrated from their natal habitat throughout the year, and thereafter migrated up to 50 km downstream. With the CJS model, high emigration rates led to an extreme underestimate of apparent survival, a combined estimate of site fidelity and survival. In contrast, the Barker model, which allows survival and emigration to be modeled as separate parameters, produced estimates of survival that were much less biased than the return rate. Estimates of age-class-specific annual survival from the Barker model based on all available data were 0.218±0.028 (estimate±SE) for age-1 bull trout and 0.231±0.065 for age-2 bull trout. This research demonstrates the importance of incorporating movement patterns into survival analyses, and we provide one of the first field-based estimates of juvenile bull trout annual survival in relatively pristine rearing conditions. These estimates can provide a baseline for comparison with future studies in more impacted systems and will help managers develop reliable stage-structured population models to evaluate future recovery strategies.
Impact of transverse and longitudinal dispersion on first-order degradation rate constant estimation
NASA Astrophysics Data System (ADS)
Stenback, Greg A.; Ong, Say Kee; Rogers, Shane W.; Kjartanson, Bruce H.
2004-09-01
A two-dimensional analytical model is employed for estimating the first-order degradation rate constant of hydrophobic organic compounds (HOCs) in contaminated groundwater under steady-state conditions. The model may utilize all aqueous concentration data collected downgradient of a source area, but does not require that any data be collected along the plume centerline. Using a least squares fit of the model to aqueous concentrations measured in monitoring wells, degradation rate constants were estimated at a former manufactured gas plant (FMGP) site in the Midwest U.S. The estimated degradation rate constants are 0.0014, 0.0034, 0.0031, 0.0019, and 0.0053 day^-1 for acenaphthene, naphthalene, benzene, ethylbenzene, and toluene, respectively. These estimated rate constants were as low as one-half those estimated with the one-dimensional (centerline) approach of Buscheck and Alcantar [Buscheck, T.E., Alcantar, C.M., 1995. Regression techniques and analytical solutions to demonstrate intrinsic bioremediation. In: Hinchee, R.E., Wilson, J.T., Downey, D.C. (Eds.), Intrinsic Bioremediation, Battelle Press, Columbus, OH, pp. 109-116] which does not account for transverse dispersivity. Varying the transverse and longitudinal dispersivity values over one order of magnitude for toluene data obtained from the FMGP site resulted in nearly a threefold variation in the estimated degradation rate constant, highlighting the importance of reliable estimates of the dispersion coefficients for obtaining reasonable estimates of the degradation rate constants. These results have significant implications for decision making and site management where overestimation of a degradation rate may result in remediation times and bioconversion factors that exceed expectations. For a complex source area or non-steady-state plume, a superposition of analytical models that incorporate longitudinal and transverse dispersion and time may be used at sites where the centerline method would not be applicable.
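Neglecting dispersion entirely gives the simplest centerline estimate, shown below for hypothetical data; the paper's point is precisely that fitting transverse and longitudinal dispersion can change such estimates severalfold.

```python
import numpy as np

# Simplest centerline variant: neglecting dispersion, steady-state first-order
# decay gives C(x) = C0 * exp(-k * x / v), so ln C is linear in distance x.
# Data and seepage velocity are hypothetical, not from the FMGP site.
x = np.array([0.0, 20.0, 40.0, 80.0, 120.0])       # m downgradient
c = np.array([950.0, 610.0, 420.0, 190.0, 90.0])   # ug/L
v = 0.15                                           # m/day seepage velocity

slope, intercept = np.polyfit(x, np.log(c), 1)
k = -slope * v
print(f"k = {k:.4f} per day")
```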
Nichols, J.D.; Pollock, K.H.
1983-01-01
Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
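As a toy version of the quantity being modeled, the expected penetration rate for a ranked search over bins can be computed directly; this simplified sketch ignores the hierarchical grid structure and the more general probabilistic refinements derived in the paper.

```python
import numpy as np

def penetration_rate(bin_probs):
    """Expected fraction of the hypothesis space searched when bins are
    visited in descending order of match probability and the search stops
    at the first match. A simplified sketch of the quantity described above."""
    probs = np.sort(np.asarray(bin_probs, dtype=float))[::-1]
    probs = probs / probs.sum()
    n = len(probs)
    # If the match lies in the k-th ranked bin, k/n of the space was searched.
    ranks = np.arange(1, n + 1)
    return float(np.sum(probs * ranks / n))

print(penetration_rate([0.5, 0.2, 0.1, 0.1, 0.05, 0.05]))  # -> ~0.358
```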
Whittington, Jesse; Sawaya, Michael A
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focused on single-year models, and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggests that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.
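The key ingredient distinguishing spatial from non-spatial models is a distance-dependent detection function; a common half-normal form is sketched below with illustrative parameters, not values from the grizzly bear analysis.

```python
import numpy as np

def detection_prob(activity_center, trap_xy, g0=0.2, sigma=800.0):
    """Half-normal detection function typical of spatial capture-recapture:
    detection declines with distance between home-range centre and trap.
    g0 and sigma are illustrative values only (sigma in metres)."""
    d = np.linalg.norm(np.asarray(activity_center) - np.asarray(trap_xy))
    return g0 * np.exp(-d**2 / (2.0 * sigma**2))

print(detection_prob((0, 0), (500, 500)))   # nearby trap
print(detection_prob((0, 0), (3000, 0)))    # distant trap: near-zero detection
```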
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
Models of gene regulatory networks are often derived from statistical thermodynamics principles or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, even though there are many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are each linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks; the simulation results show its superior performance over the Gauss-Newton method.
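The linearization that makes this possible can be sketched as follows: writing v = (aᵀψ)/(1 + bᵀφ) and multiplying through by the denominator gives v = aᵀψ - v(bᵀφ), which is linear in (a, b). The iteratively reweighted least-squares sketch below is a generic rendering of this idea, not the paper's specific two-step weight matrix.

```python
import numpy as np

def fit_rational_rate(psi, phi, v, n_iter=5):
    """Fit a rational rate v = (a^T psi) / (1 + b^T phi) by reweighted linear
    least squares, using the linearization v = a^T psi - v * (b^T phi).
    psi: (n, p) numerator regressors; phi: (n, q) denominator regressors;
    v: (n,) measured reaction rates. A generic sketch only."""
    n = len(v)
    X = np.hstack([psi, -v[:, None] * phi])   # design matrix for (a, b)
    w = np.ones(n)
    for _ in range(n_iter):
        Wsqrt = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(Wsqrt[:, None] * X, Wsqrt * v, rcond=None)
        a, b = coef[:psi.shape[1]], coef[psi.shape[1]:]
        denom = 1.0 + phi @ b
        w = 1.0 / np.maximum(denom**2, 1e-8)  # reweight by fitted denominator
    return a, b

# Synthetic Michaelis-Menten-like example: v = 2*s / (1 + 0.8*s)
rng = np.random.default_rng(3)
s = rng.uniform(0.1, 5.0, 200)
v_obs = 2.0 * s / (1.0 + 0.8 * s) + rng.normal(0, 0.01, s.size)
a, b = fit_rational_rate(s[:, None], s[:, None], v_obs)
print(a, b)  # approximately [2.0] and [0.8]
```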
Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.
2009-01-01
A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry-tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry-tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.
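The instantaneous-rates bookkeeping behind such estimates follows the standard Baranov relationships; a short illustration with a hypothetical annual F and the reported M.

```python
import math

# Annual survival and fishing-death fraction from instantaneous rates.
# F here is hypothetical; M = 0.04 is the annual estimate reported above.
F, M = 0.60, 0.04
S = math.exp(-(F + M))
print(S)                          # annual survival fraction
print((F / (F + M)) * (1 - S))    # fraction of cohort dying from fishing
```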
Constructing stage-structured matrix population models from life tables: comparison of methods
Fujiwara, Masami; Diaz-Lopez, Jasmin
2017-01-01
A matrix population model is a convenient tool for summarizing per capita survival and reproduction rates (collectively vital rates) of a population and can be used for calculating an asymptotic finite population growth rate (λ) and generation time. These two pieces of information can be used for determining the status of a threatened species. The use of stage-structured population models has increased in recent years, and the vital rates in such models are often estimated using a life table analysis. However, potential bias introduced when converting age-structured vital rates estimated from a life table into parameters for a stage-structured population model has not been assessed comprehensively. The objective of this study was to investigate the performance of methods for such conversions using simulated life histories of organisms. The underlying models incorporate various types of life history and true population growth rates of varying levels. The performance was measured by comparing differences in λ and the generation time calculated using the Euler-Lotka equation, age-structured population matrices, and several stage-structured population matrices that were obtained by applying different conversion methods. The results show that the discretization of age introduces only small bias in λ or generation time. Similarly, assuming a fixed age of maturation at the mean age of maturation does not introduce much bias. However, aggregating age-specific survival rates into a stage-specific survival rate and estimating a stage-transition rate can introduce substantial bias depending on the organism’s life history type and the true values of λ. In order to aggregate survival rates, the use of the weighted arithmetic mean was the most robust method for estimating λ. Here, the weights are given by survivorship curve after discounting with λ. To estimate a stage-transition rate, matching the proportion of individuals transitioning, with λ used for discounting the rate, was the best approach. However, stage-structured models performed poorly in estimating generation time, regardless of the methods used for constructing the models. Based on the results, we recommend using an age-structured matrix population model or the Euler-Lotka equation for calculating λ and generation time when life table data are available. Then, these age-structured vital rates can be converted into a stage-structured model for further analyses. PMID:29085763
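The λ calculation at the heart of this comparison is a dominant-eigenvalue computation; a toy two-stage example with illustrative vital rates follows.

```python
import numpy as np

# Toy stage-structured matrix (stages: juvenile, adult) assembled from
# illustrative survival (s), stage-transition (g), and fecundity (f) values;
# lambda is the dominant eigenvalue, as described above.
s_j, g, s_a, f = 0.5, 0.3, 0.8, 1.2
A = np.array([[s_j * (1 - g), f],
              [s_j * g,       s_a]])
lam = max(np.linalg.eigvals(A).real)
print(lam)  # -> ~1.06, a slowly growing population
```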
Small area estimation for estimating the number of infant mortality in West Java, Indonesia
NASA Astrophysics Data System (ADS)
Anggreyani, Arie; Indahwati, Kurnia, Anang
2016-02-01
The Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey that provides information regarding birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the DHSI 2012 publication, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in the districts of West Java. SAE is a special case of the Generalized Linear Mixed Model (GLMM). Here, infant mortality counts are modeled with a Poisson distribution, which carries an equidispersion assumption. Negative binomial and quasi-likelihood models are the methods used to handle overdispersion. Based on the results of the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. The small area estimation used the basic area-level model. Mean square error (MSE), based on a resampling method, is used to measure the accuracy of the small area estimates.
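The overdispersion check that motivates the quasi-likelihood choice can be illustrated with a Poisson versus negative binomial GLM fit on synthetic district-level counts; all data below are simulated, not the DHSI data.

```python
import numpy as np
import statsmodels.api as sm

# Fit Poisson and negative binomial GLMs to hypothetical district-level
# infant death counts with a births exposure offset, then compare the
# deviance/df ratio: values far above 1 signal overdispersion.
rng = np.random.default_rng(4)
births = rng.integers(5_000, 60_000, size=27)
x = rng.normal(size=27)                           # an area-level covariate
mu = np.exp(-3.5 + 0.3 * x) * births
deaths = rng.negative_binomial(5, 5 / (5 + mu))   # overdispersed counts

X = sm.add_constant(x)
pois = sm.GLM(deaths, X, family=sm.families.Poisson(), exposure=births).fit()
nb = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(), exposure=births).fit()
print(pois.deviance / pois.df_resid)  # >> 1: Poisson fits poorly
print(nb.deviance / nb.df_resid)      # closer to 1
```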
The quantitative genetics of maximal and basal rates of oxygen consumption in mice.
Dohm, M R; Hayes, J P; Garland, T
2001-01-01
A positive genetic correlation between basal metabolic rate (BMR) and maximal (VO(2)max) rate of oxygen consumption is a key assumption of the aerobic capacity model for the evolution of endothermy. We estimated the genetic (V(A), additive, and V(D), dominance), prenatal (V(N)), and postnatal common environmental (V(C)) contributions to individual differences in metabolic rates and body mass for a genetically heterogeneous laboratory strain of house mice (Mus domesticus). Our breeding design did not allow the simultaneous estimation of V(D) and V(N). Regardless of whether V(D) or V(N) was assumed, estimates of V(A) were negative under the full models. Hence, we fitted reduced models (e.g., V(A) + V(N) + V(E) or V(A) + V(E)) and obtained new variance estimates. For reduced models, narrow-sense heritability (h(2)(N)) for BMR was <0.1, but estimates of h(2)(N) for VO(2)max were higher. When estimated with the V(A) + V(E) model, the additive genetic covariance between VO(2)max and BMR was positive and statistically different from zero. This result offers tentative support for the aerobic capacity model for the evolution of vertebrate energetics. However, constraints imposed on the genetic model may cause our estimates of additive variance and covariance to be biased, so our results should be interpreted with caution and tested via selection experiments. PMID:11560903
Cozza, Izabela Campos; Zanetta, Dirce Maria Trevisan; Fernandes, Frederico Leon Arrabal; da Rocha, Francisco Marcelo Monteiro; de Andre, Paulo Afonso; Garcia, Maria Lúcia Bueno; Paceli, Renato Batista; Prado, Gustavo Faibischew; Terra-Filho, Mario; do Nascimento Saldiva, Paulo Hilário; de Paula Santos, Ubiratan
2015-07-01
The effects of air pollution on health are associated with the amount of pollutants inhaled, which depends on the environmental concentration and the inhaled air volume. It has not been clear whether statistical models of the relationship between heart rate and ventilation obtained using a laboratory cardiopulmonary exercise test (CPET) can be applied to an external group to estimate ventilation. The aim was to develop and evaluate a model to estimate respiratory ventilation based on heart rate for assessing the inhaled pollutant load in field studies. Sixty non-smoking men, 43 public street workers (public street group) and 17 employees of the Forest Institute (park group), performed a maximal cardiopulmonary exercise test (CPET). Regression models were constructed from the heart rate and the natural logarithm of minute ventilation obtained during CPET. Ten individuals from the public street group were chosen randomly and used for external validation of the models (test group). All subjects also underwent heart rate recording and particulate matter (PM2.5) monitoring for a 24-hour period. For the public street group, the median difference between estimated and observed data was 0.5 (CI 95% -0.2 to 1.4) l/min, and for the park group it was 0.2 (CI 95% -0.2 to 1.2) l/min. In the test group, estimated values were smaller than those observed in the CPET, with a median difference of -2.4 (CI 95% -4.2 to -1.8) l/min. The mixed-model estimates suggest that this model is suitable for situations in which heart rate is around 120-140 bpm. The mixed effect model is suitable for ventilation estimation, with good accuracy when applied to homogeneous groups, suggesting that, in this case, the model could be used in field studies to estimate ventilation. A small but significant difference in the median of the external validation estimates was observed, suggesting that the applicability of the model to external groups needs further evaluation. Copyright © 2015 Elsevier B.V. All rights reserved.
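A minimal version of the individual calibration step, regressing log minute ventilation on heart rate from CPET data and predicting field ventilation; all values below are hypothetical.

```python
import numpy as np

# Regress ln(VE) on HR from a CPET, then predict field VE from field HR.
hr_cpet = np.array([70, 90, 110, 130, 150, 170])   # bpm (hypothetical)
ve_cpet = np.array([10, 18, 30, 48, 75, 110])      # L/min (hypothetical)
b1, b0 = np.polyfit(hr_cpet, np.log(ve_cpet), 1)

hr_field = np.array([95, 125, 135])                # 24-h field heart rates
ve_pred = np.exp(b0 + b1 * hr_field)
print(ve_pred)                                     # estimated ventilation, L/min
```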
Testing the accuracy of a 1-D volcanic plume model in estimating mass eruption rate
Mastin, Larry G.
2014-01-01
During volcanic eruptions, empirical relationships are used to estimate mass eruption rate from plume height. Although simple, such relationships can be inaccurate and can underestimate rates in windy conditions. One-dimensional plume models can incorporate atmospheric conditions and give potentially more accurate estimates. Here I present a 1-D model for plumes in crosswind and simulate 25 historical eruptions where plume height Hobs was well observed and mass eruption rate Mobs could be calculated from mapped deposit mass and observed duration. The simulations considered wind, temperature, and phase changes of water. Atmospheric conditions were obtained from the National Center for Atmospheric Research Reanalysis 2.5° model. Simulations calculate the minimum, maximum, and average values (Mmin, Mmax, and Mavg) that fit the plume height. Eruption rates were also estimated from the empirical formula Mempir = 140·Hobs^4.14 (Mempir in kilograms per second, Hobs in kilometers). For these eruptions, the standard error of the residual in log space is about 0.53 for Mavg and 0.50 for Mempir. Thus, for this data set, the model is slightly less accurate at predicting Mobs than the empirical curve. The inability of this model to improve eruption rate estimates may lie in the limited accuracy of even well-observed plume heights, inaccurate model formulation, or the fact that most eruptions examined were not highly influenced by wind. For the low, wind-blown plume of 14–18 April 2010 at Eyjafjallajökull, where an accurate plume height time series is available, modeled rates do agree better with Mobs than Mempir.
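The quoted empirical curve is straightforward to apply; the only input is plume height in kilometers.

```python
# The empirical relationship quoted above: M = 140 * H**4.14,
# with M in kg/s and H = plume height in km.
def mass_eruption_rate(height_km):
    return 140.0 * height_km ** 4.14

for h in (5, 10, 20):
    print(h, f"{mass_eruption_rate(h):.3g} kg/s")
```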
Estimating the effect of a rare time-dependent treatment on the recurrent event rate.
Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E
2018-05-30
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (e.g., adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as-yet-untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on the rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
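A greedy nearest-neighbour rendering of the prognostic-score matching step is sketched below for intuition; the actual method matches each treated patient to as-yet-untreated controls at the treatment time and stratifies the final rates model on the matched sets.

```python
def match_controls(treat_scores, control_scores, n_controls=3):
    """Greedy nearest-neighbour matching on prognostic score, without
    replacement. An illustrative sketch only; the paper matches at each
    index patient's treatment time among patients still untreated."""
    matches = {}
    available = dict(enumerate(control_scores))
    for i, s in enumerate(treat_scores):
        order = sorted(available, key=lambda j: abs(available[j] - s))
        chosen = order[:n_controls]
        matches[i] = chosen
        for j in chosen:
            del available[j]   # each control used at most once
    return matches

print(match_controls([0.2, 0.8], [0.1, 0.25, 0.3, 0.7, 0.75, 0.9, 0.5]))
```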
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparisons against high-level theoretical calculations show that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
Survival and recovery rates of American woodcock banded in Michigan
Krementz, David G.; Hines, James E.; Luukkonen, David R.
2003-01-01
American woodcock (Scolopax minor) population indices have declined since U.S. Fish and Wildlife Service (USFWS) monitoring began in 1968. Management to stop and/or reverse this population trend has been hampered by the lack of recent information on woodcock population parameters. Without recent information on survival rate trends, managers have had to assume that the recent declines in recruitment indices are the only parameter driving woodcock declines. Using program MARK, we estimated annual survival and recovery rates of adult and juvenile American woodcock, and estimated summer survival of local (young incapable of sustained flight) woodcock banded in Michigan between 1978 and 1998. We constructed a set of candidate models from a global model with age (local, juvenile, adult) and time (year)-dependent survival and recovery rates to no age or time-dependent survival and recovery rates. Five models were supported by the data, with all models suggesting that survival rates differed among age classes, and four models had survival rates that were constant over time. The fifth model suggested that juvenile and adult survival rates were linear on a logit scale over time. Survival rates averaged over likelihood-weighted model results were 0.8784 ± 0.1048 (SE) for locals, 0.2646 ± 0.0423 (SE) for juveniles, and 0.4898 ± 0.0329 (SE) for adults. Weighted average recovery rates were 0.0326 ± 0.0053 (SE) for juveniles and 0.0313 ± 0.0047 (SE) for adults. Estimated differences between our survival estimates and those from prior years were small, and our confidence around those differences was variable and uncertain. Juvenile survival rates were low.
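Likelihood-weighted (Akaike-weight) model averaging of the kind used above reduces to a few lines; the AIC values and per-model estimates below are hypothetical.

```python
import numpy as np

def akaike_weights(aic):
    """AIC weights for likelihood-weighted model averaging."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [410.2, 411.0, 412.5, 413.1, 414.0]   # five supported models (hypothetical)
adult_s = [0.49, 0.48, 0.50, 0.51, 0.47]    # per-model adult survival estimates
w = akaike_weights(aic)
print(np.sum(w * np.asarray(adult_s)))      # model-averaged estimate
```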
Input-decomposition balance of heterotrophic processes in a warm-temperate mixed forest in Japan
NASA Astrophysics Data System (ADS)
Jomura, M.; Kominami, Y.; Ataka, M.; Makita, N.; Dannoura, M.; Miyama, T.; Tamai, K.; Goto, Y.; Sakurai, S.
2010-12-01
Carbon accumulation in forest ecosystems has been evaluated using three approaches. The first is net ecosystem exchange (NEE) estimated by tower flux measurement. The second is net ecosystem production (NEP) estimated by biometric measurements; NEP can be expressed as the difference between net primary production and heterotrophic respiration, or as the annual increment in the plant biomass (ΔW) plus soil (ΔS) carbon pools, defined as NEP = ΔW + ΔS. The third approach requires evaluating the annual carbon increment in the soil compartment. The soil carbon accumulation rate cannot be measured directly over a short term because the annual accumulation is small, but it can be estimated by model calculation. The Rothamsted carbon (Roth-C) model is a soil organic carbon turnover model and a useful tool for estimating the rate of soil carbon accumulation. However, the model has not sufficiently included variation in the decomposition processes of organic matter in forest ecosystems. Organic matter pools in forest ecosystems have different turnover rates, which creates temporal variation in the input-decomposition balance, and they also vary widely in spatial distribution. Thus, to estimate the rate of soil carbon accumulation, temporal and spatial variation in the input-decomposition balance of heterotrophic processes should be incorporated in the model. In this study, we estimated the input-decomposition balance and the rate of soil carbon accumulation using a modified Roth-C model. We measured the respiration rates of many types of organic matter, such as leaf litter, fine root litter, twigs and coarse woody debris, using a chamber method, which allowed us to relate respiration rate to the diameter of the organic matter (leaf and fine root litter, having no meaningful diameter, were assigned a diameter of zero). Small organic matter, such as leaf and fine root litter, showed high decomposition respiration, which may be caused by differences in the structure of the organic matter: because coarse woody debris is roughly cylindrical, microbes decompose it from the surface, so its respiration rate is lower than that of leaf and fine root litter. Based on this result, we modified the Roth-C model and estimated the soil carbon accumulation rate in recent years. Based on the results of a soil survey, the forest soil stored 30 tC ha^-1 in the O and A horizons, and we use this result to evaluate the modified model. Because NEP can be expressed as the annual increment in the plant biomass plus soil carbon pools, estimating NEP with this approach allows NEP estimates from micrometeorological and ecological approaches to be evaluated, reducing the uncertainty of NEP estimation.
Estimation of demographic parameters in a tiger population from long-term camera trap data
Karanth, K. Ullas; Nichols, James D.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas
2011-01-01
Chapter 7 (Karanth et al.) illustrated the use of camera trapping in combination with closed population capture–recapture (CR) models to estimate densities of tigers Panthera tigris. Such estimates can be very useful for investigating variation across space for a particular species (e.g., Karanth et al. 2004) or variation among species at a specific location. In addition, estimates of density obtained at the same site(s) over multiple years are very useful for understanding and managing populations of large carnivores. Such multi-year studies can yield estimates of rates of change in abundance. Additionally, because the fates of marked individuals are tracked through time, biologists can delve deeper into factors driving changes in abundance such as rates of survival, recruitment and movement (Williams et al. 2002). Fortunately, modern CR approaches permit the modeling of populations that change between sampling occasions as a result of births, deaths, immigration and emigration (Pollock et al. 1990; Nichols 1992). Some of these early “open population” models focused on estimation of survival rates and, to a lesser extent, abundance, but more recent models permit estimation of recruitment and movement rates as well.
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions, it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression), resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data themselves are modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one-dimensional curve. The derivative at a particular time point will be representative of the rate of sea-level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA), as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in the EIV framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to the present day.
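As a rough stand-in for the rate-estimation idea (not the authors' EIV integrated Gaussian process, which places the prior on the rate itself and handles dating uncertainty), one can fit an ordinary GP to noisy sea-level data and difference the posterior mean; all data below are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP to noisy synthetic sea-level observations, then approximate the
# rate of change by numerically differentiating the posterior mean.
rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 2000, 80))                       # years
sl = 0.05 * np.sin(t / 300.0) + 1e-4 * t + rng.normal(0, 0.01, t.size)

gp = GaussianProcessRegressor(RBF(200.0) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(t[:, None], sl)
grid = np.linspace(0, 2000, 401)
mean = gp.predict(grid[:, None])
rate = np.gradient(mean, grid)    # metres per year
print(rate[-1])                   # estimated modern rate of change
```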
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
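The Jensen's-inequality effect is easy to reproduce: estimate a survival rate from n sampled individuals, plug it into a toy projection matrix, and watch the bias in the mean of lambda shrink as n grows. All matrix entries below are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(6)

def lam(survival, fecundity):
    """Dominant eigenvalue of a toy 2-stage projection matrix."""
    A = np.array([[0.0, fecundity], [survival, 0.5]])
    return max(np.linalg.eigvals(A).real)

true_s, true_f, n_sim = 0.5, 1.5, 2000
lam_true = lam(true_s, true_f)

for n in (10, 50, 500):   # individuals sampled to estimate survival
    est = [lam(rng.binomial(n, true_s) / n, true_f) for _ in range(n_sim)]
    print(n, np.mean(est) - lam_true)   # bias shrinks as n grows
```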
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Braun, J.
Refrigerant mass flow rate is an important measurement for monitoring equipment performance and enabling fault detection and diagnostics. However, a traditional mass flow meter is expensive to purchase and install. A virtual refrigerant mass flow sensor (VRMF) uses a mathematical model to estimate flow rate from low-cost measurements and can potentially be implemented at low cost. This study evaluates three VRMFs for estimating refrigerant mass flow rate. The first model uses a compressor map that relates refrigerant flow rate to measurements of inlet and outlet pressure and inlet temperature. The second model uses an energy-balance method on the compressor that uses a compressor map for power consumption, which is relatively independent of compressor faults that influence mass flow rate. The third model is developed using an empirical correlation for an electronic expansion valve (EEV) based on an orifice equation. The three VRMFs are shown to work well in estimating refrigerant mass flow rate for various systems under fault-free conditions with less than 5% RMS error. Each of the three mass flow rate estimates can be utilized to diagnose and track the following faults: 1) loss of compressor performance, 2) fouled condenser or evaporator filter, and 3) faulty expansion device, respectively. For example, a compressor refrigerant flow map model only provides an accurate estimation when the compressor operates normally. When a compressor is not delivering the expected flow due to a leaky suction or discharge valve or other internal fault, the energy-balance or EEV model can provide accurate flow estimates. In this paper, the flow differences provide an indication of loss of compressor performance and can be used for fault detection and diagnostics.
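The EEV model's underlying orifice equation takes a familiar form; the sketch below uses a constant discharge coefficient, whereas the study correlates it empirically, and the operating point is hypothetical.

```python
import math

def eev_mass_flow(c_d, area_m2, rho_kg_m3, dp_pa):
    """Orifice-equation form underlying the EEV-based virtual flow sensor:
    m_dot = C_d * A * sqrt(2 * rho * dP). C_d is held constant here for
    illustration; the study uses an empirical correlation instead."""
    return c_d * area_m2 * math.sqrt(2.0 * rho_kg_m3 * dp_pa)

# Hypothetical operating point: 1.5 mm^2 opening, liquid refrigerant upstream
print(eev_mass_flow(0.7, 1.5e-6, 1050.0, 1.2e6))   # kg/s
```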
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation of White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model yielded much smaller confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia, decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This demonstrates the seasonal occurrence of childhood leukaemia in Hungary.
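A minimal ARIMA illustration on a synthetic annual rate series (statsmodels shown; the paper's data are Hungarian mortality series, not reproduced here).

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Build a synthetic integrated AR(1) series, fit ARIMA(1,1,0), forecast ahead.
rng = np.random.default_rng(7)
e = rng.normal(0, 1, 200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + e[t]       # AR(1) with phi = 0.7
rates = 50 + np.cumsum(y * 0.1)        # integrate to make the series I(1)

fit = ARIMA(rates, order=(1, 1, 0)).fit()
print(fit.params)                       # AR coefficient approximately 0.7
print(fit.get_forecast(5).predicted_mean)
```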
Nichols, James D.; Pollock, Kenneth H.; Hines, James E.
1984-01-01
The robust design of Pollock (1982) was used to estimate parameters of a Maryland M. pennsylvanicus population. Closed model tests provided strong evidence of heterogeneity of capture probability, and model Mh (Otis et al., 1978) was selected as the most appropriate model for estimating population size. The Jolly-Seber model goodness-of-fit test indicated rejection of the model for this data set, and the Mh estimates of population size were all higher than the Jolly-Seber estimates. Both of these results are consistent with the evidence of heterogeneous capture probabilities. The authors thus used Mh estimates of population size, Jolly-Seber estimates of survival rate, and estimates of birth-immigration based on a combination of the population size and survival rate estimates. Advantages of the robust design estimates for certain inference procedures are discussed, and the design is recommended for future small mammal capture-recapture studies directed at estimation.
Model-based estimation of individual fitness
Link, W.A.; Cooch, E.G.; Cam, E.
2002-01-01
Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).
Advanced techniques for modeling avian nest survival
Dinsmore, S.J.; White, Gary C.; Knopf, F.L.
2002-01-01
Estimation of avian nest survival has traditionally involved simple measures of apparent nest survival or Mayfield constant-nest-survival models. However, these methods do not allow researchers to build models that rigorously assess the importance of a wide range of biological factors that affect nest survival. Models that incorporate greater detail, such as temporal variation in nest survival and covariates specific to individual nests, represent a substantial improvement over traditional estimation methods. In an attempt to improve nest survival estimation procedures, we introduce the nest survival model now available in the program MARK and demonstrate its use on a nesting study of Mountain Plovers (Charadrius montanus Townsend) in Montana, USA. We modeled the daily survival of Mountain Plover nests as a function of the sex of the incubating adult, nest age, year, linear and quadratic time trends, and two weather covariates (maximum daily temperature and daily precipitation) during a six-year study (1995–2000). We found no evidence for yearly differences or an effect of maximum daily temperature on the daily nest survival of Mountain Plovers. Survival rates of nests tended by female and male plovers differed (female rate = 0.33; male rate = 0.49). The estimate of the additive effect for males on nest survival rate was 0.37 (95% confidence limits were 0.03, 0.71) on a logit scale. Daily survival rates of nests increased with nest age; the estimate of daily nest-age change in survival in the best model was 0.06 (95% confidence limits were 0.04, 0.09) on a logit scale. Daily precipitation decreased the probability that the nest would survive to the next day; the estimate of the additive effect of daily precipitation on the nest survival rate was −1.08 (95% confidence limits were −2.12, −0.13) on a logit scale. Our approach to modeling daily nest-survival rates allowed several biological factors of interest to be easily included in nest survival models and allowed us to generate more biologically meaningful estimates of nest survival.
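The logit-scale daily survival structure described here can be sketched with a small maximum-likelihood fit. The pooled daily exposure records, the single covariate, and the starting values below are hypothetical; MARK's full model (sex, nest age, year, time trends) is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical pooled daily exposure records from several nests: daily precipitation
# and whether the nest survived that day (1) or failed (0).
precip = np.array([0.0, 0.2, 0.0, 1.1, 0.0, 0.0, 0.7, 0.0])
survived = np.array([1, 1, 1, 0, 1, 1, 1, 1])

def neg_log_lik(beta):
    """Daily survival rate (DSR) modeled on the logit scale: logit(DSR) = b0 + b1 * precip."""
    dsr = expit(beta[0] + beta[1] * precip)
    return -np.sum(survived * np.log(dsr) + (1 - survived) * np.log(1 - dsr))

res = minimize(neg_log_lik, x0=[2.0, 0.0], method="BFGS")
b0, b1 = res.x
print(f"logit-scale intercept {b0:.2f}, precipitation effect {b1:.2f}")
print(f"DSR on a dry day: {expit(b0):.3f}")
```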
Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi
2014-03-01
We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors and comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimating the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.
DOT National Transportation Integrated Search
2001-09-01
In two recent studies, Miaou proposed a method to estimate vehicle roadside encroachment rates using accident-based models. He further illustrated the use of this method to estimate roadside encroachment rates for rural two-lane undivided roads...
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations, which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either as (1) a sum of flicker and random-walk noise or (2) a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternative noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. However, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.
A comparison of selected models for estimating cable icing
NASA Astrophysics Data System (ADS)
McComber, Pierre; Druez, Jacques; Laflamme, Jean
In many cold climate countries, it is becoming increasingly important to monitor transmission line icing. Indeed, by knowing in advance of localized danger of icing overloads, electric utilities can take measures in time to prevent generalized failure of the power transmission network. Recently in Canada, a study was made comparing how well a few icing models working from meteorological data estimate ice loads for freezing rain events. The models tested used only standard meteorological parameters, i.e. wind speed and direction, temperature, and precipitation rate. This study showed that standard meteorological parameters can achieve only very limited accuracy, especially for longer icing events. However, with the help of an additional instrument monitoring the icing rate intensity, a significant improvement in model prediction might be achieved. The icing rate meter (IRM), which counts icing and de-icing cycles per unit time on a standard probe, can be used to estimate the icing intensity. A cable icing estimate is then made by taking into consideration the accretion size, temperature, wind speed and direction, and precipitation rate. In this paper, a comparison is made between the predictions of two previously tested models (one obtained directly and the other reconstructed from its description in the public literature) and of a model based on the icing rate meter readings. The models are tested against nineteen events recorded on an icing test line at Mt. Valin, Canada, during the winter season 1991-1992. These events are mostly rime resulting from in-cloud icing. However, freezing rain and wet snow events were also recorded. Results indicate that a significant improvement in the estimation is attained by using the icing rate meter data together with the other standard meteorological parameters.
Whittington, Jesse; Sawaya, Michael A.
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases have focussed on single-year models, and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates, but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density intervals averaged across the three years were 0.925 (0.786–1.071) for females, 0.844 (0.703–0.975) for males, and 0.882 (0.779–0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758–1.024) for females, 0.825 (0.700–0.948) for males, and 0.863 (0.771–0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggests that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions. PMID:26230262
Karakas, Filiz; Imamoglu, Ipek
2017-04-01
This study aims to estimate anaerobic debromination rate constants (k_m) of PBDE pathways using previously reported laboratory soil data. k_m values of pathways are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The range of estimated k_m values is between 0.0003 and 0.0241 d⁻¹. The median and maximum of the k_m values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated k_m values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios such as monitored natural attenuation or bioremediation with bioaugmentation can be handled in a more quantitative manner with the help of the k_m values estimated in this study.
Estimation of the Dose and Dose Rate Effectiveness Factor
NASA Technical Reports Server (NTRS)
Chappell, L.; Cucinotta, F. A.
2013-01-01
Current models to estimate radiation risk use the Life Span Study (LSS) cohort that received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore, animal experiments that directly compare acute with chronic exposures show that chronic exposures produce lower increases in tumor induction. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as those from missions in space. The BEIR VII committee [1] combined DDREF estimates from the LSS cohort and animal experiments using Bayesian methods to arrive at its recommended DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate of the DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
Petersen, J.H.; Ward, D.L.
1999-01-01
A bioenergetics model was developed and corroborated for northern pikeminnow Ptychocheilus oregonensis, an important predator on juvenile salmonids in the Pacific Northwest. Predictions of modeled predation rate on salmonids were compared with field data from three areas of John Day Reservoir (Columbia River). To make bioenergetics model estimates of predation rate, three methods were used to approximate the change in mass of average predators during 30-d growth periods: observed change in mass between the first and the second month, predicted change in mass calculated with seasonal growth rates, and predicted change in mass based on an annual growth model. For all reservoir areas combined, bioenergetics model predictions of predation on salmon were 19% lower than field estimates based on observed masses, 45% lower than estimates based on seasonal growth rates, and 15% lower than estimates based on the annual growth model. For each growth approach, the largest differences in field-versus-model predation occurred at the midreservoir area (-84% to -67% difference). Model predictions of the rate of predation on salmonids were examined for sensitivity to parameter variation, swimming speed, sampling bias caused by gear selectivity, and asymmetric size distributions of predators. The specific daily growth rate of northern pikeminnow predicted by the model was highest in July and October and decreased during August. The bioenergetics model for northern pikeminnow performed well compared with models for other fish species that have been tested with field data. This model should be a useful tool for evaluating management actions such as predator removal, examining the influence of temperature on predation rates, and exploring interactions between predators in the Columbia River basin.
Dynamics of newly established elk populations
Sargeant, G.A.; Oehler, M.W.
2007-01-01
The dynamics of newly established elk (Cervus elaphus) populations can provide insights about maximum sustainable rates of reproduction, survival, and increase. However, data used to estimate rates of increase typically have been limited to counts and rarely have included complementary estimates of vital rates. Complexities of population dynamics cannot be understood without considering population processes as well as population states. We estimated pregnancy rates, survival rates, age ratios, and sex ratios for reintroduced elk at Theodore Roosevelt National Park, North Dakota, USA; combined vital rates in a population projection model; and compared model projections with observed elk numbers and population ratios. Pregnancy rates in January (early in the second trimester of pregnancy) averaged 54.1% (SE = 5.4%) for subadults and 91.0% (SE = 1.7%) for adults, and 91.6% of pregnancies resulted in recruitment at 8 months. Annual survival rates of adult females averaged 0.96 (95% CI = 0.94-0.98) with hunting included and 0.99 (95% CI = 0.97-0.99) with hunting excluded from calculations. Our fitted model explained 99.8% of past variation in population estimates and represents a useful new tool for short-term management planning. Although we found no evidence of temporal variation in vital rates, variation in population composition caused substantial variation in projected rates of increase (λ = 1.20-1.36). Restoring documented hunter harvests and removals of elk by the National Park Service led to a potential rate of λ = 1.26. Greater rates of increase substantiated elsewhere were within the expected range of chance variation, given our model and estimates of vital rates. Rates of increase realized by small elk populations are too variable to support inferences about habitat quality or density dependence.
This work summarizes advancements made that allow for better estimation of resting metabolic rate (RMR) and subsequent estimation of ventilation rates (i.e., total ventilation (VE) and alveolar ventilation (VA)) for individuals of both genders and all ages. ...
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
Molnár, Péter K; Klanjscek, Tin; Derocher, Andrew E; Obbard, Martyn E; Lewis, Mark A
2009-08-01
Many species experience large fluctuations in food availability and depend on energy from fat and protein stores for survival, reproduction and growth. Body condition and, more specifically, energy stores thus constitute key variables in the life history of many species. Several indices exist to quantify body condition but none can provide the amount of stored energy. To estimate energy stores in mammals, we propose a body composition model that differentiates between structure and storage of an animal. We develop and parameterize the model specifically for polar bears (Ursus maritimus Phipps) but all concepts are general and the model could be easily adapted to other mammals. The model provides predictive equations to estimate structural mass, storage mass and storage energy from an appropriately chosen measure of body length and total body mass. The model also provides a means to estimate basal metabolic rates from body length and consecutive measurements of total body mass. Model estimates of body composition, structural mass, storage mass and energy density of 970 polar bears from Hudson Bay were consistent with the life history and physiology of polar bears. Metabolic rate estimates of fasting adult males derived from the body composition model corresponded closely to theoretically expected and experimentally measured metabolic rates. Our method is simple, non-invasive and provides considerably more information on the energetic status of individuals than currently available methods.
James D. Nichols; Scott T. Sillett; James E. Hines; Richard T. Holmes
2005-01-01
Recent developments in the modeling of capture-recapture data permit the direct estimation and modeling of population growth rate (Pradel 1996). Resulting estimates reflect changes in numbers of birds on study areas, and such changes result from movement as well as survival and reproductive recruitment. One measure of the “importance” of a...
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent as the sample size increases to infinity and is asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance among competing methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. Results show a negative relationship between rubber price and exchange rate for all selected countries.
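A minimal sketch of an EM-based maximum likelihood fit of a two-component mixture is below, using scikit-learn on synthetic stand-ins for rubber-price and exchange-rate changes; the built-in negative relationship and all parameter values are assumptions for illustration, not the paper's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for monthly rubber-price returns and exchange-rate changes.
rng = np.random.default_rng(1)
n = 300
fx = np.concatenate([rng.normal(-0.01, 0.02, n // 2), rng.normal(0.02, 0.01, n // 2)])
rubber = -0.8 * fx + rng.normal(0, 0.015, n)  # built-in negative relationship
X = np.column_stack([rubber, fx])

# EM-based maximum likelihood fit of a two-component Gaussian mixture.
gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
for k in range(2):
    c = gm.covariances_[k]
    corr = c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])
    print(f"component {k}: weight {gm.weights_[k]:.2f}, rubber-FX correlation {corr:.2f}")
```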
Rawlins, B G; Scheib, C; Tyler, A N; Beamish, D
2012-12-01
Regulatory authorities need ways to estimate natural terrestrial gamma radiation dose rates (nGy h⁻¹) across the landscape accurately, to assess its potential deleterious health effects. The primary method for estimating outdoor dose rate is to use an in situ detector supported 1 m above the ground, but such measurements are costly and cannot capture the landscape-scale variation in dose rates which are associated with changes in soil and parent material mineralogy. We investigate the potential for improving estimates of terrestrial gamma dose rates across Northern Ireland (13,542 km²) using measurements from 168 sites and two sources of ancillary data: (i) a map based on a simplified classification of soil parent material, and (ii) dose estimates from a national-scale, airborne radiometric survey. We used the linear mixed modelling framework in which the two ancillary variables were included in separate models as fixed effects, plus a correlation structure which captures the spatially correlated variance component. We used a cross-validation procedure to determine the magnitude of the prediction errors for the different models. We removed a random subset of 10 terrestrial measurements and formed the model from the remainder (n = 158), and then used the model to predict values at the other 10 sites. We repeated this procedure 50 times. The measurements of terrestrial dose vary between 1 and 103 (nGy h⁻¹). The median absolute model prediction errors (nGy h⁻¹) for the three models declined in the following order: no ancillary data (10.8) > simple geological classification (8.3) > airborne radiometric dose (5.4) as a single fixed effect. Estimates of airborne radiometric gamma dose rate can significantly improve the spatial prediction of terrestrial dose rate.
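The repeated hold-out validation described above (remove a random 10 of 168 sites, fit on the remaining 158, repeat 50 times) can be sketched as follows. Ordinary least squares stands in for the linear mixed model, the spatially correlated variance component is omitted, and the data are synthetic, so this is an illustration of the procedure rather than the study's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 168
airborne = rng.uniform(1, 103, n)              # airborne radiometric dose estimate per site
terrestrial = airborne + rng.normal(0, 7, n)   # ground measurement at the same site

errors = []
for _ in range(50):
    test = rng.choice(n, size=10, replace=False)
    train = np.setdiff1d(np.arange(n), test)
    # OLS fit on the remaining 158 sites (fixed effect only; no spatial component).
    b1, b0 = np.polyfit(airborne[train], terrestrial[train], 1)
    pred = b0 + b1 * airborne[test]
    errors.extend(np.abs(pred - terrestrial[test]))

print(f"median absolute prediction error: {np.median(errors):.1f} nGy h⁻¹")
```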
Park, Soon-Ung; Lee, In-Hye; Ju, Jae-Won; Joo, Seung Jin
2016-10-01
A methodology for estimating the emission rate of ¹³⁷Cs with the Lagrangian Particle Dispersion Model (LPDM), using monitored ¹³⁷Cs concentrations around a nuclear power plant, has been developed. This method was employed with the MM5 meteorological model in a 600 km × 600 km model domain with a horizontal grid scale of 3 km × 3 km centered at the Fukushima nuclear power plant to estimate the ¹³⁷Cs emission rate for the accident period from 00 UTC 12 March to 00 UTC 6 April 2011. Lagrangian particles are released continuously at a rate of one particle per minute at the first model level, about 15 m above the power plant site. The method reproduced ¹³⁷Cs emission rates quite reasonably compared with other studies, suggesting its potential usefulness for estimating the emission rate from a damaged power plant without detailed inventories of reactors, fuel assemblies, and spent fuels. Its advantage is simplicity: in contrast to other inverse models, it requires only a single forward LPDM simulation together with monitored concentrations around the power plant. It was also found that continuously monitored radionuclide concentrations from as many sites as possible, located in all directions around the power plant, are required to obtain accurate continuous emission rates from the damaged plant. The current methodology can also be used to verify the radionuclide emission estimates used by other modeling groups for cases of intermittent or discontinuous sampling. Copyright © 2016. Published by Elsevier Ltd.
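The one-forward-run idea can be sketched in a single linear step: a forward dispersion run yields a source-receptor transfer matrix, and a nonnegative least-squares inversion recovers the time-varying emission rate from the monitored concentrations. The transfer matrix below is a random stand-in for actual LPDM output, and all magnitudes are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)
n_obs, n_times = 120, 25
# Transfer matrix from a single forward run: sensitivity of each monitored
# concentration to a unit emission in each time interval (random stand-in here).
M = rng.exponential(0.1, (n_obs, n_times))
true_q = np.concatenate([np.full(10, 5.0), np.full(15, 1.0)])   # Bq/s (hypothetical)
c_obs = M @ true_q * np.exp(rng.normal(0, 0.1, n_obs))          # monitored concentrations

q_hat, _ = nnls(M, c_obs)   # nonnegative emission-rate estimate per interval
print("recovered emission rates:", q_hat.round(2))
```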
Modeling and estimating the jump risk of exchange rates: Applications to RMB
NASA Astrophysics Data System (ADS)
Wang, Yiming; Tong, Hanfei
2008-11-01
In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of RMB and other foreign currencies. In the model, we assume that the change in the exchange rate can be decomposed into two components. One is the normally distributed, small-scale innovation driven by the diffusion motion; the other is a large drop or rise generated by a Poisson counting process. Furthermore, we develop an MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.
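The two components the abstract describes can be illustrated by simulating a Merton-style jump diffusion; this sketch omits the stochastic volatility and the MCMC estimation of the actual SVDJ model, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 252                              # one year of daily log exchange-rate changes
dt = T / n
mu, sigma = 0.01, 0.05                       # drift and diffusion volatility (annualized)
lam, jump_mu, jump_sigma = 5.0, 0.0, 0.02    # jump intensity and jump-size distribution

# Each increment = diffusion innovation + sum of Poisson-count jumps that day.
diffusion = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
n_jumps = rng.poisson(lam * dt, size=n)
jumps = np.array([rng.normal(jump_mu, jump_sigma, k).sum() for k in n_jumps])
log_rate = np.cumsum(diffusion + jumps)

print(f"days with jumps: {(n_jumps > 0).sum()}, total log change: {log_rate[-1]:.4f}")
```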
Ayyanat, Jayachandran A; Harbour, Catherine; Kumar, Sanjeev; Singh, Manjula
2018-01-05
Many interventions have attempted to increase vulnerable and remote populations' access to ORS and zinc to reduce child mortality from diarrhoea. However, the impact of these interventions is difficult to measure. From 2010 to 2015, the Micronutrient Initiative (MI) worked with the public sector in Bihar, India to enable community health workers to treat and report uncomplicated child diarrhoea with ORS and zinc. We describe how we estimated the programme's impact on child mortality with Lives Saved Tool (LiST) modelling and data from MI's management information system (MIS). This study demonstrates that using LiST modelling and MIS data is a viable option for evaluating programmes to reduce child mortality. We used MI's programme monitoring data to estimate coverage rates and LiST modelling software to estimate programme impact on child mortality. Four scenarios estimated the effects of different rates of programme scale-up and programme coverage on estimated child mortality by measuring children's lives saved. The programme saved an estimated 806-975 children under 5 who had diarrhoea during the five-year project phase. Increasing ORS and zinc coverage rates to 19.8% and 18.3%, respectively, under public sector coverage with effective treatment would have increased the programme's impact on child mortality and could have achieved the project goal of saving 4200 children's lives during the five-year programme. Programme monitoring data can be used with LiST modelling software to estimate coverage rates and programme impact on child mortality. This modelling approach may cost less and yield estimates sooner than directly measuring programme impact with population-based surveys. However, users must be cautious about relying on modelled estimates of impact and ensure that the programme monitoring data used are complete and precise about the programme aspects that are modelled. Otherwise, LiST may mis-estimate impact on child mortality. Further, LiST software may require modifications to its built-in assumptions to capture programmatic inputs. LiST assumes that mortality rates and cause-of-death structure change only in response to changes in programme coverage. In Bihar, overall child mortality has decreased and diarrhoea seems to be less lethal than previously, but at present LiST does not adjust its estimates for these sorts of changes.
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples, and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time-dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination for virus transport in water-saturated porous media is demonstrated with model simulations.
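The resampling side of this methodology can be sketched with a percentile bootstrap around a first-order inactivation fit; the universal-kriging slope estimation is not reproduced, and the titer data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
t = np.linspace(0, 30, 16)                                    # days
C = np.exp(-0.15 * t) * np.exp(rng.normal(0, 0.05, t.size))   # normalized virus titer

def model(t, lam):
    return np.exp(-lam * t)   # first-order inactivation: C(t) = exp(-lambda * t)

boot = []
for _ in range(1000):
    idx = rng.integers(0, t.size, t.size)        # resample (t, C) pairs with replacement
    lam_hat, _ = curve_fit(model, t[idx], C[idx], p0=[0.1])
    boot.append(lam_hat[0])
lo, hi = np.percentile(boot, [2.5, 97.5])        # bootstrap percentile interval
print(f"inactivation rate: {np.mean(boot):.3f} /day (95% CI {lo:.3f}-{hi:.3f})")
```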
NASA Astrophysics Data System (ADS)
Omi, Takahiro; Hirata, Yoshito; Aihara, Kazuyuki
2017-07-01
A Hawkes process model with a time-varying background rate is developed for analyzing high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with a relatively large number of variable-width basis functions, and the parameters are estimated by a Bayesian method. Our model can capture not only slow time variation, such as the intraday seasonality, but also rapid variation, such as that following a macroeconomic news announcement. By analyzing tick data of the Nikkei 225 mini, we find that (i) our model fits the data better than Hawkes models with a constant background rate or a slowly varying background rate, which have been commonly used in the field of quantitative finance; (ii) the improvement in the goodness-of-fit to the data from our model is significant, especially for sessions where considerable fluctuation of the background rate is present; and (iii) our model is statistically consistent with the data. The branching ratio, which quantifies the level of the endogeneity of markets, estimated by our model is 0.41, suggesting the relative importance of exogenous factors in the market dynamics. We also demonstrate that it is critically important to appropriately model the time-dependent background rate for branching ratio estimation.
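A minimal sketch of Hawkes maximum likelihood with an exponential kernel and a constant background rate mu follows (the paper's time-varying background and Bayesian estimation are not reproduced). With the kernel normalized as alpha * beta * exp(-beta * t), the fitted alpha is the branching ratio. The event times below are random stand-ins for trade timestamps.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_log_lik(params, times, T):
    """Exponential-kernel Hawkes: lambda(t) = mu + alpha * sum_{t_i < t} beta * exp(-beta (t - t_i)).
    Uses the standard recursion A_i = exp(-beta * dt) * (1 + A_{i-1})."""
    mu, alpha, beta = np.exp(params)   # optimize on the log scale to keep parameters positive
    ll, A, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            A = np.exp(-beta * (t - prev)) * (1.0 + A)
        ll += np.log(mu + alpha * beta * A)
        prev = t
    ll -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - times)))  # compensator term
    return -ll

# Hypothetical event times on [0, T]; real data would be trade timestamps.
rng = np.random.default_rng(4)
times = np.sort(rng.uniform(0, 100.0, 400))
res = minimize(hawkes_neg_log_lik, x0=np.log([1.0, 0.3, 2.0]), args=(times, 100.0))
mu, alpha, beta = np.exp(res.x)
print(f"background rate: {mu:.2f}, branching ratio (alpha): {alpha:.2f}")
```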
Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria
2015-12-01
Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while still requiring inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to the actual values. Copyright © 2015 Elsevier Ltd. All rights reserved.
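The first-order decay backbone shared by LandGEM, IPCC, and CLEEN can be sketched as below: each year's waste contributes methane in proportion to its remaining degradable carbon. The k and L0 values and the waste schedule are hypothetical, and CLEEN's regressions for k from composition, rainfall, and temperature are not reproduced.

```python
import numpy as np

def methane_generation(waste_by_year, k, L0, years_out):
    """First-order decay: waste W_i placed in year i contributes W_i * L0 * k * exp(-k * age)."""
    q = np.zeros(years_out)
    for i, w in enumerate(waste_by_year):
        age = np.arange(years_out) - i
        q += np.where(age >= 0, w * L0 * k * np.exp(-k * age), 0.0)
    return q  # m^3 CH4 per year

waste = np.full(10, 100_000.0)   # tonnes placed per year for 10 years (hypothetical)
q = methane_generation(waste, k=0.05, L0=100.0, years_out=30)
print(f"peak generation: {q.max():,.0f} m^3/yr in year {q.argmax()}")
```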
Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam
2015-07-01
Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorhynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporating in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level (NO(x))_e, which is the constant-rate limit. This value is used to estimate the flame NO(x) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
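The rate-limiting reaction named above, O + N2 = NO + N, leads to the familiar thermal-NO rate expression d[NO]/dt ≈ 2 k1 [O][N2]. The sketch below uses a commonly cited textbook Arrhenius fit for k1, not coefficients from this paper, and the temperatures and concentrations are hypothetical.

```python
import numpy as np

def thermal_no_rate(T, O, N2):
    """Thermal NO production rate (mol/cm^3/s) from the Zeldovich rate-limiting step.
    k1 for O + N2 -> NO + N uses a commonly cited fit: 1.8e14 * exp(-38370 / T) cm^3/mol/s."""
    k1 = 1.8e14 * np.exp(-38370.0 / T)
    return 2.0 * k1 * O * N2

# Representative post-flame values (hypothetical): T in K, concentrations in mol/cm^3.
for T in (1800.0, 2000.0, 2200.0):
    rate = thermal_no_rate(T, O=1e-10, N2=3e-6)
    print(f"T = {T:.0f} K: d[NO]/dt = {rate:.3e} mol/cm^3/s")
```

The strong exponential sensitivity to temperature is why thermal NO(x) is dominated by the hottest nodes in the computational domain.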
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model, assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of the cure rate. Finally, we analyze a well-known melanoma data set with the model and the inferential method developed here.
NASA Technical Reports Server (NTRS)
Mack, R. A.; Wylie, D. P.
1982-01-01
A technique was developed for estimating the condensation rates of convective storms using satellite measurements of cirrus anvil expansion rates and radiosonde measurements of environmental water vapor. Three cases of severe convection in Oklahoma were studied and a diagnostic model was developed for integrating radiosonde data with satellite data. Two methods were used to measure the anvil expansion rates - the expansion of isotherm contours on infrared images, and the divergent motions of small brightness anomalies tracked on the visible images. The differences between the two methods were large as the storms developed, but these differences became small in the latter stage of all three storms. A comparison between the three storms indicated that the available moisture in the lowest levels greatly affected the rain rates of the storms. This was evident from both the measured rain rates of the storms and the condensation rates estimated by the model. The possibility of using this diagnostic model for estimating the intensities of convective storms also is discussed.
NASA Astrophysics Data System (ADS)
Scharnagl, B.; Vrugt, J. A.; Vereecken, H.; Herbst, M.
2010-02-01
A major drawback of current soil organic carbon (SOC) models is that their conceptually defined pools do not necessarily correspond to measurable SOC fractions in real practice. This not only impairs our ability to rigorously evaluate SOC models but also makes it difficult to derive accurate initial states of the individual carbon pools. In this study, we tested the feasibility of inverse modelling for estimating pools in the Rothamsted carbon model (ROTHC) using mineralization rates observed during incubation experiments. This inverse approach may provide an alternative to existing SOC fractionation methods. To illustrate our approach, we used a time series of synthetically generated mineralization rates using the ROTHC model. We adopted a Bayesian approach using the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to infer probability density functions of the various carbon pools at the start of incubation. The Kullback-Leibler divergence was used to quantify the information content of the mineralization rate data. Our results indicate that measured mineralization rates generally provided sufficient information to reliably estimate all carbon pools in the ROTHC model. The incubation time necessary to appropriately constrain all pools was about 900 days. The use of prior information on microbial biomass carbon significantly reduced the uncertainty of the initial carbon pools, decreasing the required incubation time to about 600 days. Simultaneous estimation of initial carbon pools and decomposition rate constants significantly increased the uncertainty of the carbon pools. This effect was most pronounced for the intermediate and slow pools. Altogether, our results demonstrate that it is particularly difficult to derive reasonable estimates of the humified organic matter pool and the inert organic matter pool from inverse modelling of mineralization rates observed during incubation experiments.
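A minimal sketch of this inverse idea follows, with a two-pool first-order model standing in for ROTHC and the emcee ensemble sampler standing in for DREAM; pool sizes, rate constants, and the noise level are hypothetical assumptions.

```python
import numpy as np
import emcee

rng = np.random.default_rng(5)
t = np.linspace(10, 900, 60)          # incubation days with mineralization measurements
k_fast, k_slow = 0.05, 0.002          # fixed decomposition rate constants (1/day)

def mineralization(pools, t):
    """CO2 evolution rate from two first-order pools with initial sizes (C_fast, C_slow)."""
    c_fast, c_slow = pools
    return c_fast * k_fast * np.exp(-k_fast * t) + c_slow * k_slow * np.exp(-k_slow * t)

obs = mineralization([200.0, 800.0], t) + rng.normal(0, 0.2, t.size)  # synthetic data

def log_prob(pools):
    if np.any(pools < 0) or np.any(pools > 5000):
        return -np.inf                 # flat prior on plausible initial pool sizes
    resid = obs - mineralization(pools, t)
    return -0.5 * np.sum((resid / 0.2) ** 2)   # Gaussian likelihood, known noise sd

sampler = emcee.EnsembleSampler(nwalkers=32, ndim=2, log_prob_fn=log_prob)
sampler.run_mcmc(rng.uniform(100, 1000, (32, 2)), 2000, progress=False)
flat = sampler.get_chain(discard=500, flat=True)
print("posterior means (fast, slow pools):", flat.mean(axis=0).round(1))
```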
Pseudo-Linear Attitude Determination of Spinning Spacecraft
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2004-01-01
This paper presents the overall mathematical model and results from pseudo-linear recursive estimators of attitude and rate for a spinning spacecraft. The measurements considered are vector measurements obtained by sun sensors, fixed-head star trackers, horizon sensors, and three-axis magnetometers. Two filters are proposed for estimating the attitude as well as the angular rate vector. One filter, called the q-Filter, yields the attitude estimate as a quaternion estimate, and the other filter, called the D-Filter, yields the estimated direction cosine matrix. Because the spacecraft is gyro-less, Euler's equation of angular motion of rigid bodies is used to enable the estimation of the angular velocity. A simpler Markov model is suggested as a replacement for Euler's equation in the case where the vector measurements are obtained at high rates relative to the spacecraft angular rate. The performance of the two filters is examined using simulated data.
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
Economic policy optimization based on both one stochastic model and the parametric control theory
NASA Astrophysics Data System (ADS)
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated by the Bayesian approach, based on its log-linearization. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators obtained from the estimated nonlinear model and from its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
Garfield, R; Leu, C S
2000-06-01
Many reports on Iraq suggest that a rise in rates of death and disease has occurred since the Gulf War of January/February 1991 and the economic sanctions that followed it. Four preliminary models, based on unadjusted projections, were developed. A logistic regression model was then developed on the basis of six social variables in Iraq and comparable information from countries in the State of the World's Children report. Missing data were estimated for this model by a multiple imputation procedure. The final model depends on three socio-medical indicators: adult literacy, nutritional stunting of children under 5 years, and access to piped water. The model successfully predicted the mortality rate both in 1990, under stable conditions, and in 1991, following the Gulf War. For 1996, after 5 years of sanctions and prior to receipt of humanitarian food via the oil-for-food programme, the model shows mortality among children under 5 to have reached an estimated 87 per 1000, a rate last experienced more than 30 years ago. Accurate and timely estimates of mortality levels in developing countries are costly and require considerable methodological expertise. A rapid estimation technique like the one developed here may be a useful tool for quick and efficient estimation of mortality rates among under-5-year-olds in countries where good mortality data are not routinely available. This is especially true for countries with complex humanitarian emergencies, where information on mortality changes can guide interventions and the social stability needed to use standard demographic methods does not exist.
Estimating wildland fire rate of spread in a spatially nonuniform environment
Francis M Fujioka
1985-01-01
Estimating rate of fire spread is a key element in planning for effective fire control. Land managers use the Rothermel spread model, but the model assumptions are violated when fuel, weather, and topography are nonuniform. This paper compares three averaging techniques--arithmetic mean of spread rates, spread based on mean fuel conditions, and harmonic mean of spread...
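The difference between two of the averaging schemes is easy to state: for fire traveling through equal-length segments with different local spread rates, total travel time implies the harmonic mean rather than the arithmetic mean. A small sketch with hypothetical rates:

```python
import numpy as np

# Hypothetical local spread rates (m/min) across equal-length fuel segments.
rates = np.array([2.0, 10.0, 4.0])

arithmetic = rates.mean()
harmonic = len(rates) / np.sum(1.0 / rates)  # equals total distance / total travel time

print(f"arithmetic mean: {arithmetic:.2f} m/min")
print(f"harmonic mean:   {harmonic:.2f} m/min (governs arrival time across segments)")
```

Because the slowest segments dominate travel time, the harmonic mean is always at or below the arithmetic mean, which is why the choice of averaging technique matters in nonuniform fuels.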
Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.
2000-01-01
Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (α̂[S_radioed, S_banded] = log(Ŝ_radioed/Ŝ_banded) = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (ψ̂ = 0.911 ± 0.020; α̂[ψ11, ψ22] = 0.0161 ± 0.047), and recapture rates (p̂ = 0.097 ± 0.016) of banded and radio-marked individuals were not different (α̂[p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] <2%) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide adequate data for useful inference from this model, study designs should seek a minimum of 25 animals of each marking type observed (marked or observed via telemetry) in each time period and geographic stratum.
Estimation of death rates in US states with small subpopulations.
Voulgaraki, Anastasia; Wei, Rong; Kedem, Benjamin
2015-05-20
In US states with small subpopulations, the observed mortality rates are often zero, particularly among young ages. Because in life tables, death rates are reported mostly on a log scale, zero mortality rates are problematic. To overcome the observed zero death rates problem, appropriate probability models are used. Using these models, observed zero mortality rates are replaced by the corresponding expected values. This enables logarithmic transformations and, in some cases, the fitting of the eight-parameter Heligman-Pollard model to produce mortality estimates for ages 0-130 years, a procedure illustrated in terms of mortality data from several states. Copyright © 2014 John Wiley & Sons, Ltd.
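For reference, the eight-parameter Heligman-Pollard model mentioned above is commonly written in the following standard form (the paper may use a variant):

```latex
% Heligman-Pollard (1980) mortality law: odds of death at age x
\[
  \frac{q_x}{1-q_x} = \underbrace{A^{(x+B)^{C}}}_{\text{childhood}}
                    + \underbrace{D\,e^{-E\,(\ln x - \ln F)^{2}}}_{\text{accident hump}}
                    + \underbrace{G\,H^{x}}_{\text{senescence}}
\]
```

The three terms describe childhood mortality, the young-adult accident hump, and senescent mortality, with the eight parameters A through H fitted to the observed (or model-smoothed) death rates.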
Harris, Keith M; Thandrayen, Joanne; Samphoas, Chien; Se, Pros; Lewchalermwongse, Boontriga; Ratanashevorn, Rattanakorn; Perry, Megan L; Britts, Choloe
2016-04-01
This study tested a low-cost method for estimating suicide rates in developing nations that lack adequate statistics. Data comprised reported suicides from Cambodia's 2 largest newspapers. Capture-recapture modeling estimated a suicide rate of 3.8/100 000 (95% CI = 2.5-6.7) for 2012. That compares with World Health Organization estimates of 1.3 to 9.4/100 000 and a Cambodian government estimate of 3.5/100 000. Suicide rates of males were twice those of females, and rates of those <40 years were twice those of those ≥40 years. Capture-recapture modeling with newspaper reports proved a reasonable method for estimating suicide rates for countries with inadequate official data. These methods are low-cost and can be applied to regions with at least 2 newspapers with overlapping reports. Means to further improve this approach are discussed. These methods are applicable to both recent and historical data, which can benefit epidemiological work, and may also be applicable to homicides and other statistics. © 2016 APJPH.
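With two overlapping newspaper lists, the simplest two-source capture-recapture estimate is the Chapman-corrected Lincoln-Petersen estimator; the counts and population size below are hypothetical, not Cambodia's data.

```python
import numpy as np

n1, n2, m = 70, 55, 30   # suicides in paper A, in paper B, and reported by both (hypothetical)

# Chapman estimator of the total number of suicides, with its standard variance.
N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
ci = N_hat - 1.96 * np.sqrt(var), N_hat + 1.96 * np.sqrt(var)

population = 3_400_000   # hypothetical at-risk population
print(f"estimated suicides: {N_hat:.0f} (95% CI {ci[0]:.0f}-{ci[1]:.0f})")
print(f"rate per 100,000: {1e5 * N_hat / population:.1f}")
```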
Park, Tae-Ryong; Brooks, John M; Chrischilles, Elizabeth A; Bergus, George
2008-01-01
To contrast methods for assessing the health effects of a treatment rate change when treatment benefits are heterogeneous across patients, antibiotic prescribing for children with otitis media (OM) in Iowa Medicaid is used as the empirical example. Instrumental variable (IV) and linear probability model (LPM) estimators are used to estimate the effect of antibiotic treatment on cure probabilities for children with OM in Iowa Medicaid. Local area physician supply per capita is the instrument in the IV models. Estimates are contrasted in terms of their ability to support inferences for patients whose treatment choices may be affected by a change in population treatment rates. The instrument was positively related to the probability of being prescribed an antibiotic. LPM estimates showed a positive effect of antibiotics on OM patient cure probability, while IV estimates showed no relationship between antibiotics and patient cure probability. Linear probability model estimation yields the average effects of the treatment on patients who were treated. IV estimation yields the average effects for patients whose treatment choices were affected by the instrument. As antibiotic treatment effects are heterogeneous across OM patients, our estimates from these approaches are aligned with clinical evidence and theory. The average estimate for treated patients (higher severity) from the LPM model is greater than the estimate for patients whose treatment choices are affected by the instrument (lower severity) from the IV models. Based on our IV estimates, it appears that lowering antibiotic use in OM patients in Iowa Medicaid did not result in lost cures.
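The LPM-versus-IV contrast can be sketched with a Wald-type IV estimate on synthetic data in which treatment effects are heterogeneous and treatment is confounded by unobserved severity; the instrument stands in for local physician supply, and every coefficient is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
z = rng.normal(size=n)          # instrument: local physician supply (standardized)
severity = rng.normal(size=n)   # unobserved confounder
treat = ((0.5 * z + severity + rng.normal(size=n)) > 0).astype(float)
cure = ((0.3 * treat * (severity > 0) - 0.2 * severity + rng.normal(size=n)) > 0).astype(float)

# LPM (OLS slope): average effect among the mix of patients actually treated.
b_ols = np.cov(treat, cure)[0, 1] / np.var(treat, ddof=1)
# IV (Wald ratio): average effect among patients whose treatment responds to the instrument.
b_iv = np.cov(z, cure)[0, 1] / np.cov(z, treat)[0, 1]

print(f"LPM estimate: {b_ols:.3f}, IV estimate: {b_iv:.3f}")
```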
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate the risk. This study introduces two models of the distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation, concluding that the Gaussian kernel density distribution better imitates the distribution of bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to delineate the bimodal recovery rates of bonds is optimal in credit risk management.
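The comparison can be sketched with scipy: fit a (unimodal) Beta distribution and a Gaussian kernel density estimate to a bimodal sample of recovery rates and compare the two densities. The sample below is synthetic, mimicking the bimodal pattern described, not Moody's data.

```python
import numpy as np
from scipy import stats

# Synthetic bimodal recovery rates, mimicking the pattern in loan/bond data.
rng = np.random.default_rng(7)
recovery = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 200)])

# Parametric Beta fit (forced unimodal family) versus a Gaussian kernel density estimate.
a, b, loc, scale = stats.beta.fit(recovery, floc=0, fscale=1)
kde = stats.gaussian_kde(recovery)

grid = np.linspace(0.01, 0.99, 5)
print("beta pdf:", np.round(stats.beta.pdf(grid, a, b), 2))   # smooths over the two modes
print("kde  pdf:", np.round(kde(grid), 2))                    # recovers the bimodal shape
```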
Screening level risk assessment model for chemical fate and effects in the environment.
Arnot, Jon A; Mackay, Don; Webster, Eva; Southwood, Jeanette M
2006-04-01
A screening level risk assessment model is developed and described to assess and prioritize chemicals by estimating environmental fate and transport, bioaccumulation, and exposure to humans and wildlife for a unit emission rate. The most sensitive risk endpoint is identified and a critical emission rate is then calculated as a result of that endpoint being reached. Finally, this estimated critical emission rate is compared with the estimated actual emission rate as a risk assessment factor. This "back-tracking" process avoids the use of highly uncertain emission rate data as model input. The application of the model is demonstrated in detail for three diverse chemicals and in less detail for a group of 70 chemicals drawn from the Canadian Domestic Substances List. The simple Level II and the more complex Level III fate calculations are used to "bin" substances into categories of similar probable risk. The essential role of the model is to synthesize information on chemical and environmental properties within a consistent mass balance framework to yield an overall estimate of screening level risk with respect to the defined endpoint. The approach may be useful to identify and prioritize those chemicals of commerce that are of greatest potential concern and require more comprehensive modeling and monitoring evaluations in actual regional environments and food webs.
Estimating HIV incidence and detection rates from surveillance data.
Posner, Stephanie J; Myers, Leann; Hassig, Susan E; Rice, Janet C; Kissinger, Patricia; Farley, Thomas A
2004-03-01
Markov models that incorporate HIV test information can increase precision in estimates of new infections and permit the estimation of detection rates. The purpose of this study was to assess the functioning of a Markov model for estimating new HIV infections and HIV detection rates in Louisiana using surveillance data. We expanded a discrete-time Markov model by accounting for the change in AIDS case definition made by the Centers for Disease Control and Prevention in 1993. The model was applied to quarterly HIV/AIDS surveillance data reported in Louisiana from 1981 to 1996 for various exposure and demographic subgroups. When modeling subgroups defined by exposure categories, we adjusted for the high proportion of missing exposure information among recent cases. We ascertained sensitivity to changes in various model assumptions. The model was able to produce results consistent with other sources of information in the state. Estimates of new infections indicated a transition of the HIV epidemic in Louisiana from (1) predominantly white men and men who have sex with men to (2) women, blacks, and high-risk heterosexuals. The model estimated that 61% of all HIV/AIDS cases were detected and reported by 1996, yet half of all HIV/non-AIDS cases were yet to be detected. Sensitivity analyses demonstrated that the model was robust to several uncertainties. In general, the methodology provided a useful and flexible alternative for estimating infection and detection trends using data from a U.S. surveillance program. Its use for estimating current infection will need further exploration to address assumptions related to newer treatments.
Park, Taeyoung; Krafty, Robert T; Sánchez, Alvaro I
2012-07-27
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public.
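The full method fits a step-function baseline of unknown dimension by MCMC with partial collapse; a far simpler fixed-dimension illustration is a Poisson GLM with an offset whose single change point is chosen by profiling the log-likelihood. The sketch below uses simulated counts and an invented binary policy covariate, not the Cali homicide data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
policy = (t % 14 < 7).astype(float)   # hypothetical on/off restriction covariate
exposure = np.full(n, 1.0)            # offset term (e.g., population at risk)

# Simulate counts whose baseline rate jumps at t = 120.
base = np.where(t < 120, 1.0, 2.0)
y = rng.poisson(base * np.exp(0.3 * policy) * exposure)

def fit(tau):
    # Poisson GLM with a step in the log baseline at candidate change point tau.
    X = sm.add_constant(np.column_stack([policy, (t >= tau).astype(float)]))
    return sm.GLM(y, X, family=sm.families.Poisson(),
                  offset=np.log(exposure)).fit()

# Profile the log-likelihood over candidate change points.
best = max(range(20, 180), key=lambda tau: fit(tau).llf)
res = fit(best)
print("estimated change point:", best)
print(res.params)  # intercept, policy effect, log jump in baseline rate
```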
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2006-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5-deg.-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5-deg.-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5-deg.-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5-deg.-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2004-01-01
Rainfall rate estimates from space-borne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5-deg.-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5-deg.-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with: (a) additional contextual information brought to the estimation problem, and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5-deg.-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar. Error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5-deg.-resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of cloud-resolving model simulated profiles supporting the algorithm.
Estimating residual fault hitting rates by recapture sampling
NASA Technical Reports Server (NTRS)
Lee, Larry; Gupta, Rajan
1988-01-01
For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases at a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements, and the extendibility of the method, are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
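For readers unfamiliar with the Kasai estimator, the sketch below contrasts it with a grid-search maximum likelihood estimate (the periodogram maximizer, which is the MLE for a single complex exponential in additive white Gaussian noise). The sampling rate, Doppler shift, and noise level are invented, and the paper's actual MLE derivation is more detailed than this stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100e3    # A-line acquisition rate in Hz -- assumed value
f_true = 3e3  # true Doppler shift in Hz -- assumed value
n = 64

t = np.arange(n) / fs
z = np.exp(2j * np.pi * f_true * t) \
    + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Kasai (lag-one autocorrelation) estimator: the phase of the mean lag-one
# product gives the mean Doppler angular frequency per sample.
r1 = np.mean(z[1:] * np.conj(z[:-1]))
f_kasai = np.angle(r1) * fs / (2 * np.pi)

# Grid-search ML estimate under additive white Gaussian noise: for a single
# complex exponential this is the frequency maximizing matched-filter power.
freqs = np.linspace(-fs / 2, fs / 2, 20001)
power = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ z)
f_mle = freqs[np.argmax(power)]

print(f"Kasai: {f_kasai:.1f} Hz, grid MLE: {f_mle:.1f} Hz, true: {f_true:.0f} Hz")
```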
Earthquake Potential Models for China
NASA Astrophysics Data System (ADS)
Rong, Y.; Jackson, D. D.
2002-12-01
We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
Jung, Dae Ho; Lee, Joon Woo; Kang, Woo Hyun; Hwang, In Ha; Son, Jung Eek
2018-01-04
Photosynthesis is an important physiological response for determination of CO₂ fertilization in greenhouses and estimation of crop growth. In order to estimate the whole plant photosynthetic rate, it is necessary to investigate how light interception by crops changes with environmental and morphological factors. The objectives of this study were to analyze plant light interception using a three-dimensional (3D) plant model and ray-tracing, determine the spatial distribution of the photosynthetic rate, and estimate the whole plant photosynthetic rate of Irwin mango (Mangifera indica L. cv. Irwin) grown in greenhouses. In the case of mangoes, it is difficult to measure actual light interception at the canopy level due to their vase shape. A two-year-old Irwin mango tree was used to measure the whole plant photosynthetic rate. Light interception and whole plant photosynthetic rate were measured under artificial and natural light conditions using a closed chamber (1 × 1 × 2 m). A 3D plant model was constructed and ray-tracing simulation was conducted for calculating the photosynthetic rate with a two-variable leaf photosynthetic rate model of the plant. Under artificial light, the estimated photosynthetic rate increased from 2.0 to 2.9 μmolCO₂·m⁻²·s⁻¹ with increasing CO₂ concentration. On the other hand, under natural light, the photosynthetic rate increased from 0.2 μmolCO₂·m⁻²·s⁻¹ at 06:00 to a maximum of 7.3 μmolCO₂·m⁻²·s⁻¹ at 09:00, then gradually decreased to -1.0 μmolCO₂·m⁻²·s⁻¹ at 18:00. In validation, simulation results showed good agreement with measured results with R² = 0.79 and RMSE = 0.263. The results suggest that this method could accurately estimate the whole plant photosynthetic rate and be useful for pruning and adequate CO₂ fertilization.
Hone, J.; Pech, R.; Yip, P.
1992-01-01
Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. To estimate whether establishment will occur requires extensive experience or a mathematical model of disease dynamics and estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (RDRS), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed and sources of bias and alternative methods discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed. PMID:1582476
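The establishment criterion is the usual threshold on the basic reproductive rate. A minimal worked example follows, with all parameter values hypothetical rather than the paper's swine-fever estimates.

```python
# All parameter values below are hypothetical, for illustration only; the
# paper estimates beta and R0 from classical swine fever data in wild pigs.
beta = 0.0012             # transmission coefficient, per host per day
N = 500                   # susceptible wild pig population size
infectious_period = 10.0  # mean infectious period, days

# Basic reproductive rate under density-dependent transmission: the expected
# number of secondary infections from one case in a fully susceptible herd.
R0 = beta * N * infectious_period
print(f"R0 = {R0:.2f} ->",
      "disease establishes" if R0 >= 1 else "disease fades out")
```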
The Nazca-South American convergence rate and the recurrence of the great 1960 Chilean earthquake
NASA Technical Reports Server (NTRS)
Stein, S.; Engeln, J. F.; Demets, C.; Gordon, R. G.; Woods, D.
1986-01-01
The seismic slip rate along the Chile Trench estimated from the slip in the great 1960 earthquake and the recurrence history of major earthquakes has been interpreted as consistent with the subduction rate of the Nazca plate beneath South America. The convergence rate, estimated from global relative plate motion models, depends significantly on closure of the Nazca - Antarctica - South America circuit. NUVEL-1, a new plate motion model which incorporates recently determined spreading rates on the Chile Rise, shows that the average convergence rate over the last three million years is slower than previously estimated. If this time-averaged convergence rate provides an appropriate upper bound for the seismic slip rate, either the characteristic Chilean subduction earthquake is smaller than the 1960 event, the average recurrence interval is greater than observed in the last 400 years, or both. These observations bear out the nonuniformity of plate motions on various time scales, the variability in characteristic subduction zone earthquake size, and the limitations of recurrence time estimates.
NASA Astrophysics Data System (ADS)
Heinlein, S. N.
2013-12-01
Remote sensing data sets are widely used to evaluate surface manifestations of active tectonics. This study utilizes ASTER GDEM and Landsat ETM+ data sets with Google Earth images draped over terrain models. It evaluates 1) the surface geomorphology surrounding the study area with these data sets and 2) the morphology of the Kumroch Fault, using diffusion modeling to estimate a constant diffusivity (κ) and slip rates from ground profiles measured across fault scarps by Kozhurin et al. (2006). Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a fault's surface and may therefore yield more accurate slip rate estimates than simply dividing scarp offset by the age of the ruptured surface. Profiles of scarps measured by Kozhurin et al. (2006), formed by several events distributed through time, were evaluated using a constant slip rate (CSR) solution, which yields the value A/κ (half the slip rate divided by the diffusivity). The time elapsed since slip initiated on the fault is determined by establishing a value for κ and measuring total scarp offset. CSR nonlinear modeling estimates of κ range from 8 m²/ka to 14 m²/ka on the Kumroch Fault, indicating slip rates of 0.6-1.0 mm/yr since 3.4-3.7 ka. This method provides a quick and inexpensive way to gather data for a regional tectonic study and to establish estimated rates of tectonic activity. Analyses of the remote sensing data are providing new insight into the role of active tectonics within the region. Fault scarp diffusion rates were also estimated using the models of Mattson and Bruhn (2001) and DuRoss and Bruhn (2004), with the trench profiles of Kozhurin et al. (2006), Kozhurin (2007), Kozhurin et al. (2008), and Pinegina et al. (2012) providing calibrated ages; a dash (-) indicates that no data could be determined.
Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu
2012-01-01
The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) in every 20–549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested if they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. PMID:22487046
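A Beta-Bernoulli bandit gives a compact stand-in for the value-distribution bookkeeping described above; it is not the paper's exact Bayesian Q-learning model (in particular it omits the asymmetric reward/non-reward update), but it shows how a positive weight on posterior variance produces uncertainty-seeking choices. The reward probabilities follow one of the paper's settings; the variance weight is invented.

```python
import numpy as np

rng = np.random.default_rng(3)
p_reward = [0.66, 0.33]  # one of the paper's probability settings (left, right)
alpha = np.ones(2)       # Beta posterior parameters per hole
beta_ = np.ones(2)
c_var = 2.0              # positive weight on variance -> uncertainty seeking

choices, n_trials = [], 500
for _ in range(n_trials):
    mean = alpha / (alpha + beta_)
    var = alpha * beta_ / ((alpha + beta_) ** 2 * (alpha + beta_ + 1))
    # Action value = posterior mean plus a positive variance bonus, mirroring
    # the paper's finding that the estimated variance coefficient was positive.
    a = int(np.argmax(mean + c_var * var))
    r = rng.random() < p_reward[a]
    # Conjugate (Beta-Bernoulli) update of the action-value distribution.
    alpha[a] += r
    beta_[a] += 1 - r
    choices.append(a)

print("left-hole choice rate:", 1 - np.mean(choices))
```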
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation-maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation-maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
NaCl nucleation from brine in seeded simulations: Sources of uncertainty in rate estimates.
Zimmermann, Nils E R; Vorselaars, Bart; Espinosa, Jorge R; Quigley, David; Smith, William R; Sanz, Eduardo; Vega, Carlos; Peters, Baron
2018-06-14
This work reexamines seeded simulation results for NaCl nucleation from a supersaturated aqueous solution at 298.15 K and 1 bar pressure. We present a linear regression approach for analyzing seeded simulation data that provides both nucleation rates and uncertainty estimates. Our results show that rates obtained from seeded simulations rely critically on a precise driving force for the model system. The driving force vs. solute concentration curve need not exactly reproduce that of the real system, but it should accurately describe the thermodynamic properties of the model system. We also show that rate estimates depend strongly on the nucleus size metric. We show that the rate estimates systematically increase as more stringent local order parameters are used to count members of a cluster and provide tentative suggestions for appropriate clustering criteria.
Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80
NASA Astrophysics Data System (ADS)
Pruet, Jason; Fuller, George M.
2003-11-01
We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.
Modelling of seasonal influenza and estimation of the burden in Tunisia.
Chlif, S; Aissi, W; Bettaieb, J; Kharroubi, G; Nouira, M; Yazidi, R; El Moussi, A; Maazaoui, L; Slim, A; Salah, A Ben
2016-10-02
The burden of influenza in Tunisia was estimated from surveillance data using epidemiological parameters of transmission, classical WHO tools, and mathematical modelling. The incidence rates of influenza-associated influenza-like illness (ILI) per 100 000 were 18 735 in the 2012/13 season, 5536 in 2013/14, and 12 602 in 2014/15. The estimated proportions of influenza-associated ILI in the total outpatient load were 3.16%, 0.86%, and 1.98% in the 3 seasons, respectively. The distribution of influenza viruses among positive patients in the 2014/15 season was: A(H3N2) 15.5%; A(H1N1)pdm2009 39.2%; and B virus 45.3%. From the estimated numbers of symptomatic cases, we estimated that the critical proportions of the population that should be vaccinated were 15%, 4% and 10%, respectively. Running the model for different values of R0, we quantified the number of symptomatic clinical cases, the clinical attack rates, the symptomatic clinical attack rates, and the number of deaths. More realistic versions of this model and improved estimates of parameters from surveillance data will strengthen the estimation of the burden of influenza.
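The critical vaccination proportions quoted above are consistent with the classical threshold p_c = 1 - 1/R0. The R0 values in the sketch below are back-calculated to reproduce the reported 15%, 4%, and 10%, so they are illustrative rather than the paper's fitted values.

```python
# Critical vaccination proportion from the classical threshold p_c = 1 - 1/R0.
# The R0 values are back-calculated illustrations, not the paper's estimates.
for season, R0 in [("2012/13", 1.18), ("2013/14", 1.04), ("2014/15", 1.11)]:
    p_c = max(0.0, 1 - 1 / R0)
    print(f"{season}: R0 = {R0:.2f} -> vaccinate at least {p_c:.0%}")
```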
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?
Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A
2015-06-01
Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how choice of inlet location and scaling law affect such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL·min⁻¹ was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. Choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL·min⁻¹. A square law was found to be more consistent with physiological flow rates than a cube law. Although impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
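The two scaling laws compared in this study are straightforward to state in code. In the sketch below, the cohort-average flow rate comes from the abstract, while the reference diameter is an assumed value for illustration.

```python
import numpy as np

Q_avg = 245.0  # assumed mean ICA flow rate, mL/min (from the paper)
D_avg = 4.0    # assumed mean ICA inlet diameter, mm -- illustrative value

def scaled_flow(D_mm, exponent):
    """Rescale the cohort-average flow by (D / D_avg)**exponent.

    exponent=2 holds mean velocity fixed (square law); exponent=3 holds
    wall shear stress fixed (a Murray-type cube law)."""
    return Q_avg * (D_mm / D_avg) ** exponent

for D in (3.0, 4.0, 5.0):
    print(f"D = {D} mm: square law {scaled_flow(D, 2):6.1f} mL/min, "
          f"cube law {scaled_flow(D, 3):6.1f} mL/min")
```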
An estimate for the thermal photon rate from lattice QCD
NASA Astrophysics Data System (ADS)
Brandt, Bastian B.; Francis, Anthony; Harris, Tim; Meyer, Harvey B.; Steinberg, Aman
2018-03-01
We estimate the production rate of photons by the quark-gluon plasma in lattice QCD. We propose a new correlation function which provides better control over the systematic uncertainty in estimating the photon production rate at photon momenta in the range πT/2 to 2πT. The relevant Euclidean vector current correlation functions are computed with Nf = 2 Wilson clover fermions in the chirally-symmetric phase. In order to estimate the photon rate, an ill-posed problem for the vector-channel spectral function must be regularized. We use both a direct model for the spectral function and a model-independent estimate from the Backus-Gilbert method to give an estimate for the photon rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyldesley, Scott, E-mail: styldesl@bccancer.bc.c; Delaney, Geoff; Foroudi, Farshad
Purpose: Estimates of the need for radiotherapy (RT) using different methods (criterion-based benchmarking [CBB] and the Canadian [C-EBEST] and Australian [A-EBEST] epidemiologically based estimates) exist for various cancer sites. We compared these model estimates to actual RT rates for lung, breast, and prostate cancers in British Columbia (BC). Methods and Materials: All cases of lung, breast, and prostate cancers in BC from 1997 to 2004 and all patients receiving RT within 1 year (RT1Y) and within 5 years (RT5Y) of diagnosis were identified. The RT1Y and RT5Y proportions in health regions with a cancer center were then calculated for the most recent year. RT rates were compared with CBB and EBEST estimates of RT needs. Variation was assessed by time and region. Results: The RT1Y rates in regions with a cancer center for lung, breast, and prostate cancers were 51%, 58%, and 33%, compared with 45%, 57%, and 32% for C-EBEST and 41%, 61%, and 37% for CBB models. The RT5Y rates in regions with a cancer center for lung, breast, and prostate cancers were 59%, 61%, and 40%, compared with 61%, 66%, and 61% for C-EBEST and 75%, 83%, and 60% for A-EBEST models. The RT1Y rates increased for breast and prostate cancers. Conclusions: C-EBEST and CBB model estimates are closer to the actual RT rates than the A-EBEST estimates. Application of these model estimates by health care decision makers should be undertaken with an understanding of the methods used and the assumptions on which they were based.
A New Model for the Estimation of Cell Proliferation Dynamics Using CFSE Data
2011-08-20
cells, and hence into the resulting division and death rates. Alternatively, we propose that there is information to be learned not only from... meaningful estimation of population proliferation and death rates in a manner which is unbiased and mechanistically sound. Significantly, this new model is... change in permitting the dependence of the proliferation and death rates (α and β) and the label loss rate (v) on both time t and measured FI x.
NASA Astrophysics Data System (ADS)
Norton, Andrew S.
An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age-structure is representative of the population age-structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information have accommodated this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase flexibility of model parameters. I collected known-fates data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening-weekend temperature and percent of corn harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (the percentage of mortality during the hunting season attributable to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when prior models did not inform these changes (i.e., prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information (i.e., an integrated AAH model) about survival outside the hunting season from known-fates data, predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.
Deslauriers, David; Rosburg, Alex J.; Chipps, Steven R.
2017-01-01
We developed a foraging model for young fishes that incorporates handling and digestion rate to estimate daily food consumption. Feeding trials were used to quantify functional feeding response, satiation, and gut evacuation rate. Once parameterized, the foraging model was then applied to evaluate effects of prey type, prey density, water temperature, and fish size on daily feeding rate by age-0 (19–70 mm) pallid sturgeon (Scaphirhynchus albus). Prey consumption was positively related to prey density (for fish >30 mm) and water temperature, but negatively related to prey size and the presence of sand substrate. Model evaluation results revealed good agreement between observed estimates of daily consumption and those predicted by the model (r2 = 0.95). Model simulations showed that fish feeding on Chironomidae or Ephemeroptera larvae were able to gain mass, whereas fish feeding solely on zooplankton lost mass under most conditions. By accounting for satiation and digestive processes in addition to handling time and prey density, the model provides realistic estimates of daily food consumption that can prove useful for evaluating rearing conditions for age-0 fishes.
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Di Matteo, T.; Lux, Thomas
2007-09-01
In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
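The scaling exponents H(q) mentioned above are commonly estimated from q-th order structure functions; a minimal version of that estimator (a standard approach, not necessarily the authors' exact implementation) is sketched below and checked on Brownian motion, for which H(q) = 0.5.

```python
import numpy as np

def generalized_hurst(x, q, taus=range(1, 20)):
    """Estimate H(q) from the scaling of the q-th order structure function
    S_q(tau) = mean(|x[t + tau] - x[t]|**q) ~ tau**(q * H(q))."""
    taus = np.asarray(list(taus))
    S = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
    slope = np.polyfit(np.log(taus), np.log(S), 1)[0]
    return slope / q

rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(size=20000))  # Brownian motion: H(q) = 0.5 for all q
print("H(1) =", round(generalized_hurst(x, 1), 3))
print("H(2) =", round(generalized_hurst(x, 2), 3))
```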
Detecting aseismic strain transients from seismicity data
Llenos, A.L.; McGuire, J.J.
2011-01-01
Aseismic deformation transients such as fluid flow, magma migration, and slow slip can trigger changes in seismicity rate. We present a method that can detect these seismicity rate variations and utilize these anomalies to constrain the underlying variations in stressing rate. Because ordinary aftershock sequences often obscure changes in the background seismicity caused by aseismic processes, we combine the stochastic Epidemic Type Aftershock Sequence (ETAS) model that describes aftershock sequences well and the physically based rate- and state-dependent friction seismicity model into a single seismicity rate model that models both aftershock activity and changes in background seismicity rate. We implement this model into a data assimilation algorithm that inverts seismicity catalogs to estimate space-time variations in stressing rate. We evaluate the method using a synthetic catalog, and then apply it to a catalog of M ≥ 1.5 events that occurred in the Salton Trough from 1990 to 2009. We validate our stressing rate estimates by comparing them to estimates from a geodetically derived slip model for a large creep event on the Obsidian Buttes fault. The results demonstrate that our approach can identify large aseismic deformation transients in a multidecade-long earthquake catalog and roughly constrain the absolute magnitude of the stressing rate transients. Our method can therefore provide a way to detect aseismic transients in regions where geodetic resolution in space or time is poor. Copyright 2011 by the American Geophysical Union.
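For reference, the temporal ETAS conditional intensity that the combined model builds on can be written in a few lines. The parameter values in the sketch are toy numbers; the paper's model additionally lets the background rate vary in space and time rather than holding it constant.

```python
import numpy as np

def etas_intensity(t, history, mu, K, c, p, alpha, M0):
    """Conditional intensity of the temporal ETAS model:
    lambda(t) = mu + sum over past events i of
                K * exp(alpha * (M_i - M0)) / (t - t_i + c)**p.
    Here mu is held constant; a time-varying mu(t) is what the combined
    ETAS/rate-state approach ultimately estimates."""
    times, mags = history
    past = times < t
    trig = K * np.exp(alpha * (mags[past] - M0)) / (t - times[past] + c) ** p
    return mu + trig.sum()

# Toy catalog: event times (days) and magnitudes.
times = np.array([1.0, 3.5, 4.0])
mags = np.array([4.2, 3.1, 5.0])
print(etas_intensity(5.0, (times, mags), mu=0.2, K=0.05,
                     c=0.01, p=1.1, alpha=1.0, M0=3.0))
```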
Robel, G.L.; Fisher, W.L.
1999-01-01
Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.
Building a kinetic Monte Carlo model with a chosen accuracy.
Bhute, Vijesh J; Chatterjee, Abhijit
2013-06-28
The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we developed the first error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.
[The health gap in Mexico, measured through child mortality].
Gutiérrez, Juan Pablo; Bertozzi, Stefano M
2003-01-01
To estimate the health gap in Mexico, as evidenced by the difference between the observed 1998 child mortality rate and the rate estimated for the same year from social and economic indicators, using rates from other countries. An econometric model was developed, using the 1998 child mortality rate (CMR) as the dependent variable, and macro-social and economic indicators as independent variables. The model included 70 countries for which complete data were available. The proposed model explained over 90% of the variability in CMR among countries. The expected CMR for Mexico was 22% lower than the observed rate, which represented nearly 20,000 excess deaths. After adjusting for differences in productivity, distribution of wealth, and investment in human capital, the excess child mortality rate suggested efficiency problems in the Mexican health system, at least in relation to services intended to reduce child mortality. The English version of this paper is available at: http://www.insp.mx/salud/index.html.
NASA Astrophysics Data System (ADS)
Srinivas, Vikram; Menon, Sandeep; Osterman, Michael; Pecht, Michael G.
2013-08-01
Solder durability models frequently focus on the applied strain range; however, the rate of applied loading, or strain rate, is also important. In this study, an approach to incorporate strain rate dependency into durability estimation for solder interconnects is examined. Failure data were collected for SAC105 solder ball grid arrays assembled with SAC305 solder that were subjected to displacement-controlled torsion loads. Strain-rate-dependent (Johnson-Cook model) and strain-rate-independent elastic-plastic properties were used to model the solders in finite-element simulation. Test data were then used to extract damage model constants for the reduced-Ag SAC solder. A generalized Coffin-Manson damage model was used to estimate the durability. The mechanical fatigue durability curve for reduced-silver SAC solder was generated and compared with durability curves for SAC305 and Sn-Pb from the literature.
Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures
Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.
2016-01-01
Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM. PMID:27512038
Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon
2017-01-01
[Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok
2017-09-30
The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
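The single-predictor regression and its standard error of estimate are simple to reproduce on synthetic data. The coefficients used to simulate HRLT from HRT below are invented, not the published model's; the noise level is set near the reported 11 bpm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for the study's treadmill data: the true coefficients
# below are invented, not those of the published regression models.
hrt = rng.normal(165, 10, 150)                    # heart rate threshold, bpm
hrlt = 0.85 * hrt + 20 + rng.normal(0, 11, 150)   # HR at lactate threshold

# Ordinary least-squares fit of HRLT on HRT, as in the paper's
# single-predictor regression models.
X = np.column_stack([np.ones_like(hrt), hrt])
coef, *_ = np.linalg.lstsq(X, hrlt, rcond=None)
resid = hrlt - X @ coef
see = np.sqrt(np.sum(resid**2) / (len(hrlt) - 2))  # standard error of estimate
print(f"HRLT ≈ {coef[1]:.2f} * HRT + {coef[0]:.1f}, SEE = {see:.1f} bpm")
```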
Using population models to evaluate management alternatives for Gulf Striped Bass
Aspinwall, Alexander P.; Irwin, Elise R.; Lloyd, M. Clint
2017-01-01
Interstate management of Gulf Striped Bass Morone saxatilis has involved a thirty-year cooperative effort involving Federal and State agencies in Georgia, Florida and Alabama (Apalachicola-Chattahoochee-Flint Gulf Striped Bass Technical Committee). The Committee has recently focused on developing an adaptive framework for conserving and restoring Gulf Striped Bass in the Apalachicola, Chattahoochee, and Flint River (ACF) system. To evaluate the consequences and tradeoffs among management activities, population models were used to inform management decisions. Stochastic matrix models were constructed with varying recruitment and stocking rates to simulate effects of management alternatives on Gulf Striped Bass population objectives. An age-classified matrix model that incorporated stock fecundity estimates and survival estimates was used to project population growth rate. In addition, combinations of management alternatives (stocking rates, Hydrilla control, harvest regulations) were evaluated with respect to how they influenced Gulf Striped Bass population growth. Annual survival and mortality rates were estimated from catch-curve analysis, while fecundity was estimated and predicted using a linear least squares regression analysis of fish length versus egg number from hatchery brood fish data. Stocking rates and stocked-fish survival rates were estimated from census data. Results indicated that management alternatives could be an effective approach to increasing the Gulf Striped Bass population. Population abundance was greatest under maximum stocking effort, maximum Hydrilla control and a moratorium. Conversely, population abundance was lowest under no stocking, no Hydrilla control and the current harvest regulation. Stocking rates proved to be an effective management strategy; however, low survival estimates of stocked fish (1%) limited the potential for population growth. Hydrilla control increased the survival rate of stocked fish and provided higher estimates of population abundances than maximizing the stocking rate. A change in the current harvest regulation (50% harvest regulation) was not an effective alternative to increasing the Gulf Striped Bass population size. Applying a moratorium to the Gulf Striped Bass fishery increased survival rates from 50% to 74% and resulted in the largest population growth of the individual management alternatives. These results could be used by the Committee to inform management decisions for other populations of Striped Bass in the Gulf Region.
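The stochastic matrix projections described above follow the standard Leslie-matrix pattern. The sketch below projects an age-classified population with lognormal recruitment variability; every vital rate in it is hypothetical, not the committee's estimate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative 4-age-class Leslie matrix for a striped bass-like population:
# top row = per-capita recruitment contribution, sub-diagonal = survival.
# All values are hypothetical placeholders.
def leslie(recruit_scale):
    A = np.zeros((4, 4))
    A[0, :] = np.array([0.0, 0.5, 2.0, 4.0]) * recruit_scale
    A[1, 0], A[2, 1], A[3, 2] = 0.30, 0.55, 0.70
    return A

n = np.array([1000.0, 200.0, 80.0, 30.0])  # starting abundance by age class
years, sizes = 25, []
for _ in range(years):
    # Stochastic recruitment: a lognormal year effect scales the top row.
    n = leslie(rng.lognormal(mean=0.0, sigma=0.5)) @ n
    sizes.append(n.sum())

lam = (sizes[-1] / sizes[0]) ** (1 / (years - 1))
print(f"realized mean annual growth rate: {lam:.3f}")
```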
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown dependence structure of the gap times. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
A Bayesian framework to estimate diversification rates and their variation through time and space
2011-01-01
Background: Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results: We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions: Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling. PMID:22013891
ERIC Educational Resources Information Center
Kessler, Lawrence M.
2013-01-01
In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as…
Bates, Jonathan; Parzynski, Craig S; Dhruva, Sanket S; Coppi, Andreas; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Shaw, Richard E; Warner, Frederick; Krumholz, Harlan M; Ross, Joseph S
2018-06-12
To estimate the medical device utilization needed to detect safety differences among implantable cardioverter-defibrillator (ICD) generator models and to compare these estimates to utilization in practice. We conducted repeated sample size estimates to calculate the medical device utilization needed, systematically varying device-specific safety event rate ratios and significance levels while maintaining 80% power, testing 3 average adverse event rates (3.9, 6.1, and 12.6 events per 100 person-years) estimated from the American College of Cardiology's 2006 to 2010 National Cardiovascular Data Registry of ICDs. We then compared these estimates with actual medical device utilization. At significance level 0.05 and 80% power, 34% or fewer ICD models accrued sufficient utilization in practice to detect safety differences for rate ratios <1.15 and an average event rate of 12.6 events per 100 person-years. For average event rates of 3.9 and 12.6 events per 100 person-years, 30% and 50% of ICD models, respectively, accrued sufficient utilization for a rate ratio of 1.25, whereas 52% and 67% did so for a rate ratio of 1.50. Because actual ICD utilization was not uniformly distributed across ICD models, the proportion of individuals receiving any ICD that accrued sufficient utilization in practice was 0% to 21%, 32% to 70%, and 67% to 84% for rate ratios of 1.05, 1.15, and 1.25, respectively, for the range of 3 average adverse event rates. Small safety differences among ICD generator models are unlikely to be detected through routine surveillance given current ICD utilization in practice, but large safety differences can be detected for most patients at anticipated average adverse event rates. Copyright © 2018 John Wiley & Sons, Ltd.
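The person-time needed to detect a given adverse-event rate ratio can be approximated with a two-sided Wald test on the log rate ratio. The sketch below uses this normal approximation, which may differ in detail from the study's sample size machinery.

```python
from math import log
from scipy.stats import norm

def person_years_per_group(rate1, rate_ratio, alpha=0.05, power=0.80):
    """Approximate person-years needed in each of two device groups to detect
    a given adverse-event rate ratio with a two-sided Wald test on the log
    rate ratio (normal approximation for Poisson event counts)."""
    rate2 = rate1 * rate_ratio
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_unit = 1 / rate1 + 1 / rate2  # variance of log-RR per person-year
    return z**2 * var_unit / log(rate_ratio) ** 2

# 12.6 events per 100 person-years, the highest average rate in the registry.
base = 12.6 / 100
for rr in (1.05, 1.15, 1.25, 1.50):
    print(f"RR = {rr}: ~{person_years_per_group(base, rr):,.0f} person-years per group")
```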
Robust estimation of simulated urinary volume from camera images under bathroom illumination.
Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji
2016-08-01
The general uroflowmetry method involves a risk of nosocomial infection and considerable time and effort for recording. Medical institutions therefore need to measure voided volume simply and hygienically. An earlier study proposed a multiple cylindrical model that can estimate the fluid flow rate from images photographed with a camera. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, when measurements are performed in a bathroom, variation in illumination generates large amounts of noise in the extraction of the liquid region, so the estimation error becomes very large. In other words, the camera specifications of the earlier study regarding shutter type and frame rate were too strict. In this study, we relax those specifications to achieve flow rate estimation with a general-purpose camera. In order to determine an appropriate approximate curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of the proposed method for flow rate estimation.
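The RANSAC step is the part of the pipeline that tolerates illumination noise. A generic RANSAC polynomial fit, shown below as a minimal sketch on synthetic edge data rather than the paper's implementation, captures the idea.

```python
import numpy as np

rng = np.random.default_rng(7)

def ransac_poly(x, y, degree=2, n_iter=200, tol=2.0, min_frac=0.5):
    """Fit a polynomial robustly: sample minimal subsets, count inliers
    within tol, and refit on the best inlier set (basic RANSAC loop)."""
    best_inliers = None
    m = degree + 1  # minimal number of points for an exact polynomial fit
    for _ in range(n_iter):
        idx = rng.choice(len(x), size=m, replace=False)
        coef = np.polyfit(x[idx], y[idx], degree)
        inliers = np.abs(np.polyval(coef, x) - y) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.sum() < min_frac * len(x):
        raise ValueError("too few inliers for a trustworthy fit")
    return np.polyfit(x[best_inliers], y[best_inliers], degree)

# Synthetic stream-edge profile contaminated by illumination-noise outliers.
x = np.linspace(0, 50, 200)
y = 0.02 * x**2 + 1.5 * x + 5 + rng.normal(0, 0.5, x.size)
y[rng.choice(x.size, 30, replace=False)] += rng.normal(0, 25, 30)
print("robust coefficients:", ransac_poly(x, y))
```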
A Novel Uncertainty Framework for Improving Discharge Data Quality Using Hydraulic Modelling.
NASA Astrophysics Data System (ADS)
Mansanarez, V.; Westerberg, I.; Lyon, S. W.; Lam, N.
2017-12-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable stage-discharge (SD) rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult because high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived from as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. We introduce an uncertainty framework using hydraulic modelling for developing SD rating curves and estimating their uncertainties. The proposed framework incorporates information from both the hydraulic configuration (bed slope, roughness, vegetation) and the stage-discharge observation data (gaugings), and provides a direct estimation of the hydraulic configuration (slope, bed roughness and vegetation roughness). Discharge time series are estimated by propagating stage records through the posterior rating-curve results. We applied this novel method to two Swedish hydrometric stations, accounting for uncertainties in the gaugings for the hydraulic model. Results from these applications were compared to discharge measurements and official discharge estimations, and a sensitivity analysis was performed. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken.
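The core rating-curve idea, stripped of the hydraulic model, can be sketched as a power-law fit with a bootstrap uncertainty band at an ungauged high stage. The gaugings, error level, and functional form below are assumptions for illustration, not the paper's framework:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def rating(h, a, b, h0):
    """Power-law rating curve Q = a * (h - h0)^b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Hypothetical gaugings: stage (m), discharge (m^3/s), ~8% gauging error.
h_obs = np.array([0.6, 0.9, 1.3, 1.8, 2.4, 3.1])
q_obs = np.array([2.1, 6.0, 15.0, 34.0, 70.0, 125.0])

p0 = (10.0, 2.0, 0.3)
popt, _ = curve_fit(rating, h_obs, q_obs, p0=p0, maxfev=10000)

# Bootstrap the gauging error to get a band at an ungauged high stage.
h_new, sims = 4.0, []
for _ in range(1000):
    q_pert = q_obs * (1 + 0.08 * rng.standard_normal(q_obs.size))
    try:
        p, _ = curve_fit(rating, h_obs, q_pert, p0=p0, maxfev=10000)
        sims.append(rating(h_new, *p))
    except RuntimeError:
        continue

lo, hi = np.percentile(sims, [2.5, 97.5])
print(f"Q({h_new} m) = {rating(h_new, *popt):.0f} m^3/s, 95% band [{lo:.0f}, {hi:.0f}]")
```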
Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu
2012-04-01
The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites), switching every 20-549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested whether they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
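A schematic of the Bayesian Q-learning idea: Gaussian beliefs over action values, a choice rule that rewards uncertainty, and a Kalman-style update whose gain acts as an effective learning rate. All parameter values are illustrative, not those fitted to the rats:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.zeros(2)            # belief means for left/right action values
var = np.full(2, 0.25)      # belief variances
obs_var = 0.1               # assumed outcome noise
phi = 2.0                   # variance bonus: phi > 0 means uncertainty seeking
beta = 5.0                  # softmax inverse temperature
p_reward = np.array([0.66, 0.33])   # one of the six reward settings

for trial in range(1000):
    util = beta * (mu + phi * var)
    p_left = 1.0 / (1.0 + np.exp(-(util[0] - util[1])))
    a = 0 if rng.random() < p_left else 1
    r = float(rng.random() < p_reward[a])
    gain = var[a] / (var[a] + obs_var)    # Kalman gain = effective learning rate
    mu[a] += gain * (r - mu[a])
    var[a] = (1 - gain) * var[a] + 0.005  # shrink, plus diffusion so learning stays on

print("means:", mu.round(2), "variances:", var.round(3))
```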
Estimating Rates of Motor Vehicle Crashes Using Medical Encounter Data: A Feasibility Study
2015-11-05
used to develop more detailed predictive risk models as well as strategies for preventing specific types of MVCs. Systematic Review of Evidence... used to estimate rates of accident-related injuries more generally, but not with specific reference to MVCs. For the present report, rates of... precise rate estimates based on person-years rather than active duty strength, (e) multivariable effects of specific risk/protective factors after
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and of an unknown parameter in a Nm model. We suppose that the yearly numbers of Nm-induced mortality and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers that play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we provide a simple method to estimate the unknown Nm transmission rate. To validate the estimation results, numerical simulations are conducted using real data from Niger.
NASA Astrophysics Data System (ADS)
Oakley, David O. S.; Fisher, Donald M.; Gardner, Thomas W.; Stewart, Mary Kate
2018-01-01
Marine terraces on growing fault-propagation folds provide valuable insight into the relationship between fold kinematics and uplift rates, providing a means to distinguish among otherwise non-unique kinematic model solutions. Here, we investigate this relationship at two locations in North Canterbury, New Zealand: the Kate anticline and Haumuri Bluff, at the northern end of the Hawkswood anticline. At both locations, we calculate uplift rates of previously dated marine terraces, using DGPS surveys to estimate terrace inner-edge elevations. We then use Markov chain Monte Carlo methods to fit fault-propagation fold kinematic models to structural geologic data, and we incorporate marine terrace uplift into the models as an additional constraint. At Haumuri Bluff, we find that marine terraces, when restored to originally horizontal surfaces, can help to eliminate certain trishear models that would fit the geologic data alone. At Kate anticline, we compare uplift rates at different structural positions and find that the spatial pattern of uplift rates is more consistent with trishear than with a parallel fault-propagation fold kink-band model. Finally, we use our model results to compute new estimates for fault slip rates (~1-2 m/ka at Kate anticline and ~1-4 m/ka at Haumuri Bluff) and ages of the folds (~1 Ma). These results are consistent with previous work on the age of onset of folding in this region, provide revised estimates of fault slip rates necessary to understand the seismic hazard posed by these faults, and demonstrate the value of incorporating marine terraces in inverse fold kinematic models as a means to distinguish among non-unique solutions.
Estimation of Rainfall Rates from Passive Microwave Remote Sensing.
NASA Astrophysics Data System (ADS)
Sharma, Awdhesh Kumar
Rainfall rates have been estimated using passive microwave and visible/infrared remote sensing techniques. Data from September 14, 1978, from the Scanning Multichannel Microwave Radiometer (SMMR) on board SEASAT-A and the Visible and Infrared Spin Scan Radiometer (VISSR) on board GOES-W (Geostationary Operational Environmental Satellite - West) were obtained and analyzed for rainfall rate retrieval. Microwave brightness temperatures (MBT) are simulated using the microwave radiative transfer model (MRTM) and atmospheric scattering models. These MBT were computed as a function of rates of rainfall from precipitating clouds in a combined phase of ice and water. Microwave extinction due to ice and liquid water is calculated using Mie theory and gamma drop size distributions. Microwave absorption due to oxygen and water vapor is based on the schemes given by Rosenkranz, and Barrett and Chung. The scattering phase matrix involved in the MRTM is found using Eddington's two-stream approximation. The surface effects due to winds and foam are included through the ocean surface emissivity model. Rainfall rates are then inverted from MBT using the optimization technique "Leaps and Bounds" and multiple linear regression, leading to a relationship between rainfall rates and MBT. This relationship has been used to infer oceanic rainfall rates from SMMR data. The VISSR data have been inverted for rainfall rates using Griffith's scheme, which provides an independent means of estimating rainfall rates for cross-checking the SMMR estimates. The inferred rainfall rates from both techniques have been plotted on a world map for comparison, and a reasonably good correlation has been obtained between the two estimates.
Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.
2010-01-01
Background: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
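The resampling idea can be sketched directly: perturb vital-rate point estimates according to assumed sample sizes, rebuild a simple two-stage projection matrix, and take percentiles of the dominant eigenvalue. All numbers below are invented, not the red fox data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_boot = 5000

s_juv, n_juv = 0.45, 60   # juvenile survival point estimate, assumed sample size
s_ad, n_ad = 0.70, 80     # adult survival point estimate, assumed sample size
fec, n_lit = 2.5, 40      # female offspring per adult female, litters sampled

lams = np.empty(n_boot)
for b in range(n_boot):
    sj = rng.binomial(n_juv, s_juv) / n_juv          # resampled juvenile survival
    sa = rng.binomial(n_ad, s_ad) / n_ad             # resampled adult survival
    f = rng.poisson(fec * n_lit) / n_lit             # resampled fecundity
    A = np.array([[0.0, f],                          # schematic two-stage matrix
                  [sj,  sa]])
    lams[b] = np.max(np.real(np.linalg.eigvals(A)))  # population growth rate

lo, hi = np.percentile(lams, [2.5, 97.5])
print(f"lambda = {np.median(lams):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```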
Elimination Rates of Dioxin Congeners in Former Chlorophenol Workers from Midland, Michigan
Collins, James J.; Bodner, Kenneth M.; Wilken, Michael; Bodnar, Catherine M.
2012-01-01
Background: Exposure reconstructions and risk assessments for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and other dioxins rely on estimates of elimination rates. Limited data are available on elimination rates for congeners other than TCDD. Objectives: We estimated apparent elimination rates using a simple first-order one-compartment model for selected dioxin congeners based on repeated blood sampling in a previously studied population. Methods: Blood samples collected from 56 former chlorophenol workers in 2004–2005 and again in 2010 were analyzed for dioxin congeners. We calculated the apparent elimination half-life in each individual for each dioxin congener and examined factors potentially influencing elimination rates and the impact of estimated ongoing background exposures on rate estimates. Results: Mean concentrations of all dioxin congeners in the sampled participants declined between sampling times. Median apparent half-lives of elimination based on changes in estimated mass in the body were generally consistent with previous estimates and ranged from 6.8 years (1,2,3,7,8,9-hexachlorodibenzo-p-dioxin) to 11.6 years (pentachlorodibenzo-p-dioxin), with a composite half-life of 9.3 years for TCDD toxic equivalents. None of the factors examined, including age, smoking status, body mass index or change in body mass index, initial measured concentration, or chloracne diagnosis, was consistently associated with the estimated elimination rates in this population. Inclusion of plausible estimates of ongoing background exposures decreased apparent half-lives by approximately 10%. Available concentration-dependent toxicokinetic models for TCDD underpredicted observed elimination rates for concentrations < 100 ppt. Conclusions: The estimated elimination rates from this relatively large serial sampling study can inform occupational and environmental exposure and serum evaluations for dioxin compounds. PMID:23063871
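The underlying first-order one-compartment arithmetic is compact: two serum measurements give the apparent elimination rate constant and half-life. The concentrations and interval below are illustrative, not participant data:

```python
from math import log

c1, c2 = 48.0, 32.0   # lipid-adjusted serum concentrations (ppt), assumed
dt_years = 5.5        # time between the two blood draws (yr)

k = log(c1 / c2) / dt_years    # apparent first-order elimination rate (1/yr)
print(f"k = {k:.3f}/yr, apparent half-life = {log(2) / k:.1f} yr")
```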
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis for mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
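For contrast with MLE-BD, the conventional Luria-Delbrück likelihood can be computed with the Ma-Sandri-Sarkar recursion; its quadratic cost per evaluation is exactly the slowness the paper targets. A sketch with invented mutant counts and an assumed final population size:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    """P(0..n_max mutants | m expected mutations), Ma-Sandri-Sarkar recursion."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        p[n] = (m / n) * sum(p[n - k] / (k + 1.0) for k in range(1, n + 1))
    return p

def neg_loglik(m, counts):
    return -np.sum(np.log(ld_pmf(m, counts.max())[counts]))

# Invented mutant counts from 20 parallel cultures (note the "jackpot").
counts = np.array([0, 0, 1, 0, 3, 0, 0, 12, 1, 0, 2, 0, 45, 0, 1, 0, 0, 5, 0, 2])
res = minimize_scalar(neg_loglik, bounds=(1e-3, 20), args=(counts,), method="bounded")
# Mutation rate per cell division ~ m / N_final; N_final is assumed here.
print(f"m_hat = {res.x:.2f}; rate ~ {res.x / 2e8:.2e} (assuming N_final = 2e8)")
```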
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from this integrated approach to interval estimation of the seismic hazard functions, relative to an approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
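The point estimates in question follow from the Poisson occurrence model: with activity rate lambda and magnitude distribution F, the exceedance probability over time t is 1 - exp(-lambda*t*(1 - F(m))). A sketch that also propagates the activity-rate uncertainty by sampling from its asymptotic normal (a simplification of the paper's procedure; inputs invented):

```python
import numpy as np

rng = np.random.default_rng(4)

lam_hat, n_events = 2.0, 40    # ML activity rate (events/yr), catalog event count
b, m0 = 1.0, 3.0               # Gutenberg-Richter b-value, completeness magnitude
m, t = 5.5, 1.0                # target magnitude and exposure time (yr)

sf = 10 ** (-b * (m - m0))     # P(M >= m) under the (unbounded) G-R model

exceed = 1 - np.exp(-lam_hat * t * sf)
print(f"P(M>={m} in {t} yr) = {exceed:.4f}, mean return period = {1/(lam_hat*sf):.0f} yr")

# Interval estimate: lambda_hat is asymptotically N(lambda, lambda^2 / n).
lam_draws = np.maximum(rng.normal(lam_hat, lam_hat / np.sqrt(n_events), 10000), 1e-9)
probs = 1 - np.exp(-lam_draws * t * sf)
print("95% interval:", np.round(np.percentile(probs, [2.5, 97.5]), 4))
```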
Geodetic estimates of fault slip rates in the San Francisco Bay area
Savage, J.C.; Svarc, J.L.; Prescott, W.H.
1999-01-01
Bourne et al. [1998] have suggested that the interseismic velocity profile at the surface across a transform plate boundary is a replica of the secular velocity profile at depth in the plastosphere. On the other hand, in the viscoelastic coupling model the shape of the interseismic surface velocity profile is a consequence of plastosphere relaxation following the previous rupture of the faults that make up the plate boundary and is not directly related to the secular flow in the plastosphere. The two models appear to be incompatible. If the plate boundary is composed of several subparallel faults and the interseismic surface velocity profile across the boundary known, each model predicts the secular slip rates on the faults which make up the boundary. As suggested by Bourne et al., the models can then be tested by comparing the predicted secular slip rates to those estimated from long-term offsets inferred from geology. Here we apply that test to the secular slip rates predicted for the principal faults (San Andreas, San Gregorio, Hayward, Calaveras, Rodgers Creek, Green Valley and Greenville faults) in the San Andreas fault system in the San Francisco Bay area. The estimates from the two models generally agree with one another and to a lesser extent with the geologic estimate. Because the viscoelastic coupling model has been equally successful in estimating secular slip rates on the various fault strands at a diffuse plate boundary, the success of the model of Bourne et al. [1998] in doing the same thing should not be taken as proof that the interseismic velocity profile across the plate boundary at the surface is a replica of the velocity profile at depth in the plastosphere.
Hu, Jia; Moore, David J P; Riveros-Iregui, Diego A; Burns, Sean P; Monson, Russell K
2010-03-01
Understanding controls over plant-atmosphere CO2 exchange is important for quantifying carbon budgets across a range of spatial and temporal scales. In this study, we used a simple approach to estimate whole-tree CO2 assimilation rate (A_tree) in a subalpine forest ecosystem. We analysed the carbon isotope ratio (δ13C) of extracted needle sugars and combined it with the daytime leaf-to-air vapor pressure deficit to estimate tree water-use efficiency (WUE). The estimated WUE was then combined with observations of tree transpiration rate (E) from sap flow techniques to estimate A_tree. Estimates of A_tree for the three dominant tree species in the forest were combined with species distribution and tree size to estimate gross primary productivity (GPP) using an ecosystem process model. A sensitivity analysis showed that estimates of A_tree were more sensitive to dynamics in E than in δ13C. At the ecosystem scale, the abundance of lodgepole pine trees influenced seasonal dynamics in GPP considerably more than Engelmann spruce and subalpine fir because of the greater sensitivity of its E to seasonal climate variation. The results provide the framework for a nondestructive method for estimating whole-tree carbon assimilation rate and ecosystem GPP over daily to weekly time scales.
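The δ13C-to-assimilation chain can be written out with the standard Farquhar discrimination relation. A schematic with illustrative coefficients and measurements, not the subalpine forest data, and a deliberately simplified WUE definition:

```python
# delta13C of needle sugars -> ci/ca -> WUE -> A_tree = WUE * E (all assumed values).
d13c_air = -8.0        # per mil, atmospheric CO2 (assumed)
d13c_sugar = -24.0     # per mil, extracted needle sugars (assumed)
a, b = 4.4, 27.0       # per mil, diffusion and carboxylation fractionations
ca = 400e-6            # mol CO2 per mol air
vpd_frac = 1.2 / 101.3 # leaf-to-air VPD over air pressure (kPa/kPa)

# Farquhar discrimination, then the implied ci/ca ratio.
delta = (d13c_air - d13c_sugar) / (1 + d13c_sugar / 1000.0)
ci_ca = (delta - a) / (b - a)

# Simplified WUE (mol C per mol H2O), scaled by sap-flow transpiration E.
wue = ca * (1 - ci_ca) / (1.6 * vpd_frac)
E = 40.0               # whole-tree transpiration, mol H2O per day (assumed)
A_tree = wue * E
print(f"ci/ca = {ci_ca:.2f}, WUE = {wue*1e3:.2f} mmol C/mol H2O, "
      f"A_tree = {A_tree:.2f} mol C/day")
```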
Estimating rates of local species extinction, colonization and turnover in animal communities
Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.
1998-01-01
Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time; rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, in which rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology for characterizing rate adaptation of repolarization features, improving convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work, an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed, and the Tpe interval has been shown to be rate related, with a shorter memory lag than the QT interval.
Ro, Kyoung S; Szogi, Ariel A; Moore, Philip A
2018-05-12
In-house windrowing between flocks is an emerging sanitary management practice to partially disinfect the built-up litter in broiler houses. However, this practice may also increase ammonia (NH3) emission from the litter due to the increase in litter temperature. The objectives of this study were to develop mathematical models to estimate NH3 emission rates from broiler houses practicing in-house windrowing between flocks. Equations to estimate mass-transfer areas for different shapes of windrowed litter (triangular, rectangular, and semi-cylindrical prisms) were developed. Using these equations, the windrow heights yielding the smallest mass-transfer area were estimated; a smaller mass-transfer area is preferred because it reduces both emission rates and heat loss. The heights yielding the minimum mass-transfer area were 0.8 and 0.5 m for triangular and rectangular windrows, respectively. Only one height (0.6 m) was theoretically possible for semi-cylindrical windrows because the base and the height are not independent. The mass-transfer areas were integrated with published process-based mathematical models to estimate total house NH3 emission rates during in-house windrowing of poultry litter. The NH3 emission rate change calculated from the integrated model compared well with observed values, except for the very high initial NH3 emission rate caused by mechanically disturbing the litter to form the windrows. This approach can be used to conveniently estimate broiler house NH3 emission rates during in-house windrowing between flocks by simply measuring litter temperatures.
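The geometric step can be reproduced numerically: fix the litter cross-sectional area per unit windrow length and search for the height minimizing the exposed area. The cross-sectional area below is an assumed value, so the optima differ from the paper's 0.8/0.5/0.6 m figures:

```python
import numpy as np

A = 1.0                              # m^2 litter cross-section per metre (assumed)
h = np.linspace(0.1, 2.0, 2000)      # candidate windrow heights (m)

# Triangular prism: base w = 2A/h; exposed area = two slant faces.
w_tri = 2 * A / h
area_tri = 2 * np.sqrt((w_tri / 2) ** 2 + h ** 2)

# Rectangular prism: base w = A/h; exposed area = top + two sides.
area_rect = A / h + 2 * h

print(f"triangular:  best h = {h[np.argmin(area_tri)]:.2f} m")
print(f"rectangular: best h = {h[np.argmin(area_rect)]:.2f} m")

# Semi-cylinder: A = pi*r^2/2 ties the height (= r) to A, so only one height exists.
print(f"semi-cylinder: h = {np.sqrt(2 * A / np.pi):.2f} m (fixed by A)")
```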
Estimating demographic parameters using a combination of known-fate and open N-mixture models
Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.
2015-01-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
NASA Astrophysics Data System (ADS)
Derieppe, M.; Bos, C.; de Greef, M.; Moonen, C.; de Senneville, B. Denis
2016-01-01
We have previously demonstrated the feasibility of monitoring ultrasound-mediated uptake of a hydrophilic model drug in real time with dynamic confocal fluorescence microscopy. In this study, we evaluate and correct for the impact of photobleaching to improve the accuracy of pharmacokinetic parameter estimates. To model photobleaching of the fluorescent model drug SYTOX Green, a photobleaching process was added to the current two-compartment model describing cell uptake. After collection of the uptake profile, a second acquisition was performed once SYTOX Green had equilibrated, to evaluate the photobleaching rate experimentally. Photobleaching rates up to 5.0 × 10^-3 s^-1 were measured when applying power densities up to 0.2 W cm^-2. By applying the three-compartment model, a model-drug uptake rate of 6.0 × 10^-3 s^-1 was measured, independently of the applied laser power. The impact of photobleaching on uptake rate estimates measured by dynamic fluorescence microscopy was thus evaluated, and the subsequent compensation improved the accuracy of pharmacokinetic parameter estimates in the cell population subjected to sonopermeabilization.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
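The reverse-correlation analysis mentioned above is easy to demonstrate: simulate a linear-nonlinear-Poisson neuron and recover its filter as the spike-triggered average of a white-noise stimulus. The filter shape and exponential nonlinearity are illustrative, not the LIF/EIF/Wang-Buzsáki reductions:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.001, 200.0                        # 1 ms bins, 200 s of stimulus
n = int(T / dt)
t_filt = np.arange(0, 0.05, dt)             # 50 ms filter support

k = np.exp(-t_filt / 0.01) * np.sin(2 * np.pi * t_filt / 0.02)  # assumed true filter
k /= np.linalg.norm(k)

s = rng.standard_normal(n)                  # Gaussian white-noise input
drive = np.convolve(s, k)[:n]               # linear stage
rate = 20.0 * np.exp(1.5 * drive)           # static exponential nonlinearity (Hz)
spikes = rng.poisson(rate * dt)             # Poisson spike counts per bin

# Spike-triggered average: for white noise, proportional to the true filter.
sta = np.zeros_like(k)
for lag in range(len(k)):
    sta[lag] = np.dot(spikes[lag:], s[:n - lag])
sta /= spikes.sum()

print(f"{spikes.sum()} spikes; corr(STA, true filter) = "
      f"{np.corrcoef(sta, k)[0, 1]:.3f}")
```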
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period (1998-2011). We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.
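The deterministic core of the framework, the stream power erosion law E = K * A^m * S^n, is linear in log space, so its parameters can be recovered by ordinary regression. The paper wraps this in a Bayesian hierarchy over elevation classes, which the sketch below (with synthetic stand-ins for the TRMM/DEM inputs) omits:

```python
import numpy as np

rng = np.random.default_rng(6)
n_sites = 200
A = 10 ** rng.uniform(4, 8, n_sites)       # drainage area (m^2), discharge proxy
S = 10 ** rng.uniform(-3, -0.5, n_sites)   # channel slope (m/m)

K_true, m_true, n_true = 1e-6, 0.5, 1.0    # invented "true" SPEL parameters
E = K_true * A**m_true * S**n_true * np.exp(0.3 * rng.standard_normal(n_sites))

# log E = log K + m log A + n log S: ordinary least squares recovers K, m, n.
X = np.column_stack([np.ones(n_sites), np.log(A), np.log(S)])
beta, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
print(f"K = {np.exp(beta[0]):.2e}, m = {beta[1]:.2f}, n = {beta[2]:.2f}")
```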
Theoretical estimation of Photons flow rate Production in quark gluon interaction at high energies
NASA Astrophysics Data System (ADS)
Al-Agealy, Hadi J. M.; Hamza Hussein, Hyder; Mustafa Hussein, Saba
2018-05-01
Photons emitted from high-energy collisions in a quark-gluon system have been studied theoretically using color quantum theory. A simple model for photon emission from a quark-gluon system has been investigated. In this model, we use a quantum treatment suited to describing the quark system. The photon production rates are estimated for two systems at different fugacity coefficients. We discuss the behavior of the photon rate and the properties of the quark-gluon system at different photon energies within a Boltzmann model. The dependence of the photon rate on the anisotropy coefficient, the strong coupling constant, the photon energy, the color number, the fugacity parameter, the thermal energy, and the critical energy of the system is also discussed.
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
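A toy version of the estimation problem mixing models solve, with two sources, one tracer, and a grid posterior over the diet proportion; real tools (MixSIR, SIAR, or the new parameterization) handle multiple tracers, priors, and richer error structures. All values below are invented:

```python
import numpy as np

mu_src = np.array([-21.0, -14.0])   # source delta13C means (per mil, invented)
sd_src = np.array([1.0, 1.2])       # source SDs
consumers = np.array([-18.2, -17.5, -18.9, -17.1, -18.0])  # consumer values

p = np.linspace(0, 1, 1001)[:, None]         # proportion of diet from source 1
mix_mu = p * mu_src[0] + (1 - p) * mu_src[1]
mix_var = p**2 * sd_src[0]**2 + (1 - p)**2 * sd_src[1]**2  # weighted source error

loglik = -0.5 * (((consumers - mix_mu) ** 2) / mix_var + np.log(mix_var)).sum(axis=1)
post = np.exp(loglik - loglik.max())
post /= np.trapz(post, p[:, 0])              # normalize on the grid
print(f"posterior mean of p: {np.trapz(p[:, 0] * post, p[:, 0]):.2f}")
```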
Reboussin, Beth A.; Ialongo, Nicholas S.
2011-01-01
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder which is most often diagnosed in childhood, with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use progresses, in particular marijuana use, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models by using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM model parameters describing the probability of transitioning between the LCA-defined stages of marijuana use and the influence of the LCA-defined ADHD subtypes on these transition rates are then estimated by using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates, and simplifying assumptions about the correlation structure at each stage reduce the computational complexity. PMID:21461139
Kerber, Kate J.; Lawn, Joy E.; Johnson, Leigh F.; Mahy, Mary; Dorrington, Rob E.; Phillips, Heston; Bradshaw, Debbie; Nannan, Nadine; Msemburi, William; Oestergaard, Mikkel Z.; Walker, Neff P.; Sanders, David; Jackson, Debra
2013-01-01
Objective: To analyse trends in under-five mortality rate in South Africa (1990–2011), particularly the contribution of AIDS deaths. Methods: Three nationally used models for estimating AIDS deaths in children were systematically reviewed. The model outputs were compared with under-five mortality rate estimates for South Africa from two global estimation models. All estimates were compared with available empirical data. Results: Differences between the models resulted in varying point estimates for under-five mortality but the trends were similar, with mortality increasing to a peak around 2005. The three models showing the contribution of AIDS suggest a maximum of 37–39% of child deaths were due to AIDS in 2004–2005 which has since declined. Although the rate of progress from 1990 is not the 4.4% needed to meet Millennium Development Goal 4 for child survival, South Africa's average annual rate of under-five mortality decline between 2006 and 2011 was between 6.3 and 10.2%. Conclusion: In 2005, South Africa was one of only four countries globally with an under-five mortality rate higher than the 1990 Millennium Development Goal baseline. Over the past 5 years, the country has achieved a rate of child mortality reduction exceeded by only three other countries. This rapid turnaround is likely due to scale-up of prevention of mother-to-child transmission of HIV, and to a lesser degree, the expanded roll-out of antiretroviral therapy. Emphasis on these programmes must continue, but failure to address other aspects of care including integrated high-quality maternal and neonatal care means that the decline in child mortality could stall. PMID:23863402
Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model
Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David
2012-01-01
The Atlantic horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast that has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters, such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting, or complete loss of tags, could be occurring. As is apparent from the assumptions of a Jolly-Seber model, loss of tags can bias and underestimate a survival rate. Given that uncertainty, as a first step toward an unbiased estimate of adult survival, we first estimated the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model that allows the mortality of each tag to be estimated separately and simultaneously.
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Dishman, W. K.
1982-01-01
A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models of differing complexity, with and without informative priors, using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation processes for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
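A bare-bones SISR (bootstrap) particle filter for a count state-space model, with parameters treated as known; the study's joint state-and-parameter estimation with kernel smoothing is omitted. The process and observation models below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate 25 years of counts with known parameters.
lam, detect, T = 1.05, 0.6, 25        # growth rate, detectability, years
N = np.empty(T, dtype=int)
N[0] = 200
for t in range(1, T):
    N[t] = rng.poisson(lam * N[t - 1])          # process model
y = rng.poisson(detect * N)                     # observation model

# SISR with 5000 particles.
P = 5000
parts = rng.poisson(200, P).astype(float)       # initial particle cloud
est = np.empty(T)
for t in range(T):
    if t > 0:
        parts = rng.poisson(lam * parts).astype(float)   # propagate particles
    mu = np.maximum(detect * parts, 1e-12)
    logw = y[t] * np.log(mu) - mu               # Poisson log-likelihood (constant dropped)
    w = np.exp(logw - logw.max())               # stabilized weights
    w /= w.sum()
    est[t] = np.sum(w * parts)                  # filtered mean abundance
    parts = parts[rng.choice(P, P, p=w)]        # resample

print("true N (last 5):", N[-5:])
print("filtered (last 5):", est[-5:].round(0))
```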
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define the Saltstone Special Analysis base cases.
Estimating the annual number of strokes and the issue of imperfect data: an example from Australia.
Cadilhac, Dominique A; Vos, Theo; Thrift, Amanda G
2014-01-01
Estimates of strokes in Australia are typically obtained using 1996-1997 age-specific attack rates from the pilot North East Melbourne Stroke Incidence (NEMESIS) Study (eight postcode regions). Declining hospitalizations for stroke indicate the potential to overestimate cases. To illustrate how current methods may potentially overestimate the number of strokes in Australia. Hospital separations data (primary discharge ICD10 codes I60 to I64) and three stroke projection models were compared. Each model had age- and gender-specific attack rates from the NEMESIS study applied to the 2003 population. One model used the 2003 Burden of Disease approach where the ratio of the 1996-1997 NEMESIS study incidence to hospital separation rate in the same year was adjusted by the 2002/2003 hospital separation rate within the same geographic region using relevant ICD-primary diagnosis codes. Hospital separations data were inflated by 12·1% to account for nonhospitalized stroke, while the Burden of Disease model was inflated by 27·6% to account for recurrent stroke events in that year. The third model used 1997-1999 attack rates from the larger 22-postcode NEMESIS study region. In 2003, Australian hospitalizations for stroke (I60 to I64) were 33,022, and extrapolation to all stroke (hospitalized and nonhospitalized) was 37,568. Applying NEMESIS study attack rates to the 2003 Australian population, 50,731 strokes were projected. Fewer cases for 2003 were estimated with the Burden of Disease model (28,364) and 22-postcode NEMESIS study rates (41,332). Estimating the number of strokes in a country can be highly variable depending on the recency of data, the type of data available, and the methods used. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
A Bayesian estimation of a stochastic predator-prey model of economic fluctuations
NASA Astrophysics Data System (ADS)
Dibeh, Ghassan; Luchinsky, Dmitry G.; Luchinskaya, Daria D.; Smelyanskiy, Vadim N.
2007-06-01
In this paper, we develop a Bayesian framework for the empirical estimation of the parameters of one of the best known nonlinear models of the business cycle: The Marx-inspired model of a growth cycle introduced by R. M. Goodwin. The model predicts a series of closed cycles representing the dynamics of labor's share and the employment rate in the capitalist economy. The Bayesian framework is used to empirically estimate a modified Goodwin model. The original model is extended in two ways. First, we allow for exogenous periodic variations of the otherwise steady growth rates of the labor force and productivity per worker. Second, we allow for stochastic variations of those parameters. The resultant modified Goodwin model is a stochastic predator-prey model with periodic forcing. The model is then estimated using a newly developed Bayesian estimation method on data sets representing growth cycles in France and Italy during the years 1960-2005. Results show that inference of the parameters of the stochastic Goodwin model can be achieved. The comparison of the dynamics of the Goodwin model with the inferred values of parameters demonstrates quantitative agreement with the growth cycle empirical data.
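The deterministic skeleton of the Goodwin model is a Lotka-Volterra system in labor's share u and the employment rate v, whose closed orbits are the growth cycles. A schematic simulation with illustrative coefficients, omitting the paper's periodic and stochastic forcing:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta_, gamma, rho = 0.065, 0.10, 0.45, 0.50   # illustrative rates

def goodwin(t, x):
    u, v = x                       # labor's share, employment rate
    du = u * (-gamma + rho * v)    # wages accelerate when employment is high
    dv = v * (alpha - beta_ * u)   # accumulation slows when labor's share is high
    return [du, dv]

sol = solve_ivp(goodwin, (0, 400), [0.60, 0.85], max_step=0.5)
u, v = sol.y
print(f"labor share cycles over {u.min():.2f}-{u.max():.2f}, "
      f"employment over {v.min():.2f}-{v.max():.2f}")
```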
A Simultaneous Equation Demand Model for Block Rates
NASA Astrophysics Data System (ADS)
Agthe, Donald E.; Billings, R. Bruce; Dobra, John L.; Raffiee, Kambiz
1986-01-01
This paper examines the problem of simultaneous-equations bias in estimation of the water demand function under an increasing block rate structure. The Hausman specification test is used to detect the presence of simultaneous-equations bias arising from correlation of the price measures with the regression error term in the results of a previously published study of water demand in Tucson, Arizona. An alternative simultaneous equation model is proposed for estimating the elasticity of demand in the presence of block rate pricing structures and availability of service charges. This model is used to reestimate the price and rate premium elasticities of demand in Tucson, Arizona for both the usual long-run static model and for a simple short-run demand model. The results from these simultaneous equation models are consistent with a priori expectations and are unbiased.
Respiratory rate estimation during triage of children in hospitals.
Shah, Syed Ahmar; Fleming, Susannah; Thompson, Matthew; Tarassenko, Lionel
2015-01-01
Accurate assessment of a child's health is critical for appropriate allocation of medical resources and timely delivery of healthcare in Emergency Departments. The accurate measurement of vital signs is a key step in the determination of the severity of illness, and respiratory rate is currently the most difficult vital sign to measure accurately. Several previous studies have attempted to extract respiratory rate from photoplethysmogram (PPG) recordings. However, the majority have been conducted in controlled settings using PPG recordings from healthy subjects. In many studies, manual selection of clean sections of PPG recordings was undertaken before assessing the accuracy of the signal processing algorithms developed. Such selection procedures are not appropriate in clinical settings. A major limitation of AR modelling, previously applied to respiratory rate estimation, is the appropriate selection of model order. This study developed a novel algorithm that automatically estimates respiratory rate from a median spectrum constructed by applying multiple AR models to processed PPG segments acquired with pulse oximetry using a finger probe. Good-quality sections were identified using a dynamic template-matching technique to assess PPG signal quality. The algorithm was validated on 205 children presenting to the Emergency Department at the John Radcliffe Hospital, Oxford, UK, with reference respiratory rates up to 50 breaths per minute estimated by paediatric nurses. At the time of writing, the authors are not aware of any other study that has validated respiratory rate estimation using data collected from over 200 children in hospitals during routine triage.
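The median-spectrum construction can be condensed as follows: fit AR models of several orders, take the median of their power spectra over a plausible respiratory band, and read the rate off the peak. The synthetic signal and band limits are assumptions; the signal-quality template matching is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(8)
fs, dur = 4.0, 60.0                        # assumed 4 Hz PPG-derived signal, 60 s
t = np.arange(0, dur, 1 / fs)
rr_true = 32.0                             # breaths per minute
x = np.sin(2 * np.pi * (rr_true / 60) * t) + 0.5 * rng.standard_normal(t.size)
x -= x.mean()

def ar_spectrum(x, order, freqs, fs):
    """Least-squares AR(order) fit, then its power spectrum on a frequency grid."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(order))
    return 1.0 / np.abs(denom) ** 2

freqs = np.linspace(0.1, 0.9, 400)         # assumed respiratory band (6-54 bpm)
spectra = np.array([ar_spectrum(x, p, freqs, fs) for p in range(4, 13)])
median_spec = np.median(spectra, axis=0)   # median across model orders
print(f"estimated rate: {60 * freqs[np.argmax(median_spec)]:.1f} breaths/min")
```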
Age-specific survival estimates of King Eiders derived from satellite telemetry
Oppel, Steffen; Powell, Abby N.
2010-01-01
Age- and sex-specific survival and dispersal are important components in the dynamics and genetic structure of bird populations. For many avian taxa survival rates at the adult and juvenile life stages differ, but in long-lived species juveniles' survival is logistically challenging to study. We present the first estimates of hatch-year annual survival rates for a sea duck, the King Eider (Somateria spectabilis), estimated from satellite telemetry. From 2006 to 2008 we equipped pre-fledging King Eiders with satellite transmitters on breeding grounds in Alaska and estimated annual survival rates during their first 2 years of life with known-fate models. We compared those estimates to survival rates of adults marked in the same area from 2002 to 2008. Hatch-year survival varied by season during the first year of life, and model-averaged annual survival rate was 0.67 (95% CI: 0.48-0.80). We did not record any mortality during the second year and were therefore unable to estimate second-year survival rate. Adults' survival rate was constant through the year (0.94, 95% CI: 0.86-0.97). No birds appeared to breed during their second summer. While 88% of females with an active transmitter (n = 9) returned to their natal area at the age of 2 years, none of the 2-year-old males (n = 3) did. This pattern indicates that females' natal philopatry is high and suggests that males' higher rates of dispersal may account for sex-specific differences in apparent survival rates of juvenile sea ducks when estimated with mark-recapture methods.
Heart rate prediction for coronary artery disease patients (CAD): Results of a clinical pilot study.
Müller-von Aschwege, Frerk; Workowski, Anke; Willemsen, Detlev; Müller, Sebastian M; Hein, Andreas
2015-01-01
This paper describes the results of a pilot study with cardiac patients based on information that can be derived from a smartphone. The idea behind the study is to design a model for estimating the heart rate of a patient before an outdoor walking session for track planning, as well as using the model for guidance during an outdoor session. The model allows estimation of the heart rate several minutes in advance to guide the patient and avoid overstrain before its occurrence. This paper describes the first results of the clinical pilot study with cardiac patients taking β-blockers. Nine patients were tested on a treadmill and during three outdoor sessions each. Three levels of model refinement were evaluated by cross-validation. The overall result is an average Median Absolute Deviation (MAD) of 4.26 BPM between the measured heart rate and the smartphone-sensor-based model estimate.
Introduction to State Estimation of High-Rate System Dynamics.
Hong, Jonathan; Laflamme, Simon; Dodson, Jacob; Joyce, Bryan
2018-01-13
Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer's convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model.
NASA Astrophysics Data System (ADS)
Kompany-Zareh, Mohsen; Khoshkam, Maryam
2013-02-01
This paper describes the estimation of reaction rate constants and the pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, so the closure rank-deficiency problem does not arise. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system using the Levenberg/Marquardt (NGL/M) algorithm, with original data-based, score-based, and concentration-based objective functions included in the nonlinear fitting. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants depends strongly on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial-concentration estimates during the fitting procedure were investigated. Applying appropriate constraints and adjustable initial concentrations of the reagents considerably decreased the ambiguity of the obtained parameters.
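The model-based fitting step can be sketched by integrating the consecutive scheme A + B -> C -> D (second order in the first step) and recovering the two rate constants by Levenberg-Marquardt least squares. Simulated concentration profiles stand in for the spectroscopic data; the constraint handling and spectral fitting of the paper are omitted:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def kinetics(t_eval, k1, k2, a0=1.0, b0=0.8):
    """Integrate A + B -> C (rate k1*[A][B]) followed by C -> D (rate k2*[C])."""
    def rhs(t, y):
        a, b, c, d = y
        r1 = k1 * a * b
        return [-r1, -r1, r1 - k2 * c, k2 * c]
    sol = solve_ivp(rhs, (0, t_eval[-1]), [a0, b0, 0.0, 0.0],
                    t_eval=t_eval, rtol=1e-8)
    return sol.y

rng = np.random.default_rng(9)
t = np.linspace(0, 20, 60)
true = kinetics(t, k1=0.9, k2=0.25)
obs = true[1:] + 0.01 * rng.standard_normal(true[1:].shape)  # B, C, D observed

def residuals(log_k):
    k1, k2 = np.exp(log_k)          # log-parameterization keeps rates positive
    return (kinetics(t, k1, k2)[1:] - obs).ravel()

fit = least_squares(residuals, x0=np.log([0.3, 0.1]), method="lm")
print("estimated k1, k2:", np.exp(fit.x).round(3))
```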
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-05-18
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
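The attenuation idea behind this depth estimation can be sketched as follows: if the measured count rate falls off exponentially with burial depth, depth is recovered by inverting the attenuation law. The effective attenuation coefficient for sand and the count rates below are assumed values for illustration, not the paper's calibration.

```python
import math

def depth_from_count_rate(c_surface, c_measured, mu_sand):
    """Invert C(d) = C0 * exp(-mu * d) for the burial depth d (cm)."""
    return math.log(c_surface / c_measured) / mu_sand

MU_SAND = 0.12   # assumed effective attenuation coefficient for sand, 1/cm
C0 = 14.0        # assumed count rate at zero depth, cps
print(f"estimated depth = {depth_from_count_rate(C0, 1.6, MU_SAND):.1f} cm")
```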
Battaile, Brian C; Trites, Andrew W
2013-01-01
We propose a method to model the physiological link between somatic survival and reproductive output that reduces the number of parameters that need to be estimated by models designed to determine combinations of birth and death rates that produce historic counts of animal populations. We applied our Reproduction and Somatic Survival Linked (RSSL) method to the population counts of three species of North Pacific pinnipeds (harbor seals, Phoca vitulina richardii (Gray, 1864); northern fur seals, Callorhinus ursinus (L., 1758); and Steller sea lions, Eumetopias jubatus (Schreber, 1776)) and found that our model outperformed traditional models when fitting vital rates to common types of limited datasets, such as those from counts of pups and adults. However, our model did not perform as well when these basic counts of animals were augmented with additional observations of ratios of juveniles to total non-pups. In this case, the failure of the ratios to improve model performance may indicate that the relationship between survival and reproduction is redefined or disassociated as populations change over time, or that the ratio of juveniles to total non-pups is not a meaningful index of vital rates. Overall, our RSSL models show the advantages of linking survival and reproduction within models to estimate the vital rates of pinnipeds and other species that have limited time series of counts.
Sadatsafavi, Mohsen; Sin, Don D.; Zafari, Zafar; Criner, Gerard; Connett, John E.; Lazarus, Stephen; Han, Meilan; Martinez, Fernando; Albert, Richard
2016-01-01
Exacerbations are a hallmark of chronic obstructive pulmonary disease (COPD). Evidence suggests the presence of substantial between-individual variability (heterogeneity) in exacerbation rates. The question of whether individuals vary in their tendency towards experiencing severe (versus mild) exacerbations, or whether there is an association between exacerbation rate and severity, has not yet been studied. We used data from the MACRO Study, a 1-year randomized trial of the use of azithromycin for prevention of COPD exacerbations (United States and Canada, 2006–2010; n = 1,107, mean age = 65.2 years, 59.1% male). A parametric frailty model was combined with a logistic regression model, with bivariate random effects capturing heterogeneity in rate and severity. The average rate of exacerbation was 1.53 episodes/year, with 95% of subjects having a model-estimated rate of 0.47–4.22 episodes/year. The overall ratio of severe exacerbations to total exacerbations was 0.22, with 95% of subjects having a model-estimated ratio of 0.04–0.60. We did not confirm an association between exacerbation rate and severity (P = 0.099). A unified model, implemented in standard software, could estimate joint heterogeneity in COPD exacerbation rate and severity and can have applications in similar contexts where inference on event time and intensity is considered. We provide SAS code (SAS Institute, Inc., Cary, North Carolina) and a simulated data set to facilitate further uses of this method. PMID:27737842
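The structure of the unified model can be illustrated with a small simulation: each subject receives a bivariate random effect, one component scaling the exacerbation rate and the other shifting the log-odds that an exacerbation is severe. The covariance and offsets below are assumptions chosen only to mimic the reported scales, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1107                                   # MACRO sample size
base_rate = 1.53                           # mean exacerbations per year
base_logit = np.log(0.22 / 0.78)           # overall severe:total ratio of 0.22
cov = [[0.6, 0.05], [0.05, 0.8]]           # random-effect covariance (assumed)
u = rng.multivariate_normal([0.0, 0.0], cov, size=n)

rates = base_rate * np.exp(u[:, 0] - 0.3)  # -0.3 recenters exp(u) near 1
counts = rng.poisson(rates)                # exacerbations in 1 year
p_severe = 1 / (1 + np.exp(-(base_logit + u[:, 1])))
severe = rng.binomial(counts, p_severe)    # severe exacerbations per subject

print("mean rate:", counts.mean(), "severe fraction:", severe.sum() / counts.sum())
```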
Estimating under-five mortality in space and time in a developing world context.
Wakefield, Jon; Fuglstad, Geir-Arne; Riebler, Andrea; Godwin, Jessica; Wilson, Katie; Clark, Samuel J
2018-01-01
Accurate estimates of the under-five mortality rate in a developing world context are a key barometer of the health of a nation. This paper describes a new model to analyze survey data on mortality in this context. We are interested in both spatial and temporal description; that is, we wish to estimate the under-five mortality rate across regions and years and to investigate the association between the under-five mortality rate and spatially varying covariate surfaces. We illustrate the methodology by producing yearly estimates for subnational areas in Kenya over the period 1980-2014 using data from the Demographic and Health Surveys, which use stratified cluster sampling. We use a binomial likelihood with fixed effects for the urban/rural strata and random effects for the clustering to account for the complex survey design. Smoothing is carried out using Bayesian hierarchical models with continuous spatial and temporally discrete components. A key component of the model is an offset to adjust for bias due to the effects of HIV epidemics. Substantively, there has been a sharp decline in Kenya in the under-five mortality rate in the period 1980-2014, but large variability in estimated subnational rates remains. A priority for future research is understanding this variability. In exploratory work, we examine whether a variety of spatial covariate surfaces can explain the variability in under-five mortality rate. Temperature, precipitation, a measure of malaria infection prevalence, and a measure of nearness to cities were candidates for inclusion in the covariate model, but the interplay between space, time, and covariates is complex.
Estimation of mortality for stage-structured zooplankton populations: What is to be done?
NASA Astrophysics Data System (ADS)
Ohman, Mark D.
2012-05-01
Estimation of zooplankton mortality rates in field populations is a challenging task that some contend is inherently intractable. This paper examines several of the objections that are commonly raised to efforts to estimate mortality. We find that there are circumstances in the field where it is possible to sequentially sample the same population and to resolve biologically caused mortality, albeit with error. Precision can be improved with sampling directed by knowledge of the physical structure of the water column, combined with adequate sample replication. Intercalibration of sampling methods can make it possible to sample across the life history in a quantitative manner. Rates of development can be constrained by laboratory-based estimates of stage durations from temperature- and food-dependent functions, mesocosm studies of molting rates, or approximation of development rates from growth rates, combined with the vertical distributions of organisms in relation to food and temperature gradients. Careful design of field studies guided by the assumptions of specific estimation models can lead to satisfactory mortality estimates, but model uncertainty also needs to be quantified. We highlight additional issues requiring attention to further advance the field, including the need for linked cooperative studies of the rates and causes of mortality of co-occurring holozooplankton and ichthyoplankton.
Cruz-Marcelo, Alejandro; Ensor, Katherine B; Rosner, Gary L
2011-06-01
The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material.
Occupancy Modeling for Improved Accuracy and Understanding of Pathogen Prevalence and Dynamics
Colvin, Michael E.; Peterson, James T.; Kent, Michael L.; Schreck, Carl B.
2015-01-01
Most pathogen detection tests are imperfect, with a sensitivity < 100%, thereby resulting in the potential for a false negative, where a pathogen is present but not detected. False negatives in a sample inflate the number of non-detections, negatively biasing estimates of pathogen prevalence. Histological examination of tissues as a diagnostic test can be advantageous because multiple pathogens can be examined and important information is provided on associated pathological changes to the host. However, it is usually less sensitive than molecular or microbiological tests for specific pathogens. Our study objectives were to 1) develop a hierarchical occupancy model to examine pathogen prevalence in spring Chinook salmon Oncorhynchus tshawytscha and their distribution among host tissues, 2) use the model to estimate pathogen-specific test sensitivities and infection rates, and 3) illustrate the effect of using replicate within-host sampling on the sample sizes required to detect a pathogen. We examined histological sections of replicate tissue samples from spring Chinook salmon O. tshawytscha collected after spawning for common pathogens seen in this population: Apophallus/echinostome metacercariae, Parvicapsula minibicornis, Nanophyetus salmincola/metacercariae, and Renibacterium salmoninarum. A hierarchical occupancy model was developed to estimate pathogen- and tissue-specific test sensitivities and to provide unbiased estimates of host- and organ-level infection rates. Model-estimated sensitivities and host- and organ-level infection rates varied among pathogens, and the model-estimated infection rate was higher than prevalence unadjusted for test sensitivity, confirming that prevalence unadjusted for test sensitivity was negatively biased. The modeling approach provided an analytical approach for using hierarchically structured pathogen detection data from lower sensitivity diagnostic tests, such as histology, to obtain unbiased pathogen prevalence estimates with associated uncertainties. Accounting for test sensitivity using within-host replicate samples also required fewer individual fish to be sampled. This approach is useful for evaluating pathogen or microbe community dynamics when test sensitivity is <100%. PMID:25738709
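The core occupancy idea can be sketched in a few lines: with J replicate samples per fish and a test sensitivity p < 1, the likelihood for a fish with k detections mixes "infected but missed on all replicates" with "truly uninfected", which yields unbiased prevalence estimates. The data below are simulated, and this is a single-pathogen, single-tissue simplification of the paper's hierarchical model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(42)
J, n, psi_true, p_true = 3, 200, 0.4, 0.6      # replicates, fish, truth
infected = rng.random(n) < psi_true
k = np.where(infected, rng.binomial(J, p_true, n), 0)   # detections per fish

def nll(theta):
    psi, p = 1 / (1 + np.exp(-theta))          # logit-scale parameters
    # mixture likelihood: infected with k detections, or uninfected and k == 0
    like = psi * binom.pmf(k, J, p) + (1 - psi) * (k == 0)
    return -np.sum(np.log(like))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"naive prevalence = {(k > 0).mean():.2f} (biased low)")
print(f"occupancy estimates: prevalence = {psi_hat:.2f}, sensitivity = {p_hat:.2f}")
```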
Lopes, J S; Arenas, M; Posada, D; Beaumont, M A
2014-03-01
The estimation of parameters in molecular evolution may be biased when some processes are not considered. For example, the estimation of selection at the molecular level using codon-substitution models can have an upward bias when recombination is ignored. Here we address the joint estimation of recombination, molecular adaptation and substitution rates from coding sequences using approximate Bayesian computation (ABC). We describe the implementation of a regression-based strategy for choosing subsets of summary statistics for coding data, and show that this approach can accurately infer recombination allowing for intracodon recombination breakpoints, molecular adaptation and codon substitution rates. We demonstrate that our ABC approach can outperform other analytical methods under a variety of evolutionary scenarios. We also show that although the choice of the codon-substitution model is important, our inferences are robust to a moderate degree of model misspecification. In addition, we demonstrate that our approach can accurately choose the evolutionary model that best fits the data, providing an alternative for when the use of full-likelihood methods is impracticable. Finally, we applied our ABC method to co-estimate recombination, substitution and molecular adaptation rates from 24 published human immunodeficiency virus 1 coding data sets.
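A conceptual rejection-ABC sketch of the approach: draw parameters from their priors, simulate data, and keep draws whose summary statistics land close to the observed ones. The simulator here is a toy stand-in; the paper's simulator generates coding sequences under recombination and codon substitution, with a regression-based choice of summaries.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_summaries(rho, omega):
    # Toy stand-in: noisy summaries loosely driven by the two parameters.
    return np.array([rho + rng.normal(0, 0.1), omega + rng.normal(0, 0.1)])

observed = np.array([0.5, 0.8])   # summaries of the "observed" data (invented)
accepted = []
for _ in range(20000):
    rho, omega = rng.uniform(0, 2), rng.uniform(0, 2)   # priors (assumed)
    if np.linalg.norm(simulate_summaries(rho, omega) - observed) < 0.05:
        accepted.append((rho, omega))

post = np.array(accepted)
print("posterior means:", post.mean(axis=0), "acceptances:", len(post))
```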
NASA Astrophysics Data System (ADS)
Maples, S.; Fogg, G. E.; Harter, T.
2015-12-01
Accurate estimation of groundwater (GW) budgets and effective management of agricultural GW pumping remains a challenge in much of California's Central Valley (CV) due to a lack of irrigation well metering. CVHM and C2VSim are two regional-scale integrated hydrologic models that provide estimates of historical and current CV distributed pumping rates. However, both models estimate GW pumping using conceptually different agricultural water models with uncertainties that have not been adequately investigated. Here, we evaluate differences in distributed agricultural GW pumping and recharge estimates related to important differences in the conceptual framework and model assumptions used to simulate surface water (SW) and GW interaction across the root zone. Differences in the magnitude and timing of GW pumping and recharge were evaluated for a subregion (~1,000 mi²) coincident with Yolo County, CA, to provide similar initial and boundary conditions for both models. Synthetic, multi-year datasets of land-use, precipitation, evapotranspiration (ET), and SW deliveries were prescribed for each model to provide realistic end-member scenarios for GW-pumping demand and recharge. Results show differences in the magnitude and timing of GW-pumping demand, deep percolation, and recharge. Discrepancies are related, in large part, to model differences in the estimation of ET requirements and representation of soil-moisture conditions. CVHM partitions ET demand, while C2VSim uses a bulk ET rate, resulting in differences in both crop-water and GW-pumping demand. Additionally, CVHM assumes steady-state soil-moisture conditions, and simulates deep percolation as a function of irrigation inefficiencies, while C2VSim simulates deep percolation as a function of transient soil-moisture storage conditions. These findings show that estimates of GW-pumping demand are sensitive to these important conceptual differences, which can impact conjunctive-use water management decisions in the CV.
High mitochondrial mutation rates estimated from deep-rooting Costa Rican pedigrees
Madrigal, Lorena; Melendez-Obando, Mauricio; Villegas-Palma, Ramon; Barrantes, Ramiro; Raventos, Henrieta; Pereira, Reynaldo; Luiselli, Donata; Pettener, Davide; Barbujani, Guido
2012-01-01
Estimates of mutation rates for the noncoding hypervariable Region I (HVR-I) of mitochondrial DNA (mtDNA) vary widely, depending on whether they are inferred from phylogenies (assuming that molecular evolution is clock-like) or directly from pedigrees. All pedigree-based studies so far were conducted on populations of European origin. In this paper we analyzed 19 deep-rooting pedigrees in a population of mixed origin in Costa Rica. We calculated two estimates of the HVR-I mutation rate, one considering all apparent mutations, and one disregarding changes at sites known to be mutational hot spots and eliminating genealogy branches which might be suspected to include errors, or unrecognized adoptions along the female lines. At the end of this procedure, we still observed a mutation rate equal to 1.24 × 10⁻⁶ per site per year, i.e., at least threefold higher than estimates derived from phylogenies. Our results confirm that mutation rates observed in pedigrees are much higher than estimated assuming a neutral model of long-term HVR-I evolution. We argue that, until the cause of these discrepancies is fully understood, both lower estimates (i.e., those derived from phylogenetic comparisons) and higher, direct estimates such as those obtained in this study should be considered when modeling evolutionary and demographic processes. PMID:22460349
The manuscript reviews the issues concerning the use of results on pesticide effects from laboratory avian reproduction tests for estimating potential impacts of pesticides on fecundity rates in avian population models.
Oxygen transfer rate estimation in oxidation ditches from clean water measurements.
Abusam, A; Keesman, K J; Meinema, K; Van Straten, G
2001-06-01
Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method for estimating the oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLa·VA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
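The clean-water estimation step can be sketched for a single tank: after deoxygenation, dissolved oxygen recovers as DO(t) = Cs - (Cs - C0)·exp(-KLa·t), and KLa falls out of a nonlinear fit. In the paper's loop-of-CSTRs setting the estimated quantity is the lumped constant k = KLa·VA; the single-tank curve fit below, with invented data, is for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def do_curve(t, kla, cs, c0):
    """Clean-water reaeration curve: DO recovery toward saturation cs."""
    return cs - (cs - c0) * np.exp(-kla * t)

t = np.linspace(0, 2.0, 25)                        # hours since deoxygenation
obs = do_curve(t, 4.0, 9.1, 1.5) + np.random.default_rng(3).normal(0, 0.05, t.size)

(kla, cs, c0), _ = curve_fit(do_curve, t, obs, p0=[1.0, 8.0, 2.0])
print(f"KLa = {kla:.2f} per hour, saturation = {cs:.2f} mg/L")
```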
Modeling nitrous oxide emission from rivers: a global assessment.
Hu, Minpeng; Chen, Dingjiang; Dahlgren, Randy A
2016-11-01
Estimates of global riverine nitrous oxide (N2O) emissions contain great uncertainty. We conducted a meta-analysis incorporating 169 observations from published literature to estimate global riverine N2O emission rates and emission factors. Riverine N2O flux was significantly correlated with NH4, NO3 and DIN (NH4 + NO3) concentrations, loads and yields. The emission factors EF(a) (i.e., the ratio of N2O emission rate and DIN load) and EF(b) (i.e., the ratio of N2O and DIN concentrations) values were comparable and showed negative correlations with nitrogen concentration, load and yield and water discharge, but positive correlations with the dissolved organic carbon : DIN ratio. After individually evaluating 82 potential regression models based on EF(a) or EF(b) for global, temperate zone and subtropical zone datasets, a power function of DIN yield multiplied by watershed area was determined to provide the best fit between modeled and observed riverine N2O emission rates (EF(a): R² = 0.92 for both global and climatic zone models, n = 70; EF(b): R² = 0.91 for global model and R² = 0.90 for climatic zone models, n = 70). Using recent estimates of DIN loads for 6400 rivers, models estimated global riverine N2O emission rates of 29.6-35.3 (mean = 32.2) Gg N2O-N yr⁻¹ and emission factors of 0.16-0.19% (mean = 0.17%). Global riverine N2O emission rates are forecasted to increase by 35%, 25%, 18% and 3% in 2050 compared to the 2000s under the Millennium Ecosystem Assessment's Global Orchestration, Order from Strength, Technogarden, and Adapting Mosaic scenarios, respectively. Previous studies may overestimate global riverine N2O emission rates (300-2100 Gg N2O-N yr⁻¹) because they ignore declining emission factor values with increasing nitrogen levels and channel size, as well as neglect differences in emission factors corresponding to different nitrogen forms. Riverine N2O emission estimates will be further enhanced through refining emission factor estimates, extending measurements longitudinally along entire river networks and improving estimates of global riverine nitrogen loads. © 2016 John Wiley & Sons Ltd.
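The best-fitting model form reported above, an emission rate modeled as a power function of DIN yield multiplied by watershed area, can be fitted as a linear regression after a log transform. The coefficients and data below are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 70
area = rng.uniform(1e2, 1e5, n)                    # watershed area (assumed units)
din_yield = rng.lognormal(0.0, 1.0, n)             # DIN yield per unit area
# synthetic "observed" emissions from E = a * yield^b * area, with noise
emission = 0.004 * din_yield**0.8 * area * rng.lognormal(0, 0.2, n)

# log E - log A = log a + b * log(yield): ordinary least squares
y = np.log(emission) - np.log(area)
X = np.column_stack([np.ones(n), np.log(din_yield)])
log_a, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"a = {np.exp(log_a):.4f}, b = {b:.2f}")     # recovers ~0.004 and ~0.8
```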
Demography of the Pacific walrus (Odobenus rosmarus divergens): 1974-2006
Taylor, Rebecca L.; Udevitz, Mark S.
2015-01-01
Global climate change may fundamentally alter population dynamics of many species for which baseline population parameter estimates are imprecise or lacking. Historically, the Pacific walrus is thought to have been limited by harvest, but it may become limited by global warming-induced reductions in sea ice. Loss of sea ice, on which walruses rest between foraging bouts, may reduce access to food, thus lowering vital rates. Rigorous walrus survival rate estimates do not exist, and other population parameter estimates are out of date or have well-documented bias and imprecision. To provide useful population parameter estimates we developed a Bayesian, hidden process demographic model of walrus population dynamics from 1974 through 2006 that combined annual age-specific harvest estimates with five population size estimates, six standing age structure estimates, and two reproductive rate estimates. Median density independent natural survival was high for juveniles (0.97) and adults (0.99), and annual density dependent vital rates rose from 0.06 to 0.11 for reproduction, 0.31 to 0.59 for survival of neonatal calves, and 0.39 to 0.85 for survival of older calves, concomitant with a population decline. This integrated population model provides a baseline for estimating changing population dynamics resulting from changing harvests or sea ice.
Pendrith, Ciara; Thind, Amardeep; Zaric, Gregory S; Sarma, Sisira
2016-08-01
The primary objective of this paper is to compare cervical cancer screening rates of family physicians in Ontario's two dominant reformed practice models, Family Health Group (FHG) and Family Health Organization (FHO), and traditional fee-for-service (FFS) model. Both reformed models formally enrol patients and offer extensive pay-for-performance incentives; however, they differ by remuneration for core services (FHG is FFS; FHO is capitated). The secondary objective is to estimate the average and marginal costs of screening in each model. Using administrative data on 7,298 family physicians and their 2,083,633 female patients aged 35-69 eligible for cervical cancer screening in 2011, we assessed screening rates after adjusting for patient and physician characteristics. Predicted screening rates, fees and bonus payments were used to estimate the average and marginal costs of cervical cancer screening. Adjusted screening rates were highest in the FHG (81.9%), followed by the FHO (79.6%), and then the traditional FFS model (74.2%). The cost of a cervical cancer screening was $18.30 in the FFS model. The estimated average cost of screening in the FHGs and FHOs were $29.71 and $35.02, respectively, while the corresponding marginal costs were $33.05 and $39.06. We found significant differences in cervical cancer screening rates across Ontario's primary care practice models. Cervical screening rates were significantly higher in practice models eligible for incentives (FHGs and FHOs) than the traditional FFS model. However, the average and marginal cost of screening were lowest in the traditional FFS model and highest in the FHOs. Copyright © 2016 Longwoods Publishing.
Santini, Luca; Cornulier, Thomas; Bullock, James M; Palmer, Stephen C F; White, Steven M; Hodgson, Jenny A; Bocedi, Greta; Travis, Justin M J
2016-07-01
Estimating population spread rates across multiple species is vital for projecting biodiversity responses to climate change. A major challenge is to parameterise spread models for many species. We introduce an approach that addresses this challenge, coupling a trait-based analysis with spatial population modelling to project spread rates for 15 000 virtual mammals with life histories that reflect those seen in the real world. Covariances among life-history traits are estimated from an extensive terrestrial mammal data set using Bayesian inference. We elucidate the relative roles of different life-history traits in driving modelled spread rates, demonstrating that any one alone will be a poor predictor. We also estimate that around 30% of mammal species have potential spread rates slower than the global mean velocity of climate change. This novel trait-space-demographic modelling approach has broad applicability for tackling many key ecological questions for which we have the models but are hindered by data availability. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Modeling the Declining Positivity Rates for Human Immunodeficiency Virus Testing in New York State.
Martin, Erika G; MacDonald, Roderick H; Smith, Lou C; Gordon, Daniel E; Lu, Tao; O'Connell, Daniel A
2015-01-01
New York health care providers have experienced declining percentages of positive human immunodeficiency virus (HIV) tests among patients. Furthermore, observed positivity rates are lower than expected on the basis of the national estimate that one-fifth of HIV-infected residents are unaware of their infection. We used mathematical modeling to evaluate whether this decline could be a result of declining numbers of HIV-infected persons who are unaware of their infection, a quantity that cannot be measured directly. A stock-and-flow mathematical model of HIV incidence, testing, and diagnosis was developed. The model includes stocks for uninfected, infected and unaware (in 4 disease stages), and diagnosed individuals. Inputs came from published literature and time series (2006-2009) for estimated new infections, newly diagnosed HIV cases, living diagnosed cases, mortality, and diagnosis rates in New York. Primary model outcomes were the percentage of HIV-infected persons unaware of their infection and the percentage of HIV tests with a positive result (HIV positivity rate). In the base case, the estimated percentage of unaware HIV-infected persons declined from 14.2% in 2006 (range, 11.9%-16.5%) to 11.8% in 2010 (range, 9.9%-13.1%). The HIV positivity rate, assuming testing occurred independent of risk, was 0.12% in 2006 (range, 0.11%-0.15%) and 0.11% in 2010 (range, 0.10%-0.13%). The observed HIV positivity rate was more than 4 times the expected positivity rate based on the model. HIV test positivity is a readily available indicator, but it cannot distinguish causes of underlying changes. Findings suggest that the percentage of unaware HIV-infected New Yorkers is lower than the national estimate and that the observed HIV test positivity rate is greater than expected if infected and uninfected individuals tested at the same rate, indicating that testing efforts are appropriately targeting undiagnosed cases.
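A compact sketch of the stock-and-flow logic: yearly new infections flow into an "unaware" stock, diagnoses move people to a "diagnosed" stock, and the expected test positivity follows if infected and uninfected people test at the same rate. All inputs are invented for illustration, and mortality and migration are ignored.

```python
unaware, diagnosed = 12_000.0, 100_000.0       # initial stocks (invented)
population = 19_000_000.0                      # testing-eligible population (invented)
test_rate = 0.12                               # fraction tested per year (assumed)
# yearly inputs: (new infections, diagnosis rate among the unaware)
flows = [(3800, 0.25), (3700, 0.27), (3600, 0.29), (3500, 0.31)]

for year, (new_inf, diag_rate) in enumerate(flows, start=2006):
    newly_diagnosed = diag_rate * unaware      # flow out of the unaware stock
    unaware += new_inf - newly_diagnosed
    diagnosed += newly_diagnosed
    tests = test_rate * (population - diagnosed)   # tests among the undiagnosed
    print(year,
          f"unaware share = {unaware / (unaware + diagnosed):.1%}",
          f"expected positivity = {newly_diagnosed / tests:.2%}")
```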
Estimates of cellular mutagenesis from cosmic rays
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wilson, John W.
1994-01-01
A parametric track structure model is used to estimate the cross section as a function of particle velocity and charge for mutations at the hypoxanthine guanine phosphoribosyl transferase (HGPRT) locus in human fibroblast cell cultures. Experiments that report the fraction of mutations per surviving cell for human lung and skin fibroblast cells indicate small differences in the mutation cross section for these two cell lines when differences in inactivation rates between these cell lines are considered. Using models of cosmic ray transport, the mutation rate at the HGPRT locus is estimated for cell cultures in space flight, and rates of about 2 to 10 × 10⁻⁶ per year are found for typical spacecraft shielding. A discussion of how model assumptions may alter the predictions is also presented.
Photoionization-regulated star formation and the structure of molecular clouds
NASA Technical Reports Server (NTRS)
Mckee, Christopher F.
1989-01-01
A model for the rate of low-mass star formation in Galactic molecular clouds and for the influence of this star formation on the structure and evolution of the clouds is presented. The rate of energy injection by newly formed stars is estimated, and the effect of this energy injection on the size of the cloud is determined. It is shown that the observed rate of star formation appears adequate to support the observed clouds against gravitational collapse. The rate of photoionization-regulated star formation is estimated and shown to be in agreement with estimates of the observed rate of star formation if the observed molecular cloud parameters are used. The mean cloud extinction and the Galactic star formation rate per unit mass of molecular gas are predicted theoretically from the condition that photoionization-regulated star formation be in equilibrium. A simple model for the evolution of isolated molecular clouds is developed.
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
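The reward-band estimator behind these models can be sketched as follows: assuming reward rings are reported with certainty, ordinary rings are recovered with probability λ·f (reporting rate times true recovery rate) and reward rings with probability f. Stratum-specific and constant-λ models are fitted by maximum likelihood and compared by likelihood ratio; the counts and two-stratum layout below are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

expit = lambda x: 1 / (1 + np.exp(-x))   # keeps rates in (0, 1)

# per stratum: (ringed ordinary, recovered ordinary, ringed reward, recovered reward)
strata = [(2000, 60, 500, 45), (2200, 50, 550, 40)]

def nll(theta, common):
    k = len(strata)
    lam = [expit(theta[0])] * k if common else expit(theta[:k])
    f = expit(theta[-k:])
    out = 0.0
    for (nc, rc, nr, rr), l, fi in zip(strata, lam, f):
        out -= (rc * np.log(l * fi) + (nc - rc) * np.log(1 - l * fi)
                + rr * np.log(fi) + (nr - rr) * np.log(1 - fi))
    return out

full = minimize(nll, np.zeros(4), args=(False,), method="Nelder-Mead")
null = minimize(nll, np.zeros(3), args=(True,), method="Nelder-Mead")
lr = 2 * (null.fun - full.fun)
print("stratum reporting rates:", expit(full.x[:2]))
print("LR p-value for constant reporting rate:", 1 - chi2.cdf(lr, df=1))
```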
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
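The Bayesian database-search step can be sketched compactly: candidate profiles from the simulation database are weighted by how radiatively consistent their simulated brightness temperatures are with the observation, and the retrieval is the weighted composite. The database, channel count, and error covariance below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
n_db, n_chan = 5000, 9
db_tb = rng.normal(230, 25, (n_db, n_chan))      # simulated brightness temps (K)
db_rain = np.abs(rng.normal(3, 3, n_db))         # matching surface rain rates (mm/h)

obs_tb = db_tb[123] + rng.normal(0, 1.5, n_chan)  # pseudo-observation near entry 123
Sinv = np.eye(n_chan) / 1.5**2                    # inverse obs/model error covariance

d = db_tb - obs_tb
w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, Sinv, d))  # Gaussian consistency weights
rain_hat = (w * db_rain).sum() / w.sum()          # Bayesian composite estimate
print(f"retrieved rain rate = {rain_hat:.2f} mm/h")
```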
USERS MANUAL: LANDFILL GAS EMISSIONS MODEL - VERSION 2.0
The document is a user's guide for a computer model, Version 2.0 of the Landfill Gas Emissions Model (LandGEM), for estimating air pollution emissions from municipal solid waste (MSW) landfills. The model can be used to estimate emission rates for methane, carbon dioxide, nonmet...
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu
2018-01-01
Model parameters in the suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of the sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern for the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained using the combination of flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which are related to the grain size of the seabed sediments under different current velocities. Besides, the estimated inflow open boundary conditions reach the local maximum values near the low water slack conditions and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving the parameterization in cohesive sediment transport models.
Vivot, Alexandre; Power, Melinda C.; Glymour, M. Maria; Mayeda, Elizabeth R.; Benitez, Andreana; Spiro, Avron; Manly, Jennifer J.; Proust-Lima, Cécile; Dufouil, Carole; Gross, Alden L.
2016-01-01
Improvements in cognitive test scores upon repeated assessment due to practice effects (PEs) are well documented, but there is no empirical evidence on whether alternative specifications of PEs result in different estimated associations between exposure and rate of cognitive change. If alternative PE specifications produce different estimates of association between an exposure and rate of cognitive change, this would be a challenge for nearly all longitudinal research on determinants of cognitive aging. Using data from 3 cohort studies—the Three-City Study–Dijon (Dijon, France, 1999–2010), the Normative Aging Study (Greater Boston, Massachusetts, 1993–2007), and the Washington Heights-Inwood Community Aging Project (New York, New York, 1999–2012)—for 2 exposures (diabetes and depression) and 3 cognitive outcomes, we compared results from longitudinal models using alternative PE specifications: no PEs; use of an indicator for the first cognitive visit; number of prior testing occasions; and square root of the number of prior testing occasions. Alternative specifications led to large differences in the estimated rates of cognitive change but minimal differences in estimated associations of exposure with cognitive level or change. Based on model fit, using an indicator for the first visit was often (but not always) the preferred model. PE specification can lead to substantial differences in estimated rates of cognitive change, but in these diverse examples and study samples it did not substantively affect estimated associations of risk factors with change. PMID:26825924
SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA
Fosdick, Bailey K.; Hoff, Peter D.
2014-01-01
Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations. PMID:25489353
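The separable factor-analytic covariance can be illustrated directly: each dimension of the array (say age × year) receives a covariance with factor structure ΛΛ' + diag(ψ), and the covariance of the vectorized array is their Kronecker product. Sizes and ranks below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(9)

def factor_cov(p, rank):
    """Factor-analytic covariance: loadings times loadings' plus uniquenesses."""
    lam = rng.normal(0, 1, (p, rank))
    psi = rng.uniform(0.2, 0.5, p)
    return lam @ lam.T + np.diag(psi)

Sigma_age = factor_cov(10, 2)          # 10 age groups, rank-2 factor structure
Sigma_year = factor_cov(30, 1)         # 30 years, rank-1 factor structure
Sigma = np.kron(Sigma_year, Sigma_age) # covariance of the vectorized error array
print(Sigma.shape)                     # (300, 300), from far fewer parameters
```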
NASA Astrophysics Data System (ADS)
Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.
2012-05-01
The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e⁻ → N⁺ + 2e⁻). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind-tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at the EAST experimental facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of the atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 10¹¹ cm³ mol⁻¹ s⁻¹ at 10,000 K.
Rayne, Sierra; Forest, Kaya; Friesen, Ken J
2009-08-01
A quantitative structure-activity model has been validated for estimating congener specific gas-phase hydroxyl radical reaction rates for perfluoroalkyl sulfonic acids (PFSAs), carboxylic acids (PFCAs), aldehydes (PFAls) and dihydrates, fluorotelomer olefins (FTOls), alcohols (FTOHs), aldehydes (FTAls), and acids (FTAcs), and sulfonamides (SAs), sulfonamidoethanols (SEs), and sulfonamido carboxylic acids (SAAs), and their alkylated derivatives based on calculated semi-empirical PM6 method ionization potentials. Corresponding gas-phase reaction rates with nitrate radicals and ozone have also been estimated using the computationally derived ionization potentials. Henry's law constants for these classes of perfluorinated compounds also appear to be reasonably approximated by the SPARC software program, thereby allowing estimation of wet and dry atmospheric deposition rates. Both congener specific gas-phase atmospheric and air-water interface fractionation of these compounds is expected, complicating current source apportionment perspectives and necessitating integration of such differential partitioning influences into future multimedia models. The findings will allow development and refinement of more accurate and detailed local through global scale atmospheric models for the atmospheric fate of perfluoroalkyl compounds.
Bekessy, A.; Molineaux, L.; Storey, J.
1976-01-01
A method is described of estimating the malaria incidence rate ĥ and the recovery rate r from longitudinal data. The method is based on the assumption that the phenomenon of patent parasitaemia can be represented by a reversible two-state catalytic model; it is applicable to all problems that can be represented by such a model. The method was applied to data on falciparum malaria from the West African savanna and the findings suggested that immunity increases the rate of recovery from patent parasitaemia by a factor of up to 10, and also reduces the number of episodes of patent parasitaemia resulting from one inoculation. Under the effect of propoxur, ĥ varies with the estimated man-biting rate of the vector while r̂ increases, possibly owing to reduced super-infection. PMID:800968
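The reversible two-state catalytic model has closed-form transition probabilities, so the incidence rate h and recovery rate r can be fitted to paired-survey transition counts by maximum likelihood. A minimal sketch with invented counts:

```python
import numpy as np
from scipy.optimize import minimize

def trans_probs(h, r, dt):
    """Two-state model: negative <-> patent parasitaemia with rates h, r."""
    s = h + r
    p01 = (h / s) * (1 - np.exp(-s * dt))   # negative -> positive over dt
    p10 = (r / s) * (1 - np.exp(-s * dt))   # positive -> negative over dt
    return p01, p10

# paired surveys dt = 0.25 yr apart: (neg->pos, neg->neg, pos->neg, pos->pos)
n01, n00, n10, n11 = 48, 152, 35, 115
dt = 0.25

def nll(log_params):
    h, r = np.exp(log_params)               # log scale keeps rates positive
    p01, p10 = trans_probs(h, r, dt)
    return -(n01 * np.log(p01) + n00 * np.log(1 - p01)
             + n10 * np.log(p10) + n11 * np.log(1 - p10))

fit = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
h_hat, r_hat = np.exp(fit.x)
print(f"incidence h = {h_hat:.2f}/yr, recovery r = {r_hat:.2f}/yr")
```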
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed aimed at evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcript, fluctuations in the activity of transcriptional factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density based on the discrete nature of stochastic simulations. The genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
Quadratic semiparametric Von Mises calculus
Robins, James; Li, Lingling; Tchetgen, Eric
2009-01-01
We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second-order derivatives of this parameter. For parameters for which the matching cannot be perfect, the method leads to a bias-variance trade-off and results in estimators that converge at a rate slower than n^(-1/2). In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^(-1/2) rate. PMID:23087487
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
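A minimal numpy sketch of the Gaussian-process idea for 2-D rate maps: a squared-exponential prior over positions, a Gaussian observation model for the measured rates, and the standard posterior-mean formula. A fuller treatment would use a point-process likelihood and fit the hyperparameters by maximum marginal likelihood, as the abstract describes; all values here are assumptions.

```python
import numpy as np

def sqexp(a, b, ell=0.15, sig=1.0):
    """Squared-exponential kernel between two sets of 2-D positions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sig**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(5)
X = rng.random((100, 2))                          # visited positions, unit box
true_rate = np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1])
y = true_rate + rng.normal(0, 0.3, 100)           # noisy rate observations

K = sqexp(X, X) + 0.3**2 * np.eye(100)            # prior plus observation noise
alpha = np.linalg.solve(K, y)

g = np.linspace(0, 1, 20)
grid = np.array([[u, v] for u in g for v in g])   # 20 x 20 evaluation grid
rate_map = (sqexp(grid, X) @ alpha).reshape(20, 20)  # posterior-mean rate map
print(rate_map[:3, :3].round(2))
```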
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
Estimation of dioxin and furan elimination rates with a pharmacokinetic model.
Van der Molen, G W; Kooijman, B A; Wittsiepe, J; Schrey, P; Flesch-Janys, D; Slob, W
2000-01-01
Quantitative description of the pharmacokinetics of dioxins and furans in humans can be of great help for the assessment of health risks posed by these compounds. To that end, the elimination rates of sixteen 2,3,7,8-chlorinated dibenzodioxins and dibenzofurans are estimated from both a longitudinal and a cross-sectional data set using the model of Van der Molen et al. [Van der Molen G.W., Kooijman S.A.L.M., and Slob W. A generic toxicokinetic model for persistent lipophilic compounds in humans: an application to TCDD. Fundam Appl Toxicol 1996: 31: 83-94]. In this model the elimination rate is given by the (constant) specific elimination rate multiplied by the ratio between the lipid weight of the liver and total body lipid weight. Body composition, body weight and intake are assumed to depend on age. The elimination rate is, therefore, not constant. For 49-year-old males, the elimination rate estimates range from 0.03 per year for 1,2,3,6,7,8-hexaCDF to 1.0 per year for octaCDF. The elimination rates of the most toxic congeners, 2,3,7,8-tetraCDD, 1,2,3,7,8-pentaCDD, and 2,3,4,7,8-pentaCDF, were estimated at 0.09, 0.06, and 0.07, respectively, based on the cross-sectional data, and 0.11, 0.09, and 0.09 based on the longitudinal data. The elimination rates of dioxins decrease with age between 0.0011 per year for 1,2,3,6,7,8-hexaCDD and 0.0035 per year for 1,2,3,4,6,7,8-heptaCDD. For furans the average decrease is 0.0033 per year. The elimination rates were estimated both from a longitudinal and a cross-sectional data set, and agreed quite well with each other, after taking account of historical changes in average intake levels.
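The model's central relation, an elimination rate equal to a constant specific elimination rate scaled by the ratio of liver lipid weight to total body lipid weight, can be sketched directly; the lipid trajectories and specific rate below are invented placeholders, not the paper's data.

```python
import numpy as np

def elimination_rate(k_spec, liver_lipid_kg, body_lipid_kg):
    """Age-varying elimination rate from the (constant) specific rate."""
    return k_spec * liver_lipid_kg / body_lipid_kg

ages = np.arange(40, 60)
body_lipid = 14.0 + 0.15 * (ages - 40)                 # kg, assumed to grow with age
liver_lipid = np.full_like(ages, 0.09, dtype=float)    # kg, assumed constant

k_spec = 15.0                                # per year (illustrative value)
k_elim = elimination_rate(k_spec, liver_lipid, body_lipid)

burden = np.empty(ages.size)
burden[0] = 10.0                             # arbitrary starting body burden
for i in range(1, ages.size):                # first-order decay, yearly steps
    burden[i] = burden[i - 1] * np.exp(-k_elim[i - 1])
print(np.round(k_elim, 3))                   # elimination rate declines with age
```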
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene
2016-01-01
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
Double neutron stars: merger rates revisited
NASA Astrophysics Data System (ADS)
Chruslinska, Martyna; Belczynski, Krzysztof; Klencki, Jakub; Benacquista, Matthew
2018-03-01
We revisit double neutron star (DNS) formation in the classical binary evolution scenario in light of the recent Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo DNS detection (GW170817). The observationally estimated Galactic DNS merger rate of R_MW = 21^{+28}_{-14} Myr^-1, based on three Galactic DNS systems, fully supports our standard input physics model with R_MW = 24 Myr^-1. This estimate for the Galaxy translates in a non-trivial way (due to the cosmological evolution of progenitor stars in a chemically evolving Universe) into a local (z ≈ 0) DNS merger rate density of R_local = 48 Gpc^-3 yr^-1, which is not consistent with the current LIGO/Virgo DNS merger rate estimate (1540^{+3200}_{-1220} Gpc^-3 yr^-1). Within our study of the parameter space, we find solutions that allow for DNS merger rates as high as R_local ≈ 600^{+600}_{-300} Gpc^-3 yr^-1, which are thus consistent with the LIGO/Virgo estimate. However, our corresponding BH-BH merger rates for the models with high DNS merger rates exceed the current LIGO/Virgo estimate of the local BH-BH merger rate (12-213 Gpc^-3 yr^-1). Apart from being particularly sensitive to the common envelope treatment, DNS merger rates are rather robust against variations of several of the key factors probed in our study (e.g. mass transfer, angular momentum loss, and natal kicks). This might suggest that either common envelope development/survival works differently for DNS (~10-20 M⊙ stars) than for BH-BH (~40-100 M⊙ stars) progenitors, or high black hole (BH) natal kicks are needed to meet observational constraints for both types of binaries. Our conclusion is based on a limited number (21) of evolutionary models and is valid within this particular DNS and BH-BH isolated binary formation scenario.
Production rates for crews using hand tools on firelines
Lisa Haven; T. Parkin Hunter; Theodore G. Storey
1982-01-01
Reported rates at which hand crews construct firelines can vary widely because of differences in fuels, fire and measurement conditions, and fuel resistance-to-control classification schemes. Real-time fire dispatching and fire simulation planning models, however, require accurate estimates of hand crew productivity. Errors in estimating rate of fireline production...
Robust linear discriminant models to solve financial crisis in banking sectors
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni
2014-12-01
Linear discriminant analysis (LDA) is a widely used technique in pattern classification that derives a classification rule minimizing the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends strongly on the assumptions of normality and homoscedasticity. Several robust estimators for LDA, such as the Minimum Covariance Determinant (MCD), S-estimators and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
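A minimal sketch of one robust variant, substituting the Minimum Covariance Determinant (MCD) estimator for the classical means and pooled covariance; the two-group data are synthetic stand-ins for financial ratios of distress/non-distress banks, not the Malaysian data.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
X0 = rng.normal([0.0, 0.0], 1.0, (100, 2))    # "non-distress" banks (synthetic ratios)
X1 = rng.normal([2.0, 2.0], 1.0, (100, 2))    # "distress" banks
X1[:5] = [20.0, -20.0]                        # gross outliers in the training data

def robust_stats(X):
    mcd = MinCovDet(random_state=0).fit(X)    # robust location and scatter
    return mcd.location_, mcd.covariance_

m0, S0 = robust_stats(X0)
m1, S1 = robust_stats(X1)
Sp = (S0 + S1) / 2.0                          # pooled robust covariance
w = np.linalg.solve(Sp, m1 - m0)              # discriminant direction
c = w @ (m0 + m1) / 2.0                       # midpoint threshold

X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)]
pred = (X @ w > c).astype(int)
print("apparent error rate:", np.mean(pred != y))
```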
Introduction to State Estimation of High-Rate System Dynamics
Dodson, Jacob; Joyce, Bryan
2018-01-01
Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer’s convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model. PMID:29342855
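As a point of reference for the estimation task being surveyed, a toy discrete-time Luenberger observer for a mass-spring system is sketched below; the gains are hand-tuned, and nothing here models the microsecond convergence or adaptivity the paper calls for.

```python
import numpy as np

dt = 1e-4                                       # 10 kHz update rate
A = np.array([[1.0, dt], [-400.0 * dt, 1.0]])   # discretized mass-spring dynamics
C = np.array([[1.0, 0.0]])                      # only position is measured
L = np.array([[0.9], [100.0]])                  # hand-tuned observer gain

x = np.array([[1.0], [0.0]])                    # true state (unknown to observer)
xh = np.zeros((2, 1))                           # observer estimate
for _ in range(2000):
    y = C @ x                                   # measurement
    xh = A @ xh + L @ (y - C @ xh)              # predict + correct
    x = A @ x                                   # propagate true state
print("final estimation error:", np.abs(x - xh).ravel())
```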
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
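The flavor of the ML-versus-REML variance issue can be reproduced with a simple random-intercept simulation (not the paper's penalized quasi-likelihood or h-likelihood fits); statsmodels' MixedLM is used here purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_trials, n_per = 8, 20
rows = []
for g in range(n_trials):
    u = rng.normal(0.0, 1.0)                       # true trial effect, variance = 1
    for _ in range(n_per):
        rows.append({"g": g, "y": 1.0 + u + rng.normal(0.0, 1.0)})
df = pd.DataFrame(rows)

for reml in (False, True):
    fit = smf.mixedlm("y ~ 1", df, groups=df["g"]).fit(reml=reml)
    print("REML" if reml else "ML  ",
          "estimated trial variance:", float(fit.cov_re.iloc[0, 0]))
```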
The Effects of Vehicle Redesign on the Risk of Driver Death.
Farmer, Charles M; Lund, Adrian K
2015-01-01
This study updates a 2006 report that estimated the historical effects of vehicle design changes on driver fatality rates in the United States, separate from the effects of environmental and driver behavior changes during the same period. In addition to extending the period covered by 8 years, this study estimated the effect of design changes by model year and vehicle type. Driver death rates for consecutive model years of vehicle models without design changes were used to estimate the vehicle aging effect and the death rates that would have been expected if the entire fleet had remained unchanged from the 1985 calendar year. These calendar year estimates are taken to be the combined effect of road environment and motorist behavioral changes, with the difference between them and the actual calendar year driver fatality rates reflecting the effect of changes in vehicle design and distribution of vehicle types. The effects of vehicle design changes by model year were estimated for cars, SUVs, and pickups by computing driver death rates for model years 1984-2009 during each of their first 3 full calendar years of exposure and comparing with the expected rates if there had been no design changes. As reported in the 2006 study, had there been no changes in the vehicle fleet, driver death risk would have declined during calendar years 1985-1993 and then slowly increased from 1993 to 2004. The updated results indicate that the gradual increase would have continued through 2006, after which driver fatality rates again would have declined through 2012. Overall, it is estimated that there were 7,700 fewer driver deaths in 2012 than there would have been had vehicle designs not changed. Cars were the first vehicle type whose design safety generally exceeded that of the 1984 model year (starting in model year 1996), followed by SUVs (1998 models) and pickups (2002 models). By the 2009 model year, car driver fatality risk had declined 51% from its high in 1994, pickup driver fatality risk had declined 61% from its high in 1988, and SUV risk had declined 79% from its high in 1988. The risk of driver death in 2009 model passenger vehicles was 8% lower than that in 2008 models and about half that in 1984 models. Changes in vehicles, whether from government regulations and consumer testing that led to advanced safety designs or from other factors such as consumer demand for different sizes and types of vehicles, have been key contributors to the decline in U.S. motor vehicle occupant crash death rates since the mid-1990s. Since the early 1990s, and until the recession of 2007, environmental and behavioral risk factors did not show similar improvement, even though many empirically proven countermeasures exist that have been inadequately applied.
Bell, Michael W; Tang, Y Sim; Dragosits, Ulrike; Flechard, Chris R; Ward, Paul; Braban, Christine F
2016-10-01
Anaerobic digestion (AD) is becoming increasingly implemented within organic waste treatment operations. The storage and processing of large volumes of organic wastes through AD has been identified as a significant source of ammonia (NH3) emissions; however, the totality of ammonia emissions from an AD plant has not been previously quantified. The emissions from an AD plant processing food waste were estimated by integrating ambient NH3 concentration measurements, atmospheric dispersion modelling, and comparison with published emission factors (EFs). Two dispersion models (ADMS and a backwards Lagrangian stochastic (bLS) model) were applied to calculate emission estimates. The bLS model (WindTrax) was used to back-calculate a total (top-down) emission rate for the AD plant from a point of continuous NH3 measurement downwind from the plant. The back-calculated emission rates were then input to the ADMS forward dispersion model to make predictions of air NH3 concentrations around the site, and evaluated against weekly passive sampler NH3 measurements. As an alternative approach, emission rates from individual sources within the plant were initially estimated by applying literature EFs to the available site parameters concerning the chemical composition of waste materials, room air concentrations, ventilation rates, etc. The individual emission rates were input to ADMS and later tuned by fitting the simulated ambient concentrations to the observed (passive sampler) concentration field, which gave an excellent match to measurements after an iterative process. The total emission from the AD plant thus estimated by a bottom-up approach was 16.8 ± 1.8 mg s^-1, which was significantly higher than the back-calculated top-down estimate (7.4 ± 0.78 mg s^-1). The bottom-up approach offered a more realistic treatment of the source distribution within the plant area, while the complexity of the site was not ideally suited to the bLS method; thus the bottom-up method is believed to give a better estimate of emissions. The storage of solid digestate and the aerobic treatment of liquid effluents at the site were the greatest sources of NH3 emissions. Copyright © 2016 Elsevier Ltd. All rights reserved.
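If the forward dispersion model is treated as linear in the source strengths, the "tuning" of individual emission rates to the passive-sampler field can be cast as a non-negative least-squares problem, as in this sketch; the sensitivity matrix and observations below are synthetic, and the study itself used an iterative manual fit.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_samplers, n_sources = 12, 3
A = rng.uniform(0.01, 0.1, (n_samplers, n_sources))  # modelled ug m^-3 per mg s^-1
q_true = np.array([10.0, 5.0, 2.0])                  # hypothetical rates (mg s^-1)
c_obs = A @ q_true + rng.normal(0.0, 0.02, n_samplers)

q_est, _ = nnls(A, c_obs)                            # non-negative least squares
print("estimated source emission rates (mg s^-1):", q_est.round(2))
```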
Aylward, Lesa L; Brunet, Robert C; Starr, Thomas B; Carrier, Gaétan; Delzell, Elizabeth; Cheng, Hong; Beall, Colleen
2005-08-01
Recent studies demonstrating a concentration dependence of elimination of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) suggest that previous estimates of exposure for occupationally exposed cohorts may have underestimated actual exposure, resulting in a potential overestimate of the carcinogenic potency of TCDD in humans based on the mortality data for these cohorts. Using a database on U.S. chemical manufacturing workers potentially exposed to TCDD compiled by the National Institute for Occupational Safety and Health (NIOSH), we evaluated the impact of using a concentration- and age-dependent elimination model (CADM) (Aylward et al., 2005) on estimates of serum lipid area under the curve (AUC) for the NIOSH cohort. These data were used previously by Steenland et al. (2001) in combination with a first-order elimination model with an 8.7-year half-life to estimate cumulative serum lipid concentration (equivalent to AUC) for these workers for use in cancer dose-response assessment. Serum lipid TCDD measurements taken in 1988 for a subset of the cohort were combined with the NIOSH job exposure matrix and work histories to estimate dose rates per unit of exposure score. We evaluated the effect of choices in regression model (regression on untransformed vs. ln-transformed data and inclusion of a nonzero regression intercept) as well as the impact of choices of elimination models and parameters on estimated AUCs for the cohort. Central estimates for dose rate parameters derived from the serum-sampled subcohort were applied with the elimination models to time-specific exposure scores for the entire cohort to generate AUC estimates for all cohort members. Use of the CADM resulted in improved model fits to the serum sampling data compared to the first-order models. Dose rates varied by a factor of 50 among different combinations of elimination model, parameter sets, and regression models. Use of a CADM results in increases of up to five-fold in AUC estimates for the more highly exposed members of the cohort compared to estimates obtained using the first-order model with 8.7-year half-life. This degree of variation in the AUC estimates for this cohort would affect substantially the cancer potency estimates derived from the mortality data from this cohort. Such variability and uncertainty in the reconstructed serum lipid AUC estimates for this cohort, depending on elimination model, parameter set, and regression model, have not been described previously and are critical components in evaluating the dose-response data from the occupationally exposed populations.
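The contrast between the two elimination assumptions can be sketched with a pair of toy ODEs: a first-order model with the 8.7-year half-life, and a hypothetical concentration-dependent rate in the spirit of the CADM (the Michaelis-Menten-like form and all parameter values below are illustrative, not the published model).

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

k1 = np.log(2.0) / 8.7                      # first-order rate (1/yr), 8.7-yr half-life

def first_order(t, c):
    return -k1 * c

def cadm_like(t, c, kb=0.05, ke=0.4, km=100.0):
    k = kb + ke * c / (km + c)              # elimination speeds up at high burden
    return -k * c

t_eval = np.linspace(0.0, 30.0, 301)
for fun, name in [(first_order, "first-order"), (cadm_like, "CADM-like")]:
    sol = solve_ivp(fun, (0.0, 30.0), [500.0], t_eval=t_eval)
    auc = trapezoid(sol.y[0], t_eval)
    print(f"{name:11s} serum-lipid AUC over 30 yr: {auc:8.0f} (conc x yr)")
```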
Inclination Dependence of Estimated Galaxy Masses and Star Formation Rates
NASA Astrophysics Data System (ADS)
Hernandez, Betsy; Maller, Ariyeh; McKernan, Barry; Ford, Saavik
2016-01-01
We examine the inclination dependence of inferred star formation rates and galaxy mass estimates in the Sloan Digital Sky Survey by combining the disk/bulge de-convolved catalog of Simard et al. 2011 with the stellar mass estimates catalog of Mendel et al. 2014 and star formation rates measured from spectra by Brinchmann et al. 2004. We know that optical star formation indicators are reddened by dust, but calculated star formation rates and stellar mass estimates should account for this. However, we find that face-on galaxies have higher calculated average star formation rates than edge-on galaxies. We also find that edge-on galaxies have, on average, slightly smaller but similar estimated masses to face-on galaxies, suggesting that there are issues with the applied dust corrections in both models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Van Til, Harrison J; Wood, Eric W
A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
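A minimal pandas sketch of the categorize-and-aggregate idea: bin observed driving by speed and gradient, build an energy-rates look-up table, and join proposed-trip segments against it. Column names, bin edges and values are assumptions, not the NREL implementation.

```python
import pandas as pd

drive = pd.DataFrame({                # observed driving segments (synthetic)
    "speed_mph":  [25, 30, 55, 60, 62, 28],
    "gradient":   [0.0, 0.01, 0.0, -0.01, 0.02, 0.0],
    "kwh_per_mi": [0.28, 0.30, 0.24, 0.20, 0.35, 0.29],
})
drive["speed_bin"] = pd.cut(drive["speed_mph"], [0, 40, 80])
drive["grade_bin"] = pd.cut(drive["gradient"], [-0.1, 0.005, 0.1])
rates = drive.groupby(["speed_bin", "grade_bin"], observed=True)["kwh_per_mi"].mean()

trip = pd.DataFrame({"speed_mph": [35, 58], "gradient": [0.0, 0.015],
                     "miles": [3.0, 12.0]})        # proposed trip segments
trip["speed_bin"] = pd.cut(trip["speed_mph"], [0, 40, 80])
trip["grade_bin"] = pd.cut(trip["gradient"], [-0.1, 0.005, 0.1])
trip = trip.join(rates, on=["speed_bin", "grade_bin"])
print("estimated trip energy (kWh):", (trip["kwh_per_mi"] * trip["miles"]).sum())
```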
Johansen, M P; Barnett, C L; Beresford, N A; Brown, J E; Černe, M; Howard, B J; Kamboj, S; Keum, D-K; Smodiš, B; Twining, J R; Vandenhove, H; Vives i Batlle, J; Wood, M D; Yu, C
2012-06-15
Radiological doses to terrestrial wildlife were examined in this model inter-comparison study that emphasised factors causing variability in dose estimation. The study participants used varying modelling approaches and information sources to estimate dose rates and tissue concentrations for a range of biota types exposed to soil contamination at a shallow radionuclide waste burial site in Australia. Results indicated that the dominant factor causing variation in dose rate estimates (up to three orders of magnitude on mean total dose rates) was the soil-to-organism transfer of radionuclides that included variation in transfer parameter values as well as transfer calculation methods. Additional variation was associated with other modelling factors including: how participants conceptualised and modelled the exposure configurations (two orders of magnitude); which progeny to include with the parent radionuclide (typically less than one order of magnitude); and dose calculation parameters, including radiation weighting factors and dose conversion coefficients (typically less than one order of magnitude). Probabilistic approaches to model parameterisation were used to encompass and describe variable model parameters and outcomes. The study confirms the need for continued evaluation of the underlying mechanisms governing soil-to-organism transfer of radionuclides to improve estimation of dose rates to terrestrial wildlife. The exposure pathways and configurations available in most current codes are limited when considering instances where organisms access subsurface contamination through rooting, burrowing, or using different localised waste areas as part of their habitual routines. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
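A sketch of the local-fit idea: within a short window, the signal is modelled as an exponentially decaying cosine, and a fitted frequency in the physiological band signals a beat. The window below is synthetic, and the original algorithm's detection logic is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

fs = 125.0                                    # PPG sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(5)
ppg = np.exp(-1.5 * t) * np.cos(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=t.size)

def model(t, a, decay, f, phi):
    return a * np.exp(-decay * t) * np.cos(2 * np.pi * f * t + phi)

p0 = [1.0, 1.0, 1.5, 0.0]                     # start inside the physiological band
popt, _ = curve_fit(model, t, ppg, p0=p0)
print(f"fitted frequency: {popt[2]:.2f} Hz  (~{popt[2] * 60:.0f} BPM)")
```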
Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.
Du, P; Walling, D E
2012-01-01
Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by the ¹³⁷Cs and excess ²¹⁰Pb measurements, some differences existed and the various models used to interpret excess ²¹⁰Pb measurements could produce different results. By using the ¹³⁷Cs measurements to provide independent validation of the estimates of sedimentation rate provided by the different models used with the excess ²¹⁰Pb measurement it was shown that the CICCS and Composite CRS models appeared to generally provide the best results. Copyright © 2011 Elsevier Ltd. All rights reserved.
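For orientation, the core arithmetic of the constant-rate-of-supply (CRS) model often used with excess ²¹⁰Pb is sketched below: the age at depth z follows from the cumulative unsupported inventory beneath that depth. Inventories are illustrative, not the UK core data.

```python
import numpy as np

lam = np.log(2.0) / 22.3                        # 210Pb decay constant (1/yr)
depth = np.array([2.0, 6.0, 10.0, 14.0])        # section mid-depths (cm)
inv_below = np.array([95.0, 60.0, 30.0, 10.0])  # cumulative inventory A(z) (Bq/m^2)
A0 = 120.0                                      # total unsupported inventory A(0)

age = np.log(A0 / inv_below) / lam              # CRS age (yr before sampling)
sed_rate = depth / age                          # mean sedimentation rate (cm/yr)
print(np.round(age, 1), np.round(sed_rate, 3))
```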
Prospects for a prolonged slowdown in global warming in the early 21st century
Knutson, Thomas R.; Zhang, Rong; Horowitz, Larry W.
2016-01-01
Global mean temperature over 1998 to 2015 increased at a slower rate (0.1 K decade^-1) compared with the ensemble mean (forced) warming rate projected by Coupled Model Intercomparison Project 5 (CMIP5) models (0.2 K decade^-1). Here we investigate the prospects for this slower rate to persist for a decade or more. The slower rate could persist if the transient climate response is overestimated by CMIP5 models by a factor of two, as suggested by recent low-end estimates. Alternatively, using CMIP5 models' warming rate, the slower rate could still persist due to strong multidecadal internal variability cooling. Combining the CMIP5 ensemble warming rate with internal variability episodes from a single climate model—having the strongest multidecadal variability among CMIP5 models—we estimate that the warming slowdown (<0.1 K decade^-1 trend beginning in 1998) could persist, due to internal variability cooling, through 2020, 2025 or 2030 with probabilities 16%, 11% and 6%, respectively. PMID:27901045
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, Landis
1998-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
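The angular-acceleration model is a first-order Gauss-Markov (exponentially autocorrelated) process; a discrete-time sketch of its propagation is shown below with an illustrative correlation time and noise level.

```python
import numpy as np

dt, tau, sigma = 0.1, 30.0, 1e-4    # step (s), correlation time (s), steady-state std
phi = np.exp(-dt / tau)             # transition factor for the acceleration state
q = sigma**2 * (1.0 - phi**2)       # discrete process-noise variance

rng = np.random.default_rng(6)
a, samples = 0.0, []
for _ in range(20000):
    a = phi * a + rng.normal(0.0, np.sqrt(q))   # exponentially autocorrelated process
    samples.append(a)
print("sample std (approaches sigma):", float(np.std(samples)))
```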
Pachú, Jéssica Ks; Malaquias, José B; Godoy, Wesley Ac; de S Ramalho, Francisco; Almeida, Bruna R; Rossi, Fabrício
2018-04-01
Precise estimates of the lower (Tmin) and upper (Tmax) thermal thresholds, as well as the temperature range that provides optimum performance (Topt), make it possible to obtain the desired number of individuals in conservation systems and in the rearing and release of natural enemies. In this study, the relationship between the development rates of Cycloneda sanguinea L. (Coleoptera: Coccinellidae) and temperature was described using non-linear models developed by Analytis, Brière, Lactin, Lamb, Logan and Sharpe & DeMichele. There were differences between the models in the estimates of the parameters Tmin, Tmax, and Topt. All of the tested models were able to describe non-linear responses involving the development rates of C. sanguinea at constant temperatures. Lactin and Lamb gave the highest z weight for egg, while Analytis, Sharpe & DeMichele and Brière gave the highest values for larvae and pupae. The more realistic Topt estimated by the models varied from 29 to 31°C for egg, 27-28°C for larvae and 28-29°C for pupae. The Logan, Lactin and Analytis models estimated the Tmax for egg, larvae and pupae to be approximately 34°C, while the Tmin estimated by the Analytis model was 16°C for larvae and pupae. The information generated by our research will contribute towards improving the rearing and release of C. sanguinea in biological control programs, accurately controlling the rate of development in laboratory conditions or even scheduling this species' release at the most favourable time. Copyright © 2018 Elsevier Ltd. All rights reserved.
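A sketch of fitting one of the named models, Brière-1, r(T) = aT(T − Tmin)√(Tmax − T), to synthetic rate data roughly consistent with the thresholds reported above; Topt is then located numerically on the fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit

T = np.array([15.0, 20.0, 25.0, 27.0, 30.0, 32.0])          # deg C
r = np.array([0.008, 0.054, 0.099, 0.111, 0.115, 0.098])    # development rate (1/day)

def briere(T, a, Tmin, Tmax):
    inside = (T > Tmin) & (T < Tmax)
    out = a * T * (T - Tmin) * np.sqrt(np.clip(Tmax - T, 0.0, None))
    return np.where(inside, out, 0.0)

popt, _ = curve_fit(briere, T, r, p0=[1e-4, 14.0, 34.0])
Tgrid = np.linspace(popt[1], popt[2], 2001)
Topt = Tgrid[np.argmax(briere(Tgrid, *popt))]       # numerical optimum
print(f"Tmin={popt[1]:.1f}  Topt={Topt:.1f}  Tmax={popt[2]:.1f} (deg C)")
```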
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
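For context, the basic target-cell viral-dynamics system that models of this family build on is sketched below with a constant infection rate (the paper's model makes it time-varying); all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, d, beta = 100.0, 0.01, 5e-7    # target-cell production, death, infection rates
delta, N, c = 0.5, 1000.0, 3.0      # infected-cell death, burst size, clearance

def hiv(t, y):
    T, I, V = y                     # target cells, infected cells, free virus
    return [lam - d * T - beta * T * V,
            beta * T * V - delta * I,
            N * delta * I - c * V]

sol = solve_ivp(hiv, (0.0, 100.0), [1e4, 0.0, 1e-3],
                t_eval=np.linspace(0.0, 100.0, 11))
print(np.round(np.log10(np.clip(sol.y[2], 1e-12, None)), 2))  # log10 viral load
```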
Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi
2013-01-01
Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae), were examined at constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to the adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and to estimate thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds from the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0 and 308.6 degree-days, respectively) than on cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation, based on goodness-of-fit criteria. The simplified β-type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
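The linear-model side of such analyses reduces to simple arithmetic: regressing rate on temperature gives Tmin = −intercept/slope and a thermal constant K = 1/slope. The synthetic points below are tuned to land near the hibiscus values quoted above.

```python
import numpy as np

T = np.array([15.0, 20.0, 25.0, 27.0, 30.0])              # deg C
r = np.array([0.004, 0.026, 0.048, 0.057, 0.070])         # development rate (1/day)
slope, intercept = np.polyfit(T, r, 1)                    # linear degree-day model
print(f"Tmin = {-intercept / slope:.1f} C, K = {1.0 / slope:.0f} degree-days")
```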
Knowles, Leel
1996-01-01
Estimates of evapotranspiration (ET) for the Rainbow and Silver Springs ground-water basins in north-central Florida were determined using a regional water-budget approach and compared to estimates computed using a modified Priestley-Taylor (PT) model calibrated with eddy-correlation data. Eddy-correlation measurements of latent (λE) and sensible (H) heat flux were made monthly for a few days at a time, and the PT model was used to estimate λE between times of measurement during the 1994 water year. A water-budget analysis for the two-basin area indicated that over a 30-year period (1965-94) annual rainfall was 51.7 inches. Of the annual rainfall, ET accounted for about 37.9 inches; springflow accounted for 13.1 inches; and the remaining 0.7 inch was accounted for by streamflow, by ground-water withdrawals from the Floridan aquifer system, and by net change in storage. For the same 30-year period, the annual estimate of ET was 37.6 inches for the Silver Springs basin and 38.5 inches for the Rainbow Springs basin. Wet- and dry-season estimates of ET for each basin averaged between nearly 19 and 20 inches, indicating that, like rainfall, ET rates during the 4-month wet season were about twice the ET rates during the 8-month dry season. Wet-season estimates of ET for the Rainbow Springs and Silver Springs basins decreased 2.7 inches and 3.4 inches, respectively, over the 30-year period; whereas dry-season estimates for the basins decreased about 0.4 inch and 1.0 inch, respectively, over the 30-year period. This decrease probably is related to the general decrease in annual rainfall and reduction in net radiation over the basins during the 30-year period. ET rates computed using the modified PT model were compared to rates computed from the water budget for the 1994 water year. Annual ET, computed using the PT model, was 32.0 inches, nearly equal to the ET water-budget estimate of 31.7 inches computed for the Rainbow Springs and Silver Springs basins. Modeled ET rates for 1994 ranged from 14.4 inches per year in January to 51.6 inches per year in May. Water-budget ET rates for 1994 ranged from 12.0 inches per year in March to 61.2 inches per year in July. Potential evapotranspiration rates for 1994 averaged 46.8 inches per year and ranged from 21.6 inches per year in January to 74.4 inches per year in May. Lake evaporation rates averaged 47.1 inches per year and ranged from 18.0 inches per year in January to 72.0 inches per year in May 1994.
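A sketch of the Priestley-Taylor relation behind the modelled estimates, λE = α·Δ/(Δ + γ)·(Rn − G); the coefficient α = 1.26 is the textbook default (the study calibrated it to eddy-correlation data), and the meteorological inputs are illustrative.

```python
import numpy as np

alpha = 1.26                 # PT coefficient (default; calibrated in the study)
T = 25.0                     # air temperature (deg C)
Rn, G = 180.0, 10.0          # net radiation and soil heat flux (W/m^2)
gamma = 0.066                # psychrometric constant (kPa/C)

# slope of the saturation vapour-pressure curve at T (kPa/C):
es = 0.6108 * np.exp(17.27 * T / (T + 237.3))
delta = 4098.0 * es / (T + 237.3) ** 2

lam_E = alpha * (delta / (delta + gamma)) * (Rn - G)   # latent heat flux (W/m^2)
print(f"lambda_E ~ {lam_E:.0f} W/m^2")
```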
Nayamatullah, M M M; Bin-Shafique, S; Sharif, H O
2013-01-01
To investigate the effect of input parameters, such as the number of bridge-dwelling birds, the decay rate of the bacteria, flow at the river, water temperature, and settling velocity, a parametric study was conducted using a water quality model developed with QUAL2Kw. The reach of the bacterial-impaired section resulting from the direct droppings of bridge-nesting birds at the Guadalupe River near Kerrville, Texas, was estimated using the model. Concentrations of Escherichia coli bacteria were measured upstream, below the bridge, and downstream of the river for one and a half years. The decay rate of the indicator bacteria in the river water was estimated from the model using measured data, and was found to be 6.5/day. The study suggests that the number of bridge-dwelling birds, the decay rate, and flow at the river have the highest impact on the fate and transport of bacteria. The water temperature moderately affects the fate and transport of bacteria, whereas the settling velocity of bacteria did not show any significant effect. Once the decay rate was estimated, the reach of the impaired section was predicted from the model using the average flow of the channel. Since the decay rate does not vary significantly in the ambient environment at this location, the length of the impaired section primarily depends on flow.
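The reach prediction follows from first-order decay during transport: C(x) = C0·exp(−kx/U), so the impaired length is L = (U/k)·ln(C0/Cstd). The velocity and concentrations below are illustrative, not the Guadalupe River data.

```python
import numpy as np

k = 6.5                      # decay rate from the model (1/day)
U = 10.0                     # mean stream velocity (km/day, assumed)
C0, Cstd = 2400.0, 126.0     # E. coli below the bridge and the criterion (cfu/100 mL)

L = (U / k) * np.log(C0 / Cstd)
print(f"impaired reach ~ {L:.1f} km")
```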
Stock modeling for railroad locomotives and marine vessels
DOT National Transportation Integrated Search
2004-09-01
Stock modeling is the process of estimating the number of pieces of equipment in service in a given year manufactured in each of all relevant prior years. This type of modeling is important for, among other things, estimating the rate at which new te...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodkin, J.L.; Udevitz, M.S.
The authors developed an analytical model (intersection model) to estimate the exposure of sea otters (Enhydra lutris) to oil from the Exxon Valdez oil spill. The authors applied estimated and assumed exposure-dependent mortality rates to the Kenai Peninsula sea otter population to provide examples of the application of the model in estimating sea otter mortality. The intersection model requires three distinct types of data: (1) distribution, abundance, and movements of oil; (2) abundance and distribution of sea otters; and (3) sea otter mortality rates relative to oil exposure. Initial output of the model is an estimate of exposure of otters to oil. Exposure is measured in amount and duration of oil near an otter's observed location (intersections). The authors provide two examples of the model using different assumptions about the relation between exposure and mortality. Because of an apparent non-linear relation between the degree of oiling and survival of otters from rehabilitation, output from the authors' examples is likely biased.
Hennig, Kristin; Verkerk, Ruud; Bonnema, Guusje; Dekker, Matthijs
2012-08-15
Kinetic modeling was used as a tool to quantitatively estimate glucosinolate thermal degradation rate constants. Literature shows that thermal degradation rates differ in different vegetables. Well-characterized plant material, leaves of broccoli and Chinese kale plants grown in two seasons, was used in the study. It was shown that a first-order reaction is appropriate to model glucosinolate degradation independent from the season. No difference in degradation rate constants of structurally identical glucosinolates was found between broccoli and Chinese kale leaves when grown in the same season. However, glucosinolate degradation rate constants were highly affected by the season (20-80% increase in spring compared to autumn). These results suggest that differences in glucosinolate degradation rate constants can be due to variation in environmental as well as genetic factors. Furthermore, a methodology to estimate rate constants rapidly is provided to enable the analysis of high sample numbers for future studies.
Koh, Dong-Hee; Bhatti, Parveen; Coble, Joseph B.; Stewart, Patricia A; Lu, Wei; Shu, Xiao-Ou; Ji, Bu-Tian; Xue, Shouzheng; Locke, Sarah J.; Portengen, Lutzen; Yang, Gong; Chow, Wong-Ho; Gao, Yu-Tang; Rothman, Nathaniel; Vermeulen, Roel; Friesen, Melissa C.
2012-01-01
The epidemiologic evidence for the carcinogenicity of lead is inconsistent and requires improved exposure assessment to estimate risk. We evaluated historical occupational lead exposure for a population-based cohort of women (n=74,942) by calibrating a job-exposure matrix (JEM) with lead fume (n=20,084) and lead dust (n=5,383) measurements collected over four decades in Shanghai, China. Using mixed-effect models, we calibrated intensity JEM ratings to the measurements using fixed-effects terms for year and JEM rating. We developed job/industry-specific estimates from the random-effects terms for job and industry. The model estimates were applied to subjects’ jobs when the JEM probability rating was high for either job or industry; remaining jobs were considered unexposed. The models predicted that exposure increased monotonically with JEM intensity rating and decreased 20–50-fold over time. The cumulative calibrated JEM estimates and job/industry-specific estimates were highly correlated (Pearson correlation=0.79–0.84). Overall, 5% of the person-years and 8% of the women were exposed to lead fume; 2% of the person-years and 4% of the women were exposed to lead dust. The most common lead-exposed jobs were manufacturing electronic equipment. These historical lead estimates should enhance our ability to detect associations between lead exposure and cancer risk in future epidemiologic analyses. PMID:22910004
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
Davis, Gregory B; Laslett, Dean; Patterson, Bradley M; Johnston, Colin D
2013-03-15
Accurate estimation of biodegradation rates during remediation of petroleum impacted soil and groundwater is critical to avoid excessive costs and to ensure remedial effectiveness. Oxygen depth profiles or oxygen consumption over time are often used separately to estimate the magnitude and timeframe for biodegradation of petroleum hydrocarbons in soil and subsurface environments. Each method has limitations. Here we integrate spatial and temporal oxygen concentration data from a field experiment to develop better estimates and more reliably quantify biodegradation rates. During a nine-month bioremediation trial, 84 sets of respiration rate data (where aeration was halted and oxygen consumption was measured over time) were collected from in situ oxygen sensors at multiple locations and depths across a diesel non-aqueous phase liquid (NAPL) contaminated subsurface. Additionally, detailed vertical soil moisture (air-filled porosity) and NAPL content profiles were determined. The spatial and temporal oxygen concentration (respiration) data were modeled assuming one-dimensional diffusion of oxygen through the soil profile which was open to the atmosphere. Point and vertically averaged biodegradation rates were determined, and compared to modeled data from a previous field trial. Point estimates of biodegradation rates assuming no diffusion ranged up to 58 mg kg(-1) day(-1) while rates accounting for diffusion ranged up to 87 mg kg(-1) day(-1). Typically, accounting for diffusion increased point biodegradation rate estimates by 15-75% and vertically averaged rates by 60-80% depending on the averaging method adopted. Importantly, ignoring diffusion led to overestimation of biodegradation rates where the location of measurement was outside the zone of NAPL contamination. Over or underestimation of biodegradation rate estimates leads to cost implications for successful remediation of petroleum impacted sites. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
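The correction concept can be written in one line: the oxygen balance at a depth is dC/dt = D·d²C/dz² − R, so the biodegradation rate R is the diffusive resupply minus the observed decline. A finite-difference sketch with illustrative numbers:

```python
import numpy as np

D = 0.05                              # effective O2 diffusivity in soil (cm^2/min, assumed)
dz = 20.0                             # sensor spacing (cm)
C = np.array([0.28, 0.20, 0.15])      # O2 at depths z-dz, z, z+dz (mg/cm^3)
dCdt = -2.0e-4                        # observed O2 decline at z (mg/cm^3/min)

d2C = (C[0] - 2.0 * C[1] + C[2]) / dz**2   # finite-difference curvature
R = D * d2C - dCdt                         # consumption = resupply - observed change
print(f"O2 consumption rate ~ {R:.2e} mg cm^-3 min^-1")
```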
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil.
Zhang, Xinbao; Long, Yi; He, Xiubin; Fu, Jiexiong; Zhang, Yunqi
2008-08-01
(137)Cs is an artificial radionuclide with a half-life of 30.12 years that was released into the environment as a result of atmospheric testing of thermo-nuclear weapons, primarily during the 1950s-1970s, with the maximum rate of (137)Cs fallout from the atmosphere occurring in 1963. (137)Cs fallout is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil when it is deposited on the ground, mostly with precipitation. Its subsequent redistribution is associated with movements of the soil or sediment particles. The (137)Cs nuclide tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the (137)Cs depth distribution in the profile, where the maximum (137)Cs occurs in the surface horizon and activity decreases exponentially with depth. The model assumed that the total (137)Cs fallout was deposited on the earth's surface in 1963 and that the (137)Cs profile shape does not change with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the (137)Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the previous simple profile-shape model. Thus, soil losses are overestimated by that model. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the (137)Cs transport process in the eroded soil profile, make some simplifications to the model, and develop a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses corresponding to different (137)Cs loss proportions of the reference inventory at the Kaixian site of the Three Gorge Region, China, were estimated by the two models. The overestimation of soil loss by the previous simple profile-shape model increases markedly with the time elapsed between 1963 and the sampling year and with the (137)Cs loss proportion of the reference inventory. For (137)Cs loss proportions of 20-80% of the reference inventory at the Kaixian site in 2004, the annual soil loss depths estimated by the new simplified transport model are only 56.24-57.90% of the values estimated by the previous model.
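The profile-shape calculation itself is compact: with A(x) = A_ref·exp(−x/h0), a measured inventory loss fraction X implies an eroded depth d = −h0·ln(1 − X). The relaxation depth and loss fractions below are illustrative.

```python
import numpy as np

h0 = 4.0                                # profile relaxation depth (cm, assumed)
X = np.array([0.2, 0.4, 0.6, 0.8])      # 137Cs loss proportion of reference inventory
d = -h0 * np.log(1.0 - X)               # total eroded depth (cm)
print(np.round(d, 2))
```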
Per capita alcohol consumption and suicide mortality in a panel of US states from 1950 to 2002
Kerr, William C.; Subbaraman, Meenakshi; Ye, Yu
2011-01-01
Introduction and Aims The relationship between per capita alcohol consumption and suicide rates has been found to vary in significance and magnitude across countries. This study utilizes a panel of time-series measures from the US states to estimate the effects of changes in current and lagged alcohol sales on suicide mortality risk. Design and Methods Generalized least squares estimation utilized 53 years of data from 48 US states or state groups to estimate relationships between total and beverage-specific alcohol consumption measures and age-standardized suicide mortality rates in first-differenced semi-logged models. Results An additional liter of ethanol from total alcohol sales was estimated to increase suicide rates by 2.3% in models utilizing a distributed lag specification while no effect was found in models including only current alcohol consumption. A similar result is found for men, while for women both current and distributed lag measures were found to be significantly related to suicide rates with an effect of about 3.2% per liter from current and 5.8% per liter from the lagged measure. Beverage-specific models indicate that spirits is most closely linked with suicide risk for women while beer and wine are for men. Unemployment rates are consistently positively related to suicide rates. Discussion and Conclusions Results suggest that chronic effects, potentially related to alcohol abuse and dependence, are the main source of alcohol’s impact on suicide rates in the US for men and are responsible for about half of the effect for women. PMID:21896069
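A stripped-down version of the first-differenced semi-log specification (single series, no GLS weighting, no lags; data synthetic) shows why the coefficient reads directly as a percent change in the suicide rate per liter of ethanol.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
alcohol = 9.0 + np.cumsum(rng.normal(0.0, 0.2, 53))       # liters ethanol per capita
suicide = np.exp(2.5 + 0.023 * alcohol + rng.normal(0.0, 0.02, 53))

dy = np.diff(np.log(suicide))                 # first-differenced log suicide rate
dx = np.diff(alcohol)                         # first-differenced consumption
fit = sm.OLS(dy, sm.add_constant(dx)).fit()
print(f"{100 * fit.params[1]:+.1f}% change in suicide rate per liter")
```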
Estimating survival rates with time series of standing age‐structure data
Udevitz, Mark S.; Gogan, Peter J.
2012-01-01
It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
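In the simplest closed-population case, the estimator reduces to ratios of successive cohort counts, as in this sketch with synthetic age structures for years t and t+1.

```python
import numpy as np

n_t  = np.array([120, 80, 50, 30, 15])     # counts at ages 0..4 in year t
n_t1 = np.array([130, 90, 64, 41, 24])     # counts at ages 0..4 in year t+1

s = n_t1[1:] / n_t[:-1]                    # survival from age x to x+1
print(np.round(s, 2))
```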
Modelling and assessment of accidental oil release from damaged subsea pipelines.
Li, Xinhong; Chen, Guoming; Zhu, Hongwei
2017-10-15
This paper develops a 3D, transient, mathematical model to estimate the oil release rate and simulate the oil dispersion behavior. The Euler-Euler method is used to estimate the subsea oil release rate, while the Eulerian-Lagrangian method is employed to track the migration trajectory of oil droplets. This model accounts for the quantitative effect of backpressure and hole size on the oil release rate, and the influence of oil release rate, oil density, current speed, water depth and leakage position on oil migration is also investigated in this paper. Eventually, the results, e.g. the transient release rate of oil, the rise time of oil and the dispersion distance, are determined by the above-mentioned model, and the oil release and dispersion behavior under different scenarios is revealed. Essentially, the assessment results could provide useful guidance for detection of the leakage position and placement of oil containment booms. Copyright © 2017 Elsevier Ltd. All rights reserved.
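Not the paper's Euler-Euler computation, but the textbook orifice relation conveys how backpressure and hole size enter the release rate: Q = Cd·A·√(2ΔP/ρ). All values below are illustrative.

```python
import numpy as np

Cd = 0.62                    # discharge coefficient (assumed)
d_hole = 0.02                # hole diameter (m)
rho = 900.0                  # oil density (kg/m^3)
dP = 5e5                     # internal pressure minus ambient backpressure (Pa)

A = np.pi * (d_hole / 2.0) ** 2
Q = Cd * A * np.sqrt(2.0 * dP / rho)
print(f"release rate ~ {Q * 1000:.2f} L/s")
```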
Fourment, Mathieu; Holmes, Edward C
2014-07-24
Early methods for estimating divergence times from gene sequence data relied on the assumption of a molecular clock. More sophisticated methods were created to model rate variation and used auto-correlation of rates, local clocks, or the so-called "uncorrelated relaxed clock" where substitution rates are assumed to be drawn from a parametric distribution. In the case of Bayesian inference methods, the impact of the prior on branching times is not clearly understood, and if the amount of data is limited the posterior could be strongly influenced by the prior. We develop a maximum likelihood method, Physher, that uses local or discrete clocks to estimate evolutionary rates and divergence times from heterochronous sequence data. Using two empirical data sets, we show that our discrete clock estimates are similar to those obtained by other methods, and that Physher outperformed some methods in the estimation of the root age of an influenza virus data set. A simulation analysis suggests that Physher can outperform a Bayesian method when the real topology contains two long branches below the root node, even when evolution is strongly clock-like. These results suggest it is advisable to use a variety of methods to estimate evolutionary rates and divergence times from heterochronous sequence data. Physher and the associated data sets used here are available online at http://code.google.com/p/physher/.
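A quick, related diagnostic (not Physher itself) is root-to-tip regression for heterochronous data: under clock-like evolution, root-to-tip distance grows linearly with sampling date, the slope estimating the substitution rate. Distances and dates below are synthetic.

```python
import numpy as np

dates = np.array([2000.0, 2002.0, 2005.0, 2008.0, 2011.0])   # sampling dates
root_to_tip = np.array([0.010, 0.014, 0.019, 0.025, 0.030])  # subs/site

rate, intercept = np.polyfit(dates, root_to_tip, 1)
print(f"rate ~ {rate:.4f} subs/site/yr, implied root age ~ {-intercept / rate:.0f}")
```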
2016-12-01
[Report list of figures: simplified example of estimating metabolic rate (Ṁ) from core temperature (TC) using the SCENARIO thermoregulatory model; TC and Ṁ for the Edgewood training site (Days 1 and 2) and the Hayward training site (Days 1 and 2).]
Traffic evacuation time under nonhomogeneous conditions.
Fazio, Joseph; Shetkar, Rohan; Mathew, Tom V
2017-06-01
During many manmade and natural crises such as terrorist threats, floods, hazardous chemical and gas leaks, emergency personnel need to estimate the time in which people can evacuate from the affected urban area. Knowing an estimated evacuation time for a given crisis, emergency personnel can plan and prepare accordingly with the understanding that the actual evacuation time will take longer. Given the urban area to be evacuated, street widths exiting the area's perimeter, the area's population density, average vehicle occupancy, transport mode share and crawl speed, an estimation of traffic evacuation time can be derived. Peak-hour traffic data collected at three, midblock, Mumbai sites of varying geometric features and traffic composition were used in calibrating a model that estimates peak-hour traffic flow rates. Model validation revealed a correlation coefficient of +0.98 between observed and predicted peak-hour flow rates. A methodology is developed that estimates traffic evacuation time using the model.
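The evacuation-time arithmetic implied above divides the vehicles to be evacuated by the total exit-flow capacity of the perimeter streets; every input in this sketch is hypothetical.

```python
population = 50_000            # people in the evacuation area
veh_occupancy = 2.5            # average persons per vehicle
car_mode_share = 0.8           # fraction evacuating by car
exit_widths_m = [7.0, 7.0, 10.5]   # widths of streets crossing the perimeter
flow_per_m = 500.0             # veh/h per metre of street width (assumed)

vehicles = population * car_mode_share / veh_occupancy
capacity = flow_per_m * sum(exit_widths_m)     # total exit flow (veh/h)
print(f"estimated evacuation time: {vehicles / capacity:.1f} h")
```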
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
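A compact sketch of the modified Gauss-Newton step used in such nonlinear regression: iterate p ← p + (JᵀWJ)⁻¹JᵀWr with damping, where r is the residual vector and J the sensitivity (Jacobian) matrix. The model below is a toy stand-in, not a groundwater model.

```python
import numpy as np

def f(p, x):                                   # toy "model output", not MODFLOW
    return p[0] * np.exp(-p[1] * x)

def jac(p, x):                                 # sensitivity (Jacobian) matrix
    return np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])

x = np.linspace(0.0, 5.0, 20)
obs = f([3.0, 0.7], x) + np.random.default_rng(8).normal(0.0, 0.05, x.size)
W = np.eye(x.size)                             # observation weight matrix
p = np.array([1.0, 0.2])                       # starting parameter estimates
for _ in range(20):
    r = obs - f(p, x)                          # residuals
    J = jac(p, x)
    step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    p = p + 0.9 * step                         # damped Gauss-Newton update
print("estimated parameters:", p.round(3))
```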
NASA Astrophysics Data System (ADS)
van der Wal, Wouter; Wu, Patrick; Sideris, Michael G.; Shum, C. K.
2008-10-01
Monthly geopotential spherical harmonic coefficients from the GRACE satellite mission are used to determine their usefulness and limitations for studying glacial isostatic adjustment (GIA) in North America. Secular gravity rates are estimated by unweighted least-squares estimation using release 4 coefficients from August 2002 to August 2007 provided by the Center for Space Research (CSR), University of Texas. Smoothing is required to suppress short-wavelength noise, in addition to filtering to diminish geographically correlated errors, as shown in previous studies. Optimal cut-off degrees and orders are determined for the destriping filter to maximize the signal-to-noise ratio. The halfwidth of the Gaussian filter is shown to significantly affect the sensitivity of the GRACE data (with respect to upper mantle viscosity and ice loading history). Therefore, the halfwidth should be selected based on the desired sensitivity. It is shown that an increase in water storage in an area south-west of Hudson Bay, from the summer of 2003 to the summer of 2006, contributes up to half of the maximum estimated gravity rate. Hydrology models differ in their predictions of the secular change in water storage; therefore even 4-year trend estimates are influenced by the uncertainty in water storage changes. Land ice melting in Greenland and Alaska has a non-negligible contribution, up to one-fourth of the maximum gravity rate. The estimated secular gravity rate shows two distinct peaks that can possibly be due to two domes in the former Pleistocene ice cover: west and south-east of Hudson Bay. With a limited number of models, a better fit is obtained with models that use the ICE-3G model compared to the ICE-5G model. However, the uncertainty in interannual variations in hydrology models is too large to constrain the ice loading history with the current data span. For future work in which GRACE will be used to constrain ice loading history and the Earth's radial viscosity profile, it is important to include realistic uncertainty estimates for hydrology models and land ice melting in addition to the effects of lateral heterogeneity.
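The per-coefficient secular-rate fit amounts to ordinary least squares with bias, trend and annual-cycle terms, as sketched below on a synthetic monthly coefficient series.

```python
import numpy as np

t = np.arange(60) / 12.0                                  # 5 years, monthly (yr)
rng = np.random.default_rng(9)
clm = 1e-10 * t + 5e-10 * np.sin(2 * np.pi * t) + rng.normal(0, 1e-10, t.size)

A = np.column_stack([np.ones_like(t), t,                  # bias + secular trend
                     np.sin(2 * np.pi * t),               # annual cycle
                     np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, clm, rcond=None)
print(f"secular rate: {coef[1]:.2e} per year")
```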
Appendix H of KABAM Version 1.0 documentation related to estimating the metabolism rate constant. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2010 CFR
2010-01-01
... pricing multipliers are derived from: • A model (the Statistical Model) that estimates the probability..., which is four basis points higher than the minimum rate. II. The Statistical Model The Statistical Model... to 1997. As a result, and as described in Table A.1, the Statistical Model is estimated using a...
Do fungi need to be included within environmental radiation protection assessment models?
Guillén, J; Baeza, A; Beresford, N A; Wood, M D
2017-09-01
Fungi are used as biomonitors of forest ecosystems, having comparatively high uptakes of anthropogenic and naturally occurring radionuclides. However, whilst they are known to accumulate radionuclides, they are not typically considered in radiological assessment tools for environmental (non-human biota) assessment. In this paper the total dose rate to fungi is estimated using the ERICA Tool, assuming different fruiting body geometries: a single ellipsoid and more complex geometries considering the different components of the fruit body and their differing radionuclide contents based upon measurement data. Anthropogenic and naturally occurring radionuclide concentrations from the Mediterranean ecosystem (Spain) were used in this assessment. The total estimated weighted dose rate was in the range 0.31-3.4 μGy/h (5th-95th percentile), similar to natural exposure rates reported for other wild groups. The total estimated dose was dominated by internal exposure, especially from ²²⁶Ra and ²¹⁰Po. Differences in dose rate between complex geometries and a simple ellipsoid model were negligible. Therefore, the simple ellipsoid model is recommended to assess dose rates to fungal fruiting bodies. Fungal mycelium was also modelled assuming a long filament. Using these geometries, assessments for fungal fruiting bodies and mycelium under different scenarios (post-accident, planned release and existing exposure) were conducted, each being based on available monitoring data. The estimated total dose rate in each case was below the ERICA screening benchmark dose, except for the example post-accident existing exposure scenario (the Chernobyl Exclusion Zone), for which a dose rate in excess of 35 μGy/h was estimated for the fruiting body. The estimated mycelium dose rate in this post-accident existing exposure scenario was close to the 400 μGy/h benchmark for plants, although fungi are generally considered to be less radiosensitive than plants. Further research on appropriate mycelium geometries and their radionuclide content is required. Based on the assessments presented in this paper, there is no need to recommend that fungi should be added to the existing assessment tools and frameworks; if required, some tools allow a geometry representing fungi to be created and used within a dose assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
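The internal dose-rate bookkeeping behind such assessments is a sum of activity concentrations times dose conversion coefficients (DCCs); the values below are placeholders, not ERICA Tool outputs.

```python
# activity concentrations (Bq/kg, illustrative) and placeholder DCCs
activity = {"Ra-226": 40.0, "Po-210": 120.0, "Cs-137": 15.0}   # Bq/kg
dcc = {"Ra-226": 2.0e-3, "Po-210": 1.5e-2, "Cs-137": 1.0e-4}   # uGy/h per Bq/kg

dose = sum(activity[n] * dcc[n] for n in activity)             # internal dose rate
print(f"internal dose rate ~ {dose:.2f} uGy/h")
```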
NASA Astrophysics Data System (ADS)
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research estimates the unemployment rate in Indonesia, modelled with a Poisson distribution, by modifying post-stratification and Small Area Estimation (SAE) models. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the area of interest directly. The area of interest here is unemployment by education level, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas) collected by BPS-Statistics Indonesia. This national survey provides samples that are too small for district-level estimation, and SAE models are one alternative for addressing this; accordingly, we combined post-stratification sampling with an SAE model. This research considers two main post-stratification models: model I treats the education category as a dummy variable, and model II treats it as an area random effect. Both models initially failed to comply with the Poisson assumption: using a Poisson-Gamma model, the overdispersion in model I was reduced from 1.23 to 0.91 (chi-square/df), and the underdispersion in model II was corrected from 0.35 to 0.94 (chi-square/df). Empirical Bayes methods were applied to estimate the proportion of unemployment in each education category. Judged by the Bayesian Information Criterion (BIC), model I also had a smaller mean square error (MSE) than model II.
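A minimal sketch of the Poisson-Gamma empirical Bayes shrinkage underlying the approach described above, assuming a method-of-moments fit for the Gamma prior; the counts and sample sizes are invented for illustration, and this is not the authors' code.

```python
import numpy as np

def poisson_gamma_eb(y, n):
    """Counts y_i ~ Poisson(n_i * lambda_i), lambda_i ~ Gamma(a, b) (rate b).
    Posterior mean of lambda_i is (a + y_i) / (b + n_i)."""
    rate = y.sum() / n.sum()                    # pooled rate
    lam = y / n
    # method-of-moments prior: total variance minus average Poisson noise
    var_between = max(lam.var() - (rate / n).mean(), 1e-12)
    b = rate / var_between
    a = rate * b
    return (a + y) / (b + n)                    # shrunken category-level rates

y = np.array([3, 0, 7, 2, 5, 1, 4])             # unemployed counts per category
n = np.array([120, 40, 300, 90, 210, 60, 150])  # labour-force sample sizes
print(poisson_gamma_eb(y, n))
```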
NASA Astrophysics Data System (ADS)
Lolli, Simone; Di Girolamo, Paolo; Demoz, Belay; Li, Xiaowen; Welton, Ellsworth J.
2018-04-01
Rain evaporation significantly contributes to moisture and heat cloud budgets. In this paper, we illustrate an approach to estimate the median volume raindrop diameter and the rain evaporation rate profiles from dual-wavelength lidar measurements. These observational results are compared with those provided by a model analytical solution. We made use of measurements from the multi-wavelength Raman lidar BASIL.
This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector
2018-01-01
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644
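The depth estimation described above can be illustrated by inverting a simple exponential attenuation model for burial depth; the effective attenuation coefficient and surface count rate below are hypothetical calibration values, not figures from the paper.

```python
import math

def depth_from_count_rate(c_obs, c_surface, mu_eff):
    """c_obs: count rate at depth (cps); c_surface: count rate at zero depth (cps);
    mu_eff: effective linear attenuation coefficient of the medium (1/cm)."""
    return math.log(c_surface / c_obs) / mu_eff

# Hypothetical calibration: 100 cps at the surface, mu_eff = 0.11 /cm for sand.
print(depth_from_count_rate(c_obs=14.0, c_surface=100.0, mu_eff=0.11))  # ~17.9 cm
```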
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n and the higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer the population mutation rate θ = 4Neμ, the population exponential growth rate R, and the error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take the error into account. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, the composite likelihood can also be used as a statistic for testing the goodness-of-fit of the model to the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Ye, Yu; Kerr, William C
2011-01-01
To explore various model specifications in estimating relationships between liver cirrhosis mortality rates and per capita alcohol consumption in aggregate-level cross-section time-series data. Using a series of liver cirrhosis mortality rates from 1950 to 2002 for 47 U.S. states, the effects of alcohol consumption were estimated from pooled autoregressive integrated moving average (ARIMA) models and 4 types of panel data models: generalized estimating equation, generalized least squares, fixed effect, and multilevel models. Various specifications of the error term structure under each type of model were also examined, as were different approaches to controlling for time trends and to using concurrent or accumulated consumption as predictors. When cirrhosis mortality was predicted by total alcohol, highly consistent estimates were found between ARIMA and panel data analyses, with an average overall effect of 0.07 to 0.09. Less consistent estimates were derived using spirits, beer, and wine consumption as predictors. When multiple geographic time series are combined as panel data, none of the existing models can accommodate all sources of heterogeneity, so any type of panel model must employ some form of generalization. Different types of panel data models should thus be estimated to examine the robustness of findings. We also suggest cautious interpretation when beverage-specific volumes are used as predictors. Copyright © 2010 by the Research Society on Alcoholism.
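A hedged sketch of one of the panel specifications discussed above, using statsmodels' GEE with an exchangeable working correlation; the file and column names are hypothetical, and the authors' actual specifications (error structures, trend controls) vary across their models.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical panel: one row per state-year with columns state, year,
# mortality (cirrhosis deaths per 100k), and alcohol (per capita consumption).
df = pd.read_csv("cirrhosis_panel.csv")

model = smf.gee(
    "mortality ~ alcohol + year",            # linear term as a simple trend control
    groups="state",                          # observations clustered within states
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(), # working correlation; AR structures are another option
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```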
NASA Astrophysics Data System (ADS)
Tucker, G. E.; McCoy, S. W.; Whittaker, A. C.; Roberts, G.; Lancaster, S. T.; Phillips, R. J.
2011-12-01
The existence of well-preserved Holocene bedrock fault scarps along active normal faults in the Mediterranean region and elsewhere suggests a dramatic reduction in rates of rock weathering and erosion that correlates with the transition from glacial to interglacial climate. We test and quantify this interpretation using a case study in the Italian Central Apennines. Holocene rates are derived from measurements of weathering-pit depth along the Magnola scarp, where previous cosmogenic 36Cl analyses constrain exposure history. To estimate the average hillslope erosion rate over ~10^5 years, we introduce a simple geometric model of normal-fault footwall slope evolution. The model predicts that the gradient of a weathering-limited footwall hillslope is set by fault dip angle and by the ratio of slip rate to erosion rate; if either slip or erosion rate is known, the other can be derived. Applying this model to the Magnola fault yields an estimated average weathering rate on the order of 0.2-0.4 mm/yr, more than 10x higher than either the Holocene scarp weathering rate or modern regional limestone weathering rates. A numerical model of footwall growth and erosion, in which erosion rate tracks the oxygen-isotope curve, reproduces the main features of hillslope and scarp morphology and suggests that the hillslope erosion rate has varied by about a factor of 30 over the past one to two glacial cycles. We conclude that preservation of carbonate fault scarps reflects strong climatic control on rock breakdown by frost cracking.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
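The special case stated above is easy to illustrate by simulation: in a randomized trial, the fitted treatment coefficient from a main-terms Poisson working model approaches the marginal log rate ratio even though the outcome model is misspecified. The data-generating values below are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000
w = rng.normal(size=n)                        # baseline covariate
a = rng.binomial(1, 0.5, size=n)              # randomized treatment
mu = np.exp(-1.0 + 0.5 * a + 1.0 * (w > 0))   # true model has a threshold in w
y = rng.poisson(mu)

# Misspecified working model: linear main terms only (no threshold).
X = sm.add_constant(np.column_stack([a, w]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

marginal = np.log(y[a == 1].mean() / y[a == 0].mean())
print(fit.params[1], marginal)                # both approach the true value 0.5
```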
Demographic estimation methods for plants with unobservable life-states
Kery, M.; Gregg, K.B.; Schaub, M.
2005-01-01
Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10-year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates that were sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous computer algebra methods to identify the parameters that are estimable in principle. As life-states are a prominent feature in plant life cycles, multistate capture-recapture models are a natural framework for analysing population dynamics of plants with dormancy.
Local Spatial Obesity Analysis and Estimation Using Online Social Network Sensors.
Sun, Qindong; Wang, Nan; Li, Shancang; Zhou, Hongyi
2018-03-15
Recently, online social networks (OSNs) have received considerable attention as a revolutionary platform offering massive social interaction that enables users to be more involved in their own healthcare. OSNs have also prompted increasing interest in the generation of analytical data models in health informatics. This paper aims at developing an obesity identification, analysis, and estimation model in which each individual user is regarded as an online social network 'sensor' that can provide valuable health information. The OSN-based obesity analytic model requires each sensor node in an OSN to provide associated features, including dietary habits, physical activity, integral/incidental emotions, and self-consciousness. Based on detailed measurements of the correlation between obesity and the proposed features, the OSN obesity analytic model is able to estimate the obesity rate in certain urban areas, and the experimental results demonstrate a high estimation success rate. The measurement and estimation findings produced by the proposed obesity analytic model show that online social networks can be used to analyze local spatial obesity problems effectively. Copyright © 2018. Published by Elsevier Inc.
Chan, King-Pan; Chan, Kwok-Hung; Wong, Wilfred Hing-Sang; Peiris, J. S. Malik; Wong, Chit-Ming
2011-01-01
Background Reliable estimates of disease burden associated with respiratory viruses are key to the deployment of preventive strategies such as vaccination and to resource allocation. Such estimates are particularly needed in tropical and subtropical regions where some methods commonly used in temperate regions are not applicable. While a number of alternative approaches to assess the influenza-associated disease burden have been recently reported, none of these models has been validated with virologically confirmed data. Even fewer methods have been developed for other common respiratory viruses such as respiratory syncytial virus (RSV), parainfluenza and adenovirus. Methods and Findings We had recently conducted a prospective population-based study of virologically confirmed hospitalization for acute respiratory illnesses in persons <18 years residing in Hong Kong Island. Here we used this dataset to validate two commonly used models for estimation of influenza disease burden, namely the rate difference model and the Poisson regression model, and also explored the applicability of these models to estimate the disease burden of other respiratory viruses. The Poisson regression models with different link functions all yielded estimates well correlated with the virologically confirmed influenza-associated hospitalization, especially in children older than two years. The disease burden estimates for RSV, parainfluenza and adenovirus were less reliable, with wide confidence intervals. The rate difference model was not applicable to RSV, parainfluenza and adenovirus and grossly underestimated the true burden of influenza-associated hospitalization. Conclusion The Poisson regression model generally produced satisfactory estimates in calculating the disease burden of respiratory viruses in a subtropical region such as Hong Kong. PMID:21412433
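A toy contrast of the two estimators compared above, with invented numbers: a Poisson regression of weekly admissions on a virus-activity proxy versus a rate-difference calculation over proxy-defined "active" weeks. This is a schematic of the model structures only, not the study's analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = 260
flu_proxy = rng.beta(2, 8, size=weeks)               # weekly % specimens positive
y = rng.poisson(30.0 * np.exp(0.8 * flu_proxy))      # weekly admissions

# Poisson regression estimate of virus-attributable admissions:
fit = sm.GLM(y, sm.add_constant(flu_proxy), family=sm.families.Poisson()).fit()
attributable = (fit.fittedvalues - np.exp(fit.params[0])).sum()

# Rate-difference estimate: "active" weeks defined by a proxy threshold.
active = flu_proxy > np.quantile(flu_proxy, 0.75)
rate_diff_burden = (y[active].mean() - y[~active].mean()) * active.sum()
print(attributable, rate_diff_burden)
```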
Modelled and field measurements of biogenic hydrocarbon emissions from a Canadian deciduous forest
NASA Astrophysics Data System (ADS)
Fuentes, J. D.; Wang, D.; Den Hartog, G.; Neumann, H. H.; Dann, T. F.; Puckett, K. J.
The Biogenic Emission Inventory System (BEIS) used by the United States Environmental Protection Agency (Lamb et al., 1993, Atmospheric Environment 21, 1695-1705; Pierce and Waldruff, 1991, J. Air Waste Man. Ass. 41, 937-941) was tested for its ability to provide realistic microclimate descriptions within a deciduous forest in Canada. The microclimate description within plant canopies is required because isoprene emission rates from plants are strongly influenced by foliage temperature and by the photosynthetically active radiation impinging on leaves, while monoterpene emissions depend primarily on leaf temperature. Model microclimate results combined with plant emission rates and the local biomass distribution were used to derive isoprene and α-pinene emissions from the deciduous forest canopy. In addition, modelled isoprene emission estimates were compared to measured emission rates at the leaf level. The current model formulation provides realistic microclimatic conditions for the forest crown, where modelled and measured air and foliage temperatures are within 3°C. However, the model provides inadequate microclimate characterizations in the lower canopy, where estimated and measured foliage temperatures differ by as much as 10°C. This poor agreement may be partly due to improper model characterization of relative humidity and ambient temperature within the canopy. These uncertainties in estimated foliage temperature can lead to two-fold underestimates of hydrocarbon emissions. Moreover, the model overestimates hydrocarbon emissions during the early part of the growing season and underestimates emissions during the middle and latter parts of the growing season. These emission uncertainties arise because of the assumed constant biomass distribution of the forest and constant hydrocarbon emission rates throughout the season. The BEIS model, which is presently used in Canada to estimate inventories of hydrocarbon emissions from vegetation, underestimates emission rates by at least two-fold compared to emissions derived from field measurements. The isoprene emission algorithm proposed by Guenther et al. (1993), applied at the leaf level, provides relatively good agreement with measurements. Field measurements indicate that isoprene emissions change with leaf ontogeny and differ amongst tree species. Emission rates defined as a function of foliage development stage and plant species need to be introduced into the hydrocarbon emission algorithms. Extensive model evaluation and more hydrocarbon emission measurements from different plant species are required to fully assess the appropriateness of this emission calculation approach for Canadian forests.
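For reference, the leaf-level isoprene algorithm of Guenther et al. (1993) cited above has the following commonly used form: E = E_s · C_L · C_T, with a light factor C_L and a temperature factor C_T. The parameter values below are the standard published ones; the basal emission rate in the example is illustrative.

```python
import math

R = 8.314                        # J mol^-1 K^-1
ALPHA, C_L1 = 0.0027, 1.066      # light-response constants
C_T1, C_T2 = 95000.0, 230000.0   # J mol^-1
T_S, T_M = 303.0, 314.0          # K (standard and optimum temperatures)

def isoprene_emission(e_s, par, t_leaf):
    """e_s: basal rate at 30 C and PAR 1000; par: umol m^-2 s^-1; t_leaf: K."""
    c_l = ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)
    c_t = math.exp(C_T1 * (t_leaf - T_S) / (R * T_S * t_leaf)) / \
          (1.0 + math.exp(C_T2 * (t_leaf - T_M) / (R * T_S * t_leaf)))
    return e_s * c_l * c_t

print(isoprene_emission(e_s=24.0, par=1000.0, t_leaf=303.0))  # ~0.96x basal rate
```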
Recharge and groundwater models: An overview
Sanford, W.
2002-01-01
Recharge is a fundamental component of groundwater systems, and in groundwater-modeling exercises recharge is either measured and specified or estimated during model calibration. The most appropriate way to represent recharge in a groundwater model depends upon both physical factors and study objectives. Where the water table is close to the land surface, as in humid climates or regions with low topographic relief, a constant-head boundary condition is used. Conversely, where the water table is relatively deep, as in drier climates or regions with high relief, a specified-flux boundary condition is used. In most modeling applications, mixed-type conditions are more effective, or a combination of the different types can be used. The relative distribution of recharge can be estimated from water-level data only, but flux observations must be incorporated in order to estimate rates of recharge. Flux measurements are based on either Darcian velocities (e.g., stream base-flow) or seepage velocities (e.g., groundwater age). In order to estimate the effective porosity independently, both types of flux measurements must be available. Recharge is often estimated more efficiently when automated inverse techniques are used. Other important applications are the delineation of areas contributing recharge to wells and the estimation of paleorecharge rates using carbon-14.
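The porosity point above amounts to a simple identity: the Darcian flux q and the seepage velocity v are related by q = n_e · v, so having both flux types determines the effective porosity n_e. A worked illustration with hypothetical numbers:

```python
# Combining a Darcian flux (e.g., from stream base-flow analysis) with a
# seepage velocity (e.g., from a carbon-14 groundwater-age gradient) yields
# the effective porosity. Values are hypothetical.
q = 0.15   # Darcian flux (specific discharge), m/yr
v = 0.60   # seepage (linear) velocity, m/yr
n_e = q / v
print(n_e)  # effective porosity = 0.25
```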
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
NASA Astrophysics Data System (ADS)
Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.
2016-06-01
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
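A minimal sketch of the Benson group-additivity idea RMG uses for species thermochemistry: a molecule's property is the sum of contributions from its constituent groups. The group values below are approximate illustrative numbers, not RMG's database values.

```python
# Approximate Benson group values for enthalpy of formation (kcal/mol);
# illustrative only.
GROUP_DHF = {
    "C-(C)(H)3": -10.2,   # primary carbon bonded to one carbon
    "C-(C)2(H)2": -4.9,   # secondary carbon bonded to two carbons
}

def enthalpy_of_formation(group_counts):
    """Sum group contributions, one entry per (group, multiplicity)."""
    return sum(GROUP_DHF[g] * n for g, n in group_counts.items())

# Propane = 2 primary + 1 secondary carbon group; experimental DHf ~ -25 kcal/mol.
print(enthalpy_of_formation({"C-(C)(H)3": 2, "C-(C)2(H)2": 1}))  # -25.3
```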
Lifetime earnings for physicians across specialties.
Leigh, J Paul; Tancredi, Daniel; Jerant, Anthony; Romano, Patrick S; Kravitz, Richard L
2012-12-01
Earlier studies estimated annual income differences across specialties, but lifetime income may be more relevant given physicians' long-term commitments to specialties. Annual income and work hours data were collected from 6381 physicians in the nationally representative 2004-2005 Community Tracking Study. Data regarding years of residency were collected from AMA FREIDA. Present value models were constructed assuming 3% discount rates. Estimates were adjusted for demographic and market covariates. Sensitivity analyses included 4 alternative models involving work hours, retirement, exogenous variables, and a 1% discount rate. Estimates were generated for 4 broad specialty categories (Primary Care, Surgery, Internal Medicine and Pediatric Subspecialties, and Other), and for 41 specific specialties. The estimates of lifetime earnings for the broad categories of Surgery, Internal Medicine and Pediatric Subspecialties, and Other specialties were $1,587,722, $1,099,655, and $761,402 more than for Primary Care. For the 41 specific specialties, the top 3 (with family medicine as reference) were neurological surgery ($2,880,601), medical oncology ($2,772,665), and radiation oncology ($2,659,657). The estimates from models with varying rates of retirement and including only exogenous variables were similar to those in the preferred model. The 1% discount model generated estimates that were roughly 150% larger than those from the 3% model. There was considerable variation in lifetime earnings across physician specialties. After accounting for varying residency years and discounting future earnings, primary care specialties earned roughly $1-3 million less than other specialties. Earnings differences across specialties may undermine health reform efforts to control costs and ensure adequate numbers of primary care physicians.
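The present-value mechanics behind these comparisons can be sketched as follows; the salaries, career length, and residency durations are hypothetical, chosen only to show how longer residencies and discounting interact.

```python
def lifetime_pv(residency_years, resident_salary, attending_salary,
                career_years=40, discount=0.03):
    """Discounted lifetime earnings: residency salary first, then specialty salary."""
    pv = 0.0
    for t in range(career_years):
        salary = resident_salary if t < residency_years else attending_salary
        pv += salary / (1.0 + discount) ** t
    return pv

surgery = lifetime_pv(residency_years=6, resident_salary=50_000, attending_salary=350_000)
primary = lifetime_pv(residency_years=3, resident_salary=50_000, attending_salary=180_000)
print(round(surgery - primary))  # ~ $2.8M gap with these illustrative inputs
```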
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Wallin, Ragnar; Boyle, Richard D.
2013-01-01
The vestibulo-ocular reflex (VOR) is a well-known dual mode bifurcating system that consists of slow and fast modes associated with nystagmus and saccade, respectively. Estimation of continuous-time parameters of nystagmus and saccade models are known to be sensitive to estimation methodology, noise and sampling rate. The stable and accurate estimation of these parameters are critical for accurate disease modelling, clinical diagnosis, robotic control strategies, mission planning for space exploration and pilot safety, etc. This paper presents a novel indirect system identification method for the estimation of continuous-time parameters of VOR employing standardised least-squares with dual sampling rates in a sparse structure. This approach permits the stable and simultaneous estimation of both nystagmus and saccade data. The efficacy of this approach is demonstrated via simulation of a continuous-time model of VOR with typical parameters found in clinical studies and in the presence of output additive noise.
Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B
2018-01-01
Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations, however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process repeated until each of the 15 recordings were employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity within the presence of noise. Estimating inhaler inhalation flow profiles using audio based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
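A hedged sketch of the power-law calibration described above: fit flow = a·envelope^b on one calibration recording by log-log least squares, then apply the model to subsequent recordings. The signals below are synthetic placeholders rather than real inhalation audio.

```python
import numpy as np

def fit_power_law(envelope, flow):
    """Calibrate flow = a * envelope**b via linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(envelope), np.log(flow), 1)
    return np.exp(log_a), b

def estimate_flow(envelope, a, b):
    return a * envelope ** b

rng = np.random.default_rng(2)
env = rng.uniform(0.05, 1.0, 500)                                  # acoustic envelope
flow = 120.0 * env ** 0.55 * np.exp(0.05 * rng.normal(size=500))   # L/min, with noise
a, b = fit_power_law(env, flow)
print(a, b)                            # recovers ~120 and ~0.55
print(estimate_flow(env, a, b).max())  # peak inspiratory flow rate estimate
```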
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
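For concreteness, the ETAS conditional intensity referred to above (Ogata, 1988) can be evaluated as below; the parameter values are illustrative, not calibrated values for any of the named zones.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m_c):
    """lambda(t) = mu + sum_i K * exp(alpha*(M_i - m_c)) / (t - t_i + c)**p."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m_c)) / (dt + c) ** p)

times = np.array([0.5, 2.0, 2.1, 5.0])   # event times, days
mags = np.array([3.2, 4.5, 3.0, 3.8])
print(etas_intensity(6.0, times, mags,
                     mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.1, m_c=3.0))
```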
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.
The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
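The additivity calculation evaluated above reduces to a mass-fraction-weighted sum over grain-size fractions; a schematic example with hypothetical values:

```python
import numpy as np

# Hypothetical laboratory measurements per grain-size fraction.
mass_fraction = np.array([0.25, 0.40, 0.25, 0.10])   # fractions sum to 1
site_conc = np.array([12.0, 6.0, 1.5, 0.3])          # reactive sites per gram

# Additivity model: composite property = weighted sum of fraction properties.
composite_site_conc = np.dot(mass_fraction, site_conc)
print(composite_site_conc)  # predicted field-scale (composite) value
```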
Friesen, Melissa C.; Wheeler, David C.; Vermeulen, Roel; Locke, Sarah J.; Zaebst, Dennis D.; Koutros, Stella; Pronk, Anjoeka; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Malats, Nuria; Schwenn, Molly; Johnson, Alison; Armenti, Karla R.; Rothman, Nathanial; Stewart, Patricia A.; Kogevinas, Manolis; Silverman, Debra T.
2016-01-01
Objectives: To efficiently and reproducibly assess occupational diesel exhaust exposure in a Spanish case-control study, we examined the utility of applying decision rules that had been extracted from expert estimates and questionnaire response patterns using classification tree (CT) models from a similar US study. Methods: First, previously extracted CT decision rules were used to obtain initial ordinal (0–3) estimates of the probability, intensity, and frequency of occupational exposure to diesel exhaust for the 10 182 jobs reported in a Spanish case-control study of bladder cancer. Second, two experts reviewed the CT estimates for 350 jobs randomly selected from strata based on each CT rule’s agreement with the expert ratings in the original study [agreement rate, from 0 (no agreement) to 1 (perfect agreement)]. Their agreement with each other and with the CT estimates was calculated using weighted kappa (κw) and guided our choice of jobs for subsequent expert review. Third, an expert review comprised all jobs with lower confidence (low-to-moderate agreement rates or discordant assignments, n = 931) and a subset of jobs with a moderate to high CT probability rating and with moderately high agreement rates (n = 511). Logistic regression was used to examine the likelihood that an expert provided a different estimate than the CT estimate based on the CT rule agreement rates, the CT ordinal rating, and the availability of a module with diesel-related questions. Results: Agreement between estimates made by two experts and between estimates made by each of the experts and the CT estimates was very high for jobs with estimates that were determined by rules with high CT agreement rates (κw: 0.81–0.90). For jobs with estimates based on rules with lower agreement rates, moderate agreement was observed between the two experts (κw: 0.42–0.67) and poor-to-moderate agreement was observed between the experts and the CT estimates (κw: 0.09–0.57). In total, the expert review of 1442 jobs changed 156 probability estimates, 128 intensity estimates, and 614 frequency estimates. The expert was more likely to provide a different estimate when the CT rule agreement rate was <0.8, when the CT ordinal ratings were low to moderate, or when a module with diesel questions was available. Conclusions: Our reliability assessment provided important insight into where to prioritize additional expert review; as a result, only 14% of the jobs underwent expert review, substantially reducing the exposure assessment burden. Overall, we found that we could efficiently, reproducibly, and reliably apply CT decision rules from one study to assess exposure in another study. PMID:26732820
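The agreement statistic used above, weighted kappa, is available off the shelf; a minimal example with made-up ordinal (0-3) ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal exposure ratings for ten jobs from two sources.
expert = [0, 1, 3, 2, 2, 0, 3, 1, 2, 3]
ct_rule = [0, 1, 2, 2, 3, 0, 3, 0, 2, 3]
print(cohen_kappa_score(expert, ct_rule, weights="linear"))  # linearly weighted kappa
```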
NASA Astrophysics Data System (ADS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato
2017-12-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
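A hedged sketch of the monopole inversion idea described above: for a compact monopole, excess pressure at range r is approximately p(r, t) = q̇(t − r/c)/(2πr) for a source on the ground (4πr in free space), so the mass flow rate q(t) follows by time-integrating the pressure trace. The paper instead inverts with 3-D numerical Green's functions over topography; the pulse below is synthetic.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def mass_flow_from_pressure(p, t, r, half_space=True):
    """Integrate the pressure trace p(t) (Pa) at range r (m) to q(t) in kg/s."""
    geom = 2.0 * np.pi if half_space else 4.0 * np.pi  # ground source vs free space
    return geom * r * cumulative_trapezoid(p, t, initial=0.0)

t = np.linspace(0.0, 20.0, 2000)                 # s
p = 50.0 * np.exp(-((t - 5.0) / 1.5) ** 2)       # synthetic explosion pulse at 3 km
q = mass_flow_from_pressure(p, t, r=3000.0)
print(q.max(), np.trapz(q, t))                   # peak flow rate (kg/s), total mass (kg)
```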
Sawaya, Michael A; Stetz, Jeffrey B; Clevenger, Anthony P; Gibeau, Michael L; Kalinowski, Steven T
2012-01-01
We evaluated the potential of two noninvasive genetic sampling methods, hair traps and bear rub surveys, to estimate population abundance and trend of grizzly (Ursus arctos) and black bear (U. americanus) populations in Banff National Park, Alberta, Canada. Using Huggins closed population mark-recapture models, we obtained the first precise abundance estimates for grizzly bears (N= 73.5, 95% CI = 64-94 in 2006; N= 50.4, 95% CI = 49-59 in 2008) and black bears (N= 62.6, 95% CI = 51-89 in 2006; N= 81.8, 95% CI = 72-102 in 2008) in the Bow Valley. Hair traps had high detection rates for female grizzlies, and male and female black bears, but extremely low detection rates for male grizzlies. Conversely, bear rubs had high detection rates for male and female grizzlies, but low rates for black bears. We estimated realized population growth rates, lambda, for grizzly bear males (λ= 0.93, 95% CI = 0.74-1.17) and females (λ= 0.90, 95% CI = 0.67-1.20) using Pradel open population models with three years of bear rub data. Lambda estimates are supported by abundance estimates from combined hair trap/bear rub closed population models and are consistent with a system that is likely driven by high levels of human-caused mortality. Our results suggest that bear rub surveys would provide an efficient and powerful means to inventory and monitor grizzly bear populations in the Central Canadian Rocky Mountains.
Nishiura, Hiroshi; Chowell, Gerardo; Safan, Muntaser; Castillo-Chavez, Carlos
2010-01-07
In many parts of the world, the exponential growth rate of infections during the initial epidemic phase has been used to make statistical inferences on the reproduction number, R, a summary measure of the transmission potential for the novel influenza A (H1N1) 2009. The growth rate at the initial stage of the epidemic in Japan led to estimates for R in the range 2.0 to 2.6, capturing the intensity of the initial outbreak among school-age children in May 2009. An updated estimate of R that takes into account the epidemic data from 29 May to 14 July is provided. An age-structured renewal process is employed to capture the age-dependent transmission dynamics, jointly estimating the reproduction number, the age-dependent susceptibility and the relative contribution of imported cases to secondary transmission. Pitfalls in estimating epidemic growth rates are identified and used for scrutinizing and re-assessing the results of our earlier estimate of R. Maximum likelihood estimates of R using the data from 29 May to 14 July ranged from 1.21 to 1.35. The next-generation matrix, based on our age-structured model, predicts that only 17.5% of the population will experience infection by the end of the first pandemic wave. Our earlier estimate of R did not fully capture the population-wide epidemic in quantifying the next-generation matrix from the estimated growth rate during the initial stage of the pandemic in Japan. In order to quantify R from the growth rate of cases, it is essential that the selected model captures the underlying transmission dynamics embedded in the data. Exploring additional epidemiological information will be useful for assessing the temporal dynamics. Although the simple concept of R is more easily grasped by the general public than that of the next-generation matrix, the matrix incorporating detailed information (e.g., age-specificity) is essential for reducing the levels of uncertainty in predictions and for assisting public health policymaking. Model-based prediction and policymaking are best described by sharing fundamental notions of heterogeneous risks of infection and death with non-experts to avoid potential confusion and/or possible misuse of modelling results.
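Independent of the age-structured machinery above, the standard scalar relationship between the growth rate r and R (Wallinga & Lipsitch, 2007) is worth stating: for a gamma-distributed generation interval with shape k and scale θ, R = (1 + rθ)^k. A worked illustration with invented epidemiological values:

```python
def r_to_R(r, mean_gi, sd_gi):
    """R from exponential growth rate r, gamma-distributed generation interval."""
    k = (mean_gi / sd_gi) ** 2      # gamma shape
    theta = sd_gi ** 2 / mean_gi    # gamma scale
    return (1.0 + r * theta) ** k

# e.g., growth rate 0.1/day with a 3-day (SD 1.5 day) generation interval:
print(r_to_R(0.1, mean_gi=3.0, sd_gi=1.5))  # ~1.34
```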
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene
2016-04-30
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
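A generative sketch of the two-level structure described above (simulation only, not the authors' estimation code): latent yearly infections are Poisson, and each infected person is later diagnosed either through testing or through AIDS onset. All rates below are toy values, and the real model's priors and temporal dependence are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
years = 10
incidence = 500.0       # mean new infections per year (level one: Poisson)
test_rate = 0.15        # annual probability an AIDS-free case seeks a test
aids_rate = 0.08        # annual probability of progressing to AIDS

diagnosed_hiv = np.zeros(years, dtype=int)
diagnosed_aids = np.zeros(years, dtype=int)
for i in range(years):                       # infection year
    for _ in range(rng.poisson(incidence)):
        for j in range(i, years):            # follow-up years (level two)
            if rng.random() < aids_rate:     # AIDS onset leads to diagnosis
                diagnosed_aids[j] += 1
                break
            if rng.random() < test_rate:     # AIDS-free HIV diagnosis via testing
                diagnosed_hiv[j] += 1
                break
print(diagnosed_hiv, diagnosed_aids)         # annual diagnosis counts by route
```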
NASA Astrophysics Data System (ADS)
Glaze, L. S.; Baloga, S. M.; Garvin, J. B.; Quick, L. C.
2014-05-01
Lava flows and flow fields on Venus lack sufficient topographic data for any type of quantitative modeling to estimate eruption rates and durations. Such modeling can constrain rates of resurfacing and provide insights into magma plumbing systems.
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results, and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
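The matrix-selection method described above is straightforward to sketch: draw one observed annual matrix at random at each time step and average the one-step log growth rates. The two matrices below are toy values, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
matrices = [
    np.array([[0.0, 2.0], [0.3, 0.8]]),   # observed "good year" matrix
    np.array([[0.0, 0.5], [0.1, 0.6]]),   # observed "bad year" matrix
]

n = np.array([10.0, 10.0])
logs = []
for _ in range(10_000):
    A = matrices[rng.integers(len(matrices))]  # matrix-selection step
    n_next = A @ n
    logs.append(np.log(n_next.sum() / n.sum()))
    n = n_next / n_next.sum() * 20.0           # rescale to avoid over/underflow
lambda_s = np.exp(np.mean(logs))               # stochastic growth rate
print(lambda_s)
```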
Estimation and identification study for flexible vehicles
NASA Technical Reports Server (NTRS)
Jazwinski, A. H.; Englar, T. S., Jr.
1973-01-01
Techniques are studied for the estimation of rigid body and bending states and the identification of model parameters associated with the single-axis attitude dynamics of a flexible vehicle. This problem is highly nonlinear but completely observable provided sufficient attitude and attitude rate data is available and provided all system bending modes are excited in the observation interval. A sequential estimator tracks the system states in the presence of model parameter errors. A batch estimator identifies all model parameters with high accuracy.
Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I
2012-12-21
A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, as based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
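A hedged sketch of the inverse-calorimetry idea, assuming a lumped energy balance C·dT/dt = M(t) − h·(T − T_amb): given the heat capacity C and a heat-loss coefficient h, the metabolic heating rate M(t) is recoverable from temperature records alone. The paper's model is more detailed; everything below is illustrative.

```python
import numpy as np

def metabolic_rate(t, T_cone, T_amb, C, h):
    """Invert the lumped energy balance for M(t): C (J/K), h (W/K) give M in W."""
    dTdt = np.gradient(T_cone, t)
    return C * dTdt + h * (T_cone - T_amb)

t = np.linspace(0.0, 24 * 3600.0, 500)                           # one day, seconds
T_amb = 25.0 + 3.0 * np.sin(2 * np.pi * t / 86400.0)             # ambient, deg C
T_cone = T_amb + 12.0 * np.exp(-((t - 43200.0) / 9000.0) ** 2)   # midday peak
print(metabolic_rate(t, T_cone, T_amb, C=150.0, h=0.05).max())   # peak heating, W
```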
A Study of the Education Rate of Return for Rural Residents in Northwest Minority Areas
ERIC Educational Resources Information Center
Baicai, Sun
2015-01-01
Studying the rate of return for rural residents in minority areas can give a good explanation for the problems of children enrolling in school and dropping out. On the whole, the education rate of return for rural residents in northwest minority areas is low, the Mincer function estimate is 2%, and the Heckman model estimate is 2.49%. The rate of…
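The Mincer estimate quoted above comes from a standard wage regression; a minimal sketch (with hypothetical file and column names) in which the schooling coefficient is the rate of return:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: wage, schooling_years, experience.
df = pd.read_csv("rural_household_survey.csv")
df["log_wage"] = np.log(df["wage"])

# Mincer equation: log wage on schooling and quadratic experience.
fit = smf.ols("log_wage ~ schooling_years + experience + I(experience**2)",
              data=df).fit()
print(fit.params["schooling_years"])  # e.g., ~0.02 for a 2% return to schooling
```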
NASA Astrophysics Data System (ADS)
Anderson, M.; Bennett, R.; Matti, J.
2004-12-01
Existing geodetic, geomorphic, and geologic studies yield apparently conflicting estimates of fault displacement rates over the last 1.5 m.y. in the greater San Andreas fault (SAF) system of southern California. Do these differences reflect biases in one or more of the inference methods, or is fault displacement really temporally variable? Arguments have been presented for both cases. We investigate the plausibility of variable-rate fault models by combining basin deposit provenance, fault trenching, seismicity, gravity, and magnetic data sets from the San Bernardino basin. These data allow us to trace the path and broad timing of strike-slip fault displacements in buried basement rocks, which in turn allows us to test whether variable-rate fault models fit the displacement path and rate data through the basin. The San Bernardino basin lies between the San Jacinto fault (SJF) and the SAF. Isostatic gravity signatures show a 2 km deep graben centered directly over the modern strand of the SJF, whereas the basin is shallow and asymmetric next to the SAF. This observation indicates that the stresses necessary to create the basin have been centered on the SJF for most of the basin's history. Linear magnetic anomalies, used as geologic markers, are offset ~25 km across the northernmost strands of the SJF, which matches offset estimations south of the basin. These offset anomalies indicate that the SJF and SAF are discrete fault systems that do not directly interact south of the San Gabriel Mountains; therefore spatial slip variability combined with sparse sampling cannot explain the conflicting rate data. Furthermore, analyses of basin deposits indicate that movement on the SJF began between 1.3 and 1.5 Ma, yielding an overall average displacement rate in the range of 17 to 19 mm/yr, which is higher than some shorter-term estimates based on geodesy and geomorphology. Average displacement rates over this same time period for the San Bernardino strand of the SAF, on the other hand, are inferred to be low, consistent with some recent short-term estimates based on geodesy but in contrast with estimates based on geomorphology. We conclude that either published estimates for the short-term SJF displacement rate do not accurately reflect the full SJF rate, or the SJF rate has decreased over time, with implications for rate changes on other faults in the region. We explore the latter explanation with models for time-variable displacement rate for the greater SAF system that satisfy all existing data.
Schwindt, Adam R; Winkelman, Dana L
2016-09-01
Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals, including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L^-1 and produced stochastic population growth rates (λS) below 1 at the lowest concentration, indicating potential for population decline. Declines in λS compared to controls were evident in treatments that were lethal to adult males, despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λS was most sensitive to the survival of juveniles and to female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
NASA Astrophysics Data System (ADS)
Nanteza, J.; Thomas, B. F.; Mukwaya, P. I.
2017-12-01
The general lack of knowledge about current rates of water abstraction/use is a challenge to sustainable water resources management in many countries, including Uganda. Estimates of water abstraction/use rates over Uganda currently available from the FAO are not disaggregated according to source, making it difficult to understand how much is taken out of individual water stores and limiting effective management. Modelling efforts have disaggregated water use rates according to source (i.e. groundwater and surface water). However, across Sub-Saharan African countries these modelled use estimates are highly uncertain, given the scale limitations in applying water use data (i.e. point versus regional), which in turn affects model calibration/validation. In this study, we utilize data from the water supply atlas project over Uganda to estimate current rates of groundwater abstraction across the country based on location, well type and other relevant information. GIS techniques are employed to demarcate areas served by each water source. These areas are combined with past population distributions and the average daily water need per person to estimate water abstraction/use through time. The results indicate an increase in groundwater use and isolate regions prone to groundwater depletion, where improved management is required for sustainable groundwater use.
Redefinition and global estimation of basal ecosystem respiration rate
NASA Astrophysics Data System (ADS)
Yuan, Wenping; Luo, Yiqi; Li, Xianglan; Liu, Shuguang; Yu, Guirui; Zhou, Tao; Bahn, Michael; Black, Andy; Desai, Ankur R.; Cescatti, Alessandro; Marcolla, Barbara; Jacobs, Cor; Chen, Jiquan; Aurela, Mika; Bernhofer, Christian; Gielen, Bert; Bohrer, Gil; Cook, David R.; Dragoni, Danilo; Dunn, Allison L.; Gianelle, Damiano; Grünwald, Thomas; Ibrom, Andreas; Leclerc, Monique Y.; Lindroth, Anders; Liu, Heping; Marchesini, Luca Belelli; Montagnani, Leonardo; Pita, Gabriel; Rodeghiero, Mirco; Rodrigues, Abel; Starr, Gregory; Stoy, Paul C.
2011-12-01
Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR, largely due to the lack of a functional description for BR. In this study, we redefined BR to be the ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data from 79 research sites located at latitudes ranging from ~3°S to ~70°N. Results showed that the mean annual ER rate closely matches the ER rate at the mean annual temperature. Incorporation of site-specific BR into a global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate Resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr^-1, with the highest respiration rate over tropical forests and the lowest value in dry and high-latitude areas.
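A sketch of how the redefined BR slots into an empirical ER model, assuming a Q10 temperature response centered on the site's mean annual temperature (MAT), so that ER equals BR at T = MAT; the functional form and numbers are illustrative, not the paper's fitted model.

```python
import numpy as np

def ecosystem_respiration(temp_c, br, mat_c, q10=2.0):
    """ER at temperature temp_c, with BR defined at the mean annual temperature."""
    return br * q10 ** ((temp_c - mat_c) / 10.0)

temps = np.array([-5.0, 5.0, 15.0, 25.0])
print(ecosystem_respiration(temps, br=2.0, mat_c=10.0))  # equals BR (2.0) at 10 C
```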
The Effects of Population Density on Juvenile Growth Rate in White-Tailed Deer
NASA Astrophysics Data System (ADS)
Barr, Brannon; Wolverton, Steve
2014-10-01
Animal body size is driven by habitat quality, food availability, and nutrition. Adult size can relate to birth weight, to length of the ontogenetic growth period, and/or to the rate of growth. Data requirements are high for studying these growth mechanisms, but large datasets exist for some game species. In North America, large harvest datasets exist for white-tailed deer (Odocoileus virginianus), but such data are collected under a variety of conditions and are generally dismissed for ecological research beyond local population and habitat management. We contend that such data are useful for studying the ecology of white-tailed deer growth and body size when analyzed at ordinal scale. In this paper, we test the response of growth rate to food availability by fitting a logarithmic equation that estimates growth rate only to harvest data from Fort Hood, Texas, and track changes in growth rate over time. Results of this ordinal scale model are compared to previously published models that include additional parameters, such as birth weight and adult weight. It is shown that body size responds to food availability by variation in growth rate. Models that estimate multiple parameters may not work with harvest data because they are prone to error, which renders estimates from complex models too variable to detect interannual changes in growth rate that this ordinal scale model captures. This model can be applied to harvest data, from which inferences about factors that influence animal growth and body size (e.g., habitat quality and nutritional availability) can be drawn.
Choi, Chang-Yong; Lee, Ki-Sup; Poyarkov, Nikolay D.; Park, Jin-Young; Lee, Hansoo; Takekawa, John Y.; Smith, Lacy M.; Ely, Craig R.; Wang, Xin; Cao, Lei; Fox, Anthony D.; Goroshko, Oleg; Batbayar, Nyambayar; Prosser, Diann J.; Xiao, Xiangming
2016-01-01
Waterbird survival rates are a key component of demographic modeling used for effective conservation of long-lived threatened species. The Swan Goose (Anser cygnoides) is globally threatened and the most vulnerable goose species endemic to East Asia due to its small and rapidly declining population. To address a current knowledge gap in demographic parameters of the Swan Goose, available datasets were compiled from neck-collar resighting and telemetry studies, and two different models were used to estimate their survival rates. Results of a mark-resighting model using 15 years of neck-collar data (2001–2015) provided age-dependent survival rates and season-dependent encounter rates with a constant neck-collar retention rate. Annual survival rate was 0.638 (95% CI: 0.378–0.803) for adults and 0.122 (95% CI: 0.028–0.286) for first-year juveniles. Known-fate models were applied to the single season of telemetry data (autumn 2014) and estimated a mean annual survival rate of 0.408 (95% CI: 0.152–0.670), with higher but non-significant survival for adults (0.477) vs. juveniles (0.306). Our findings indicate that Swan Goose survival rates are comparable to the lowest rates reported for European or North American goose species. Poor survival may be a key demographic parameter contributing to their declining trend. Quantitative threat assessments and associated conservation measures, such as restricting hunting, may be a key step to mitigate their low survival rates and maintain or enhance their population.
Modeling emission rates and exposures from outdoor cooking
NASA Astrophysics Data System (ADS)
Edwards, Rufus; Princevac, Marko; Weltman, Robert; Ghasemian, Masoud; Arora, Narendra K.; Bond, Tami
2017-09-01
Approximately 3 billion individuals rely on solid fuels for cooking globally. For a large portion of these - an estimated 533 million - cooking is outdoors, where emissions from cookstoves pose a health risk to both cooks and other household and village members. Models that estimate emission rates from stoves in indoor environments that would meet WHO air quality guidelines (AQG) explicitly do not account for outdoor cooking. The objectives of this paper are to link health-based exposure guidelines with emissions from outdoor cookstoves, using a Monte Carlo simulation of cooking times from Haryana, India, coupled with inverse Gaussian dispersion models. Mean emission rates for outdoor cooking that would result in incremental increases in personal exposure equivalent to the WHO AQG during a 24-h period were 126 ± 13 mg/min for cooking while squatting and 99 ± 10 mg/min while standing. Emission rates modeled for outdoor cooking are substantially higher than emission rates for indoor cooking to meet AQG, because the models estimate the impact of emissions on personal exposure concentrations rather than microenvironment concentrations, and because the smoke disperses more readily outdoors compared to indoor environments. As a result, many more stoves, including the best performing solid-fuel biomass stoves, would meet AQG when cooking outdoors, but may also result in substantial localized neighborhood pollution depending on housing density. Inclusion of the neighborhood impact of pollution should be addressed more formally both in guidelines on emissions rates from stoves that would be protective of health, and also in wider health impact evaluation efforts and burden of disease estimates. Emissions guidelines should better represent the different contexts in which stoves are being used, especially because in these contexts the best performing solid fuel stoves have the potential to provide significant benefits.
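The inversion at the heart of this approach is linear: for a fixed dispersion geometry, the downwind concentration increment scales with the emission rate, so the guideline-compliant rate is the target increment divided by the modeled increment per unit emission. A toy Gaussian-plume sketch (the sigma-growth coefficients, distances, and guideline increment are illustrative assumptions, not the paper's dispersion model):

```python
# Toy ground-level Gaussian plume, inverted for the emission rate whose
# downwind exposure increment equals a target guideline value.
import numpy as np

def centerline_concentration(q_mg_per_min, u_m_s, x_m, a_y=0.08, a_z=0.06):
    """Ground-level centerline concentration (mg/m3) from a ground-level
    point source with full ground reflection; crude linear sigma growth."""
    q = q_mg_per_min / 60.0                 # mg/s
    sigma_y, sigma_z = a_y * x_m, a_z * x_m
    return q / (np.pi * u_m_s * sigma_y * sigma_z)

def guideline_emission_rate(c_target_mg_m3, u_m_s, x_m):
    """The plume model is linear in Q, so invert by simple scaling."""
    return c_target_mg_m3 / centerline_concentration(1.0, u_m_s, x_m)

# e.g. a cook 5 m downwind, 1 m/s wind, allowable 24-h PM2.5 increment of
# 0.035 mg/m3 (a WHO-AQG-like figure, used here only for illustration)
print(f"{guideline_emission_rate(0.035, 1.0, 5.0):.2f} mg/min")
```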
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
Incidence and admission rates for severe malaria and their impact on mortality in Africa.
Camponovo, Flavia; Bever, Caitlin A; Galactionova, Katya; Smith, Thomas; Penny, Melissa A
2017-01-03
Appropriate treatment of life-threatening Plasmodium falciparum malaria requires in-patient care. Although the proportion of severe cases accessing in-patient care in endemic settings strongly affects overall case fatality rates and thus disease burden, this proportion is generally unknown. At present, estimates of malaria mortality are driven by prevalence or overall clinical incidence data, ignoring differences in case fatality resulting from variations in access. Consequently, the overall impact of preventive interventions on disease burden has not been validly compared with that of improvements in access to case management or its quality. Using a simulation-based approach, severe malaria admission rates and the subsequent severe malaria disease and mortality rates for 41 malaria endemic countries of sub-Saharan Africa were estimated. Country differences in transmission and health care settings were captured by use of high spatial resolution data on demographics and falciparum malaria prevalence, as well as national level estimates of effective coverage of treatment for uncomplicated malaria. Reported and modelled estimates of cases, admissions and malaria deaths from the World Malaria Report, along with predicted burden from simulations, were combined to provide revised estimates of access to in-patient care and case fatality rates. There is substantial variation between countries in in-patient admission rates and estimated levels of case fatality rates. It was found that for many African countries, most patients admitted for in-patient treatment would not meet strict criteria for severe disease and that for some countries only a small proportion of the total severe cases are admitted. Estimates are highly sensitive to the assumed community case fatality rates. Re-estimation of national level malaria mortality rates suggests that there is substantial burden attributable to inefficient in-patient access and treatment of severe disease. The model-based methods proposed here offer a standardized approach to estimate the numbers of severe malaria cases and deaths based on national level reporting, allowing for coverage of both curative and preventive interventions. This makes possible direct comparisons of the potential benefits of scaling-up either category of interventions. The profound uncertainties around these estimates highlight the need for better data.
A semiparametric separation curve approach for comparing correlated ROC data from multiple markers
Tang, Liansheng Larry; Zhou, Xiao-Hua
2012-01-01
In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples.
Is the Rational Addiction model inherently impossible to estimate?
Laporte, Audrey; Dass, Adrian Rohit; Ferguson, Brian S
2017-07-01
The Rational Addiction (RA) model is increasingly estimated using individual-level panel data, with mixed results, particularly with regard to the implied rate of time discount. This paper suggests that the odd values of the rate of discount frequently found in the literature may in fact be a consequence of the saddle-point dynamics associated with individual-level inter-temporal optimization problems. We report the results of Monte Carlo experiments estimating RA-type difference equations which suggest that the presence of both a stable and an unstable root in the dynamic process may create serious problems for the estimation of RA equations. Copyright © 2016 Elsevier B.V. All rights reserved.
Covariates of the Rating Process in Hierarchical Models for Multiple Ratings of Test Items
ERIC Educational Resources Information Center
Mariano, Louis T.; Junker, Brian W.
2007-01-01
When constructed response test items are scored by more than one rater, the repeated ratings allow for the consideration of individual rater bias and variability in estimating student proficiency. Several hierarchical models based on item response theory have been introduced to model such effects. In this article, the authors demonstrate how these…
Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0
The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.
Estimating site occupancy rates when detection probabilities are less than one
MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.
2002-01-01
Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1. Simulation results indicate that the method provides good estimates of occupancy rates, generally unbiased when detection probabilities are moderate (>0.3). We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
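The likelihood behind this style of estimator has a simple zero-inflated binomial form: a site is occupied with probability psi, and an occupied site yields a detection on each of K surveys with probability p, so all-zero detection histories can arise either from absence or from missed detections. A minimal sketch with toy data (no covariates or missing visits, which the full model supports):

```python
# Basic MacKenzie-style occupancy likelihood: psi = occupancy probability,
# p = per-survey detection probability. Toy data, no covariates.
import numpy as np
from scipy.optimize import minimize

def occupancy_nll(params, y, k):
    psi = 1.0 / (1.0 + np.exp(-params[0]))   # logit-scale parameters
    p = 1.0 / (1.0 + np.exp(-params[1]))
    lik = psi * p**y * (1 - p)**(k - y)      # occupied-site contribution
    lik = lik + (1 - psi) * (y == 0)         # never-detected sites may be empty
    return -np.sum(np.log(lik))

y = np.array([0, 2, 1, 0, 3, 0, 1, 2, 0, 0])  # detections per site
k = 4                                          # surveys per site
fit = minimize(occupancy_nll, x0=[0.0, 0.0], args=(y, k), method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"occupancy ~ {psi_hat:.2f}, detection ~ {p_hat:.2f}")
```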
Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc
2014-01-01
The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing...
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model, utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure.
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence.
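The error-rate framework lends itself to a compact probabilistic reading: if CMC counts for known matches and known non-matches are each described by a probability mass function, cumulative false positive and false negative rates fall out of the tails on either side of the declared-match threshold. A sketch using plain binomial PMFs (the binomial form, cell count, and per-cell probabilities are illustrative assumptions; the paper develops its own fitted models):

```python
# Sketch: model CMC counts for matching and non-matching pairs with
# probability mass functions, then read error rates off the tails around
# the declared-match threshold. All parameter values are illustrative.
from scipy.stats import binom

def cmc_error_rates(n_cells, p_match, p_nonmatch, c_threshold):
    """Declare a match when the CMC count >= c_threshold."""
    fpr = binom.sf(c_threshold - 1, n_cells, p_nonmatch)  # non-match clears it
    fnr = binom.cdf(c_threshold - 1, n_cells, p_match)    # true match falls short
    return fpr, fnr

fpr, fnr = cmc_error_rates(n_cells=30, p_match=0.8, p_nonmatch=0.02, c_threshold=6)
print(f"false positive ~ {fpr:.2e}, false negative ~ {fnr:.2e}")
```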
Evaluation of coral reef carbonate production models at a global scale
NASA Astrophysics Data System (ADS)
Jones, N. S.; Ridgwell, A.; Hendy, E. J.
2014-09-01
Calcification by coral reef communities is estimated to account for half of all carbonate produced in shallow water environments and more than 25% of the total carbonate buried in marine sediments globally. Production of calcium carbonate by coral reefs is therefore an important component of the global carbon cycle. It is also threatened by future global warming and other global change pressures. Numerical models of reefal carbonate production are essential for understanding how carbonate deposition responds to environmental conditions including future atmospheric CO2 concentrations, but these models must first be evaluated in terms of their skill in recreating present day calcification rates. Here we evaluate four published model descriptions of reef carbonate production in terms of their predictive power, at both local and global scales, by comparing carbonate budget outputs with independent estimates. We also compile available global data on reef calcification to produce an observation-based dataset for the model evaluation. The four calcification models are based on functions sensitive to combinations of light availability, aragonite saturation (Ωa) and temperature and were implemented within a specifically-developed global framework, the Global Reef Accretion Model (GRAM). None of the four models correlated with independent rate estimates of whole reef calcification. The temperature-only based approach was the only model output to significantly correlate with coral-calcification rate observations. The absence of any predictive power for whole reef systems, even when consistent at the scale of individual corals, points to the overriding importance of coral cover estimates in the calculations. Our work highlights the need for an ecosystem modeling approach, accounting for population dynamics in terms of mortality and recruitment and hence coral cover, in estimating global reef carbonate budgets. In addition, validation of reef carbonate budgets is severely hampered by limited and inconsistent methodology in reef-scale observations.
An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients.
Bratzler, Dale W; Normand, Sharon-Lise T; Wang, Yun; O'Donnell, Walter J; Metersky, Mark; Han, Lein F; Rapp, Michael T; Krumholz, Harlan M
2011-04-12
Outcome measures for patients hospitalized with pneumonia may complement process measures in characterizing quality of care. We sought to develop and validate a hierarchical regression model using Medicare claims data that produces hospital-level, risk-standardized 30-day mortality rates useful for public reporting for patients hospitalized with pneumonia. Retrospective study of fee-for-service Medicare beneficiaries age 66 years and older with a principal discharge diagnosis of pneumonia. Candidate risk-adjustment variables included patient demographics, administrative diagnosis codes from the index hospitalization, and all inpatient and outpatient encounters from the year before admission. The model derivation cohort included 224,608 pneumonia cases admitted to 4,664 hospitals in 2000, and validation cohorts included cases from each of years 1998-2003. We compared model-derived state-level standardized mortality estimates with medical record-derived state-level standardized mortality estimates using data from the Medicare National Pneumonia Project on 50,858 patients hospitalized from 1998-2001. The final model included 31 variables and had an area under the Receiver Operating Characteristic curve of 0.72. In each administrative claims validation cohort, model fit was similar to the derivation cohort. The distribution of standardized mortality rates among hospitals ranged from 13.0% to 23.7%, with 25th, 50th, and 75th percentiles of 16.5%, 17.4%, and 18.3%, respectively. Comparing model-derived risk-standardized state mortality rates with medical record-derived estimates, the correlation coefficient was 0.86 (Standard Error = 0.032). An administrative claims-based model for profiling hospitals for pneumonia mortality performs consistently over several years and produces hospital estimates close to those using a medical record model.
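Profiling models of this kind typically report a risk-standardized mortality rate (RSMR) per hospital: the ratio of predicted deaths (including the hospital-specific effect) to expected deaths (the same patients under an "average" hospital), multiplied by the overall rate. A minimal sketch with toy patient-level predictions (the probabilities, and the 17.4% overall rate borrowed from the median above, are purely illustrative):

```python
# Sketch: hospital risk-standardized mortality rate as the ratio of
# predicted to expected deaths times the overall rate. The patient-level
# probabilities are toy stand-ins for fitted hierarchical-model output.
import numpy as np

def rsmr(p_predicted, p_expected, overall_rate):
    return p_predicted.sum() / p_expected.sum() * overall_rate

p_pred = np.array([0.10, 0.22, 0.15, 0.30])  # with the hospital-specific effect
p_exp  = np.array([0.12, 0.20, 0.18, 0.25])  # same patients, average hospital
print(f"RSMR = {rsmr(p_pred, p_exp, 0.174):.3f}")
```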
Levy, David T.; Hyland, Andrew; Higbee, Cheryl; Remer, Lillian; Compton, Christine
2009-01-01
Tobacco control policies are examined utilizing a simulation model for California, the state with the longest running comprehensive program. We assess the impact of the California Tobacco Control Program (CTCP) and surrounding price changes on smoking prevalence and smoking-attributable deaths. Modeling begins in 1988 and progresses chronologically to 2004, and considers four types of policies (taxes, mass media, clean air laws, and youth access policies) independently and as a package. The model is validated against existing smoking prevalence estimates. The differences in trends between predicted smoking rates from the model and other commonly used estimates of smoking prevalence for the overall period were generally small. The model also predicted some important changes in trend, which occurred with changes in policy. The California SimSmoke model estimates that tobacco control policies reduced smoking rates in California by an additional 25% relative to the level they would have reached had policies been kept at their 1988 level. By 2004, the model attributes over 60% of the reduction to price increases, over 25% to media policies, 10% to clean air laws, and only a small percentage to youth access policies. The model estimates that over 5,000 lives will be saved in the year 2010 alone as a result of the CTCP and industry-initiated price increases, and that over 50,000 lives were saved over the period 1988-2010. Tobacco control policies implemented as comprehensive tobacco control strategies have significantly impacted smoking rates. Further tax increases should lead to additional lives saved, and additional policies may result in further impacts on smoking rates, and consequently on smoking-attributable health outcomes in the population.
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the 'top-down' approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s⁻¹, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
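In the simplest single-source form, this inversion is a weighted least-squares problem: enhancements z equal footprint sensitivities H times the (scalar) leak rate plus noise whose variance grows with wind-speed error. A toy Bayesian sketch (the error-scaling rule, prior, sensitivities, and all numbers are assumptions for illustration, not the study's configuration):

```python
# Single-source Bayesian inversion sketch with the model-data mismatch
# variance scaled by observed wind-speed error. All values illustrative.
import numpy as np

def invert_leak_rate(H, z, prior, prior_var, wind_err):
    r_diag = (0.1 + wind_err) ** 2                 # hypothetical error scaling
    hr = H / r_diag                                # H^T R^-1 (H is a column)
    post_var = 1.0 / (hr @ H + 1.0 / prior_var)
    post_mean = post_var * (hr @ z + prior / prior_var)
    return post_mean, post_var

rng = np.random.default_rng(1)
true_rate = 2000.0                                 # kg/h, illustrative only
H = rng.uniform(0.001, 0.005, 50)                  # ppm enhancement per kg/h
wind_err = rng.uniform(0.0, 2.0, 50)               # m/s error along the track
z = H * true_rate + rng.normal(0, 0.1 + wind_err)  # noisier obs where wind is wrong
mean, var = invert_leak_rate(H, z, prior=1000.0, prior_var=1000.0**2,
                             wind_err=wind_err)
print(f"posterior leak rate ~ {mean:.0f} +/- {np.sqrt(var):.0f} kg/h")
```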
NASA Technical Reports Server (NTRS)
Neigh, Christopher S. R.; Masek, Jeffrey G.; Bourget, Paul; Rishmawi, Khaldoun; Zhao, Feng; Huang, Chengquan; Cook, Bruce D.; Nelson, Ross
2015-01-01
Forests of the Contiguous United States (CONUS) have been found to be a large contributor to the global atmospheric carbon sink. The magnitude and nature of this sink is still uncertain, and recent studies have sought to define the dynamics that control its strength and longevity. The Landsat series of satellites has been a vital resource to understand the long-term changes in land cover that can impact ecosystem function and terrestrial carbon stock. We combine annual Landsat forest disturbance history from 1985 to 2011 with single-date IKONOS stereo imagery to estimate the change in young forest canopy height and above ground live dry biomass accumulation for selected sites in the CONUS. Our approach assumes an approximately linear growth rate following clearing over short intervals and does not estimate the distinct non-linear growth rate over longer intervals. We produced canopy height models by differencing digital surface models estimated from IKONOS stereo pairs with national elevation data (NED). Correlations between height and biomass were established independently using airborne LiDAR, and then applied to the IKONOS-estimated canopy height models. Graphing current biomass against time since disturbance provided biomass accumulation rates. For 20 study sites distributed across five regions of the CONUS, 19 showed statistically significant recovery trends (p < 0.001) with canopy growth from 0.26 m yr⁻¹ to 0.73 m yr⁻¹. Above ground live dry biomass (AGB) density accumulation ranged from 1.31 t/ha yr⁻¹ to 12.47 t/ha yr⁻¹. Mean forest AGB accumulation was 6.31 t/ha yr⁻¹ among all sites with significant growth trends. We evaluated the accuracy of our estimates by comparing to field-estimated site index curves of growth, airborne LiDAR data, and independent model predictions of C accumulation. Growth estimates found with this approach are consistent with site index curves, and total biomass estimates fall within the range of field estimates. This is a viable approach to estimate forest biomass accumulation in regions with clear-cut harvest disturbances.
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Sidorov, V. G.; Zelenkov, P. V.; Khoroshko, A. Y.; Lelekov, A. T.
2015-10-01
To optimize the parameters of a beta-electrical converter of isotope Nickel-63 radiation, a model of the distribution of the EHP (electron-hole pair) generation rate in the semiconductor must be derived. Using Monte Carlo methods in the GEANT4 system with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with a Gauss function. The maximal efficient isotope layer thickness and the maximal energy efficiency of EHP generation were estimated.
Porto, Paolo; Walling, Des E
2012-10-01
Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from (137)Cs and (210)Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. Copyright © 2012 Elsevier Ltd. All rights reserved.
Per Capita Alcohol Consumption and Suicide Rates in the U.S., 1950-2002
ERIC Educational Resources Information Center
Landberg, Jonas
2009-01-01
The aim of this paper was to estimate how suicide rates in the United States are affected by changes in per capita consumption during the postwar period. The analysis included Annual suicide rates and per capita alcohol consumption data (total and beverage specific) for the period 1950-2002. Gender- and age-specific models were estimated using the…
Binbing Yu; Tiwari, Ram C; Feuer, Eric J
2011-06-01
Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.
Cataife, Guido
2014-03-01
We propose the use of previously developed small area estimation techniques to monitor obesity and dietary habits in developing countries and apply the model to Rio de Janeiro city. We estimate obesity prevalence rates at the census tract level through a combinatorial optimization spatial microsimulation model that matches body mass index and socio-demographic data in Brazil's 2008-9 family expenditure survey with Census 2010 socio-demographic data. Obesity ranges from 8% to 25% in most areas and affects the poor almost as much as the rich. Male and female obesity rates are uncorrelated at the small area level. The model is an effective tool to understand the complexity of the problem and to aid in policy design. © 2013 Published by Elsevier Ltd.
Bayesian estimation of dynamic matching function for U-V analysis in Japan
NASA Astrophysics Data System (ADS)
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
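A generic sketch of the filtering step: the regression coefficients (matching efficiency and the two elasticities) follow random walks, acting as a smoothness prior, and the Kalman filter updates them from each observation. This is the standard time-varying-parameter recursion rather than the authors' exact specification, and all data below are synthetic:

```python
# Time-varying-parameter regression via the Kalman filter: the state is
# the coefficient vector following random walks; y_t = X_t' beta_t + noise.
import numpy as np

def kalman_tvp(y, X, q=1e-4, r=1e-2):
    n, k = X.shape
    beta, P = np.zeros(k), np.eye(k)
    Q = np.eye(k) * q
    path = np.zeros((n, k))
    for t in range(n):
        P = P + Q                                # predict (random-walk state)
        x = X[t]
        s = x @ P @ x + r                        # innovation variance
        gain = P @ x / s
        beta = beta + gain * (y[t] - x @ beta)   # update with observation t
        P = P - np.outer(gain, x @ P)
        path[t] = beta
    return path

# y: log new hires; X: [1, log unemployment, log vacancies] (synthetic)
rng = np.random.default_rng(2)
n = 120
X = np.column_stack([np.ones(n), rng.normal(0, 1, n), rng.normal(0, 1, n)])
true = np.column_stack([np.linspace(0.5, 0.8, n), np.full(n, 0.4), np.full(n, 0.6)])
y = (X * true).sum(axis=1) + rng.normal(0, 0.1, n)
print(kalman_tvp(y, X)[-1])  # filtered efficiency and elasticities at the end
```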
Rotella, J.J.; Link, W.A.; Nichols, J.D.; Hadley, G.L.; Garrott, R.A.; Proffitt, K.M.
2009-01-01
Much of the existing literature that evaluates the roles of density-dependent and density-independent factors on population dynamics has been called into question in recent years because measurement errors were not properly dealt with in analyses. Using state-space models to account for measurement errors, we evaluated a set of competing models for a 22-year time series of mark-resight estimates of abundance for a breeding population of female Weddell seals (Leptonychotes weddellii) studied in Erebus Bay, Antarctica. We tested for evidence of direct density dependence in growth rates and evaluated whether equilibrium population size was related to seasonal sea-ice extent and the Southern Oscillation Index (SOI). We found strong evidence of negative density dependence in annual growth rates for a population whose estimated size ranged from 438 to 623 females during the study. Based on Bayes factors, a density-dependence-only model was favored over models that also included environmental covariates. According to the favored model, the population had a stationary distribution with a mean of 497 females (SD = 60.5), an expected growth rate of 1.10 (95% credible interval 1.08-1.15) when population size was 441 females, and a rate of 0.90 (95% credible interval 0.87-0.93) for a population of 553 females. A model including effects of SOI did receive some support and indicated a positive relationship between SOI and population size. However, effects of SOI were not large, and including the effect did not greatly reduce our estimate of process variation. We speculate that direct density dependence occurred because rates of adult survival, breeding, and temporary emigration were affected by limitations on per capita food resources and space for parturition and pup-rearing. To improve understanding of the relative roles of various demographic components and their associated vital rates to population growth rate, mark-recapture methods can be applied that incorporate both environmental covariates and the seal abundance estimates that were developed here. An improved understanding of why vital rates change with changing population abundance will only come as we develop a better understanding of the processes affecting marine food resources in the Southern Ocean.
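The measurement-error point is easy to demonstrate: regressing observed log growth on observed log abundance overstates density dependence when abundance estimates are noisy, which is exactly what the state-space formulation corrects. A toy sketch using a Gompertz-type (log-linear) process, a common choice for such analyses though not necessarily the authors' model; all parameters are illustrative:

```python
# Gompertz-type state-space toy:
#   process:      log N_t = a + b * log N_{t-1} + process noise  (b < 1)
#   observation:  log Y_t = log N_t + measurement error
import numpy as np

rng = np.random.default_rng(3)
a, b = 1.25, 0.80                      # b < 1 implies density dependence
n = np.zeros(22)
n[0] = np.log(500)
for t in range(1, 22):
    n[t] = a + b * n[t - 1] + rng.normal(0, 0.05)
y = n + rng.normal(0, 0.08, 22)        # mark-resight estimates with error

# Naive regression of log growth on log abundance ignores measurement
# error and exaggerates the density-dependence signal.
growth = np.diff(y)
slope = np.polyfit(y[:-1], growth, 1)[0]
print(f"naive slope: {slope:.2f} vs true b - 1 = {b - 1:.2f}")
```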
Lidar method to estimate emission rates from extended sources
USDA-ARS?s Scientific Manuscript database
Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...
ESTIMATE OF METHANE EMISSIONS FROM U.S. LANDFILLS
The report describes the development of a statistical regression model used for estimating methane (CH4) emissions, which relates landfill gas (LFG) flow rates to waste-in-place data from 105 landfills with LFG recovery projects. (NOTE: CH4 flow rates from landfills with LFG reco...
USDA-ARS?s Scientific Manuscript database
Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...
Rates of microbial metabolism in deep coastal plain aquifers
Chapelle, F.H.; Lovley, D.R.
1990-01-01
Rates of microbial metabolism in deep anaerobic aquifers of the Atlantic coastal plain of South Carolina were investigated by both microbiological and geochemical techniques. Rates of [2-¹⁴C]acetate and [U-¹⁴C]glucose oxidation as well as geochemical evidence indicated that metabolic rates were faster in the sandy sediments composing the aquifers than in the clayey sediments of the confining layers. In the sandy aquifer sediments, estimates of the rates of CO2 production (millimoles of CO2 per liter per year) based on the oxidation of [2-¹⁴C]acetate were 9.4 × 10⁻³ to 2.4 × 10⁻¹ for the Black Creek aquifer, 1.1 × 10⁻² for the Middendorf aquifer, and <7 × 10⁻⁵ for the Cape Fear aquifer. These estimates were at least 2 orders of magnitude lower than previously published estimates that were based on the accumulation of CO2 in laboratory incubations of similar deep subsurface sediments. In contrast, geochemical modeling of groundwater chemistry changes along aquifer flowpaths gave rate estimates that ranged from 10⁻⁴ to 10⁻⁶ mmol of CO2 per liter per year. The age of these sediments (ca. 80 million years) and their organic carbon content suggest that average rates of CO2 production could have been no more than 10⁻⁴ mmol per liter per year. Thus, laboratory incubations may greatly overestimate the in situ rates of microbial metabolism in deep subsurface environments. This has important implications for the use of laboratory incubations in attempts to estimate biorestoration capacities of deep aquifers. The rate estimates from geochemical modeling indicate that deep aquifers are among the most oligotrophic aquatic environments in which there is ongoing microbial metabolism.
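The flowpath calculation behind the geochemical estimates is a one-line mass balance: the increase in dissolved CO2 between two points divided by the groundwater travel time between them. A worked sketch with illustrative numbers in the 10⁻⁴ mmol/L/yr range reported above:

```python
# Flowpath mass-balance sketch: CO2 production rate from the change in
# dissolved inorganic carbon between two wells divided by the groundwater
# travel time between them. Numbers are illustrative, not the study's data.
def co2_production_rate(dic_up_mmol_l, dic_down_mmol_l, age_up_yr, age_down_yr):
    return (dic_down_mmol_l - dic_up_mmol_l) / (age_down_yr - age_up_yr)

# e.g. DIC rising 1.5 mmol/L over ~15,000 years of travel time
print(f"{co2_production_rate(2.0, 3.5, 5_000, 20_000):.1e} mmol CO2/L/yr")
```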
Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.
Ryan, Andrew M; Burgess, James F; Dimick, Justin B
2015-08-01
To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
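The mechanics of the specification comparison are easy to reproduce in miniature: when selection into treatment correlates with pre-intervention levels and performance regresses toward the mean, an unmatched DID estimate absorbs that regression, while matching comparison units on pre-period levels largely removes it. A toy simulation (all parameters are illustrative; nearest-neighbor matching on pre-period levels stands in for the paper's propensity-score matching):

```python
# Toy DID: treatment correlates with pre-period levels and outcomes regress
# toward the mean, so unmatched DID is biased; matched DID recovers the
# true effect (0.02). All parameters are illustrative.
import numpy as np

def did(pre_t, post_t, pre_c, post_c):
    return (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())

rng = np.random.default_rng(4)
pre_t = rng.normal(0.70, 0.05, 50)    # treated: low pre-period performers
pre_c = rng.normal(0.80, 0.10, 500)   # comparison pool
post_t = pre_t + 0.3 * (0.80 - pre_t) + 0.02 + rng.normal(0, 0.01, 50)
post_c = pre_c + 0.3 * (0.80 - pre_c) + rng.normal(0, 0.01, 500)

# match each treated unit to its nearest comparison unit on pre levels
idx = np.unique([np.abs(pre_c - x).argmin() for x in pre_t])
print(f"unmatched DID: {did(pre_t, post_t, pre_c, post_c):.3f}")
print(f"matched DID:   {did(pre_t, post_t, pre_c[idx], post_c[idx]):.3f}")
```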
ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION
SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.
2015-01-01
Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate.
Modelling the cost effectiveness of antidepressant treatment in primary care.
Revicki, D A; Brown, R E; Palmer, W; Bakish, D; Rosser, W W; Anton, S F; Feeny, D
1995-12-01
The aim of this study was to estimate the cost effectiveness of nefazodone compared with imipramine or fluoxetine in treating women with major depressive disorder. Clinical decision analysis and a Markov state-transition model were used to estimate the lifetime health outcomes and medical costs of 3 antidepressant treatments. The model, which represents ideal primary care practice, compares treatment with nefazodone to treatment with either imipramine or fluoxetine. The economic analysis was based on the healthcare system of the Canadian province of Ontario, and considered only direct medical costs. Health outcomes were expressed as quality-adjusted life years (QALYs) and costs were in 1993 Canadian dollars ($Can; $Can1 = $US0.75, September 1995). Incremental cost-utility ratios were calculated comparing the relative lifetime discounted medical costs and QALYs associated with nefazodone with those of imipramine or fluoxetine. Data for constructing the model and estimating necessary parameters were derived from the medical literature, clinical trial data, and physician judgement. Data included information on: Ontario primary care physicians' clinical management of major depression; medical resource use and costs; probabilities of recurrence of depression; suicide rates; compliance rates; and health utilities. Estimates of utilities for depression-related hypothetical health states were obtained from patients with major depression (n = 70). Medical costs and QALYs were discounted to present value using a 5% rate. Sensitivity analyses tested the assumptions of the model by varying the discount rate, depression recurrence rates, compliance rates, and the duration of the model. The base case analysis found that nefazodone treatment costs $Can1447 less per patient than imipramine treatment (discounted lifetime medical costs were $Can50,664 vs $Can52,111) and increases the number of QALYs by 0.72 (13.90 vs 13.18). Nefazodone treatment costs $Can14 less than fluoxetine treatment (estimated discounted lifetime medical costs were $Can50,664 vs $Can50,678) and produces slightly more QALYs (13.90 vs 13.79). In the sensitivity analyses, the cost-effectiveness ratios comparing nefazodone with imipramine ranged from cost saving to $Can17,326 per QALY gained. The cost-effectiveness ratios comparing nefazodone with fluoxetine ranged from cost saving to $Can7327 per QALY gained. The model was most sensitive to assumptions about treatment compliance rates and recurrence rates. The findings suggest that nefazodone may be a cost-effective treatment for major depression compared with imipramine or fluoxetine. The basic findings and conclusions do not change even after modifying model parameters within reasonable ranges.
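The engine of such an analysis is a discounted Markov cohort model: a cohort vector is pushed through a transition matrix each cycle while discounted costs and QALYs accumulate. A generic sketch (the states, transition rates, costs, and utilities below are all illustrative assumptions, not the article's calibrated values):

```python
# Generic discounted Markov cohort sketch (states: well, depressed, dead).
# Transition rates, costs, and utilities are illustrative only.
import numpy as np

def markov_cohort(trans, cycle_costs, cycle_utils, cycles=40, disc=0.05):
    state = np.array([1.0, 0.0, 0.0])           # whole cohort starts well
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + disc) ** t            # 5% discounting, as above
        total_cost += df * state @ cycle_costs
        total_qaly += df * state @ cycle_utils
        state = state @ trans                   # advance one yearly cycle
    return total_cost, total_qaly

trans = np.array([[0.85, 0.13, 0.02],           # well -> well/depressed/dead
                  [0.45, 0.52, 0.03],           # depressed -> ...
                  [0.00, 0.00, 1.00]])          # dead is absorbing
cost, qaly = markov_cohort(trans, np.array([500.0, 2500.0, 0.0]),
                           np.array([0.88, 0.60, 0.0]))
print(f"discounted lifetime cost ~ {cost:,.0f}, QALYs ~ {qaly:.2f}")
```

Comparing two such runs under treatment-specific transition rates then gives the incremental cost-utility ratio, (cost_A - cost_B) / (QALY_A - QALY_B).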
Relationship between root water uptake and soil respiration: A modeling perspective
NASA Astrophysics Data System (ADS)
Teodosio, Bertrand; Pauwels, Valentijn R. N.; Loheide, Steven P.; Daly, Edoardo
2017-08-01
Soil moisture affects and is affected by root water uptake and at the same time drives soil CO2 dynamics. Selecting root water uptake formulations in models is important since this affects the estimation of actual transpiration and soil CO2 efflux. This study aims to compare different models combining the Richards equation for soil water flow to equations describing heat transfer and air-phase CO2 production and flow. A root water uptake model (RWC), accounting only for root water compensation by rescaling water uptake rates across the vertical profile, was compared to a model (XWP) estimating water uptake as a function of the difference between soil and root xylem water potential; the latter model can account for both compensation (XWP-RWC) and hydraulic redistribution (XWP-HR). Models were compared in a scenario with a shallow water table, where the formulation of root water uptake plays an important role in modeling daily patterns and magnitudes of transpiration rates and CO2 efflux. Model simulations for this scenario indicated up to 20% difference in the estimated water that transpired over 50 days and up to 14% difference in carbon emitted from the soil. The models showed reduction of transpiration rates associated with water stress affecting soil CO2 efflux, with magnitudes of soil CO2 efflux being larger for the XWP-HR model in wet conditions and for the RWC model as the soil dried down. The study shows the importance of choosing root water uptake models not only for estimating transpiration but also for other processes controlled by soil water content.
Modeling storms improves estimates of long-term shoreline change
NASA Astrophysics Data System (ADS)
Frazer, L. Neil; Anderson, Tiffany R.; Fletcher, Charles H.
2009-10-01
Large storms make it difficult to extract the long-term trend of erosion or accretion from shoreline position data. Here we make storms part of the shoreline change model by means of a storm function. The data determine storm amplitudes and the rate at which the shoreline recovers from storms. Historical shoreline data are temporally sparse, and inclusion of all storms in one model over-fits the data, but a probability-weighted average model shows effects from all storms, illustrating how model averaging incorporates information from good models that might otherwise have been discarded as un-parsimonious. Data from Cotton Patch Hill, DE, yield a long-term shoreline loss rate of 0.49 ± 0.01 m/yr, about 16% less than published estimates. A minimum loss rate of 0.34 ± 0.01 m/yr is given by a model containing the 1929, 1962 and 1992 storms.
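In skeleton form, the model is a linear trend plus one decaying "storm function" per storm, with storm amplitude and recovery time estimated from the data. A one-storm sketch (dates, amplitudes, and noise are illustrative; the 1962 storm named above is used only as an example timing):

```python
# One-storm sketch of the storm-function model: a linear long-term trend
# plus an instantaneous storm retreat that decays exponentially as the
# beach recovers. All values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def shoreline(t, y0, rate, amp, tau, t_storm=1962.0):
    dt = np.clip(t - t_storm, 0.0, None)       # years since the storm
    storm = amp * np.exp(-dt / tau) * (t >= t_storm)
    return y0 + rate * (t - t[0]) + storm

rng = np.random.default_rng(5)
t = np.array([1930.0, 1945.0, 1962.2, 1970.0, 1980.0, 1992.5, 2000.0, 2006.0])
y = shoreline(t, 0.0, -0.49, -20.0, 8.0) + rng.normal(0, 1.0, t.size)
(y0, rate, amp, tau), _ = curve_fit(shoreline, t, y, p0=[0.0, -0.3, -10.0, 5.0])
print(f"long-term rate ~ {rate:.2f} m/yr, storm amplitude ~ {amp:.1f} m")
```

Candidate models with different storm subsets can then be combined by probability weighting (e.g., information-criterion weights) to obtain the averaged model described above.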
Kimura, Satoko; Akamatsu, Tomonari; Li, Songhai; Dong, Shouyue; Dong, Lijun; Wang, Kexiong; Wang, Ding; Arai, Nobuaki
2010-09-01
A method is presented to estimate the density of finless porpoises using stationed passive acoustic monitoring. The number of click trains detected by stereo acoustic data loggers (A-tag) was converted to an estimate of the density of porpoises. First, an automated off-line filter was developed to detect a click train among noise, and the detection and false-alarm rates were calculated. Second, a density estimation model was proposed. The cue-production rate was measured by biologging experiments. The probability of detecting a cue and the area size were calculated from the source level, beam patterns, and a sound-propagation model. The effect of group size on the cue-detection rate was examined. Third, the proposed model was applied to estimate the density of finless porpoises at four locations from the Yangtze River to the inside of Poyang Lake. The estimated mean daily density of porpoises decreased from the main stream to the lake. Long-term monitoring over 466 days from June 2007 to May 2009 showed variation in the density of 0-4.79 porpoises/km^2. However, the density was less than 1 porpoise/km^2 during 94% of the period. These results suggest a potential gap and seasonal migration of the population in the bottleneck of Poyang Lake.
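A minimal sketch of the cue-counting conversion implied by this method, assuming hypothetical values for the false-alarm rate, detection probability, monitored area, and cue-production rate (the paper derives these from the off-line filter, beam patterns, sound-propagation modeling, and biologging):

    # Convert click-train detections to porpoise density (all values invented).
    n_detected = 240      # click trains logged over the monitoring period
    false_alarm = 0.10    # fraction of detections attributable to noise
    p_detect = 0.6        # probability of detecting a cue within the area
    area_km2 = 0.5        # effective monitored area (km^2)
    cue_rate = 30.0       # click trains per porpoise per hour (biologging)
    hours = 24.0          # monitoring duration

    density = (n_detected * (1 - false_alarm)
               / (p_detect * area_km2 * cue_rate * hours))
    print(f"{density:.2f} porpoises/km^2")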
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cochran, R.C.
1985-01-01
Procedures used in estimating ruminal particle turnover and diet digestibility were evaluated in a series of independent experiments. Experiments 1 and 2 evaluated the influence of sampling site, mathematical model and intraruminal mixing on estimates of ruminal particle turnover in beef steers grazing crested wheatgrass or offered ad libitum levels of prairie hay once daily, respectively. Particle turnover rate constants were estimated by intraruminal administration (via rumen cannula) of ytterbium (Yb)-labeled forage, followed by serial collection of rumen digesta or fecal samples. Rumen Yb concentrations were transformed to natural logarithms and regressed on time. The influence of sampling site (rectum versus rumen) on turnover estimates was modified by the model used to fit fecal marker excretion curves in the grazing study. In contrast, estimated turnover rate constants from rumen sampling were smaller (P < 0.05) than rectally derived rate constants, regardless of fecal model used, when steers were fed once daily. In Experiment 3, in vitro residues subjected to acid or neutral detergent fiber extraction (IVADF and IVNDF), acid detergent fiber incubated in cellulase (ADFIC) and acid detergent lignin (ADL) were evaluated as internal markers for predicting diet digestibility. Both IVADF and IVNDF displayed variable accuracy for prediction of in vivo digestibility whereas ADL and ADFIC inaccurately predicted digestibility of all diets.
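The marker-decay regression described above reduces to fitting a straight line to log-transformed Yb concentrations, with the turnover rate constant given by the negative slope; a minimal sketch with invented data:

    import numpy as np

    # Hypothetical rumen Yb concentrations (ppm) after pulse dosing.
    t = np.array([4.0, 8.0, 12.0, 24.0, 36.0, 48.0])       # hours post-dosing
    conc = np.array([95.0, 78.0, 64.0, 35.0, 19.0, 10.0])  # Yb concentration

    slope, intercept = np.polyfit(t, np.log(conc), 1)
    k = -slope   # first-order particle turnover rate constant
    print(f"particle turnover rate constant: {k:.3f} /h")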
Comparative evaluation of urban storm water quality models
NASA Astrophysics Data System (ADS)
Vaze, J.; Chiew, Francis H. S.
2003-10-01
The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.
Gürtler, Ricardo E; Cecere, María C; Vázquez-Prokopec, Gonzalo M; Ceballos, Leonardo A; Gurevitz, Juan M; Fernández, María Del Pilar; Kitron, Uriel; Cohen, Joel E
2014-01-01
The host species composition in a household and their relative availability affect the host-feeding choices of blood-sucking insects and parasite transmission risks. We investigated four hypotheses regarding factors that affect blood-feeding rates, proportion of human-fed bugs (human blood index), and daily human-feeding rates of Triatoma infestans, the main vector of Chagas disease. A cross-sectional survey collected triatomines in human sleeping quarters (domiciles) of 49 of 270 rural houses in northwestern Argentina. We developed an improved way of estimating the human-feeding rate of domestic T. infestans populations. We fitted generalized linear mixed-effects models to a global model with six explanatory variables (chicken blood index, dog blood index, bug stage, numbers of human residents, bug abundance, and maximum temperature during the night preceding bug catch) and three response variables (daily blood-feeding rate, human blood index, and daily human-feeding rate). Coefficients were estimated via multimodel inference with model averaging. Median blood-feeding intervals per late-stage bug were 4.1 days, with large variations among households. The main bloodmeal sources were humans (68%), chickens (22%), and dogs (9%). Blood-feeding rates decreased with increases in the chicken blood index. Both the human blood index and daily human-feeding rate decreased substantially with increasing proportions of chicken- or dog-fed bugs, or the presence of chickens indoors. Improved calculations estimated the mean daily human-feeding rate per late-stage bug at 0.231 (95% confidence interval, 0.157-0.305). Based on the changing availability of chickens in domiciles during spring-summer and the much larger infectivity of dogs compared with humans, we infer that the net effects of chickens in the presence of transmission-competent hosts may be more adequately described by zoopotentiation than by zooprophylaxis. Domestic animals in domiciles profoundly affect the host-feeding choices, human-vector contact rates and parasite transmission predicted by a model based on these estimates.
NASA Astrophysics Data System (ADS)
Huang, Y.; Zhan, H.; Knappett, P.
2017-12-01
Past studies modeling stream-aquifer interactions commonly account for vertical anisotropy, but rarely address horizontal anisotropy, which does exist in certain geological settings. Horizontal anisotropy is affected by sediment deposition rates, the orientation of sediment particles, and the orientation of fractures, among other factors. We hypothesize that horizontal anisotropy controls the volume of recharge a pumped aquifer captures from the river. To test this hypothesis, a new mathematical model was developed to describe the distribution of drawdown from stream-bank pumping with a well screened across a horizontally anisotropic, confined aquifer, laterally bounded by a river. This new model was used to determine four aquifer parameters, including the magnitudes and directions of the major and minor principal transmissivities and the storativity, based on the observed drawdown-time curves within a minimum of three non-collinear observation wells. By comparing the aquifer parameter values estimated from drawdown data generated with known values, the discrepancies of the major and minor transmissivities, horizontal anisotropy ratio, storativity and the direction of major transmissivity were 13.1, 8.8, 4, 0 and <1 percent, respectively. These discrepancies are well within acceptable ranges of uncertainty for aquifer parameter estimation, when compared with other pumping test interpretation methods, which typically carry uncertainties of 20 or 30 percent in the estimated parameters. Finally, the stream depletion rate was calculated as a function of stream-bank pumping. Unique to horizontally anisotropic aquifers, the stream depletion rate at any given pumping rate depends on the horizontal anisotropy ratio and the direction of the principal transmissivity. For example, when the horizontal anisotropy ratio is 5 or 50, the corresponding depletion rates under pseudo steady-state conditions are 86 m^3/day and 91 m^3/day, respectively. The results of this research fill a knowledge gap on predicting the response of horizontally anisotropic aquifers connected to streams. We further provide a method to estimate aquifer properties and predict stream depletion rates from observed drawdown. This new model can be used by water resources managers to exploit groundwater resources reasonably while protecting stream ecosystems.
Estimating the time for dissolution of spent fuel exposed to unlimited water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leider, H.R.; Nguyen, S.N.; Stout, R.B.
1991-12-01
The release of radionuclides from spent fuel cannot be precisely predicted at this point because a satisfactory dissolution model based on specific chemical processes is not yet available. However, preliminary results on the dissolution rate of UO2 and spent fuel as a function of temperature and water composition have recently been reported. This information, together with data on the fragment size distribution of spent fuel, is used to estimate the dissolution response of spent fuel in excess flowing water within the framework of a simple model. In this model, the reaction/dissolution front advances linearly with time and geometry is preserved. This also estimates the dissolution rate of the bulk of the fission products and higher actinides, which are uniformly distributed in the UO2 matrix and are presumed to dissolve congruently. We used an actually observed fuel fragment size distribution to calculate the time for total dissolution of spent fuel. A worst-case estimate was also made using the initial (maximum) rate of dissolution to predict the total dissolution time. The time for total dissolution of centimeter-size particles is estimated to be 5.5 × 10^4 years at 25 °C.
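Under the geometry-preserving assumption, a reaction front advancing at a constant linear rate v consumes a fragment of radius r in t = r/v, so the calculation amounts to applying this relation across the fragment size distribution. A toy version with illustrative numbers, not the report's measured rates:

    # Linear-front dissolution: time scales with fragment size (values invented).
    front_rate_cm_per_yr = 1.0e-5           # assumed front advance rate
    fragment_radii_cm = [0.05, 0.1, 0.5, 1.0]

    for r in fragment_radii_cm:
        t = r / front_rate_cm_per_yr
        print(f"r = {r:4.2f} cm -> total dissolution in {t:.1e} years")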
Monitoring survival rates of Swainson's Thrush Catharus ustulatus at multiple spatial scales
Rosenberg, D.K.; DeSante, D.F.; McKelvey, K.S.; Hines, J.E.
1999-01-01
We estimated survival rates of Swainson's Thrush, a common, neotropical, migratory landbird, at multiple spatial scales, using data collected in the western USA from the Monitoring Avian Productivity and Survivorship Programme. We evaluated statistical power to detect spatially heterogeneous survival rates and exponentially declining survival rates among spatial scales with simulated populations parameterized from results of the Swainson's Thrush analyses. Models describing survival rates as constant across large spatial scales did not fit the data. The model we chose as most appropriate to describe survival rates of Swainson's Thrush allowed survival rates to vary among Physiographic Provinces, included a separate parameter for the probability that a newly captured bird is a resident individual in the study population, and constrained capture probability to be constant across all stations. Estimated annual survival rates under this model varied from 0.42 to 0.75 among Provinces. The coefficient of variation of survival estimates ranged from 5.8 to 20% among Physiographic Provinces. Statistical power to detect exponentially declining trends was fairly low for small spatial scales, although large annual declines (3% of previous year's rate) were likely to be detected when monitoring was conducted for long periods of time (e.g. 20 years). Although our simulations and field results are based on only four years of data from a limited number and distribution of stations, it is likely that they illustrate genuine difficulties inherent to broadscale efforts to monitor survival rates of territorial landbirds. In particular, our results suggest that more attention needs to be paid to sampling schemes of monitoring programmes, particularly regarding the trade-off between precision and potential bias of parameter estimates at varying spatial scales.
NASA Astrophysics Data System (ADS)
Anderson, Kyle R.; Poland, Michael P.
2016-08-01
Estimating rates of magma supply to the world's volcanoes remains one of the most fundamental aims of volcanology. Yet, supply rates can be difficult to estimate even at well-monitored volcanoes, in part because observations are noisy and are usually considered independently rather than as part of a holistic system. In this work we demonstrate a technique for probabilistically estimating time-variable rates of magma supply to a volcano through probabilistic constraint on storage and eruption rates. This approach utilizes Bayesian joint inversion of diverse datasets using predictions from a multiphysical volcano model, and independent prior information derived from previous geophysical, geochemical, and geological studies. The solution to the inverse problem takes the form of a probability density function which takes into account uncertainties in observations and prior information, and which we sample using a Markov chain Monte Carlo algorithm. Applying the technique to Kīlauea Volcano, we develop a model which relates magma flow rates with deformation of the volcano's surface, sulfur dioxide emission rates, lava flow field volumes, and composition of the volcano's basaltic magma. This model accounts for effects and processes mostly neglected in previous supply rate estimates at Kīlauea, including magma compressibility, loss of sulfur to the hydrothermal system, and potential magma storage in the volcano's deep rift zones. We jointly invert data and prior information to estimate rates of supply, storage, and eruption during three recent quasi-steady-state periods at the volcano. Results shed new light on the time-variability of magma supply to Kīlauea, which we find to have increased by 35-100% between 2001 and 2006 (from 0.11-0.17 to 0.18-0.28 km^3/yr), before subsequently decreasing to 0.08-0.12 km^3/yr by 2012. Changes in supply rate directly impact hazard at the volcano, and were largely responsible for an increase in eruption rate of 60-150% between 2001 and 2006, and subsequent decline by as much as 60% by 2012. We also demonstrate the occurrence of temporal changes in the proportion of Kīlauea's magma supply that is stored versus erupted, with the supply "surge" in 2006 associated with increased accumulation of magma at the summit. Finally, we are able to place some constraints on sulfur concentrations in Kīlauea magma and the scrubbing of sulfur by the volcano's hydrothermal system. Multiphysical, Bayesian constraint on magma flow rates may be used to monitor evolving volcanic hazard not just at Kīlauea but at other volcanoes around the world.
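A bare-bones illustration of the sampling step, using a random-walk Metropolis update on a single supply-rate parameter with a toy Gaussian posterior; the actual inversion jointly constrains many parameters against deformation, gas, lava-volume, and compositional data.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(q):
        # Toy stand-in for the joint data/prior log-density of a supply
        # rate q (km^3/yr); the real model links q to the observations.
        if q <= 0:
            return -np.inf
        return -0.5 * ((q - 0.15) / 0.03) ** 2

    q, samples = 0.1, []
    for _ in range(20000):
        prop = q + rng.normal(0.0, 0.01)        # random-walk proposal
        if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(q):
            q = prop                            # accept the proposal
        samples.append(q)
    lo, hi = np.percentile(samples[5000:], [2.5, 97.5])  # drop burn-in
    print(f"95% credible interval: {lo:.3f}-{hi:.3f} km^3/yr")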
Krajewski, C; Fain, M G; Buckley, L; King, D G
1999-11-01
Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogeneous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogeneous data sets. Copyright 1999 Academic Press.
Rate variation and estimation of divergence times using strict and relaxed clocks.
Brown, Richard P; Yang, Ziheng
2011-09-26
Understanding causes of biological diversity may be greatly enhanced by knowledge of divergence times. Strict and relaxed clock models are used in Bayesian estimation of divergence times. We examined whether: i) strict clock models are generally more appropriate in shallow phylogenies where rate variation is expected to be low, ii) the likelihood ratio test of the clock (LRT) reliably informs which model is appropriate for dating divergence times. Strict and relaxed models were used to analyse sequences simulated under different levels of rate variation. Published shallow phylogenies (Black bass, Primate-sucking lice, Podarcis lizards, Gallotiinae lizards, and Caprinae mammals) were also analysed to determine natural levels of rate variation relative to the performance of the different models. Strict clock analyses performed well on data simulated under the independent rates model when the standard deviation of log rate on branches, σ, was low (≤ 0.1), but were inappropriate when σ > 0.1 (95% of rates fall within 0.0082-0.0121 subs/site/Ma when σ = 0.1, for a mean rate of 0.01). The independent rates relaxed clock model performed well at all levels of rate variation, although posterior intervals on times were significantly wider than for the strict clock. The strict clock is therefore superior when rate variation is low. The performance of a correlated rates relaxed clock model was similar to the strict clock. Increased numbers of independent loci led to slightly narrower posteriors under the relaxed clock while older root ages provided proportionately narrower posteriors. The LRT had low power for σ = 0.01-0.1, but high power for σ = 0.5-2.0. Posterior means of σ² were useful for assessing rate variation in published datasets. Estimates of natural levels of rate variation ranged from 0.05-3.38 for different partitions. Differences in divergence times between relaxed and strict clock analyses were greater in two datasets with higher σ² for one or more partitions, supporting the simulation results. The strict clock can be superior for trees with shallow roots because of low levels of rate variation between branches. The LRT allows robust assessment of suitability of the clock model as does examination of posteriors on σ².
A Broadband Microwave Radiometer Technique at X-band for Rain and Drop Size Distribution Estimation
NASA Technical Reports Server (NTRS)
Meneghini, R.
2005-01-01
Radiometric brightness temperatures below about 12 GHz provide accurate estimates of path attenuation through precipitation and cloud water. Multiple brightness temperature measurements at X-band frequencies can be used to estimate rainfall rate and parameters of the drop size distribution once a correction for cloud water attenuation is made. Employing a stratiform storm model, calculations of the brightness temperatures at 9.5, 10 and 12 GHz are used to simulate estimates of path-averaged median mass diameter, number concentration and rainfall rate. The results indicate that reasonably accurate estimates of rainfall rate and information on the drop size distribution can be derived over ocean under low to moderate wind speed conditions.
Wang, S; Sun, Z; Wang, S
1996-11-01
A prospective follow-up study of 539 advanced gastric carcinoma patients after resection was undertaken between 1 January 1980 and 31 December 1989, with a follow-up rate of 95.36%. A multivariate analysis of possible factors influencing survival of these patients was performed, and predictive models of their survival rates were established with the Cox proportional hazards model. The results showed that the major significant prognostic factors influencing survival of these patients were the rate and station of lymph node metastases, type of operation, hepatic metastases, size of tumor, age and location of tumor. The most important factor was the rate of lymph node metastases. According to the regression coefficients, a predictive value (PV) was calculated for each patient; all patients were then divided into five risk groups according to PV, and predictive models of survival rates after resection were established for each group. The goodness of fit of the estimated predictive models of survival rates was checked with fitted curves and residual plots, and the estimated models tallied with the actual situation. The results suggest that patients with advanced gastric cancer after resection without lymph node metastases and hepatic metastases had a better prognosis, and their survival probability may be predicted according to the predictive model of survival rates.
Integrated Model for Performance Analysis of All-Optical Multihop Packet Switches
NASA Astrophysics Data System (ADS)
Jeong, Han-You; Seo, Seung-Woo
2000-09-01
The overall performance of an all-optical packet switching system is usually determined by two criteria, i.e., switching latency and packet loss rate. In some real-time applications, however, in which packets arriving later than a timeout period are discarded as lost, the packet loss rate becomes the dominant criterion for system performance. Here we focus on evaluating the performance of all-optical packet switches in terms of the packet loss rate, which normally arises from insufficient hardware or degradation of the optical signal. Considering both aspects, we propose what we believe is a new analysis model for the packet loss rate that reflects the complicated interactions between physical impairments and system-level parameters. On the basis of the estimation model for signal quality degradation in a multihop path we construct an equivalent analysis model of a switching network for evaluating an average bit error rate. With the model constructed we then propose an integrated model for estimating the packet loss rate in three architectural examples of multihop packet switches, each of which is based on a different switching concept. We also derive bounds on the packet loss rate induced by bit errors. Finally, it is verified through simulation studies that our analysis model accurately predicts system performance.
Callahan, Melissa S; McPeek, Mark A
2016-01-01
Reconstructing evolutionary patterns of species and populations provides a framework for asking questions about the impacts of climate change. Here we use a multilocus dataset to estimate gene trees under maximum likelihood and Bayesian models to obtain a robust estimate of relationships for a genus of North American damselflies, Enallagma. Using a relaxed molecular clock, we estimate the divergence times for this group. Furthermore, to account for the fact that gene tree analyses can overestimate ages of population divergences, we use a multi-population coalescent model to gain a more accurate estimate of divergence times. We also infer diversification rates using a method that allows for variation in diversification rate through time and among lineages. Our results reveal a complex evolutionary history of Enallagma, in which divergence events both predate and occur during Pleistocene climate fluctuations. There is also evidence of diversification rate heterogeneity across the tree. These divergence time estimates provide a foundation for addressing the relative significance of historical climatic events in the diversification of this genus. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data and a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates, and convergence times as short as 400 sec.
Steiner, Zvi; Erez, Jonathan; Shemesh, Aldo; Yam, Ruth; Katz, Amitai; Lazar, Boaz
2014-11-18
Basin-scale calcification rates are highly important in assessments of the global oceanic carbon cycle. Traditionally, such estimates were based on rates of sedimentation measured with sediment traps or in deep sea cores. Here we estimated CaCO3 precipitation rates in the surface water of the Red Sea from total alkalinity depletion along its axial flow, using the water flux in the straits of Bab el Mandeb. The relative contributions of coral reefs and open sea plankton were calculated by fitting a Rayleigh distillation model to the increase in the strontium to calcium ratio. We estimate the net amount of CaCO3 precipitated in the Red Sea to be 7.3 ± 0.4 × 10^10 kg·y^-1, of which 80 ± 5% is by pelagic calcareous plankton and 20 ± 5% is by the flourishing coastal coral reefs. This estimate for the pelagic calcification rate is up to 40% higher than published sedimentary CaCO3 accumulation rates for the region. The calcification rate of the Gulf of Aden was estimated by the Rayleigh model to be ∼1/2 that of the Red Sea, and in the northwestern Indian Ocean it was smaller than our detection limit. The results of this study suggest that variations of major ions on a basin scale may potentially help in assessing long-term effects of ocean acidification on carbonate deposition by marine organisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Alasdair; Thomsen, Edwin; Reed, David
2016-04-20
A chemistry-agnostic cost-performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate, using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using data from a 4 kW stack at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, and flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100 kWh^-1 for the storage system is identified.
Inverse modeling of BTEX dissolution and biodegradation at the Bemidji, MN crude-oil spill site
Essaid, H.I.; Cozzarelli, I.M.; Eganhouse, R.P.; Herkelrath, W.N.; Bekins, B.A.; Delin, G.N.
2003-01-01
The U.S. Geological Survey (USGS) solute transport and biodegradation code BIOMOC was used in conjunction with the USGS universal inverse modeling code UCODE to quantify field-scale hydrocarbon dissolution and biodegradation at the USGS Toxic Substances Hydrology Program crude-oil spill research site located near Bemidji, MN. This inverse modeling effort used the extensive historical data compiled at the Bemidji site from 1986 to 1997 and incorporated a multicomponent transport and biodegradation model. Inverse modeling was successful when coupled transport and degradation processes were incorporated into the model and a single dissolution rate coefficient was used for all BTEX components. Assuming a stationary oil body, we simulated benzene, toluene, ethylbenzene, m,p-xylene, and o-xylene (BTEX) concentrations in the oil and groundwater, as well as dissolved oxygen. Dissolution from the oil phase and aerobic and anaerobic degradation processes were represented. The parameters estimated were the recharge rate, hydraulic conductivity, dissolution rate coefficient, individual first-order BTEX anaerobic degradation rates, and transverse dispersivity. Results were similar for simulations obtained using several alternative conceptual models of the hydrologic system and biodegradation processes. The dissolved BTEX concentration data were not sufficient to discriminate between these conceptual models. The calibrated simulations reproduced the general large-scale evolution of the plume, but did not reproduce the observed small-scale spatial and temporal variability in concentrations. The estimated anaerobic biodegradation rates for toluene and o-xylene were greater than the dissolution rate coefficient. However, the estimated anaerobic biodegradation rates for benzene, ethylbenzene, and m,p-xylene were less than the dissolution rate coefficient. The calibrated model was used to determine the BTEX mass balance in the oil body and groundwater plume. Dissolution from the oil body was greatest for compounds with large effective solubilities (benzene) and with large degradation rates (toluene and o-xylene). Anaerobic degradation removed 77% of the BTEX that dissolved into the water phase and aerobic degradation removed 17%. Although goodness-of-fit measures for the alternative conceptual models were not significantly different, predictions made with the models were quite variable. © 2003 Elsevier Science B.V. All rights reserved.
Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population
Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel
2002-01-01
A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate, although with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics identified as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
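As a sketch of the multiple-regression step described above, with entirely hypothetical per-period error counts and covariates (the mission's actual dataset and variable definitions differ):

    import numpy as np

    # Columns: files radiated, workload score, operational novelty score.
    X = np.array([[120, 3.1, 0.2],
                  [200, 4.0, 0.5],
                  [90, 2.2, 0.1],
                  [160, 3.8, 0.7],
                  [140, 2.9, 0.3]], dtype=float)
    y = np.array([2.0, 5.0, 1.0, 4.0, 3.0])     # command file errors observed

    A = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    expected = A @ coef                         # model-based expected errors
    print(coef.round(3), expected.round(2))

Comparing observed error rates against the fitted expectation is what lets project management flag periods that deviate from the theoretically expected rate.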
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
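A minimal sketch of the classification step, fitting a logistic regression to simulated pixel covariates and exceedance labels; this toy version ignores the serial dependence that the paper handles via partial likelihood.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Simulated covariates (e.g., radiometer channels) and labels indicating
    # whether pixel rain rate exceeds the fixed threshold.
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0.3

    model = LogisticRegression().fit(X, y.astype(int))
    p_exceed = model.predict_proba(X[:5])[:, 1]  # conditional exceedance prob.
    print(p_exceed.round(3))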
A Stochastic Evolutionary Model for Protein Structure Alignment and Phylogeny
Challis, Christopher J.; Schmidler, Scott C.
2012-01-01
We present a stochastic process model for the joint evolution of protein primary and tertiary structure, suitable for use in alignment and estimation of phylogeny. Indels arise from a classic Links model, and mutations follow a standard substitution matrix, whereas backbone atoms diffuse in three-dimensional space according to an Ornstein–Uhlenbeck process. The model allows for simultaneous estimation of evolutionary distances, indel rates, structural drift rates, and alignments, while fully accounting for uncertainty. The inclusion of structural information enables phylogenetic inference on time scales not previously attainable with sequence evolution models. The model also provides a tool for testing evolutionary hypotheses and improving our understanding of protein structural evolution.
Kim, Tane; Hao, Weilong
2014-09-27
The study of discrete characters is crucial for the understanding of evolutionary processes. Even though great advances have been made in the analysis of nucleotide sequences, computer programs for non-DNA discrete characters are often dedicated to specific analyses and lack flexibility. Discrete characters often have different transition rate matrices, variable rates among sites and sometimes contain unobservable states. To obtain the ability to accurately estimate a variety of discrete characters, programs with sophisticated methodologies and flexible settings are desired. DiscML performs maximum likelihood estimation for evolutionary rates of discrete characters on a provided phylogeny with the options that correct for unobservable data, rate variations, and unknown prior root probabilities from the empirical data. It gives users options to customize the instantaneous transition rate matrices, or to choose pre-determined matrices from models such as birth-and-death (BD), birth-death-and-innovation (BDI), equal rates (ER), symmetric (SYM), general time-reversible (GTR) and all rates different (ARD). Moreover, we show application examples of DiscML on gene family data and on intron presence/absence data. DiscML was developed as a unified R program for estimating evolutionary rates of discrete characters with no restriction on the number of character states, and with flexibility to use different transition models. DiscML is ideal for the analyses of binary (1s/0s) patterns, multi-gene families, and multistate discrete morphological characteristics.
Wang, Chaoyuan; Li, Baoming; Zhang, Guoqiang; Rom, Hans Benny; Strøom, Jan S
2006-09-01
Laboratory experiments were carried out in a wind tunnel with a model of a slurry pit to investigate the characteristics of ammonia emission from dairy cattle buildings with slatted floor designs. Ammonia emission at different temperatures and air velocities over the floor surface above the slurry pit was measured with uniform feces spreading and urine sprinkling on the surface daily. The data were used to improve a model for estimation of ammonia emission from dairy cattle buildings. Estimates from the updated emission model were compared with measured data from five naturally ventilated dairy cattle buildings. The overall measured ammonia emission rates were in the range of 11-88 g per cow per day at air temperatures of 2.3-22.4 degrees C. Ammonia emission rates estimated by the model were in the range of 19-107 g per cow per day for the surveyed buildings. The average ammonia emission estimated by the model was 11% higher than the mean measured value. The results show that predicted emission patterns generally agree with the measured one, but the prediction has less variation. The model performance may be improved if the influence of animal activity and management strategy on ammonia emission could be estimated and more reliable data of air velocities of the buildings could be obtained.
Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...
Simplification of an MCNP model designed for dose rate estimation
NASA Astrophysics Data System (ADS)
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
Functional response models to estimate feeding rates of wading birds
Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.
2010-01-01
Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased (P < 0.05). Substantial discrepancies between the CM and HoII models were possible depending on flock sizes used to model feeding rates. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of data and processes of interest. The mechanistic bases used to derive the models used here lead to biologically interpretable results and advance understanding of wading bird foraging ecology.
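For concreteness, the two functional response forms compared above can be written in a few lines; the parameter values are hypothetical, and the interference term c(P - 1) is what distinguishes the predator-dependent Beddington-DeAngelis form from the purely prey-dependent Holling Type II:

    def holling2(N, a, h):
        # Purely prey-dependent Type II response.
        return a * N / (1 + a * h * N)

    def beddington_deangelis(N, P, a, h, c):
        # Interference term c*(P - 1) depresses intake as forager number P grows.
        return a * N / (1 + a * h * N + c * (P - 1))

    N = 50.0                   # prey density (hypothetical units)
    a, h, c = 0.5, 0.08, 0.3   # attack rate, handling time, interference
    for P in (1, 5, 20):
        print(P, round(holling2(N, a, h), 2),
              round(beddington_deangelis(N, P, a, h, c), 2))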
NASA Technical Reports Server (NTRS)
Hetherington, N. W.; Rosenblatt, L. S.; Higgins, E. A.; Winget, C. M.
1973-01-01
A mathematical model previously presented by Rosenblatt et al. (1973) for estimating the rates of resynchronization of individual biorhythms following transmeridian flights or photoperiod shifts is extended to estimation of the rates at which two biorhythms resynchronize with respect to each other. Such quantification of the rate of restoration of the initial phase relationship of the two biorhythms is pointed out as a valuable tool in the study of internal desynchronosis.
USDA-ARS?s Scientific Manuscript database
Measuring metabolic energy expenditure in humans may provide a means of monitoring and reducing obesity, estimating nutritional requirements, maintaining energy balance during athletics, and modeling human thermoregulatory responses. However, measuring metabolic rate (M) is challen...
Generalized site occupancy models allowing for false positive and false negative errors
Royle, J. Andrew; Link, W.A.
2006-01-01
Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
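A minimal sketch of fitting such a two-component mixture by maximum likelihood, with a made-up detection history y (detections out of J visits per site); occupied sites generate detections with probability p11 per visit and unoccupied sites with the false-positive probability p10:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom

    y = np.array([0, 0, 1, 3, 4, 0, 2, 5, 0, 1])  # hypothetical survey data
    J = 5                                          # visits per site

    def negloglik(theta):
        psi, p11, p10 = 1.0 / (1.0 + np.exp(-theta))   # logit -> probability
        lik = (psi * binom.pmf(y, J, p11)
               + (1 - psi) * binom.pmf(y, J, p10))     # mixture likelihood
        return -np.sum(np.log(lik))

    fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
    psi, p11, p10 = 1.0 / (1.0 + np.exp(-fit.x))
    print(f"psi={psi:.2f} p11={p11:.2f} p10={p10:.2f}")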
Angular-Rate Estimation Using Delayed Quaternion Measurements
NASA Technical Reports Server (NTRS)
Azor, R.; Bar-Itzhack, I. Y.; Harman, R. R.
1999-01-01
This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: one that uses differentiated quaternion measurements to yield coarse rate measurements, which are then fed into two different estimators; in the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear part of the rotational dynamics equation of a body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This non-unique decomposition enables the treatment of the nonlinear spacecraft (SC) dynamics model as a linear one and, thus, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the gain matrix and thus eliminates the need to compute the filter covariance matrix recursively. The replacement of the rotational dynamics by a simple Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results are presented.
Huhtanen, P; Seppälä, A; Ahvenjärvi, S; Rinne, M
2008-10-01
Eleven 1-pool, seven 2-pool, and three 3-pool models were compared in fitting gas production data and predicting in vivo NDF digestibility and effective first-order digestion rate of potentially digestible NDF (pdNDF). Isolated NDF from 15 grass silages harvested at different stages of maturity was incubated in triplicate in rumen fluid-buffer solution for 72 h to estimate the digestion kinetics from cumulative gas production profiles. In vivo digestibility was estimated by the total fecal collection method in sheep fed at a maintenance level of feeding. The concentration of pdNDF was estimated by a 12-d in situ incubation. The parameter values from gas production profiles and pdNDF were used in a 2-compartment rumen model to predict pdNDF digestibility using 50 h of rumen residence time distributed in a ratio of 0.4:0.6 between the non-escapable and escapable pools. The effective first-order digestion rate was computed both from observed in vivo and model-predicted pdNDF digestibility assuming the passage kinetic model described above. There were marked differences between the models in fitting the gas production data. The fit improved with increasing number of pools, suggesting that silage pdNDF is not a homogenous substrate. Generally, the models predicted in vivo NDF digestibility and digestion rate accurately. However, a good fit of gas production data was not necessarily translated into improved predictions of the in vivo data. The models overestimating the asymptotic gas volumes tended to underestimate the in vivo digestibility. Investigating the time-related residuals during the later phases of fermentation is important when the data are used to estimate the first-order digestion rate of pdNDF. Relatively simple models such as the France model or even a single exponential model with discrete lag period satisfied the minimum criteria for a good model. Further, the comparison of feedstuffs on the basis of parameter values is more unequivocal than in the case of multiple-pool models.
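A sketch of fitting the simplest adequate form named above, a single exponential with a discrete lag, to an invented gas production profile:

    import numpy as np
    from scipy.optimize import curve_fit

    def gas(t, A, k, lag):
        # No gas before the lag; first-order approach to asymptote A after it.
        return np.where(t > lag, A * (1.0 - np.exp(-k * (t - lag))), 0.0)

    t = np.array([2.0, 4.0, 8.0, 12.0, 24.0, 36.0, 48.0, 72.0])     # hours
    vol = np.array([1.0, 5.0, 18.0, 30.0, 52.0, 60.0, 63.0, 65.0])  # ml gas

    (A, k, lag), _ = curve_fit(gas, t, vol, p0=(70.0, 0.05, 2.0))
    print(f"asymptote {A:.1f} ml, rate {k:.3f}/h, lag {lag:.1f} h")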
Extracting volatility signal using maximum a posteriori estimation
NASA Astrophysics Data System (ADS)
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and hence heavy-tailed marginal distributions for the log-returns. We consider two routes for choosing the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
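A rough sketch of the MAP idea: with Gaussian observation noise and a double-exponential (Laplace) prior on log-volatility increments, the MAP path minimizes a least-squares fit term plus an L1 penalty on jumps. The proxy data, noise level, and penalty weight below are invented, and the L1 term is smoothed slightly so a quasi-Newton solver can be applied:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x_true = np.cumsum(rng.normal(0.0, 0.05, 300)) - 1.0  # latent log-volatility
    y = x_true + rng.normal(0.0, 0.4, 300)                # noisy proxy series

    lam, eps = 5.0, 1e-6   # penalty weight; eps smooths |.| for the solver

    def neg_log_posterior(x):
        fit = 0.5 * np.sum((y - x) ** 2)                # Gaussian likelihood
        jumps = np.sum(np.sqrt(np.diff(x) ** 2 + eps))  # ~L1, Laplace prior
        return fit + lam * jumps

    x_map = minimize(neg_log_posterior, y, method="L-BFGS-B").x
    print(np.round(x_map[:5], 2))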
Estimation of hydrolysis rate constants for carbamates ...
Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA's Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for the roughly 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in the public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa-based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R-squared and root mean square (RMS) error. Our work shows that, for our relatively small sample of carbamates, a Hammett-Taft based fragment model performs best, followed by the pKa and QSPR models.
Kim, Yunhwan; Lee, Sunmi; Chu, Chaeshin; Choe, Seoyun; Hong, Saeme; Shin, Youngseo
2016-02-01
The outbreak of Middle East respiratory syndrome coronavirus (MERS-CoV) was one of the major events in South Korea in 2015. In particular, this study focuses on formulating a mathematical model for MERS transmission dynamics and estimating transmission rates. Incidence data of MERS-CoV from the government authority were analyzed for the first aim, and a mathematical model was built and analyzed for the second. A mathematical model for MERS-CoV transmission dynamics is used to estimate the transmission rates in two periods, reflecting the implementation of intensive interventions. Using the estimates of the transmission rates, the basic reproduction number was estimated in the two periods. Due to the superspreader, the basic reproduction number was very large in the first period; however, the basic reproduction number of the second period was reduced significantly after intensive interventions. The intensive isolation and quarantine interventions turned out to be the most critical factors in preventing the spread of the MERS outbreak. The results are expected to be useful for devising more efficient intervention strategies in the future.
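The two-period transmission structure can be sketched with a compartmental model whose transmission rate drops at the intervention time; all parameter values below are illustrative placeholders, not the fitted estimates:

    import numpy as np
    from scipy.integrate import odeint

    def seir(state, t, beta1, beta2, t_switch, sigma, gamma, N):
        S, E, I, R = state
        beta = beta1 if t < t_switch else beta2  # rate drops after interventions
        return [-beta * S * I / N,
                beta * S * I / N - sigma * E,
                sigma * E - gamma * I,
                gamma * I]

    N, sigma, gamma = 5.0e7, 1.0 / 6.0, 1.0 / 7.0   # hypothetical values
    y0 = [N - 1.0, 0.0, 1.0, 0.0]
    t = np.linspace(0.0, 60.0, 61)
    sol = odeint(seir, y0, t, args=(1.2, 0.15, 20.0, sigma, gamma, N))
    print(f"reproduction number before/after: {1.2 / gamma:.1f} / {0.15 / gamma:.1f}")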
An empirical model for global earthquake fatality estimation
Jaiswal, Kishor; Wald, David
2010-01-01
We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as total killed divided by total population exposed at specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rates for that level and then summing them at all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or a region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
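The abstract fully specifies the functional form, so a sketch is direct: the fatality rate at shaking intensity S is a two-parameter lognormal CDF, and expected deaths follow by summing rate times exposure over intensity bins. The parameters and exposures below are hypothetical, not a fitted country model:

    import numpy as np
    from scipy.stats import norm

    def fatality_rate(mmi, theta, beta):
        # Two-parameter lognormal CDF of shaking intensity.
        return norm.cdf(np.log(mmi / theta) / beta)

    theta, beta = 14.0, 0.2                            # hypothetical parameters
    mmi_bins = np.array([6.0, 7.0, 8.0, 9.0])          # intensity levels
    exposure = np.array([2.0e6, 5.0e5, 1.0e5, 2.0e4])  # people per bin

    deaths = np.sum(exposure * fatality_rate(mmi_bins, theta, beta))
    print(f"estimated fatalities: {deaths:.0f}")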
NASA Astrophysics Data System (ADS)
Phanikumar, Mantha S.; McGuire, Jennifer T.
2010-08-01
Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model has the ability to describe an arbitrary number of species and user-defined reaction rate expressions including Monod/Michelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical model based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
Modeled occupational exposures to gas-phase medical laser-generated air contaminants.
Lippert, Julia F; Lacey, Steven E; Jones, Rachael M
2014-01-01
Exposure monitoring data indicate the potential for substantive exposure to laser-generated air contaminants (LGAC); however, the diversity of medical lasers and their applications limits generalization from direct workplace monitoring. Emission rates of seven previously reported gas-phase constituents of medical LGAC were determined experimentally and used in a semi-empirical two-zone model to estimate a range of plausible occupational exposures for health care staff. Single-source emission rates were estimated from emission-chamber measurements using a one-compartment mass balance model at steady state. Clinical facility parameters such as room size and ventilation rate were based on the standard ventilation and environmental conditions required for a laser surgical facility in compliance with regulatory agencies. All input variables in the model, including point-source emission rates, were varied over appropriate distributions in a Monte Carlo simulation to generate a conservative range of time-weighted average (TWA) concentrations in the near- and far-field zones of the room, inclusive of all contributing factors, to inform future predictive models. The resulting concentrations were assessed for risk; the highest values were at least three orders of magnitude lower than the relevant occupational exposure limits (OELs). The estimated values do not appear to present a significant exposure hazard within the conditions of our emission rate estimates.
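A steady-state near-field/far-field calculation with Monte Carlo input sampling can be sketched in a few lines. The distributions for the emission rate G, room ventilation Q, and interzonal airflow beta below are hypothetical, not the study's measured values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical input distributions (not the study's measured values).
G = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n)  # emission rate, mg/min
Q = rng.uniform(5.0, 15.0, size=n)                       # room ventilation, m3/min
beta = rng.uniform(1.0, 5.0, size=n)                     # interzonal airflow, m3/min

# Steady-state two-zone model: the far field sees full dilution by Q,
# the near field adds the G/beta increment close to the source.
c_far = G / Q
c_near = G / Q + G / beta

print("far-field 95th percentile  (mg/m3):", np.percentile(c_far, 95))
print("near-field 95th percentile (mg/m3):", np.percentile(c_near, 95))
```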
Angular-Rate Estimation Using Star Tracker Measurements
NASA Technical Reports Server (NTRS)
Azor, R.; Bar-Itzhack, I.; Deutschmann, Julie K.; Harman, Richard R.
1999-01-01
This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared, one that uses differentiated quaternion measurements to yield coarse rate measurements which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate-dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate-dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first-order Markov model is also examined. Special consideration is given to the problem of delayed quaternion measurements; two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data are used to test these algorithms, and results of these tests are presented.
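The decomposition underlying PSELIKA can be made concrete for torque-free rigid-body dynamics: since ω × Jω = [ω×]Jω, Euler's equation gives ω̇ = F(ω)ω with F(ω) = −J⁻¹[ω×]J, one of the non-unique factorizations mentioned above. A minimal numerical check in Python, with a hypothetical inertia matrix:

```python
import numpy as np

def skew(w):
    """Cross-product matrix [w x], so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

J = np.diag([10.0, 12.0, 8.0])      # hypothetical spacecraft inertia (kg m^2)
Jinv = np.linalg.inv(J)
w = np.array([0.01, -0.02, 0.005])  # angular rate (rad/s)

# Nonlinear form and its pseudo-linear factorization F(w) @ w.
nonlinear = -Jinv @ np.cross(w, J @ w)
F = -Jinv @ skew(w) @ J             # one (non-unique) factorization
assert np.allclose(F @ w, nonlinear)
print(F @ w)
```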
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Chloramine demand estimation using surrogate chemical and microbiological parameters.
Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose
2017-07-01
A model is developed to enable estimation of chloramine demand in full-scale drinking water supplies from chemical and microbiological factors that affect the chloramine decay rate, using nonlinear regression. The model is based on the organic character (specific ultraviolet absorbance, SUVA) of the water samples and a laboratory measure of the microbiological (Fm) decay of chloramine. The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical tests comparing the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To capture the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants, the kinetic parameters of the model, is discussed for three water sources in Australia. For a given water source, the kinetic parameters were found to remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool to manage chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
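The abstract does not give the functional form of the fast and slow pathways; a common way to express such two-pathway behaviour is a sum of two first-order decays, sketched below with hypothetical rate constants and pathway fractions (not the paper's fitted values).

```python
import numpy as np

def chloramine_residual(t, c0, frac_fast, k_fast, k_slow):
    """Residual from parallel fast and slow first-order decay pathways.

    Illustrative two-pathway form, not the paper's fitted model;
    all parameter values used below are hypothetical.
    """
    return c0 * (frac_fast * np.exp(-k_fast * t)
                 + (1.0 - frac_fast) * np.exp(-k_slow * t))

t = np.linspace(0, 168, 8)                    # hours over one week
residual = chloramine_residual(t, c0=2.5, frac_fast=0.3,
                               k_fast=0.05, k_slow=0.004)
demand = 2.5 - residual                       # demand = dose - residual
print(np.round(demand, 3))
```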
Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct
Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan
2013-01-01
Objective: To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources: Rates of 28 quality indicators (QIs) calculated from the minimum dataset from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design: We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings: Shrunken-rate composite scores, because they take into account the unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial-year observed-rate composite scores. Conclusion: Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct. PMID:22716650
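The multivariate normal-binomial model itself is beyond a short sketch, but the qualitative effect of shrinkage on a composite score can be illustrated with independent beta-binomial estimators: rates from small facilities are pulled strongly toward the overall mean while large facilities barely move. The prior strength and QI weights below are hypothetical.

```python
import numpy as np

# Hypothetical QI event counts and denominators; one facility per row
# (a small facility and a large one), three QIs per facility.
events = np.array([[2, 5, 1], [40, 90, 25]])
residents = np.array([[20, 25, 18], [400, 450, 380]])

overall_rate = events.sum(axis=0) / residents.sum(axis=0)
prior_strength = 50.0                               # hypothetical pseudo-count

# Beta-binomial posterior mean shrinks observed rates toward the overall rate.
shrunken = (events + prior_strength * overall_rate) / (residents + prior_strength)
observed = events / residents

weights = np.array([0.5, 0.3, 0.2])                 # hypothetical QI weights
print("observed composites:", observed @ weights)
print("shrunken composites:", shrunken @ weights)
```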
Estimating a mosquito repellent's potential to reduce malaria in communities.
Kiszewski, A E; Darling, S T
2010-12-01
Probability models for assessing a mosquito repellent's potential to reduce malaria transmission are not readily available to public health researchers. To provide a means for estimating the epidemiological efficacy of mosquito repellents in communities, we developed a simple mathematical model. A static probability model is presented to simulate malaria infection in a community during a single transmission season. The model includes five parameters: sporozoite rate, human infection rate, biting pressure, repellent efficacy, and product-acceptance rate. The model assumes that a certain percentage of the population uses a personal mosquito repellent over the course of a seven-month transmission season and that this repellent maintains a constant rate of protective efficacy against the bites of malaria vectors. The model measures the probability of evading infection in circumstances where vector biting pressure, repellent efficacy, and product acceptance may vary. Absolute protection using mosquito repellents alone requires high rates of repellent efficacy and product acceptance. Using performance data from a highly effective repellent, the model estimates an 88.9% reduction of infections over a seven-month transmission season. A corresponding reduction in the incidence of super-infection in community members not completely evading infection can also be presumed. Thus, the model shows that mass distribution of a repellent with >98% efficacy and >98% product acceptance would suppress new malaria infections to levels lower than those achieved with insecticide-treated nets (ITNs). A combination of both interventions could create synergies that result in reductions of disease burden significantly greater than with the use of ITNs alone.
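The abstract names the five parameters but not the functional form; one simple static formulation consistent with the description treats the season as a run of independent, potentially infectious bites. The sketch below is illustrative only, with hypothetical parameter values, and the published model may differ.

```python
def p_evade(sporozoite_rate, infection_rate, bites_per_night,
            repellent_efficacy, uses_repellent, nights=210):
    """Probability a person evades malaria infection over one season.

    Static model: each bite infects with probability
    sporozoite_rate * infection_rate; a repellent user receives a
    fraction (1 - efficacy) of the bites. Illustrative form only.
    """
    p_bite_infects = sporozoite_rate * infection_rate
    bites = bites_per_night * nights
    if uses_repellent:
        bites *= (1.0 - repellent_efficacy)
    return (1.0 - p_bite_infects) ** bites

# Hypothetical values: 2% sporozoite rate, 50% infectivity, 1 bite/night.
print("no repellent :", p_evade(0.02, 0.5, 1.0, 0.98, False))
print("98% repellent:", p_evade(0.02, 0.5, 1.0, 0.98, True))
```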
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where a nuisance parameter is estimable at either regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer within the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the covariates x_i are equal is strong and may fail to account for overdispersion, i.e., variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression, and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001), but the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods; however, flexible piecewise regression modelling with either quasi-likelihood or robust standard errors was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
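A regression-based score test of the kind described can be run in a few lines. The sketch below applies the Cameron-Trivedi auxiliary regression to simulated overdispersed counts and then fits a negative binomial model as one correction; it is a generic Poisson example, not the relative-survival setup of the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Simulate overdispersed counts: gamma heterogeneity on the Poisson mean.
mu_true = np.exp(0.5 + 0.8 * x)
y = rng.poisson(mu_true * rng.gamma(shape=2.0, scale=0.5, size=n))

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = poisson_fit.mu

# Cameron-Trivedi score test: regress ((y - mu)^2 - y) / mu on mu, no
# intercept; a significantly positive slope indicates overdispersion.
aux = sm.OLS(((y - mu) ** 2 - y) / mu, mu).fit()
print("overdispersion slope:", aux.params[0], "t =", aux.tvalues[0])

# One possible correction: a negative binomial model instead of the Poisson.
nb_fit = sm.NegativeBinomial(y, X).fit(disp=0)
print(nb_fit.params)
```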
Estimating the sources and transport of nutrients in the Waikato River Basin, New Zealand
Alexander, Richard B.; Elliott, Alexander H.; Shankar, Ude; McBride, Graham B.
2002-01-01
We calibrated SPARROW (Spatially Referenced Regression on Watershed Attributes) surface water‐quality models using measurements of total nitrogen and total phosphorus from 37 sites in the 13,900 km² Waikato River Basin, the largest watershed on the North Island of New Zealand. This first application of SPARROW outside of the United States included watersheds representative of a wide range of natural and cultural conditions and water‐resources data that were well suited for calibrating and validating the models. We applied the spatially distributed model to a drainage network of nearly 5000 stream reaches and 75 lakes and reservoirs to empirically estimate the rates of nutrient delivery (and their levels of uncertainty) from point and diffuse sources to streams, lakes, and watershed outlets. The resulting models displayed relatively small errors; predictions of stream yield (kg ha⁻¹ yr⁻¹) were typically within 30% or less of the observed values at the monitoring sites. There was strong evidence of the accuracy of the model estimates of nutrient sources and the natural rates of nutrient attenuation in surface waters. Estimated loss rates for streams, lakes, and reservoirs agreed closely with experimental measurements and empirical models from New Zealand, North America, and Europe as well as with previous U.S. SPARROW models. The results indicate that the SPARROW modeling technique provides a reliable method for relating experimental data and observations from small catchments to the transport of nutrients in the surface waters of large river basins.
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.
A model of northern pintail productivity and population growth rate
Flint, Paul L.; Grand, James B.; Rockwell, Robert F.
1998-01-01
Our objective was to synthesize individual components of reproductive ecology into a single estimate of productivity and to assess the relative effects of survival and productivity on population dynamics. We used information on nesting ecology, renesting potential, and duckling survival of northern pintails (Anas acuta) collected on the Yukon-Kuskokwim Delta (Y-K Delta), Alaska, 1991-95, to model the number of ducklings produced under a range of nest success and duckling survival probabilities. Using average values of 25% nest success, 11% duckling survival, and 56% renesting probability from our study population, we calculated that all young in our population were produced by 13% of the breeding females, and that early-nesting females produced more young than later-nesting females. Further, we calculated, on average, that each female produced only 0.16 young females/nesting season. We combined these results with estimates of first-year and adult survival to examine the growth rate (λ) of the population and the relative contributions of these demographic parameters to that growth rate. Contrary to aerial survey data, the population projection model suggests our study population is declining rapidly (λ = 0.6969). The relative effects on population growth rate were 0.1175 for reproductive success, 0.1175 for first-year survival, and 0.8825 for adult survival. Adult survival had the greatest influence on λ for our population, and this conclusion was robust over a range of survival and productivity estimates. Given published estimates of annual survival for adult females (61%), our model suggested nest success and duckling survival need to increase to approximately 40% to achieve population stability. We discuss reasons for the apparent discrepancy in population trends between our model and aerial surveys in terms of bias in productivity and survival estimates.
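The growth rate reported above is the dominant eigenvalue of a stage-structured projection matrix. Below is a minimal two-stage, female-only sketch using the paper's point estimates of 0.16 young females per female and 61% adult survival; the first-year survival value and the pre-breeding-census matrix structure are illustrative assumptions, not the authors' model.

```python
import numpy as np

# f and sa are point estimates from the study; s0 (first-year survival)
# is a hypothetical placeholder, not a value from the paper.
f, s0, sa = 0.16, 0.25, 0.61

# Two-stage female-only projection matrix (pre-breeding census):
# stages = [first-year birds, adults]; yearlings breed in their first spring.
A = np.array([[f,  f],
              [s0, sa]])

lam = max(np.linalg.eigvals(A).real)
print("population growth rate lambda:", round(lam, 4))  # < 1 => decline
```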
Parameter estimation in tree graph metabolic networks.
Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J
2016-01-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected by, among other factors, kinetic constants and enzyme concentrations, change over time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
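For a tree-shaped network (here reduced to a two-step chain), the mass-balance ODEs with time-varying catalytic rates take the form below; the rate functions are arbitrary illustrations, not estimates from the tomato data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Time-varying catalytic rates (illustrative functional forms only).
k1 = lambda t: 0.8 * np.exp(-0.1 * t)        # activity decaying over time
k2 = lambda t: 0.3 * (1 - np.exp(-0.2 * t))  # activity ramping up

def chain(t, x):
    """x[0] -> x[1] -> x[2]: a two-step branch of a tree-like pathway."""
    r1 = k1(t) * x[0]
    r2 = k2(t) * x[1]
    return [-r1, r1 - r2, r2]

sol = solve_ivp(chain, (0.0, 48.0), [10.0, 0.0, 0.0],
                t_eval=np.linspace(0, 48, 7))
print(np.round(sol.y, 3))
```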
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10°C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
Modelling small-area inequality in premature mortality using years of life lost rates
NASA Astrophysics Data System (ADS)
Congdon, Peter
2013-04-01
Analysis of premature mortality variations via standardized expected years of life lost (SEYLL) measures raises questions about suitable modelling for mortality data, especially when developing SEYLL profiles for areas with small populations. Existing fixed-effects estimation methods take no account of correlations in mortality levels over ages, causes, socio-ethnic groups or areas. They also do not specify an underlying data generating process, or a likelihood model that can include trends or correlations, and are likely to produce unstable estimates for small areas. An alternative strategy involves a fully specified data generation process and a random-effects model which "borrows strength" to produce stable SEYLL estimates, allowing for correlations between ages, areas and socio-ethnic groups. The resulting modelling strategy is applied to gender-specific differences in SEYLL rates in small areas of NE London, and to cause-specific mortality for leading causes of premature mortality in these areas.
Verginelli, Iason; Pecoraro, Roberto; Baciocchi, Renato
2018-04-01
In this work, we introduce a screening method for the evaluation of natural attenuation rates in the subsurface at sites contaminated by petroleum hydrocarbons. The method is based on the combination of data obtained from standard source characterization with dynamic flux chamber measurements. The natural attenuation rates are calculated as the difference between the flux of contaminants estimated with a non-reactive diffusive model, starting from the concentrations of the contaminants detected in the source (soil and/or groundwater), and the effective emission rate of the contaminants measured using dynamic flux chambers installed at ground level. The reliability of this approach was tested at a contaminated site characterized by the presence of BTEX in soil and groundwater. Namely, the BTEX emission rates from the subsurface were measured in 4 seasonal campaigns using dynamic flux chambers installed in 14 sampling points. The comparison of measured fluxes with those predicted using a non-reactive diffusive model, starting from the source concentrations, showed that, in line with other recent studies, the modelling approach can overestimate the expected outdoor concentration of petroleum hydrocarbons by up to 4 orders of magnitude. On the other hand, by coupling the measured data with the fluxes estimated with the diffusive non-reactive model, it was possible to perform a mass balance to evaluate the natural attenuation loss rates of petroleum hydrocarbons during migration from the source to ground level. Based on this comparison, the estimated BTEX loss rates at the test site were up to almost 0.5 kg/year/m². These rates are in line with the values reported in the recent literature for natural source zone depletion. In short, the method presented in this work can represent an easy-to-use and cost-effective option that can provide a further line of evidence for the natural attenuation rates expected at contaminated sites. Copyright © 2017 Elsevier B.V. All rights reserved.
Revisiting the impact of macroeconomic conditions on health behaviours.
Di Pietro, Giorgio
2018-02-01
This paper estimates the average population effect of macroeconomic conditions on health behaviours, accounting for the heterogeneous impact of the business cycle on individuals. While previous studies use models relying on area-specific unemployment rates to estimate this average effect, this paper employs a model based on area-specific unemployment rates by gender and age group. The rationale for breaking down unemployment rates is that the severity of cyclical upturns and downturns varies significantly not only across geographical areas but also across gender and age. The empirical analysis uses microdata from the Italian Multipurpose Household Survey on Everyday Life Issues. The estimates suggest that models employing aggregated and disaggregated unemployment rate measures as a proxy for the business cycle produce similar findings for some health behaviours (such as smoking), whereas different results are obtained for others. When using unemployment rates by gender and age group, fruit and/or vegetable consumption turns out to be procyclical (a 1pp increase in this unemployment rate decreases the probability of consuming at least five daily fruit and/or vegetable servings by 0.0016pp), whereas the opposite effect, though statistically insignificant, is observed once general unemployment rates are used. While both models conclude that physical activity declines during economic downturns, the size of the procyclical effect is much smaller when employing disaggregated rather than aggregated unemployment rates (a 1pp increase in the unemployment rate by gender and age group decreases the probability of doing any physical activity by 0.0017pp). Copyright © 2017 Elsevier B.V. All rights reserved.
Residential water demand with endogenous pricing: The Canadian Case
NASA Astrophysics Data System (ADS)
Reynaud, Arnaud; Renzetti, Steven; Villeneuve, Michel
2005-11-01
In this paper, we show that rate-structure endogeneity may result in a misspecification of the residential water demand function. We propose to solve this endogeneity problem by estimating a probabilistic model describing how water rates are chosen by local communities. This model is estimated on a sample of Canadian local communities. We first show that the pricing structure choice reflects efficiency considerations, equity concerns, and, in some cases, a strategy of price discrimination across consumers by Canadian communities. Hence, estimating residential water demand without taking into account the endogeneity of pricing structures leads to biased estimates of price and income elasticities. We also demonstrate that the pricing structure per se plays a significant role in influencing the price responsiveness of Canadian residential consumers.
Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.
2016-01-01
Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
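The size of the bias quoted above is easy to reproduce: taking detections at face value, the apparent occurrence is ψ + (1 − ψ)p10, which for a rare species can exceed the true ψ by roughly 50%. A numeric check with hypothetical values:

```python
psi = 0.06      # true occurrence probability of a rare species (hypothetical)
p10 = 0.03      # per-survey false-positive rate

# Apparent occurrence if every detection is taken at face value
# (assuming perfect detection at occupied sites for simplicity).
apparent = psi + (1.0 - psi) * p10
print(f"true {psi:.3f}, apparent {apparent:.3f}, "
      f"overestimation {100 * (apparent / psi - 1):.0f}%")
```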
Dassis, M; Rodríguez, D H; Ieno, E N; Denuncio, P E; Loureiro, J; Davis, R W
2014-02-01
Bio-energetic models used to characterize an animal's energy budget require accurate estimates of variables such as the resting metabolic rate (RMR) and the heat increment of feeding (HIF). In this study, we estimated the in-air RMR of wild juvenile South American fur seals (SAFS; Arctocephalus australis) temporarily held in captivity by measuring oxygen consumption while at rest in a postabsorptive condition. HIF, which is an increase in metabolic rate associated with digestion, assimilation and nutrient interconversion, was estimated as the difference in resting metabolic rate between the postabsorptive condition and the first 3.5 h postprandial. As the data were hierarchically structured, linear mixed effect models were used to compare RMR measures under both physiological conditions. Results indicated a significant increase (61%) in the postprandial RMR compared to the postabsorptive condition, estimated at 17.93±1.84 and 11.15±1.91 mL O₂ min⁻¹ kg⁻¹, respectively. These values constitute the first estimation of RMR and HIF in this species, and should be considered in the energy budgets for juvenile SAFS foraging at sea. Copyright © 2013 Elsevier Inc. All rights reserved.
Horton, G.E.; Dubreuil, T.L.; Letcher, B.H.
2007-01-01
Our goal was to understand movement and its interaction with survival for populations of stream salmonids at long-term study sites in the northeastern United States by employing passive integrated transponder (PIT) tags and associated technology. Although our PIT tag antenna arrays spanned the stream channel (at most flows) and were continuously operated, we are aware that aspects of fish behavior, environmental characteristics, and electronic limitations influenced our ability to detect 100% of the emigration from our stream site. Therefore, we required antenna efficiency estimates to adjust observed emigration rates. We obtained such estimates by testing a full-scale physical model of our PIT tag antenna array in a laboratory setting. From the physical model, we developed a statistical model that we used to predict efficiency in the field. The factors most important for predicting efficiency were external radio frequency signal and tag type. For most sampling intervals, there was concordance between the predicted and observed efficiencies, which allowed us to estimate the true emigration rate for our field populations of tagged salmonids. One caveat is that the model's utility may depend on its ability to characterize external radio frequency signals accurately. Another important consideration is the trade-off between the volume of data necessary to model efficiency accurately and the difficulty of storing and manipulating large amounts of data.
Estimation of homogeneous nucleation flux via a kinetic model
NASA Technical Reports Server (NTRS)
Wilcox, C. F.; Bauer, S. H.
1991-01-01
The proposed kinetic model for condensation under homogeneous conditions, and for the onset of unidirectional cluster growth in supersaturated gases, does not suffer from the conceptual flaws that characterize classical nucleation theory. When a full set of simultaneous rate equations is solved, a characteristic time emerges for each cluster size, at which the production rate and the rate of conversion to the next size (n + 1) are equal. Procedures for estimating the essential parameters are proposed, and steady-state condensation fluxes J_kin^ss are evaluated. Since there are practical limits to the cluster size that can be incorporated in the set of simultaneous first-order differential equations, a code was developed for computing an approximate J_th^ss based on estimates of a 'constrained equilibrium' distribution and identification of its minimum.
NASA Astrophysics Data System (ADS)
Lee, Jongyeol; Kim, Moonil; Lakyda, Ivan; Pietsch, Stephan; Shvidenko, Anatoly; Kraxner, Florian; Forsell, Nicklas; Son, Yowhan
2016-04-01
There is demand for national forest carbon (C) inventories to support mitigation of global climate change. Global forestry models estimate the growth of stem volume and C at various spatial and temporal scales, but they do not consider dead organic matter (DOM) C. In this study, we simulated national forest C dynamics in South Korea with a calibrated global forestry model (the G4M model) and the DOM C dynamics module of the Korean forest C model (FBDC model). A total of 3890 simulation units (1–16 km²) were established across South Korea. Stem growth functions for major tree species (Pinus densiflora, P. rigida, Larix kaempferi, Quercus variabilis, Q. mongolica, and Q. acutissima) were estimated from the internal mechanism of the G4M model and Korean yield tables. C dynamics in DOM were determined by the balance between input to and output (decomposition) from DOM pools in the FBDC model. Annual input of DOM was estimated by multiplying the C stock of each biomass compartment by its turnover rate. Decomposition of DOM was estimated from the C stock of DOM, mean air temperature, and a decay rate. The C stock in each pool was initialized by a spin-up process that accounted for the severe deforestation caused by Japanese exploitation and the Korean War. No disturbance was included in the simulation process. Total forest C stock (Tg C) and mean C density (Mg C ha⁻¹) decreased from 657.9 and 112.1 in 1954 to 607.2 and 103.4 in 1973. In particular, C stock in mineral soil decreased at a rate of 0.5 Mg C ha⁻¹ yr⁻¹ during this period due to suppressed regeneration. However, total forest C stock (Tg C) and mean C density (Mg C ha⁻¹) gradually increased from 607.0 and 103.4 in 1974 to 1240.7 and 211.3 in 2015 owing to the national reforestation program begun in 1973; after the reforestation program, Korean forests became C sinks. Model estimates were verified by comparison with national forest inventory data (2006-2010); the high similarity between the model estimates and the inventory data demonstrated the reliability of the down-scaled global forestry model and the integrated DOM C module. Finally, total C stock gradually increases to a projected 1749.8 Tg C in 2050 at a rate of 2.5 Tg C yr⁻¹, which may be attributed to forest maturation; however, future total forest C stock may be overestimated because disturbance is excluded from the simulation. This study was supported by Korea Forest Service (S111315L100120) and Korean Ministry of Environment (2014001310008).
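The DOM bookkeeping described above (annual input from biomass turnover, temperature-sensitive first-order decomposition) reduces to a one-line pool update. The turnover rate, base decay rate, and Q10 temperature sensitivity below are hypothetical placeholders, not FBDC parameter values.

```python
def dom_pool_step(c_dom, c_biomass, turnover=0.03,
                  k_base=0.05, q10=2.0, temp=11.0, temp_ref=10.0):
    """Advance a dead-organic-matter carbon pool by one year.

    input  = biomass C stock * turnover rate
    output = DOM C stock * decay rate, scaled by mean air temperature
    All parameter values are illustrative, not FBDC's.
    """
    decay = k_base * q10 ** ((temp - temp_ref) / 10.0)
    return c_dom + turnover * c_biomass - decay * c_dom

c_dom = 40.0       # Mg C / ha (hypothetical starting stock)
for year in range(50):
    c_dom = dom_pool_step(c_dom, c_biomass=60.0)
print(round(c_dom, 2))   # pool approaches the input/decay equilibrium
```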
Developing a bubble number-density paleoclimatic indicator for glacier ice
Spencer, M.K.; Alley, R.B.; Fitzpatrick, J.J.
2006-01-01
Past accumulation rate can be estimated from the measured number-density of bubbles in an ice core and the reconstructed paleotemperature, using a new technique. Density increase and grain growth in polar firn are both controlled by temperature and accumulation rate, and the integrated effects are recorded in the number-density of bubbles as the firn changes to ice. An empirical model of these processes, optimized to fit published data on recently formed bubbles, reconstructs accumulation rates using recent temperatures with an uncertainty of 41% (P < 0.05). For modern sites considered here, no statistically significant trend exists between mean annual temperature and the ratio of bubble number-density to grain number-density at the time of pore close-off; optimum modeled accumulation-rate estimates require an eventual ~2.02 ± 0.08 (P < 0.05) bubbles per close-off grain. Bubble number-density in the GRIP (Greenland) ice core is qualitatively consistent with independent estimates for a combined temperature decrease and accumulation-rate increase there during the last 5 kyr.
Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles
2016-01-01
Purpose: This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether the estimates of these two models converge. Material and Methods: An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results: After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final data analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalence between the present survey (undertaken in 2014) and the previous one (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion: The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
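For the UQM arm, prevalence follows from a simple mixture: with probability q the respondent answers the doping question, otherwise an unrelated question with known "yes" probability p_u, so the observed "yes" rate is λ = qπ + (1 − q)p_u. A sketch with hypothetical design values and counts:

```python
def uqm_estimate(n_yes, n, q=0.75, p_u=0.5):
    """Unrelated Question Model prevalence estimate and its standard error.

    q   : probability the sensitive (doping) question is answered
    p_u : known 'yes' probability of the unrelated question
    (Both design values here are hypothetical.)
    """
    lam = n_yes / n                        # observed 'yes' proportion
    pi_hat = (lam - (1 - q) * p_u) / q     # solve lam = q*pi + (1-q)*p_u
    se = (lam * (1 - lam) / n) ** 0.5 / q  # delta-method standard error
    return pi_hat, se

pi_hat, se = uqm_estimate(n_yes=170, n=1000)  # hypothetical counts
print(f"prevalence estimate: {pi_hat:.3f} +/- {1.96 * se:.3f}")
```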
Using SAS PROC MCMC for Item Response Theory Models
Samonte, Kelli
2014-01-01
Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian methods in the context of item response theory to serve as a useful guide for practitioners in estimating and interpreting item response theory (IRT) models. Included is a description of the estimation procedure used by SAS PROC MCMC. Syntax is provided for estimation of both dichotomous and polytomous IRT models, as well as a discussion on how to extend the syntax to accommodate more complex IRT models. PMID:29795834
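The article's syntax is SAS-specific, so the sketch below shows only the core quantity any such estimation targets: the two-parameter logistic (2PL) item response function and the Bernoulli log-likelihood of a simulated response matrix, written in Python with hypothetical item parameters.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))

rng = np.random.default_rng(0)
a = np.array([1.2, 0.8, 1.5])        # hypothetical discriminations
b = np.array([-0.5, 0.0, 1.0])       # hypothetical difficulties
theta = rng.normal(size=200)         # simulated examinee abilities

p = p_correct(theta, a, b)
y = rng.binomial(1, p)               # simulated dichotomous responses

# Bernoulli log-likelihood of the response matrix given the parameters;
# Bayesian samplers such as PROC MCMC explore this surface with priors added.
loglik = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
print(round(loglik, 2))
```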
Shanafield, Margaret; Niswonger, Richard G.; Prudic, David E.; Pohll, Greg; Susfalk, Richard; Panday, Sorab
2014-01-01
Infiltration along ephemeral channels plays an important role in groundwater recharge in arid regions. A model is presented for estimating spatial variability of seepage due to streambed heterogeneity along channels based on measurements of streamflow-front velocities in initially dry channels. The diffusion-wave approximation to the Saint-Venant equations, coupled with Philip's equation for infiltration, is connected to the groundwater model MODFLOW and is calibrated by adjusting the saturated hydraulic conductivity of the channel bed. The model is applied to portions of two large water delivery canals, which serve as proxies for natural ephemeral streams. Estimated seepage rates compare well with previously published values. Possible sources of error stem from uncertainty in Manning's roughness coefficients, soil hydraulic properties and channel geometry. Model performance would be most improved through more frequent longitudinal estimates of channel geometry and thalweg elevation, and with measurements of stream stage over time to constrain wave timing and shape. This model is a potentially valuable tool for estimating spatial variability in longitudinal seepage along intermittent and ephemeral channels over a wide range of bed slopes and the influence of seepage rates on groundwater levels.
Subduction in an Eddy-Resolving State Estimate of the Northeast Atlantic Ocean
NASA Technical Reports Server (NTRS)
Gebbie, Geoffrey
2004-01-01
Are eddies an important contributor to subduction in the eastern subtropical gyre? Here, an adjoint model is used to combine a regional, eddy-resolving numerical model with observations to produce a state estimate of the ocean circulation. The estimate is a synthesis of a variety of in-situ observations from the Subduction Experiment, TOPEX/POSEIDON altimetry, and the MIT General Circulation Model. The adjoint method is successful because the Northeast Atlantic Ocean is only weakly nonlinear. The state estimate provides a physically interpretable, eddy-resolving information source to diagnose subduction. Estimates of eddy subduction for the eastern subtropical gyre of the North Atlantic are larger than previously calculated from parameterizations in coarse-resolution models. Furthermore, eddy subduction rates have typical magnitudes of 15% of the total subduction rate. Eddies contribute as much as 1 Sverdrup to water-mass transformation, and hence subduction, in the North Equatorial Current and the Azores Current. The findings of this thesis imply that the inability to resolve or accurately parameterize eddy subduction in climate models would lead to an accumulation of error in the structure of the main thermocline, even in the relatively quiescent eastern subtropical gyre.
Modeling structured population dynamics using data from unmarked individuals
Grant, Evan H. Campbell; Zipkin, Elise; Thorson, James T.; See, Kevin; Lynch, Heather J.; Kanno, Yoichiro; Chandler, Richard; Letcher, Benjamin H.; Royle, J. Andrew
2014-01-01
The study of population dynamics requires unbiased, precise estimates of abundance and vital rates that account for the demographic structure inherent in all wildlife and plant populations. Traditionally, these estimates have only been available through approaches that rely on intensive mark–recapture data. We extended recently developed N-mixture models to demonstrate how demographic parameters and abundance can be estimated for structured populations using only stage-structured count data. Our modeling framework can be used to make reliable inferences on abundance as well as recruitment, immigration, stage-specific survival, and detection rates during sampling. We present a range of simulations to illustrate the data requirements, including the number of years and locations necessary for accurate and precise parameter estimates. We apply our modeling framework to a population of northern dusky salamanders (Desmognathus fuscus) in the mid-Atlantic region (USA) and find that the population is unexpectedly declining. Our approach represents a valuable advance in the estimation of population dynamics using multistate data from unmarked individuals and should additionally be useful in the development of integrated models that combine data from intensive (e.g., mark–recapture) and extensive (e.g., counts) data sources.
Potential barge transportation for inbound corn and grain
DOT National Transportation Integrated Search
1997-12-31
This research develops a model for estimating future barge and rail rates for decision making. The Box-Jenkins and the Regression Analysis with ARIMA errors forecasting methods were used to develop appropriate models for determining future rates. A s...
Factors associated with automobile accidents and survival.
Kim, Hong Sok; Kim, Hyung Jin; Son, Bongsoo
2006-09-01
This paper develops an econometric model of vehicles' inherent mortality rates and estimates the probabilities of accident and survival in the United States. A logistic regression model is used to estimate the probability of survival, and a censored regression model is used to estimate the probability of an accident. The estimation results indicate that the probabilities of accident and survival are influenced by the physical characteristics of the vehicles involved in the accident and by the characteristics of the driver and the occupants. Using a restraint system and riding in a heavy vehicle increased the survival rate. Middle-aged drivers are less likely to be involved in an accident, and, surprisingly, female drivers are more likely to have an accident than male drivers. Riding in powerful vehicles (high horsepower) and driving late at night increase the probability of an accident. Overall, driving behavior and vehicle characteristics matter and affect the probability of a fatal accident for different types of vehicles.
Hossain, Md. Kamrul; Kamil, Anton Abdulbasah; Baten, Md. Azizul; Mustafa, Adli
2012-01-01
The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro), using the most recent data available, covering the period 1989–2008. Results indicate that technical efficiency was higher for Boro among the three types of rice, but the overall technical efficiency of rice production was found to be around 50%. Although positive changes exist in TFP for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same level by both the Translog SFA with half-normal distribution and DEA. Estimated TFP from SFA is forecasted with an ARIMA(2,0,0) model; an ARIMA(1,0,0) model is used to forecast the TFP of Aman from the DEA estimation. PMID:23077500
LANDFILL GAS EMISSIONS MODEL (LANDGEM) VERSION 3.02 USER'S GUIDE
The Landfill Gas Emissions Model (LandGEM) is an automated estimation tool with a Microsoft Excel interface that can be used to estimate emission rates for total landfill gas, methane, carbon dioxide, nonmethane organic compounds, and individual air pollutants from municipal solid waste landfills.
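LandGEM's core is a first-order decay equation summed over yearly waste deposits, commonly stated as Q = Σi Σj k·L0·(Mi/10)·e^(−k·tij). The sketch below implements that form in Python; the default k and L0 are illustrative values in the range of EPA defaults and should be taken as assumptions, not a substitute for the tool itself.

```python
import math

def landgem_methane(annual_waste, year, k=0.05, l0=170.0):
    """First-order decay estimate of methane generation (m3/yr) in `year`.

    annual_waste : list of Mg of waste accepted in years 0, 1, 2, ...
    k, l0        : decay rate (1/yr) and methane potential (m3 CH4/Mg);
                   illustrative defaults, see the LandGEM user's guide.
    Implements Q = sum_i sum_j k * l0 * (M_i / 10) * exp(-k * t_ij),
    with each year's waste split into ten 0.1-yr sub-deposits.
    """
    q = 0.0
    for i, m_i in enumerate(annual_waste):
        for j in range(1, 11):
            t_ij = year - i - 0.1 * j      # age of this sub-deposit
            if t_ij >= 0:
                q += k * l0 * (m_i / 10.0) * math.exp(-k * t_ij)
    return q

waste = [100_000] * 20                      # Mg/yr accepted for 20 years
print(round(landgem_methane(waste, year=21)))
```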
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
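One design-based estimator of the kind examined here treats lines as the sampling units, weighted by line length; the form below is assumed to match the "R2" estimator of Fewster et al. 2009, and the counts and lengths are made up for illustration.

```python
import numpy as np

def encounter_rate_var(counts, lengths):
    """Design-based encounter rate variance (assumed 'R2' form):
    transect lines as sampling units, weighted by line length."""
    n, L = counts.sum(), lengths.sum()
    K = len(counts)
    er = n / L   # overall encounter rate n/L
    return K / (L**2 * (K - 1)) * np.sum(lengths**2 * (counts / lengths - er)**2)

counts = np.array([4, 0, 7, 2, 5, 1])               # detections per line
lengths = np.array([2.0, 1.5, 3.0, 2.5, 2.0, 1.0])  # line lengths (km)
print(encounter_rate_var(counts, lengths))
```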
Dewji, S.; Bellamy, M.; Hertel, N.; ...
2015-03-25
In this study, dose rates that may result from exposure to patients who had been administered iodine-131 (131I) as part of medical therapy were calculated. These effective dose rate estimates were compared with the simplified assumptions of United States Nuclear Regulatory Commission Regulatory Guide 8.39, which considers neither body tissue attenuation nor time-dependent redistribution and excretion of the administered 131I. Methods: Dose rates were estimated for members of the public potentially exposed to external irradiation from patients recently treated with 131I. Tissue attenuation and iodine biokinetics in the patient were considered as part of a larger comprehensive effort to improve external dose rate estimates. The external dose rate estimates are based on Monte Carlo simulations using the Phantom with Movable Arms and Legs (PIMAL), previously developed by Oak Ridge National Laboratory and the United States Nuclear Regulatory Commission. PIMAL was employed to model the relative positions of the 131I patient and members of the public in three exposure scenarios: (1) traveling on a bus, in a total of six seated or standing permutations; (2) two nursing home cases, where a caregiver is seated 30 cm from the patient's bedside and a nursing home resident is seated 250 cm from the patient in an adjacent bed; and (3) two hotel cases, where the patient and a guest are in adjacent rooms with beds on opposite sides of the common wall, with the patient and guest both in bed and either seated back-to-back or lying head to head. The biokinetic model predictions of the retention and distribution of 131I in the patient assumed a single voiding of urinary bladder contents during the trip at 2, 4, or 8 h after 131I administration for the public transportation cases, continuous first-order voiding for the nursing home cases, and regular periodic voiding at 4, 8, or 12 h after administration for the hotel room cases. Organ-specific activities of 131I in the thyroid, bladder, and combined remaining tissues were calculated as a function of time after administration. Exposures to members of the public were considered for 131I patients with normal thyroid uptake (peak thyroid uptake of ~27% of administered 131I), differentiated thyroid cancer (DTC, 5% uptake), and hyperthyroidism (80% uptake). Results: The scenario with the patient seated behind the member of the public yielded the highest dose rate estimate among the seated public transportation cases. The dose rate to the adjacent room guest was highest for the exposure scenario in which the hotel guest and patient are seated, by a factor of ~4 for the normal and differentiated thyroid cancer uptake cases and by a factor of ~3 for the hyperthyroid case. Conclusions: For all modeled cases, the DTC case yielded the lowest external dose rates, whereas the hyperthyroid case yielded the highest. In estimating external dose to members of the public from patients treated with 131I, consideration must be given to the (patient- and case-specific) administered 131I activities and duration of exposure for a more complete estimate. The method implemented here included a detailed calculation model, which provides a means to determine dose rate estimates for a range of scenarios, and was demonstrated for variations of three scenarios, showing how dose rates are expected to vary with uptake, voiding pattern, and patient location.
Spatiotemporal modelling of groundwater extraction in semi-arid central Queensland, Australia
NASA Astrophysics Data System (ADS)
Keir, Greg; Bulovic, Nevenka; McIntyre, Neil
2016-04-01
The semi-arid Surat Basin in central Queensland, Australia, forms part of the Great Artesian Basin, a groundwater resource of national significance. While this area relies heavily on groundwater supply bores to sustain agricultural industries and rural life in general, measurement of groundwater extraction rates is very limited. Consequently, regional groundwater extraction rates are not well known, which may have implications for regional numerical groundwater modelling. However, flows from a small number of bores are metered, and less precise anecdotal estimates of extraction are increasingly available. There is also an increasing number of other spatiotemporal datasets which may help predict extraction rates (e.g. rainfall, temperature, soils, stocking rates etc.). These can be used to construct spatial multivariate regression models to estimate extraction. The data exhibit complicated statistical features, such as zero-valued observations, non-Gaussianity, and non-stationarity, which limit the use of many classical estimation techniques, such as kriging. In addition, water extraction histories may exhibit temporal autocorrelation. To account for these features, we employ a separable space-time model to predict bore extraction rates using the R-INLA package for computationally efficient Bayesian inference. A joint approach is used to model both the probability (using a binomial likelihood) and magnitude (using a gamma likelihood) of extraction. The correlation between extraction rates in space and time is modelled using a Gaussian Markov Random Field (GMRF) with a Matérn spatial covariance function which can evolve over time according to an autoregressive model. To reduce computational burden, we allow the GMRF to be evaluated at a relatively coarse temporal resolution, while still allowing predictions to be made at arbitrarily small time scales. We describe the process of model selection and inference using an information criterion approach, and present some preliminary results from the study area. We conclude by discussing issues related to upscaling of the modelling approach to the entire basin, including merging of extraction rate observations with different precision, temporal resolution, and even potentially different likelihoods.
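In notation assumed here rather than taken from the paper, the joint model for the extraction indicator $z_{it}$ and the positive extraction magnitude $y_{it}$ at bore $i$ and time $t$ has the hurdle-style form

$$ z_{it} \sim \mathrm{Bernoulli}(p_{it}), \qquad y_{it} \mid z_{it} = 1 \sim \mathrm{Gamma}(\phi,\, \phi/\mu_{it}), $$

$$ \mathrm{logit}(p_{it}) = \mathbf{x}_{it}^{\top}\boldsymbol{\beta}_z + u(s_i, t), \qquad \log \mu_{it} = \mathbf{x}_{it}^{\top}\boldsymbol{\beta}_y + v(s_i, t), $$

where $u$ and $v$ are Gaussian Markov random fields with Matérn spatial covariance that evolve in time as a first-order autoregression, e.g. $u(\cdot, t) = \rho\, u(\cdot, t-1) + \varepsilon_t$.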
Reyes, Elisabeth; Nadot, Sophie; von Balthazar, Maria; Schönenberger, Jürg; Sauquet, Hervé
2018-06-21
Ancestral state reconstruction is an important tool to study morphological evolution and often involves estimating transition rates among character states. However, various factors, including taxonomic scale and sampling density, may impact transition rate estimation and indirectly also the probability of the state at a given node. Here, we test the influence of rate heterogeneity using maximum likelihood methods on five binary perianth characters, optimized on a phylogenetic tree of angiosperms including 1230 species sampled from all families. We compare the states reconstructed by an equal-rate (Mk1) and a two-rate model (Mk2) fitted either with a single set of rates for the whole tree or as a partitioned model, allowing for different rates on five partitions of the tree. We find strong signal for rate heterogeneity among the five subdivisions for all five characters, but little overall impact of the choice of model on reconstructed ancestral states, which indicates that most of our inferred ancestral states are the same whether heterogeneity is accounted for or not.
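For a binary character, Mk1 and Mk2 differ only in whether the two transition rates are constrained to be equal. A minimal sketch of the resulting transition probabilities over a branch (rates and branch length are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def mk_transition_matrix(q01, q10, t):
    """P(t) = exp(Qt) for a binary-state Markov model of character evolution.

    Mk1 constrains q01 == q10; Mk2 lets the two rates differ.
    """
    Q = np.array([[-q01,  q01],
                  [ q10, -q10]])
    return expm(Q * t)

# Equal-rate (Mk1) vs two-rate (Mk2) probabilities over one branch of length 1.0:
print(mk_transition_matrix(0.5, 0.5, 1.0))   # Mk1
print(mk_transition_matrix(0.5, 2.0, 1.0))   # Mk2
```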
Soil moisture data as a constraint for groundwater recharge estimation
NASA Astrophysics Data System (ADS)
Mathias, Simon A.; Sorensen, James P. R.; Butler, Adrian P.
2017-09-01
Estimating groundwater recharge rates is important for water resource management studies. Modeling approaches to forecast groundwater recharge typically require observed historic data to assist calibration, but it is generally not possible to observe groundwater recharge rates directly. Therefore, in the past, much effort has been invested in recording soil moisture content (SMC) data, which can be used in a water balance calculation to estimate groundwater recharge. In this context, SMC is measured at different depths and then typically integrated with respect to depth to obtain a single set of aggregated SMC values, used as an estimate of the total water stored within a given soil profile. This article investigates the value of such aggregated SMC data for conditioning groundwater recharge models. A simple modeling approach is adopted, which utilizes an emulation of Richards' equation in conjunction with a soil texture pedotransfer function; the only unknown parameters are those describing soil texture. Monte Carlo simulation is performed for four different SMC monitoring sites. The model is used to estimate both aggregated SMC and groundwater recharge. The impact of conditioning the model on the aggregated SMC data is then explored in terms of its ability to reduce the uncertainty associated with recharge estimation. Whilst uncertainty in soil texture can lead to significant uncertainty in groundwater recharge estimation, it is found that aggregated SMC is virtually insensitive to soil texture.
Schwindt, Adam R.; Winkelman, Dana L.
2016-01-01
Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L−1 and produced stochastic population growth rates (λ S ) below 1 at the lowest concentration, indicating potential for population decline. Declines in λ S compared to controls were evident in treatments that were lethal to adult males despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λ S was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
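A minimal sketch of how a stochastic stage-structured model yields λ_S, using a hypothetical three-stage life cycle with placeholder vital rates (not the mesocosm estimates): λ_S is recovered as the long-run average one-step log growth of random projection-matrix products.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_projection_matrix():
    """Hypothetical egg/juvenile/adult projection matrix with random vital rates."""
    fec = rng.lognormal(mean=np.log(80), sigma=0.3)   # eggs per adult female
    s_egg = rng.beta(4, 36)                           # egg -> juvenile survival
    s_juv = rng.beta(10, 30)                          # juvenile -> adult survival
    s_ad = rng.beta(20, 20)                           # adult survival
    return np.array([[0.0,   0.0,   fec],
                     [s_egg, 0.0,   0.0],
                     [0.0,   s_juv, s_ad]])

def log_stochastic_growth_rate(n_years=20_000):
    """log(lambda_S) as the time-averaged one-step log growth (Tuljapurkar)."""
    n = np.ones(3) / 3
    log_growth = 0.0
    for _ in range(n_years):
        n = random_projection_matrix() @ n
        total = n.sum()
        log_growth += np.log(total)
        n /= total            # renormalize to avoid overflow
    return log_growth / n_years

print("lambda_S ~", np.exp(log_stochastic_growth_rate()))
```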
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout ( Salvelinus confluentus ) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
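The combined approach can be reduced to its core Bayes update: a model-predicted presence probability acts as the prior, and repeated non-detections discount it. A minimal sketch with illustrative numbers (assuming no false positives):

```python
def posterior_presence(prior, p_detect, n_visits, detected):
    """Posterior probability of species presence after n sampling visits.

    prior:    model-predicted probability of presence (e.g., from a habitat model)
    p_detect: per-visit detection probability given presence
    """
    if detected:
        return 1.0  # any detection confirms presence when false positives are excluded
    p_missed = (1.0 - p_detect) ** n_visits
    return prior * p_missed / (prior * p_missed + (1.0 - prior))

# Model predicts 40% presence; after 3 visits with 60% per-visit detection and
# no detections, the posterior drops sharply:
print(posterior_presence(0.4, 0.6, 3, detected=False))  # ~0.041
```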
Sampling through time and phylodynamic inference with coalescent and birth–death models
Volz, Erik M.; Frost, Simon D. W.
2014-01-01
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173
An evaluation of sex-age-kill (SAK) model performance
Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent
2009-01-01
The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale, the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.
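For orientation, a schematic of the sex-age-kill reconstruction as it is commonly described; formulations vary, and both the structure and the inputs below are illustrative rather than the exact model evaluated in the paper.

```python
def sak_abundance(male_harvest, male_harvest_rate, adult_sex_ratio, fawn_doe_ratio):
    """Schematic sex-age-kill population reconstruction.

    male_harvest_rate: fraction of adult males harvested (e.g., from age-at-harvest
                       or survival analyses)
    adult_sex_ratio:   adult females per adult male
    fawn_doe_ratio:    young per adult female (recruitment)
    """
    males = male_harvest / male_harvest_rate
    females = males * adult_sex_ratio
    young = females * fawn_doe_ratio
    return males + females + young

# Hypothetical deer-management-unit inputs:
print(sak_abundance(male_harvest=1200, male_harvest_rate=0.45,
                    adult_sex_ratio=1.8, fawn_doe_ratio=0.9))  # ~11,800 deer
```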
Zardad, Asma; Mohsin, Asma; Zaman, Khalid
2013-12-01
The purpose of this study is to investigate the factors that affect real exchange rate volatility for Pakistan through co-integration and error correction models over a 30-year period (1980 to 2010). The study employed autoregressive conditional heteroskedasticity (ARCH), generalized autoregressive conditional heteroskedasticity (GARCH), and vector error correction (VECM) models to estimate the changes in the volatility of the real exchange rate series, while an error correction model was used to determine the short-run dynamics of the system. The study is limited to a few variables (productivity differential, i.e., real GDP per capita relative to the main trading partner; terms of trade; trade openness; and government expenditures) in order to keep the data robust. The results indicate that the real effective exchange rate (REER) has been volatile around its equilibrium level, while the speed of adjustment is relatively slow. VECM results confirm long-run convergence of the real exchange rate towards its equilibrium level. Results from ARCH and GARCH estimation show that real shock volatility persists, so that shocks die out rather slowly, and lasting misalignment appears to have occurred.
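For reference, the GARCH(1,1) variance recursion that underlies such persistence statements, as a minimal sketch with placeholder parameter values (not the paper's estimates); persistence corresponds to α + β close to one, so shocks die out slowly:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1):
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty(len(returns))
    h[0] = np.var(returns)          # initialize at the sample (unconditional) variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, size=500)   # placeholder exchange-rate returns
h = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
print("volatility persistence alpha + beta =", 0.08 + 0.90)  # near 1: slow decay
```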
Vector Observation-Aided Attitude/Attitude-Rate Estimation Using Global Positioning System Signals
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, F. Landis
1997-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
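An exponentially autocorrelated (first-order Gauss-Markov) acceleration model discretizes to a one-line recursion suitable for a filter's state propagation. A minimal sketch, with time constant and stationary noise level chosen purely for illustration:

```python
import numpy as np

def propagate_gauss_markov(x, dt, tau, sigma, rng):
    """One discrete step of an exponentially autocorrelated process:
    x_{k+1} = exp(-dt/tau) * x_k + w_k, with w_k scaled so the process
    has stationary standard deviation sigma."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi ** 2)   # discrete noise std for stationarity
    return phi * x + q * rng.normal()

rng = np.random.default_rng(42)
alpha = 0.0                  # angular acceleration component, rad/s^2 (illustrative)
for _ in range(10):
    alpha = propagate_gauss_markov(alpha, dt=1.0, tau=30.0, sigma=1e-4, rng=rng)
print(alpha)
```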
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Jon; Cottrell, William; Shiu, Gary
The Coleman formula for vacuum decay and bubble nucleation has been used to estimate the tunneling rate in models of axion monodromy in recent literature. However, several of Coleman's original assumptions do not hold for such models. Here we derive a new estimate with this in mind using a similar Euclidean procedure. We find that there are significant regions of parameter space for which the tunneling rate in axion monodromy is not well approximated by the Coleman formula. However, there is also a regime relevant to large field inflation in which both estimates parametrically agree. As a result, we also briefly comment on the applications of our results to the relaxion scenario.
NASA Astrophysics Data System (ADS)
Tsumune, Daisuke; Aoyama, Michio; Tsubono, Takaki; Tateda, Yutaka; Misumi, Kazuhiro; Hayami, Hiroshi; Toyoda, Yasuhiro; Maeda, Yoshiaki; Yoshida, Yoshikatsu; Uematsu, Mitsuo
2014-05-01
A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant following the earthquake and tsunami of 11 March 2011 resulted in the release of radioactive materials to the ocean by two major pathways: direct release from the accident site and atmospheric deposition. We reconstructed the spatiotemporal variability of 137Cs activity in the ocean by comparing model simulations and observed data. We employed regional-scale and North Pacific-scale oceanic dispersion models, an atmospheric transport model, a sediment transport model, a dynamic biological compartment model for marine biota, and a river runoff model to investigate the oceanic contamination. Direct releases of 137Cs were estimated for more than 2 years after the accident by comparing simulated results with observed activities very close to the site. The estimated total amount of directly released 137Cs was 3.6±0.7 PBq. The direct release rate of 137Cs decreased exponentially with time until the end of December 2012 and was almost constant thereafter. The daily release rate of 137Cs was estimated to be 3.0 × 10¹⁰ Bq day⁻¹ by the end of September 2013. The activity of directly released 137Cs was detectable only in the coastal zone after December 2012. Simulated 137Cs activities attributable to direct release were in good agreement with observed activities, a result that implies the estimated direct release rate was reasonable, whereas simulated 137Cs activities attributable to atmospheric deposition were low compared to measured activities. The rate of atmospheric deposition onto the ocean was underestimated because of a lack of measurements of dose rate and air activity of 137Cs over the ocean when atmospheric deposition rates were being estimated. Observed 137Cs activities attributable to atmospheric deposition in the ocean helped to improve the accuracy of simulated atmospheric deposition rates. Although there are no observed data of 137Cs activity in the ocean from 11 to 21 March 2011, observed data for marine biota should reflect the history of 137Cs activity in this early period. Comparisons between 137Cs activities of marine biota simulated by the dynamic biological compartment model and observed data also suggest that simulated 137Cs activity attributable to atmospheric deposition was underestimated in this early period. In addition, river runoff model simulations suggest that the river flux of 137Cs to the ocean contributed appreciably to the 137Cs activity in the ocean in this early period. The sediment transport model simulations suggest that the inventory of 137Cs in sediment was less than 10
Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System
NASA Technical Reports Server (NTRS)
Karlgaard, Chris; Schoenenberger, Mark
2017-01-01
This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
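The central recasting step can be illustrated by inverting the axial drag relation for density; the flight algorithm is a full Kalman-Schmidt filter, not this one-liner, and the vehicle numbers below are rough, MSL-like placeholders:

```python
def density_from_drag(a_axial, v_rel, mass, c_a, area):
    """Freestream density from sensed axial deceleration.

    From a = 0.5 * rho * v^2 * C_A * A / m, solve for rho:
    rho = 2 m a / (C_A * A * v^2).
    """
    return 2.0 * mass * a_axial / (c_a * area * v_rel ** 2)

# Hypothetical entry-capsule values, roughly MSL-like in scale:
rho = density_from_drag(a_axial=50.0,    # m/s^2 sensed deceleration
                        v_rel=4000.0,    # m/s planet-relative speed
                        mass=2400.0,     # kg
                        c_a=1.4,         # axial force coefficient
                        area=15.9)       # m^2 aerodynamic reference area
print(f"rho ~ {rho:.2e} kg/m^3")         # ~6.7e-4 kg/m^3 with these inputs
```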
A top-down approach to projecting market impacts of climate change
NASA Astrophysics Data System (ADS)
Lemoine, Derek; Kapnick, Sarah
2016-01-01
To evaluate policies to reduce greenhouse-gas emissions, economic models require estimates of how future climate change will affect well-being. So far, nearly all estimates of the economic impacts of future warming have been developed by combining estimates of impacts in individual sectors of the economy. Recent work has used variation in warming over time and space to produce top-down estimates of how past climate and weather shocks have affected economic output. Here we propose a statistical framework for converting these top-down estimates of past economic costs of regional warming into projections of the economic cost of future global warming. Combining the latest physical climate models, socioeconomic projections, and economic estimates of past impacts, we find that future warming could raise the expected rate of economic growth in richer countries, reduce the expected rate of economic growth in poorer countries, and increase the variability of growth by increasing the climate's variability. This study suggests we should rethink the focus on global impacts and the use of deterministic frameworks for modelling impacts and policy.
Yong, Kamuela E; Mubayi, Anuj; Kribs, Christopher M
2015-11-01
The parasite Trypanosoma cruzi, spread by triatomine vectors, affects over 100 mammalian species throughout the Americas, including humans, in whom it causes Chagas' disease. In the U.S., only a few autochthonous cases have been documented in humans, but prevalence is high in sylvatic hosts (primarily raccoons in the southeast and woodrats in Texas). The sylvatic transmission of T. cruzi is spread by the vector species Triatoma sanguisuga and Triatoma gerstaeckeri biting their preferred hosts and thus creating multiple interacting vector-host cycles. The goal of this study is to quantify the rate of contacts between different host and vector species native to Texas using an agent-based model framework. The contact rates, which represent bites, are required to estimate transmission coefficients, which can be applied to models of infection dynamics. In addition to quantitative estimates, results confirm host irritability (in conjunction with host density) and vector starvation thresholds and dispersal as determining factors for vector density as well as host-vector contact rates. Copyright © 2015 Elsevier B.V. All rights reserved.
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
Gao, Connie W.; Allen, Joshua W.; Green, William H.; ...
2016-02-24
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
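A minimal sketch of the rate-based enlargement rule at the heart of such an algorithm (simplified from RMG's actual implementation; names and numbers are illustrative): a candidate "edge" species enters the core model when its net formation flux exceeds a tolerance times a characteristic core flux.

```python
def species_to_promote(edge_fluxes, char_flux, tol=0.1):
    """Rate-based model enlargement, schematically.

    edge_fluxes: {species_name: net formation rate} for candidate (edge) species
    char_flux:   characteristic flux of the current core model
    tol:         dimensionless tolerance; smaller values give larger, more
                 accurate models
    """
    threshold = tol * char_flux
    return [name for name, flux in edge_fluxes.items() if abs(flux) > threshold]

edge = {"CH3OO": 4.2e-6, "C2H5": 9.0e-9, "HO2": 3.1e-5}   # mol/(m^3 s), illustrative
print(species_to_promote(edge, char_flux=1.0e-4))          # -> ['HO2']
```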
NASA Astrophysics Data System (ADS)
Rooper, Christopher N.; Wilkins, Mark E.; Rose, Craig S.; Coon, Catherine
2011-11-01
The abundance of some marine fish species is correlated with the abundance of habitat-forming benthic organisms such as sponges and corals. A concern for fisheries management agencies is the recovery of these benthic invertebrates from removal or mortality caused by bottom trawling and other commercial fishing activities. Using a logistic model, observations of available substrate, and data from bottom trawl surveys of the Aleutian Islands, Alaska, we estimated recovery rates of sponges and corals following removal. The model predicted the observed sponge and coral catch in bottom trawl surveys relatively accurately (R² = 0.38 and 0.46, respectively). Intrinsic growth rates were slow for both sponges (r = 0.107 yr⁻¹) and corals (r = 0.062 yr⁻¹). The best models for corals and sponges were those that did not include the impacts of commercial fishing removals. Subsequent recovery times for both taxa were also predicted to be slow: mortality of 67% of the initial sponge biomass would recover to 80% of the original biomass after 20 years, while mortality of 67% of the coral biomass would recover to 80% of the original biomass after 34 years. The modeled recovery times were consistent with previous studies in estimating that recovery occurs on the order of decades; however, improved data from directed studies would no doubt improve parameter estimates and reduce the uncertainty in the model results. Given their role as a major ecosystem component and potential habitat for marine fish, damage to and removal of sponges and corals must be considered when estimating the impacts of commercial bottom trawling on the seafloor.
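The recovery times quoted above follow directly from the logistic model. A quick check using the paper's intrinsic growth rates (the closed form assumes the carrying capacity equals the original biomass):

```python
import numpy as np

def logistic_recovery_time(r, b0_frac, target_frac):
    """Years for logistic growth from b0_frac*K to target_frac*K:
    t = (1/r) * ln[(target / (1 - target)) * ((1 - b0) / b0)]."""
    return np.log((target_frac / (1 - target_frac)) *
                  ((1 - b0_frac) / b0_frac)) / r

# 67% mortality leaves 33% of biomass; recovery to 80% of original:
print(logistic_recovery_time(0.107, 0.33, 0.80))  # sponges: ~20 years
print(logistic_recovery_time(0.062, 0.33, 0.80))  # corals:  ~34 years
```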
NASA Astrophysics Data System (ADS)
Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong
2018-02-01
Accurate estimation of longitudinal force, lateral vehicle speed, and yaw rate is of great significance to torque allocation and stability control for four-wheel independently driven electric vehicles (4WID-EVs). A fusion method is proposed to estimate the longitudinal force, lateral vehicle speed, and yaw rate for 4WID-EVs. An electric driving wheel model (EDWM) is introduced for longitudinal force estimation: a longitudinal force observer (LFO) is first designed based on an adaptive high-order sliding mode observer (HSMO), and the convergence of the LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which a strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via Carsim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method, and the performance of the LFO in practice is verified by experiments on a chassis dynamometer bench.
Comparison of three nonlinear models to describe long-term tag shedding by lake trout
Fabrizio, Mary C.; Swanson, Bruce L.; Schram, Stephen T.; Hoff, Michael H.
1996-01-01
We estimated long-term tag-shedding rates for lake trout Salvelinus namaycush using two existing models and a model we developed to account for the observed permanence of some tags. Because tag design changed over the course of the study, we examined tag-shedding rates for three types of numbered anchor tags (Floy tags FD-67, FD-67C, and FD-68BC) and an unprinted anchor tag (FD-67F). Lake trout from the Gull Island Shoal region, Lake Superior, were double-tagged, and subsequent recaptures were monitored in annual surveys conducted from 1974 to 1992. We modeled tag-shedding rates, using time at liberty and probabilities of tag shedding estimated from fish released in 1974 and 1978–1983 and later recaptured. Long-term shedding of numbered anchor tags in lake trout was best described by a nonlinear model with two parameters: an instantaneous tag-shedding rate and a constant representing the proportion of tags that were never shed. Although our estimates of annual shedding rates varied with tag type (0.300 for FD-67, 0.441 for FD-67C, and 0.656 for FD-68BC), differences were not significant. About 36% of tags remained permanently affixed to the fish. Of the numbered tags that were shed (about 64%), two mechanisms contributed to tag loss: disintegration and dislodgment. Tags from about 11% of recaptured fish had disintegrated, but most tags were dislodged. Unprinted tags were shed at a significant but low rate immediately after release, but the long-term, annual shedding rate of these tags was only 0.013. Compared with unprinted tags, numbered tags dislodged at higher annual rates; we hypothesized that this was due to the greater frictional drag associated with the larger cross-sectional area of numbered tags.
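The best-supported model above corresponds to a retention curve with a permanent component. In notation assumed here (not necessarily the authors'), the probability that a tag is still attached at time $t$ is

$$ R(t) = \rho + (1 - \rho)\, e^{-\Lambda t}, $$

where $\Lambda$ is the instantaneous tag-shedding rate and $\rho \approx 0.36$ is the proportion of tags never shed, matching the two-parameter nonlinear model described above.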
A flexible cure rate model with dependent censoring and a known cure threshold.
Bernhardt, Paul W
2016-11-10
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model
NASA Astrophysics Data System (ADS)
Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd
2017-09-01
Improvement in life expectancies has driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation have been attracting significant interest from researchers in recent years because of their potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality; this paper therefore focuses on these two models. The main objective of this paper is to investigate how accurately the two models perform using Malaysian data. Since these models involve nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 8.0 (MATLAB 8.0) software is used to estimate the parameters of the models. An Autoregressive Integrated Moving Average (ARIMA) procedure is applied to forecast the parameters of both models, and the forecasted mortality rates are obtained from the forecasted parameter values. To investigate the accuracy of the estimation, the forecasted results are compared against actual mortality data. The results indicate that both models perform better for the male population. However, for the elderly female population, the Heligman-Pollard model tends to underestimate mortality rates while the Lee-Carter model tends to overestimate them.
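For reference, the two models in their standard forms, in notation as commonly written (the paper's parameterization may differ slightly): the Lee-Carter model for the central death rate $m_{x,t}$ at age $x$ in year $t$, and the eight-parameter Heligman-Pollard law for the odds of dying $q_x/p_x$ at age $x$,

$$ \ln m_{x,t} = a_x + b_x k_t + \varepsilon_{x,t}, $$

$$ \frac{q_x}{p_x} = A^{(x+B)^{C}} + D\, e^{-E\,(\ln x - \ln F)^{2}} + G H^{x}. $$

Forecasting then reduces to fitting an ARIMA process to the time-varying quantities (the mortality index $k_t$ for Lee-Carter, the parameter series for Heligman-Pollard), as described above.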
Shin, Hyeong-Moo; McKone, Thomas E.; Nishioka, Marcia G.; Fallin, M. Daniele; Croen, Lisa A.; Hertz-Picciotto, Irva; Newschaffer, Craig J.; Bennett, Deborah H.
2014-01-01
Consumer products and building materials emit a number of semivolatile organic compounds (SVOCs) in the indoor environment. Because indoor SVOCs accumulate in dust, we explore the use of dust to determine source strength and report here on the analysis of dust samples collected in 30 U.S. homes for six phthalates, four personal care product ingredients, and five flame retardants. We then use a fugacity-based indoor mass-balance model to estimate the whole-house emission rates of SVOCs that would account for the measured dust concentrations. Di-2-ethylhexyl phthalate (DEHP) and di-iso-nonyl phthalate (DiNP) were the most abundant compounds in these dust samples. On the other hand, the estimated emission rate of diethyl phthalate (DEP) is the largest among the phthalates, although its dust concentration is over two orders of magnitude smaller than those of DEHP and DiNP. The magnitude of the estimated emission rate corresponding to a measured dust concentration is found to be inversely correlated with the vapor pressure of the compound, indicating that dust concentrations alone cannot be used to determine which compounds have the greatest emission rates. The combined dust-assay modeling approach shows promise for estimating indoor emission rates of SVOCs. PMID:24118221
The reliability of physiologically based pharmacokinetic (PBPK) models is directly related to the accuracy of the metabolic rate parameters used as model inputs. When metabolic rate parameters derived from in vivo experiments are unavailable, they can be estimated from in vitro d...
Air-water Gas Exchange Rates on a Large Impounded River Measured Using Floating Domes (Poster)
Mass balance models of dissolved gases in rivers typically serve as the basis for whole-system estimates of greenhouse gas emission rates. An important component of these models is the exchange of dissolved gases between air and water. Controls on gas exchange rates (K) have be...
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
Creating a stage-based deterministic PVA model - the western prairie fringed orchid [Exercise 12]
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
Contemporary efforts to conserve populations and species often employ population viability analysis (PVA), a specific application of population modeling that estimates the effects of environmental and demographic processes on population growth rates. These models can also be used to estimate probabilities that a population will fall below a certain level. This...
Sam Rossman; Charles B. Yackulic; Sarah P. Saunders; Janice Reid; Ray Davis; Elise F. Zipkin
2016-01-01
Occupancy modeling is a widely used analytical technique for assessing species distributions and range dynamics. However, occupancy analyses frequently ignore variation in abundance of occupied sites, even though site abundances affect many of the parameters being estimated (e.g., extinction, colonization, detection probability). We introduce a new model (“dynamic
A smoothed residual based goodness-of-fit statistic for nest-survival models
Rodney X. Sturdivant; Jay J. Rotella; Robin E. Russell
2008-01-01
Estimating nest success and identifying important factors related to nest-survival rates is an essential goal for many wildlife researchers interested in understanding avian population dynamics. Advances in statistical methods have led to a number of estimation methods and approaches to modeling this problem. Recently developed models allow researchers to include a...
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets.
Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong
2017-01-01
In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising; it is used to achieve accurate advertisement delivery for the best benefits in the three-sided game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. To solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset containing information on over 40 million mobile advertisements over a period of 10 days, our experiments show that the new model improves estimation accuracy over the classic Logistic Regression (LR) model by 5.57% and over the Support Vector Regression (SVR) model by 5.80%.
Kriticos, Darren J.; Leriche, Agathe; Palmer, David J.; Cook, David C.; Brockerhoff, Eckehard G.; Stephens, Andréa E. A.; Watt, Michael S.
2013-01-01
Biosecurity agencies need robust bioeconomic tools to help inform policy and allocate scarce management resources. They need to estimate the potential for each invasive alien species (IAS) to create negative impacts, so that relative and absolute comparisons can be made. Using pine processionary moth (Thaumetopoea pityocampa sensu lato) as an example, these needs were met by combining species niche modelling, dispersal modelling, host impact and economic modelling. Within its native range (the Mediterranean Basin and adjacent areas), T. pityocampa causes significant defoliation of pines and serious urticating injuries to humans. Such severe impacts overseas have fuelled concerns about its potential impacts, should it be introduced to New Zealand. A stochastic bioeconomic model was used to estimate the impact of a pine processionary moth (PPM) invasion in terms of the pine production value lost in a hypothetical invasion of New Zealand by T. pityocampa. The bioeconomic model combines a semi-mechanistic niche model to develop a climate-related damage function, a climate-related forest growth model, and a stochastic spread model to estimate the present value (PV) of an invasion. Simulated invasions indicate that Thaumetopoea pityocampa could reduce New Zealand's merchantable and total pine stem volume production by 30%, reducing forest production by between NZ$1,550 M and NZ$2,560 M if left untreated. Where T. pityocampa was controlled using aerial application of an insecticide, projected losses in PV were reduced but still significant (NZ$30 M to NZ$2,210 M). The PV estimates were more sensitive to the efficacy of the spray program than to the potential rate of spread of the moth. Our novel bioeconomic method provides a refined means of estimating the potential impacts of invasive alien species, taking into account climatic effects on asset values, the potential for pest impacts, and pest spread rates. PMID:23405097
Truong, Khoa D; Reifsnider, Odette S; Mayorga, Maria E; Spitler, Hugh
2013-05-01
The objective of this study was to estimate the aggregate burden of maternal binge drinking on preterm birth (PTB) and low birth weight (LBW) across American sociodemographic groups in 2008. A simulation model was developed to estimate the number of PTB and LBW cases due to maternal binge drinking. Data inputs for the model included the number of births and rates of preterm and LBW deliveries from the National Center for Health Statistics; the female population by childbearing age group from the U.S. Census; increased relative risks of preterm and LBW deliveries due to maternal binge drinking extracted from the literature; and the adjusted prevalence of binge drinking among pregnant women estimated in a multivariate logistic regression model using the Behavioral Risk Factor Surveillance System survey. The most conservative estimates attributed 8,701 (95% CI: 7,804-9,598) PTBs (1.75% of all PTBs) and 5,627 (95% CI: 5,121-6,133) LBW deliveries in 2008 to maternal binge drinking, with 3,708 (95% CI: 3,375-4,041) cases of both PTB and LBW. The estimated rate of PTB due to maternal binge drinking was 1.57% among all PTBs to White women, 0.69% among Black women, 3.31% among Hispanic women, and 2.35% among other races. Compared to other age groups, women ages 40-44 had the highest adjusted binge drinking rate and the highest PTB rate due to maternal binge drinking (4.33%). Maternal binge drinking contributed significantly to PTB and LBW, differentially across sociodemographic groups.
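The attribution step in such simulation models reduces, stratum by stratum, to a population attributable fraction. A minimal sketch with illustrative inputs (not the study's data):

```python
def attributable_cases(n_cases, exposure_prev, relative_risk):
    """Cases attributable to an exposure via the population attributable fraction:
    PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    excess = exposure_prev * (relative_risk - 1)
    return n_cases * excess / (excess + 1)

# Hypothetical stratum: 100,000 PTBs, 2% binge-drinking prevalence, RR = 1.9:
print(attributable_cases(100_000, 0.02, 1.9))  # ~1,768 attributable PTBs
```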
Some analytical models to estimate maternal age at birth using age-specific fertility rates.
Pandey, A; Suchindran, C M
1995-01-01
"A class of analytical models to study the distribution of maternal age at different births from the data on age-specific fertility rates has been presented. Deriving the distributions and means of maternal age at birth of any specific order, final parity and at next-to-last birth, we have extended the approach to estimate parity progression ratios and the ultimate parity distribution of women in the population.... We illustrate computations of various components of the model expressions with the current fertility experiences of the United States for 1970." excerpt
Minois, Nathan; Savy, Stéphanie; Lauwers-Cances, Valérie; Andrieu, Sandrine; Savy, Nicolas
2017-03-01
Recruiting patients is a crucial step of a clinical trial, and estimation of the trial duration is a question of paramount interest. Most techniques are based on deterministic models and various ad hoc methods that neglect the variability in the recruitment process. To overcome this difficulty, the so-called Poisson-gamma model has been introduced, in which each centre's recruitment is modelled by a Poisson process whose rate is assumed constant in time and gamma-distributed. The relevance of this model has been widely investigated. In practice, however, rates are rarely constant in time; there are breaks in recruitment (for instance, weekends or holidays). Such information can be collected and included in a model with piecewise-constant rate functions, yielding an inhomogeneous Cox model, but estimation of the trial duration then becomes much more difficult. Three strategies for computing the expected trial duration are proposed: considering all the breaks, considering only large breaks, and ignoring breaks. The biases of these estimation procedures are assessed by means of simulation studies considering three scenarios of break simulation. All strategies yield estimates with very small bias. Moreover, the strategy with the best predictive performance and the smallest bias is the one that does not take breaks into account. This result is important because, in practice, collecting break data is hard to manage.
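A minimal simulation sketch of the constant-rate Poisson-gamma model (centre count, target, and hyperparameters are illustrative): each centre's rate is a gamma draw, the pooled process is Poisson, and the time to the target-th arrival is gamma-distributed.

```python
import numpy as np

rng = np.random.default_rng(7)

def expected_trial_duration(n_centres, target, alpha, beta, n_sims=10_000):
    """Mean time to recruit `target` patients when each centre recruits as a
    Poisson process with rate lambda_i ~ Gamma(alpha, rate=beta), constant in time.

    The pooled process is Poisson with rate L = sum(lambda_i), so the time to
    the target-th arrival is Gamma(target, rate=L)."""
    durations = np.empty(n_sims)
    for s in range(n_sims):
        lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=n_centres)
        durations[s] = rng.gamma(shape=target, scale=1.0 / lam.sum())
    return durations.mean()

# 40 centres, 200 patients, each centre averaging 0.5 patients/month:
print(expected_trial_duration(n_centres=40, target=200, alpha=2.0, beta=4.0),
      "months")   # roughly 10 months with these inputs
```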
Estimating time-varying conditional correlations between stock and foreign exchange markets
NASA Astrophysics Data System (ADS)
Tastan, Hüseyin
2006-02-01
This study explores the dynamic interaction between stock market returns and changes in nominal exchange rates. Many financial variables are known to exhibit fat tails and an autoregressive variance structure. It is well known that unconditional covariance and correlation coefficients also vary significantly over time, and a multivariate generalized autoregressive conditional heteroskedasticity (MGARCH) model is able to capture the time-varying variance-covariance matrix for stock market returns and changes in exchange rates. The model is applied to daily Euro-Dollar exchange rates and two stock market indexes from the US economy: the Dow Jones Industrial Average Index and the S&P500 Index. News impact surfaces are also drawn from the model estimates to show the effects of idiosyncratic shocks in the respective markets.
GPS constraints on M 7-8 earthquake recurrence times for the New Madrid seismic zone
Stuart, W.D.
2001-01-01
Newman et al. (1999) estimate the time interval between the 1811-1812 earthquake sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates. To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault models, the half-plane model gives nearly the maximum rate of ground motion for the same interseismic slip rate. Alternative models with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip rate and thus can have earthquake recurrence times much less than 2,500 years.
Short-Term Uplift Rates and the Mountain Building Process in Southern Alaska
NASA Technical Reports Server (NTRS)
Sauber, Jeanne; Herring, Thomas A.; Meigs, Andrew
1998-01-01
We have used GPS at 10 stations in southern Alaska with three epochs of measurements to estimate short-term uplift rates. A number of great earthquakes as well as recent large earthquakes characterize the seismicity of the region this century. To reliably estimate uplift rates from GPS data, numerical models that included both the slip distribution in recent large earthquakes and the general slab geometry were constructed.
NASA Astrophysics Data System (ADS)
Wang, S.
2014-12-01
Atmospheric ammonia (NH3) plays an important role in fine particle formation, and accurate estimates of ammonia can reduce uncertainties in air quality modeling. China is one of the largest ammonia-emitting countries, with the majority of NH3 emissions coming from agricultural practices such as fertilizer application and animal operations. Current ammonia emission estimates in China are mainly based on pre-defined emission factors, so there are considerable uncertainties, especially in the time and space distribution; for example, fertilizer applications vary in date and amount by geographical region and crop type. In this study, the NH3 emission from agricultural fertilizer use in China in 2011 was estimated online by an agricultural fertilizer modeling system coupling a regional air-quality model and an agro-ecosystem model, with three main components: 1) the Environmental Policy Integrated Climate (EPIC) model, 2) the mesoscale Weather Research and Forecasting (WRF) meteorology model, and 3) the CMAQ air quality model with bi-directional ammonia fluxes. The EPIC output on daily fertilizer application and soil characteristics serves as input to the CMAQ model. To run the EPIC model, a large amount of local Chinese information was collected and processed; for example, cropland data were computed from MODIS land use data at 500-m resolution and crop categories at the Chinese county level, and fertilizer use rates for different fertilizer types, crops, and provinces were obtained from Chinese statistical materials. The system takes into consideration many factors influencing agricultural ammonia emission, including weather and the fertilizer application method, timing, amount, and rate for specific pastures and crops. The simulated fertilizer data are compared with NH3 emissions and fertilizer application data from other sources, and the results of the CMAQ modeling are discussed and analyzed against field measurements. The estimated agricultural fertilizer NH3 emission in this study is about 3 Tg in 2011. The regions with the highest emission rates are located in the North China Plain. Monthly, peak ammonia emissions occur from April to July.
Are Plant Species Able to Keep Pace with the Rapidly Changing Climate?
Cunze, Sarah; Heydel, Felix; Tackenberg, Oliver
2013-01-01
Future climate change is predicted to advance faster than the postglacial warming. Migration may therefore become a key driver for future development of biodiversity and ecosystem functioning. For 140 European plant species we computed past range shifts since the last glacial maximum and future range shifts for a variety of Intergovernmental Panel on Climate Change (IPCC) scenarios and global circulation models (GCMs). Range shift rates were estimated by means of species distribution modelling (SDM). With process-based seed dispersal models we estimated species-specific migration rates for 27 dispersal modes addressing dispersal by wind (anemochory) for different wind conditions, as well as dispersal by mammals (dispersal on animal's coat – epizoochory and dispersal by animals after feeding and digestion – endozoochory) considering different animal species. Our process-based modelled migration rates generally exceeded the postglacial range shift rates indicating that the process-based models we used are capable of predicting migration rates that are in accordance with realized past migration. For most of the considered species, the modelled migration rates were considerably lower than the expected future climate change induced range shift rates. This implies that most plant species will not entirely be able to follow future climate-change-induced range shifts due to dispersal limitation. Animals with large day- and home-ranges are highly important for achieving high migration rates for many plant species, whereas anemochory is relevant for only few species. PMID:23894290
Estimating prefledging survival: Allowing for brood mixing and dependence among brood mates
Flint, Paul L.; Pollock, Kenneth H.; Thomas, Dana; Sedinger, James S.
1995-01-01
Estimates of juvenile survival from hatch to fledging provide important information on waterfowl productivity. We develop a model for estimating survival of young waterfowl from hatch to fledging. Our model enables interchange of individuals among broods and relaxes the assumption that individuals within broods have independent survival probabilities. The model requires repeated observations of individually identifiable adults and their offspring that are not individually identifiable. A modified Kaplan-Meier procedure (Pollock et al. 1989a,b) and a modified Mayfield procedure (Mayfield 1961, 1975; Johnson 1979) can be used under this general modeling framework, and survival rates and corresponding variances of the point estimators can be determined.
NASA Technical Reports Server (NTRS)
Chappell, Lori J.; Cucinotta, Francis A.
2011-01-01
Radiation risks are estimated in a competing-risk formalism in which age- or time-after-exposure estimates of increased risks for cancer and circulatory diseases are folded with the probability of surviving to a given age. The survival function, also called the life-table, changes with calendar year, gender, smoking status, and other demographic variables. An outstanding problem in risk estimation is the method of risk transfer between an exposed population and a second population in which risks are to be estimated. Approaches to transferring risks are based on: 1) multiplicative risk transfer models, in which excess risks are proportional to background disease rates; and 2) additive risk transfer models, in which excess risks are independent of background rates. In addition, a mixture model is often considered, in which the multiplicative and additive transfer assumptions are given weighted contributions. We studied the influence of the survival probability on the risk of exposure-induced cancer and circulatory disease morbidity and mortality in the multiplicative transfer model and the mixture model. Risks for never-smokers (NS) compared to the average U.S. population are estimated to be reduced by between 30% and 60%, depending on model assumptions. Lung cancer is the major contributor to the reduction for NS, with additional contributions from circulatory diseases and cancers of the stomach, liver, bladder, oral cavity, esophagus, colon, a portion of the solid cancer remainder, and leukemia. Greater improvements in risk estimates for NS are possible and would depend on improved understanding of risk transfer models and on elucidating the role of space radiation in the various stages of disease formation (e.g., initiation, promotion, and progression).
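In the notation assumed here (with $\lambda_0(a)$ the background disease rate at age $a$ in the target population, ERR the excess relative risk, and EAR the excess absolute rate estimated in the exposed population), the three transfer assumptions read

$$ \lambda(a) = \lambda_0(a)\,[1 + \mathrm{ERR}(a)] \quad \text{(multiplicative)}, $$

$$ \lambda(a) = \lambda_0(a) + \mathrm{EAR}(a) \quad \text{(additive)}, $$

$$ \lambda(a) = \nu\,\lambda_0(a)\,[1 + \mathrm{ERR}(a)] + (1-\nu)\,[\lambda_0(a) + \mathrm{EAR}(a)] \quad \text{(mixture with weight } \nu\text{)}. $$

The reductions quoted for never-smokers arise because the multiplicative and mixture transfers scale with the lower background rates of NS, most strongly for lung cancer.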
Bermingham, Jacqueline F; Chen, Yuen Y; McIntosh, Robert L; Wood, Andrew W
2014-04-01
Fluorescent intensity of the dye Rhodamine-B (Rho-B) decreases with increasing temperature. We show that in fresh rat brain tissue samples in a custom-made radiofrequency (RF) tissue exposure device, temperature rise due to RF radiation as measured by absorbed dye correlates well with temperature measured nearby by fiber optic probes. Estimates of rate of initial temperature rise (using both probe measurement and the dye method) accord well with estimates of local specific energy absorption rate (SAR). We also modeled the temperature characteristics of the exposure device using combined electromagnetic and finite-difference thermal modeling. Although there are some differences in the rate of cooling following cessation of RF exposure, there is reasonable agreement between modeling and both probe measurement and dye estimation of temperature. The dye method also permits measurement of regional temperature rise (due to RF). There is no clear evidence of local differential RF absorption, but further refinement of the method may be needed to fully clarify this issue. © 2014 Wiley Periodicals, Inc.
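The link between the initial rate of temperature rise and local SAR used in such dosimetry is the standard calorimetric relation

$$ \mathrm{SAR} = c \left.\frac{dT}{dt}\right|_{t=0}, $$

where $c$ is the specific heat capacity of the tissue; the relation holds only for the initial rise, before conduction and other heat losses become significant.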
The economic impact of state ordered avoided cost rates for photovoltaic generated electricity
NASA Astrophysics Data System (ADS)
Bottaro, D.; Wheatley, N. J.
Various methods that states have devised to implement federal policy under the Public Utility Regulatory Policies Act (PURPA) of 1978, which requires that utilities pay their full 'avoided costs' to small power producers for the energy and capacity provided, are examined. The actions of several states are compared with rates estimated using utility expansion and rate-setting models, and the potential break-even capital costs of a photovoltaic system are estimated using models that calculate photovoltaic worth. The PURPA regulations have increased the potential for photovoltaic development more through the guarantee of utility purchase of photovoltaic power than through the high buy-back rates paid. The buy-back rate is high partly because of the surprisingly high effective capacity of photovoltaic systems in some locations.
NASA Astrophysics Data System (ADS)
Ivy, D. J.; Rigby, M. L.; Prinn, R. G.; Muhle, J.; Weiss, R. F.
2009-12-01
We present optimized annual global emissions from 1973-2008 of nitrogen trifluoride (NF3), a powerful greenhouse gas which is not currently regulated by the Kyoto Protocol. In the past few decades, NF3 production has dramatically increased due to its usage in the semiconductor industry. Emissions were estimated through the 'pulse-method' discrete Kalman filter using both a simple, flexible 2-D 12-box model used in the Advanced Global Atmospheric Gases Experiment (AGAGE) network and the Model for Ozone and Related Tracers (MOZART v4.5), a full 3-D atmospheric chemistry model. No official audited reports of industrial NF3 emissions are available, and with limited information on production, a priori emissions were estimated using both a bottom-up and top-down approach with two different spatial patterns based on semiconductor perfluorocarbon (PFC) emissions from the Emission Database for Global Atmospheric Research (EDGAR v3.2) and Semiconductor Industry Association sales information. Both spatial patterns used in the models gave consistent results, showing the robustness of the estimated global emissions. Differences between estimates using the 2-D and 3-D models can be attributed to transport rates and resolution differences. Additionally, new NF3 industry production and market information is presented. Emission estimates from both the 2-D and 3-D models suggest that either the assumed industry release rate of NF3 or industry production information is still underestimated.
Empirical evaluation of the market price of risk using the CIR model
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Torosantucci, L.; Uboldi, A.
2007-03-01
We describe a simple but effective method for the estimation of the market price of risk. The basic idea is to compare the results obtained by following two different approaches in the application of the Cox-Ingersoll-Ross (CIR) model. In the first case, we apply the non-linear least squares method to cross-sectional data (i.e., all rates of a single day). In the second case, we consider the short rate obtained by means of the first procedure as a proxy of the real market short rate. Starting from this new proxy, we evaluate the parameters of the CIR model by means of martingale estimation techniques. The estimate of the market price of risk is obtained by comparing the results of these two techniques, since this approach makes it possible to isolate the market price of risk and to evaluate, under the Local Expectations Hypothesis, the risk premium given by the market for different maturities. As a test case, we apply the method to data from the European Fixed Income Market.
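For concreteness, the cross-sectional step can be sketched as a non-linear least squares fit of the CIR zero-coupon yield formula to one day's rates. The snippet below is a minimal illustration, not the authors' implementation; the maturities, observed rates, and starting values are invented placeholders.

```python
# Sketch of the cross-sectional step: fit CIR parameters to one day's
# yield curve by non-linear least squares. Rates and maturities below
# are illustrative, not from the paper.
import numpy as np
from scipy.optimize import least_squares

def cir_yield(tau, kappa, theta, sigma, r0):
    """Zero-coupon yield implied by the CIR model at maturity tau."""
    gamma = np.sqrt(kappa**2 + 2.0 * sigma**2)
    e = np.exp(gamma * tau) - 1.0
    denom = (gamma + kappa) * e + 2.0 * gamma
    B = 2.0 * e / denom
    A = (2.0 * gamma * np.exp((kappa + gamma) * tau / 2.0) / denom) ** (
        2.0 * kappa * theta / sigma**2)
    return (B * r0 - np.log(A)) / tau  # y = -ln P / tau, with P = A exp(-B r0)

maturities = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])      # years
observed = np.array([0.021, 0.022, 0.024, 0.027, 0.031, 0.034])

def residuals(p):
    kappa, theta, sigma, r0 = p
    return cir_yield(maturities, kappa, theta, sigma, r0) - observed

fit = least_squares(residuals, x0=[0.5, 0.03, 0.1, 0.02],
                    bounds=([1e-4] * 4, [5.0, 0.2, 1.0, 0.2]))
kappa, theta, sigma, r0 = fit.x
print(f"kappa={kappa:.3f} theta={theta:.4f} sigma={sigma:.3f} r0={r0:.4f}")
```

The short rate r0 recovered this way is the proxy that feeds the second, martingale-estimation step described above.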
Marques da Silva, Richarde; Guimarães Santos, Celso Augusto; Carneiro de Lima Silva, Valeriano; Pereira e Silva, Leonardo
2013-11-01
This study evaluates erosivity, surface runoff generation, and soil erosion rates for the Mamuaba catchment, a sub-catchment of the Gramame River basin (Brazil), using the ArcView Soil and Water Assessment Tool (AvSWAT) model. Calibration and validation of the model were performed on a monthly basis, and it simulated surface runoff and soil erosion to a good level of accuracy. Daily rainfall data between 1969 and 1989 from six rain gauges were used, and the monthly rainfall erosivity of each station was computed for all the studied years. To calibrate and validate the model, monthly runoff data between January 1978 and April 1982 from one runoff gauge were used as well. The estimated soil loss rates were also realistic when compared to what can be observed in the field and to results from previous studies around the catchment. The long-term average soil loss was estimated at 9.4 t ha(-1) year(-1); most of the area of the catchment (60%) was predicted to suffer from a low to moderate erosion risk (<6 t ha(-1) year(-1)), and in 20% of the catchment the soil erosion was estimated to exceed 12 t ha(-1) year(-1). Expectedly, estimated soil loss was significantly correlated with measured rainfall and simulated surface runoff. Based on the estimated soil loss rates, the catchment was divided into four priority categories (low, moderate, high and very high) for conservation intervention. The study demonstrates that the AvSWAT model provides a useful tool for soil erosion assessment from catchments and facilitates planning for sustainable land management in northeastern Brazil.
NASA Astrophysics Data System (ADS)
Tran, H.; Mansfield, M. L.; Lyman, S. N.; O'Neil, T.; Jones, C. P.
2015-12-01
Emissions from produced-water treatment ponds are poorly characterized sources in oil and gas emission inventories that play a critical role in studying elevated winter ozone events in the Uintah Basin, Utah, U.S. Information gaps include unquantified amounts and compositions of gases emitted from these facilities. The emitted gases are mostly volatile organic compounds (VOCs), which, besides nitrogen oxides (NOx), are major precursors for ozone formation in the near-surface layer. Field measurement campaigns using the flux-chamber technique have been performed to measure VOC emissions from a limited number of produced-water ponds in the Uintah Basin of eastern Utah. Although the flux chamber provides accurate measurements at the point of sampling, it covers just a limited area of the ponds and is prone to altering environmental conditions (e.g., temperature, pressure). This raises the need to validate flux-chamber measurements. In this study, we apply an inverse-dispersion modeling technique with evacuated canister sampling to validate the flux-chamber measurements. This modeling technique applies an initial, arbitrary emission rate to estimate pollutant concentrations at pre-defined receptors, and adjusts the emission rate until the estimated pollutant concentrations approximate the measured concentrations at the receptors. The derived emission rates are then compared with flux-chamber measurements and differences are analyzed. Additionally, we investigate the applicability of the WATER9 wastewater emission model for the estimation of VOC emissions from produced-water ponds in the Uintah Basin. WATER9 estimates the emission of each gas based on properties of the gas, its concentration in the waste water, and the characteristics of the influent and treatment units. Results of VOC emission estimations using the inverse-dispersion and WATER9 modeling techniques will be reported.
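Because receptor concentrations predicted by a steady-state dispersion model scale linearly with source strength, the adjustment from an arbitrary initial rate can be written as a one-step rescaling. The sketch below illustrates that idea with placeholder numbers; it is not the study's dispersion code, and the simple background correction is an assumption.

```python
# Minimal sketch of the inverse-dispersion idea: modeled concentrations
# scale linearly with source strength, so an arbitrary initial rate can
# be rescaled in one least-squares step. All numbers are illustrative.
import numpy as np

q_initial = 1.0                            # arbitrary emission rate, g/s
c_modeled = np.array([12.0, 8.5, 5.1])     # model output at receptors, ug/m3
c_observed = np.array([30.2, 22.8, 11.9])  # canister measurements, ug/m3
c_background = 2.0                         # assumed upwind background, ug/m3

# Least-squares scale factor between modeled and background-corrected data
ratio = np.dot(c_modeled, c_observed - c_background) / np.dot(c_modeled, c_modeled)
q_estimated = q_initial * ratio
print(f"estimated emission rate: {q_estimated:.2f} g/s")
```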
Karakas, Filiz; Imamoglu, Ipek
2017-02-15
This study aims to estimate the anaerobic dechlorination rate constants (k_m) of reactions of individual PCB congeners using data from four laboratory microcosms set up with sediment from Baltimore Harbor. Pathway k_m values are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model (ADM), which can be applied to any halogenated hydrophobic organic compound (HOC). Improvements such as handling multiple dechlorination activities (DAs) and co-elution of congeners, incorporating constraints, and using a new goodness-of-fit evaluation led to increases in the accuracy, speed, and flexibility of ADM. DAs published in the literature in terms of chlorine substitutions, as well as specific microorganisms and their combinations, were used for identification of pathways. The best fit explaining the congener pattern changes was found for pathways of Phylotype DEH10, which has the ability to remove doubly flanked chlorines in the meta and para positions and para-flanked chlorines in the meta position. The estimated k_m values range between 0.0001 and 0.133 d⁻¹, the median of which is comparable to the few available published biologically confirmed rate constants. Compound-specific modelling studies such as those performed with ADM can enable monitoring and prediction of concentration changes, as well as toxicity, during bioremediation. Copyright © 2016 Elsevier B.V. All rights reserved.
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose the dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from pixel's temporal predecessor and the maximum likelihood estimate from current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefited from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on the natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
Forecasting the mortality rates of Malaysian population using Heligman-Pollard model
NASA Astrophysics Data System (ADS)
Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd
2017-08-01
Actuaries, demographers and other professionals have always been aware of the critical importance of mortality forecasting due to the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers in modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software is used to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) is applied to forecast all the parameters with an Autoregressive Integrated Moving Average (ARIMA) model. Empirical data sets for the Malaysian population for the period 1981 to 2015, for both genders, are considered, of which the period 1981 to 2010 is used as the "training set" and the period 2011 to 2015 as the "testing set". To investigate the accuracy of the estimation, the forecast results are compared against actual mortality rates. The result shows that the Heligman-Pollard model fits well for the male population at all ages, while the model seems to underestimate the mortality rates for the female population at the older ages.
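For reference, the Heligman-Pollard law expresses the odds of death q_x/(1 - q_x) at age x as the sum of a childhood term, an accident hump, and a senescent term. The sketch below encodes the standard eight-parameter form; the parameter values are illustrative placeholders, not the fitted Malaysian estimates.

```python
# A minimal sketch of the eight-parameter Heligman-Pollard law of
# mortality. Parameter values are illustrative, not the paper's fits.
import numpy as np

def heligman_pollard(x, A, B, C, D, E, F, G, H):
    """Return q_x from q/(1-q) = childhood + accident-hump + senescent terms."""
    term1 = A ** ((x + B) ** C)                  # childhood mortality
    term2 = D * np.exp(-E * np.log(x / F) ** 2)  # accident hump
    term3 = G * H ** x                           # senescent mortality
    odds = term1 + term2 + term3                 # odds q_x / (1 - q_x)
    return odds / (1.0 + odds)

ages = np.arange(1, 101)
qx = heligman_pollard(ages, A=5e-4, B=0.01, C=0.1, D=1e-3, E=10.0,
                      F=20.0, G=5e-5, H=1.1)
print(qx[[0, 19, 59, 99]])   # q_x at ages 1, 20, 60, 100
```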
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-22
....e., in an across-the-board fashion. Id. at 2, 12, 35. The Postal Service states that the Governors... Services Worksheets USPS-R2010-4R/7 Product Cost & Contribution Estimation Model (Public Version) USPS... Product Cost & Contribution Estimation Model (Non- Public Version) USPS-R2010-4R/NP2 Cost Factor...
Population-level effects of the mysid, Americamysis bahia, exposed to varying thiobencarb concentrations were estimated using stage-structured matrix models. A deterministic density-independent matrix model estimated the decrease in population growth rate, λ, with increas...
Anderson, Kyle R.; Poland, Michael
2016-01-01
Estimating rates of magma supply to the world's volcanoes remains one of the most fundamental aims of volcanology. Yet, supply rates can be difficult to estimate even at well-monitored volcanoes, in part because observations are noisy and are usually considered independently rather than as part of a holistic system. In this work we demonstrate a technique for probabilistically estimating time-variable rates of magma supply to a volcano through probabilistic constraint on storage and eruption rates. This approach utilizes Bayesian joint inversion of diverse datasets using predictions from a multiphysical volcano model, and independent prior information derived from previous geophysical, geochemical, and geological studies. The solution to the inverse problem takes the form of a probability density function which takes into account uncertainties in observations and prior information, and which we sample using a Markov chain Monte Carlo algorithm. Applying the technique to Kīlauea Volcano, we develop a model which relates magma flow rates with deformation of the volcano's surface, sulfur dioxide emission rates, lava flow field volumes, and composition of the volcano's basaltic magma. This model accounts for effects and processes mostly neglected in previous supply rate estimates at Kīlauea, including magma compressibility, loss of sulfur to the hydrothermal system, and potential magma storage in the volcano's deep rift zones. We jointly invert data and prior information to estimate rates of supply, storage, and eruption during three recent quasi-steady-state periods at the volcano. Results shed new light on the time-variability of magma supply to Kīlauea, which we find to have increased by 35–100% between 2001 and 2006 (from 0.11–0.17 to 0.18–0.28 km3/yr), before subsequently decreasing to 0.08–0.12 km3/yr by 2012. Changes in supply rate directly impact hazard at the volcano, and were largely responsible for an increase in eruption rate of 60–150% between 2001 and 2006, and subsequent decline by as much as 60% by 2012. We also demonstrate the occurrence of temporal changes in the proportion of Kīlauea's magma supply that is stored versus erupted, with the supply “surge” in 2006 associated with increased accumulation of magma at the summit. Finally, we are able to place some constraints on sulfur concentrations in Kīlauea magma and the scrubbing of sulfur by the volcano's hydrothermal system. Multiphysical, Bayesian constraint on magma flow rates may be used to monitor evolving volcanic hazard not just at Kīlauea but at other volcanoes around the world.
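As a flavor of the sampling machinery, the snippet below runs a generic random-walk Metropolis chain on a one-parameter toy posterior (a supply rate observed with Gaussian noise under a lognormal prior). It stands in for, and is far simpler than, the paper's joint inversion over a multiphysical volcano model.

```python
# Generic random-walk Metropolis sketch of MCMC posterior sampling.
# The one-parameter toy model below is an invented stand-in for the
# paper's multiphysical volcano model.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(0.15, 0.03, size=20)   # synthetic "observations", km3/yr

def log_posterior(rate):
    if rate <= 0:
        return -np.inf
    log_prior = -0.5 * ((np.log(rate) - np.log(0.1)) / 0.5) ** 2  # lognormal prior
    log_like = -0.5 * np.sum(((obs - rate) / 0.03) ** 2)          # Gaussian likelihood
    return log_prior + log_like

samples, current, lp = [], 0.1, log_posterior(0.1)
for _ in range(20000):
    proposal = current + rng.normal(0, 0.01)   # random-walk step
    lp_new = log_posterior(proposal)
    if np.log(rng.random()) < lp_new - lp:     # Metropolis accept/reject
        current, lp = proposal, lp_new
    samples.append(current)

# Posterior credible interval after discarding burn-in
print(np.percentile(samples[5000:], [2.5, 50, 97.5]))
```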
Time prediction of failure of a type of lamp using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses the estimation of a basic survival model to obtain the average predicted lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random failure-time model used as the basis is the exponential distribution, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The model obtained is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals with the average failure value in each interval, and the average failure time of the model is calculated for each interval; the p-value obtained from the test is 0.3296.
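For the exponential basis the key quantities are elementary: with complete (uncensored) data, the maximum likelihood estimate of the constant hazard is the number of failures divided by the total observed failure time, and the predicted mean failure time is its reciprocal. A minimal sketch with invented failure times:

```python
# Exponential baseline: MLE of the constant hazard is n / sum(t), and the
# predicted mean failure time is its reciprocal. Times are illustrative.
import numpy as np

failure_times = np.array([120.0, 340.0, 95.0, 410.0, 230.0, 180.0])  # hours
hazard = len(failure_times) / failure_times.sum()   # constant hazard lambda
mean_failure_time = 1.0 / hazard                    # E[T] for the exponential
survival = lambda t: np.exp(-hazard * t)            # S(t) = exp(-lambda * t)
print(f"lambda = {hazard:.5f} per hour, mean failure time = {mean_failure_time:.1f} h")
```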
Peña, Carlos; Espeland, Marianne
2015-01-01
The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910
Monitoring bald eagles using lists of nests: Response to Watts and Duerr
Sauer, John R.; Otto, Mark C.; Kendall, William L.; Zimmerman, Guthrie S.
2011-01-01
The post-delisting monitoring plan for bald eagles (Haliaeetus leucocephalus) proposed use of a dual-frame sample design, in which sampling of known nest sites in combination with additional area-based sampling is used to estimate the total number of nesting bald eagle pairs. Watts and Duerr (2010) used data from repeated observations of bald eagle nests in Virginia, USA to estimate a nest turnover rate, and used this rate to simulate the decline over time in the number of occupied nests on the list. Their results suggest that, given the rates of loss of nests from the list of known nest sites in Virginia, the list information will be of little value to sampling unless lists are constantly updated. Those authors criticize the plan for not placing sufficient emphasis on updating and maintaining lists of bald eagle nests. However, Watts and Duerr's turnover metric does not distinguish detectability or temporary nonuse of nests from permanent loss of nests, and likely overestimates the turnover rate. We describe a multi-state capture–recapture model that allows appropriate estimation of rates of loss of nests, and we use the model to estimate rates of loss from a sample of nests from Maine, USA. The post-delisting monitoring plan addresses the need to maintain and update the lists of nests, and we show that dual-frame sampling is an effective approach for sampling nesting bald eagle populations.
Consensus models were developed to predict the bioconcentration of well-metabolized chemicals by rainbow trout. The models employ intrinsic clearance data from in vitro studies with liver S9 fractions or isolated hepatocytes to estimate a liver clearance rate which is extrapolat...
Martinez, N E; Johnson, T E; Pinder, J E
2016-01-01
This study compares three anatomical phantoms for rainbow trout (Oncorhynchus mykiss) for the purpose of estimating organ radiation dose and dose rates from molybdenum-99 ((99)Mo) uptake in the liver and GI tract. Model comparison and refinement are important to the process of determining accurate doses and dose rates to the whole body and the various organs. Accurate and consistent dosimetry is crucial to the determination of appropriate dose-effect relationships for use in environmental risk assessment. The computational phantoms considered are (1) a geometrically defined model employing anatomically relevant organ size and location, (2) a voxel reconstruction of internal anatomy obtained from CT imaging, and (3) a new model utilizing NURBS surfaces to refine the model in (2). Dose conversion factors (DCFs) for the whole body as well as selected organs of O. mykiss were computed using Monte Carlo modeling and combined with empirical models for predicting activity concentration to estimate dose rates and ultimately determine cumulative radiation dose (μGy) to selected organs after several half-lives of (99)Mo. The computational models provided similar results, especially for organs that were both the source and target of radiation (less than 30% difference between all models). Values in the empirical model as well as the 14-day cumulative organ doses determined from (99)Mo uptake are compared to similar models developed previously for (131)I. Finally, consideration is given to treating the GI tract as a solid organ compared to partitioning it into gut contents and GI wall, which resulted in an order-of-magnitude difference in estimated dose for most organs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Evaluating growth of the Porcupine Caribou Herd using a stochastic model
Walsh, Noreen E.; Griffith, Brad; McCabe, Thomas R.
1995-01-01
Estimates of the relative effects of demographic parameters on population rates of change, and of the level of natural variation in these parameters, are necessary to address potential effects of perturbations on populations. We used a stochastic model, based on survival and reproduction estimates for the Porcupine Caribou (Rangifer tarandus granti) Herd (PCH) during 1983-89 and 1989-92, to obtain distributions of potential population rates of change (r). The distribution of r produced by 1,000 trajectories of our simulation model (1983-89, r̄ = 0.013; 1989-92, r̄ = 0.003) encompassed the rate of increase calculated from an independent series of photo-survey data over the same years (1983-89, r = 0.048; 1989-92, r = -0.035). Changes in adult female survival had the largest effect on r, followed by changes in calf survival. We hypothesized that petroleum development on calving grounds, or changes in calving and post-calving habitats due to global climate change, would affect model input parameters. A decline in annual adult female survival from 0.871 to 0.847, or a decline in annual calf survival from 0.518 to 0.472, would be sufficient to cause a declining population, if all other input estimates remained the same. We then used these lower survival rates, in conjunction with our estimated amount of among-year variation, to determine a range of resulting population trajectories. Stochastic models can be used to better understand the dynamics of populations, optimize sampling investment, and evaluate potential effects of various factors on population growth.
A mathematical model of microalgae growth in cylindrical photobioreactor
NASA Astrophysics Data System (ADS)
Bakeri, Noorhadila Mohd; Jamaian, Siti Suhana
2017-08-01
Microalgae are unicellular organisms that exist individually or in chains or groups and can be utilized in many applications. Researchers have made various efforts to increase the growth rate of microalgae. Microalgae have potential as an effective tool for wastewater treatment, besides serving as a replacement for fuels such as coal and as a feedstock for biodiesel. The growth of microalgae can be estimated using the Geider model, which is based on the photosynthesis-irradiance curve (PI-curve) and was developed for flat-panel photobioreactors. Therefore, in this study a mathematical model for microalgae growth in a cylindrical photobioreactor is proposed based on the Geider model. Light irradiance is the crucial factor that affects the growth rate of microalgae. The absorbed photon flux is determined by calculating the average light irradiance in a cylindrical system illuminated by a unidirectional parallel flux, treating the cylinder as a collection of differential parallelepipeds. Results from this study showed that the specific growth rate of microalgae increases until a constant level is reached. The proposed mathematical model can therefore be used to estimate the rate of microalgae growth in a cylindrical photobioreactor.
Estimating malaria transmission from humans to mosquitoes in a noisy landscape
Reiner, Robert C.; Guerra, Carlos; Donnelly, Martin J.; Bousema, Teun; Drakeley, Chris; Smith, David L.
2015-01-01
A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the 'colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. PMID:26400195
History, Epidemic Evolution, and Model Burn-In for a Network of Annual Invasion: Soybean Rust.
Sanatkar, M R; Scoglio, C; Natarajan, B; Isard, S A; Garrett, K A
2015-07-01
Ecological history may be an important driver of epidemics and disease emergence. We evaluated the role of history and two related concepts, the evolution of epidemics and the burn-in period required for fitting a model to epidemic observations, for the U.S. soybean rust epidemic (caused by Phakopsora pachyrhizi). This disease allows evaluation of replicate epidemics because the pathogen reinvades the United States each year. We used a new maximum likelihood estimation approach for fitting the network model based on observed U.S. epidemics. We evaluated the model burn-in period by comparing model fit based on each combination of other years of observation. When the miss error rates were weighted by 0.9 and false alarm error rates by 0.1, the mean error rate did decline, for most years, as more years were used to construct models. Models based on observations in years closer in time to the season being estimated gave lower miss error rates for later epidemic years. The weighted mean error rate was lower in backcasting than in forecasting, reflecting how the epidemic had evolved. Ongoing epidemic evolution, and potential model failure, can occur because of changes in climate, host resistance and spatial patterns, or pathogen evolution.
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
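One common concrete version of this approach models each parasitism probability as a draw from a Beta(alpha, beta) prior, estimates the hyperparameters from the marginal (beta-binomial) likelihood, and shrinks each observed proportion toward the overall mean. The sketch below illustrates that variant with invented counts; it is not the authors' exact estimator.

```python
# Empirical Bayes for proportions: fit Beta hyperparameters by maximizing
# the marginal beta-binomial likelihood, then shrink each species' observed
# proportion toward the prior mean. Counts are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

parasitized = np.array([2, 15, 1, 9, 22, 0])   # parasitized nests per species
nests = np.array([10, 25, 18, 14, 30, 12])     # nests examined per species

def neg_marginal_loglik(params):
    a, b = np.exp(params)                      # keep alpha, beta positive
    # Binomial coefficients are constant in (a, b) and can be dropped
    return -np.sum(betaln(parasitized + a, nests - parasitized + b)
                   - betaln(a, b))

res = minimize(neg_marginal_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
alpha, beta = np.exp(res.x)

# Posterior means shrink small-sample proportions toward alpha/(alpha+beta)
eb_estimates = (parasitized + alpha) / (nests + alpha + beta)
print(np.round(eb_estimates, 3))
```

Species with few nests examined are pulled strongly toward the overall mean, so extreme empirical Bayes estimates flag genuinely strong evidence rather than small-sample noise.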
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
2008-01-01
An online brain-machine interface (BMI) in the form of a small vehicle, the 'RatCar,' has been developed. A rat had neural electrodes implanted in its primary motor cortex and basal ganglia regions to continuously record neural signals. A linear state-space model then represents the correlation between the recorded neural signals and the locomotion states (i.e., moving velocity and azimuthal variances) of the rat. The model parameters were set so as to minimize estimation errors, and the locomotion states were estimated from neural firing rates using a Kalman filter algorithm. The results showed only small oscillations, achieving smooth control of the vehicle despite fluctuating firing rates and noise applied to the model. Most of the variation in the model variables converged within the first 30 seconds of the experiments and persisted for the entire one-hour session.
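A minimal sketch of the decoding step: a linear state-space model links a one-dimensional locomotion state to the firing rates of two units, and a Kalman filter alternates predict and update steps. The transition, tuning, and noise matrices below are invented stand-ins for the fitted RatCar parameters.

```python
# Kalman-filter decoding sketch: linear state-space model mapping a
# locomotion state to firing rates. All matrices are illustrative.
import numpy as np

A = np.array([[0.95]])          # state transition (velocity persistence)
Q = np.array([[0.01]])          # process noise covariance
H = np.array([[1.2], [0.7]])    # firing-rate tuning of two recorded units
R = np.eye(2) * 0.5             # observation noise covariance

x = np.zeros((1, 1))            # state estimate
P = np.eye(1)                   # state covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with observed firing rates z (2 x 1 vector)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(1) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
for t in range(100):
    z = H * 0.3 + rng.normal(0, 0.7, size=(2, 1))  # synthetic firing rates
    x, P = kalman_step(x, P, z)
print(f"decoded velocity estimate: {x[0, 0]:.3f}")
```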
Hu, Jianhua; Wright, Fred A
2007-03-01
The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.
NASA Astrophysics Data System (ADS)
Itter, M.; Finley, A. O.; Hooten, M.; Higuera, P. E.; Marlon, J. R.; McLachlan, J. S.; Kelly, R.
2016-12-01
Sediment charcoal records are used in paleoecological analyses to identify individual local fire events and to estimate fire frequency and regional biomass burned at centennial to millennial time scales. Methods to identify local fire events based on sediment charcoal records have been well developed over the past 30 years; however, an integrated statistical framework for fire identification is still lacking. We build upon existing paleoecological methods to develop a hierarchical Bayesian point process model for local fire identification and estimation of fire return intervals. The model is unique in that it combines sediment charcoal records from multiple lakes across a region in a spatially explicit fashion, leading to estimation of a joint, regional fire return interval in addition to lake-specific local fire frequencies. Further, the model estimates a joint regional charcoal deposition rate, free from the effects of local fires, that can be used as a measure of regional biomass burned over time. Finally, the hierarchical Bayesian approach allows for tractable error propagation such that estimates of fire return intervals reflect the full range of uncertainty in sediment charcoal records. Specific sources of uncertainty addressed include sediment age models, the separation of local versus regional charcoal sources, and generation of a composite charcoal record. The model is applied to sediment charcoal records from a dense network of lakes in the Yukon Flats region of Alaska. The multivariate joint modeling approach results in improved estimates of regional charcoal deposition, with reduced uncertainty in the identification of individual fire events and local fire return intervals compared to individual-lake approaches. Modeled individual-lake fire return intervals range from 100 to 500 years with a regional interval of roughly 200 years. Regional charcoal deposition to the network of lakes is correlated up to 50 kilometers. Finally, the joint regional charcoal deposition rate exhibits changes over time coincident with major climatic and vegetation shifts over the past 10,000 years. Ongoing work will use the regional charcoal deposition rate to estimate changes in biomass burned as a function of climate variability and regional vegetation pattern.
Bayesian Modeling of Exposure and Airflow Using Two-Zone Models
Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy
2009-01-01
Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model along with the observed data. The framework is applied to simulated data as well as data obtained from the experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show nice concordance with the true values, indicating that the two-zone model assumptions agree with the reality to a large extent and the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. The approach illustrates how exposure models and information on model parameters together with the knowledge of uncertainty and variability in these quantities can be used to not only provide better estimates of model outputs but also model parameters. PMID:19403840
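At steady state, the standard two-zone model reduces to two closed-form concentrations: the far field sees the generation rate diluted by room ventilation, and the near field adds a term for the interzonal airflow. A quick sketch with placeholder values (not the chamber data):

```python
# Steady-state two-zone model in the usual notation: G = generation rate,
# Q = ventilation rate, beta = interzonal airflow. Values are illustrative.
G = 50.0      # toluene generation rate, mg/min
Q = 2.0       # ventilation rate through the chamber, m3/min
beta = 4.0    # airflow between near and far fields, m3/min

c_far = G / Q                  # far-field concentration, mg/m3
c_near = G / Q + G / beta      # near field adds the interzonal term
print(f"near field: {c_near:.1f} mg/m3, far field: {c_far:.1f} mg/m3")
```

These equilibrium solutions are the values that the Bayesian estimates of the interzonal airflow are reported to approach.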
Direct estimate of the spontaneous germ line mutation rate in African green monkeys.
Pfeifer, Susanne P
2017-12-01
Here, I provide the first direct estimate of the spontaneous mutation rate in an Old World monkey, using a seven-individual, three-generation pedigree of African green monkeys. Eight de novo mutations were identified within ∼1.5 Gbp of accessible genome, corresponding to an estimated point mutation rate of 0.94 × 10⁻⁸ per site per generation, suggesting an effective population size of ∼12000 for the species. This estimate represents a significant improvement in our knowledge of the population genetics of the African green monkey, one of the most important nonhuman primate models in biomedical research. Furthermore, by comparing mutation rates in Old World monkeys with the only other direct estimates in primates to date (humans and chimpanzees), it is possible to uniquely address how mutation rates have evolved over longer time scales. While the estimated spontaneous mutation rate for African green monkeys is slightly lower than the rate of 1.2 × 10⁻⁸ per base pair per generation reported in chimpanzees, it is similar to the lower range of rates of 0.96 × 10⁻⁸ to 1.28 × 10⁻⁸ per base pair per generation recently estimated from whole-genome pedigrees in humans. This result suggests a long-term constraint on mutation rate that is quite different from similar evidence pertaining to recombination rate evolution in primates. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
An aerial sightability model for estimating ferruginous hawk population size
Ayers, L.W.; Anderson, S.H.
1999-01-01
Most raptor aerial survey projects have focused on numeric description of visibility bias without identifying the contributing factors or developing predictive models to account for imperfect detection rates. Our goal was to develop a sightability model for nesting ferruginous hawks (Buteo regalis) that could account for nests missed during aerial surveys and provide more accurate population estimates. Eighteen observers, all unfamiliar with nest locations in a known population, searched for nests within 300 m of flight transects from a Maule fixed-wing aircraft. Flight variables tested for their influence on nest-detection rates included aircraft speed, height, direction of travel, time of day, light condition, distance to nest, and observer experience level. Nest variables included status (active vs. inactive), condition (i.e., excellent, good, fair, poor, bad), substrate type, topography, and tree density. A multiple logistic regression model identified nest substrate type, distance to nest, and observer experience level as significant predictors of detection rates (P < 0.05). The overall model was significant (χ² = 124.4, df = 6, P < 0.001, n = 255 nest observations), and the correct classification rate was 78.4%. During 2 validation surveys, observers saw 23.7% (14/59) and 36.5% (23/63) of the actual population. Sightability model predictions, with 90% confidence intervals, captured the true population in both tests. Our results indicate standardized aerial surveys, when used in conjunction with the predictive sightability model, can provide unbiased population estimates for nesting ferruginous hawks.
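The way such a model corrects a raw count can be sketched with a Horvitz-Thompson-type estimator: each detected nest is weighted by the inverse of its predicted detection probability from the logistic model. The coefficients and covariate values below are hypothetical, not the fitted model reported here.

```python
# Sightability correction sketch: weight each detected nest by the inverse
# of its predicted detection probability (Horvitz-Thompson style).
# Coefficients and covariates are hypothetical placeholders.
import numpy as np

# Hypothetical logistic coefficients: intercept, distance (m), experience
b0, b_dist, b_exp = 1.5, -0.006, 0.8

detected_distance = np.array([60.0, 150.0, 240.0, 90.0])  # m from transect
observer_experienced = np.array([1, 1, 0, 0])             # 1 = experienced

logit = b0 + b_dist * detected_distance + b_exp * observer_experienced
p_detect = 1.0 / (1.0 + np.exp(-logit))

n_hat = np.sum(1.0 / p_detect)   # corrected estimate from 4 detections
print(f"detection probs: {np.round(p_detect, 2)}, corrected N = {n_hat:.1f}")
```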
Palaeohistological Evidence for Ancestral High Metabolic Rate in Archosaurs.
Legendre, Lucas J; Guénard, Guillaume; Botha-Brink, Jennifer; Cubo, Jorge
2016-11-01
Metabolic heat production in archosaurs has played an important role in their evolutionary radiation during the Mesozoic, and their ancestral metabolic condition has long been a matter of debate in systematics and palaeontology. The study of fossil bone histology provides crucial information on bone growth rate, which has been used to indirectly investigate the evolution of thermometabolism in archosaurs. However, no quantitative estimation of metabolic rate has ever been performed on fossils using bone histological features. Moreover, to date, no inference model has included phylogenetic information in the form of predictive variables. Here we performed statistical predictive modeling using the new method of phylogenetic eigenvector maps on a set of bone histological features for a sample of extant and extinct vertebrates, to estimate metabolic rates of fossil archosauromorphs. This modeling procedure serves as a case study for eigenvector-based predictive modeling in a phylogenetic context, as well as an investigation of the poorly known evolutionary patterns of metabolic rate in archosaurs. Our results show that Mesozoic theropod dinosaurs exhibit metabolic rates very close to those found in modern birds, that archosaurs share a higher ancestral metabolic rate than that of extant ectotherms, and that this derived high metabolic rate was acquired at a much more inclusive level of the phylogenetic tree, among non-archosaurian archosauromorphs. These results also highlight the difficulties of assigning a given heat production strategy (i.e., endothermy, ectothermy) to an estimated metabolic rate value, and confirm findings of previous studies that the definition of the endotherm/ectotherm dichotomy may be ambiguous. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Evaluation of a chinook salmon (Oncorhynchus tshawytscha) bioenergetics model
Madenjian, Charles P.; O'Connor, Daniel V.; Chernyak, Sergei M.; Rediske, Richard R.; O'Keefe, James P.
2004-01-01
We evaluated the Wisconsin bioenergetics model for chinook salmon (Oncorhynchus tshawytscha) in both the laboratory and the field. Chinook salmon in laboratory tanks were fed alewife (Alosa pseudoharengus), the predominant food of chinook salmon in Lake Michigan. Food consumption and growth by chinook salmon during the experiment were measured. To estimate the efficiency with which chinook salmon retain polychlorinated biphenyls (PCBs) from their food in the laboratory, PCB concentrations of the alewife and of the chinook salmon at both the beginning and end of the experiment were determined. Based on our laboratory evaluation, the bioenergetics model was furnishing unbiased estimates of food consumption by chinook salmon. Additionally, from the laboratory experiment, we calculated that chinook salmon retained 75% of the PCBs contained within their food. In an earlier study, assimilation rate of PCBs to chinook salmon from their food in Lake Michigan was estimated at 53%, thereby suggesting that the model was substantially overestimating food consumption by chinook salmon in Lake Michigan. However, we concluded that field performance of the model could not be accurately assessed because PCB assimilation efficiency is dependent on feeding rate, and feeding rate of chinook salmon was likely much lower in our laboratory tanks than in Lake Michigan.
NASA Astrophysics Data System (ADS)
Yano, S.; Kondo, H.; Tawara, Y.; Yamada, T.; Mori, K.; Yoshida, A.; Tada, K.; Tsujimura, M.; Tokunaga, T.
2017-12-01
It is important to understand groundwater systems, including their recharge, flow, storage, discharge, and withdrawal, so that we can use groundwater resources efficiently and sustainably. To examine groundwater recharge, several methods have been discussed based on water balance estimation, in situ experiments, and hydrological tracers. However, few studies have developed a concrete framework for quantifying groundwater recharge rates in an undefined area. In this study, we established a robust method to quantitatively determine water cycles and estimate the groundwater recharge rate by combining the advantages of field surveys and model simulations. We reproduced in situ hydrogeological observations with three-dimensional modeling in a mountainous basin area in Japan. We adopted a general-purpose terrestrial fluid-flow simulator (GETFLOWS) to develop a geological model and simulate the local water cycle. Local data relating to topology, geology, vegetation, land use, climate, and water use were collected from the existing literature and observations to assess the spatiotemporal variations of the water balance from 2011 to 2013. The characteristic structures of geology and soils, as found through field surveys, were parameterized for incorporation into the model. The simulated results were validated using observed groundwater levels and achieved a Nash-Sutcliffe model efficiency coefficient of 0.92. The results suggested that local groundwater flows across the watershed boundary and that the groundwater recharge rate, defined as the flux of water reaching the local unconfined groundwater table, has values similar to those estimated in the lower soil layers on a long-term basis. This method enables us to quantify the groundwater recharge rate and its spatiotemporal variability with high accuracy, which contributes to establishing a foundation for sustainable groundwater management.
Inferring time-varying recharge from inverse analysis of long-term water levels
NASA Astrophysics Data System (ADS)
Dickinson, Jesse E.; Hanson, R. T.; Ferré, T. P. A.; Leake, S. A.
2004-07-01
Water levels in aquifers typically vary in response to time-varying rates of recharge, suggesting the possibility of inferring time-varying recharge rates on the basis of long-term water level records. Presumably, in the southwestern United States (Arizona, Nevada, New Mexico, southern California, and southern Utah), rates of mountain front recharge to alluvial aquifers depend on variations in precipitation rates due to known climate cycles such as the El Niño-Southern Oscillation index and the Pacific Decadal Oscillation. This investigation examined the inverse application of a one-dimensional analytical model for periodic flow described by Lloyd R. Townley in 1995 to estimate periodic recharge variations on the basis of variations in long-term water level records using southwest aquifers as the case study. Time-varying water level records at various locations along the flow line were obtained by simulation of forward models of synthetic basins with applied sinusoidal recharge of either a single period or composite of multiple periods of length similar to known climate cycles. Periodic water level components, reconstructed using singular spectrum analysis (SSA), were used to calibrate the analytical model to estimate each recharge component. The results demonstrated that periodic recharge estimates were most accurate in basins with nearly uniform transmissivity and the accuracy of the recharge estimates depends on monitoring well location. A case study of the San Pedro Basin, Arizona, is presented as an example of calibrating the analytical model to real data.
Ocean alkalinity and the Cretaceous/Tertiary boundary
NASA Technical Reports Server (NTRS)
Caldeira, K. G.; Rampino, Michael R.
1988-01-01
A biogeochemical cycle model resolving ocean carbon and alkalinity content is applied to the Maestrichtian and Danian. The model computes oceanic concentrations and distributions of Ca(2+), Mg(2+), and Sigma-CO2. From these values an atmospheric pCO2 value is calculated, which is used to estimate rates of terrestrial weathering of calcite, dolomite, and calcium and magnesium silicates. Metamorphism of carbonate rocks and the subsequent outgassing of CO2 to the atmosphere are parameterized in terms of carbonate rock reservoir sizes, total land area, and a measure of overall tectonic activity, the sea-floor generation rate. The ocean carbon reservoir computed by the model is used with Deep Sea Drilling Project (DSDP) C-13 data to estimate organic detrital fluxes under a variety of ocean mixing rate assumptions. Using Redfield ratios, the biogenic detrital flux estimate is used to partition the ocean carbon and alkalinity reservoirs between the mixed layer and deep ocean. The calcite flux estimate and carbonate ion concentrations are used to determine the rate of biologically mediated CaCO3 titration. Oceanic productivity was severely limited for approximately 500 kyr following the K/T boundary resulting in significant increases in total ocean alkalinity. As productivity returned to the ocean, excess carbon and alkalinity was removed from the ocean as CaCO3. Model runs indicate that this resulted in a transient imbalance in the other direction. Ocean chemistry returned to near-equilibrium by about 64 mybp.
NASA Astrophysics Data System (ADS)
Barkley, Z.; Lauvaux, T.; Davis, K. J.; Deng, A.; Miles, N. L.; Richardson, S.; Martins, D. K.; Cao, Y.; Sweeney, C.; McKain, K.; Schwietzke, S.; Smith, M. L.; Kort, E. A.
2016-12-01
Leaks in natural gas infrastructure release CH4, a potent greenhouse gas, into the atmosphere. The estimated emission rate associated with the production and transportation of natural gas is uncertain, hindering our understanding of the greenhouse footprint of this energy source. This study presents two applications of inverse methodology for estimating regional emission rates from natural gas production and gathering facilities in northeastern Pennsylvania. First, we used the WRF-Chem mesoscale model at 3 km resolution to simulate CH4 enhancements and compared them to observations obtained from a three-week flight campaign in May 2015 over the Marcellus shale region. Methane emission rates were adjusted to minimize the errors between aircraft observations and the model-simulated concentrations for each flight. Second, we present the first tower-based high-resolution atmospheric inversion of CH4 emission rates from unconventional natural gas production activities. A year of continuous CH4 and calibrated δ13C isotope measurements were collected at four tower locations in northeastern Pennsylvania. The adjoint model used here combines a backward-in-time Lagrangian particle dispersion model coupled with the WRF-Chem model at the same resolution. The prior for both optimization systems was compiled for major sources of CH4 within the Mid-Atlantic states, accounting for emissions from natural gas sources as well as emissions related to farming, waste management, coal, and other sources. Optimized natural gas emission rates are found to be 0.36% of total gas production, with a 2σ confidence interval between 0.27% and 0.45% of production. We present the results from the tower inversion over one year at 3 km resolution, providing additional information on the spatial and temporal variability of emission rates from production and gathering facilities within the natural gas industry in comparison to flux estimates from the aircraft campaign.
NASA Astrophysics Data System (ADS)
Xu, L.; McDermitt, D. K.; Li, J.; Green, R. B.
2016-12-01
Methane plays a critical role in the radiation balance and chemistry of the atmosphere. Globally, landfill methane emission contributes about 10-19% of the anthropogenic methane burden into the atmosphere. In the United States, 18% of annual anthropogenic methane emissions come from landfills, which represent the third largest source of anthropogenic methane emissions, behind enteric fermentation and natural gas and oil production. One uncertainty in estimating landfill methane emissions is the fraction of methane oxidized when methane produced under anaerobic conditions passes through the cover soil. We developed a simple stoichiometric model to estimate the landfill methane oxidation fraction when the anaerobic CO2/CH4 production ratio is known. The model predicts a linear relationship between CO2 emission rates and CH4 emission rates, where the slope depends on anaerobic CO2/CH4 production ratio and the fraction of methane oxidized, and the intercept depends on non-methane-dependent oxidation processes. The model was tested with eddy covariance CO2 and CH4 emission rates at Bluff Road Landfill in Lincoln Nebraska. It predicted zero oxidation rate in the northern portion of this landfill where a membrane and vents were present. The zero oxidation rate was expected because there would be little opportunity for methane to encounter oxidizing conditions before leaving the vents. We also applied the model at the Turkey Run Landfill in Georgia to estimate the CH4 oxidation rate over a one year period. In contrast to Bluff Road Landfill, the Turkey Run Landfill did not have a membrane or vents. Instead, methane produced in the landfill had to diffuse through a 0.5 m soil cap before release to the atmosphere. We observed evidence for methane oxidation ranging from about 18% to above 60% depending upon the age of deposited waste material. The model will be briefly described, and results from the two contrasting landfills will be discussed in this presentation.
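The linear relationship can be made explicit with a little bookkeeping. If P is anaerobic CH4 production, R the anaerobic CO2/CH4 production ratio, and f the fraction of CH4 oxidized to CO2 in the cover soil, then emitted CH4 is (1 - f)P and emitted CO2 is (R + f)P, so the regression slope of CO2 on CH4 emissions is (R + f)/(1 - f), which can be inverted for f. A sketch with illustrative numbers:

```python
# Stoichiometric balance implied by the abstract:
#   E_CH4 = (1 - f) * P  and  E_CO2 = (R + f) * P,
# so the slope of CO2 vs CH4 emissions is (R + f) / (1 - f).
# Values below are illustrative, not the landfill measurements.
def oxidized_fraction(slope, ratio):
    """Invert slope = (R + f) / (1 - f) for the oxidized fraction f."""
    return (slope - ratio) / (1.0 + slope)

R = 1.0        # assumed anaerobic CO2/CH4 production ratio
slope = 1.5    # hypothetical fitted slope of CO2 vs CH4 emission rates
print(f"estimated oxidation fraction: {oxidized_fraction(slope, R):.2f}")
```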
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented in a mathematical model as a system of non-linear differential equations. In this model, the transmission of influenza is governed by a contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. Newly reported influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the proportion of daily contacts that result in transmission, which influences the number of people infected with influenza. The relation between the estimated parameter and the number of infected people is measured by the correlation coefficient. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
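A compact way to see the estimation idea is to simulate an SIR model with a forward-Euler step and choose the contact rate by regularized (Tikhonov-penalized) least squares against incidence data. The sketch below uses invented case counts and scipy's scalar minimizer in place of the paper's finite element machinery.

```python
# Regularized least squares fit of the SIR contact rate beta against
# incidence data, with a forward-Euler step. Data and parameters are
# illustrative placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

N, gamma = 1e6, 1.0 / 3.0   # population size, recovery rate (1/day)
data = np.array([12, 18, 29, 44, 60, 81, 95, 110, 118, 115.0])  # daily new cases

def new_cases(beta, days=10):
    S, I = N - 50.0, 50.0
    out = []
    for _ in range(days):
        inc = beta * S * I / N          # new infections, Euler step dt = 1 day
        S, I = S - inc, I + inc - gamma * I
        out.append(inc)
    return np.array(out)

alpha = 1e-2                            # Tikhonov regularization weight
objective = lambda b: np.sum((new_cases(b) - data) ** 2) + alpha * b**2
beta_hat = minimize_scalar(objective, bounds=(0.0, 2.0), method="bounded").x
print(f"estimated contact rate beta = {beta_hat:.3f} per day")
```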
Estimated harvesting on jellyfish in Sarawak
NASA Astrophysics Data System (ADS)
Bujang, Noriham; Hassan, Aimi Nuraida Ali
2017-04-01
Three species of jellyfish are recorded in Sarawak: Lobonema smithii (white jellyfish), Rhopilema esculenta (red jellyfish), and Mastigias papua. This study focused on two of these species, L. smithii and R. esculenta, and estimated the carrying capacity and population growth rate of both using a logistic growth model. The maximum sustainable yield for harvesting these species was also determined. The unknown parameters in the logistic model were estimated using a central finite difference method. The carrying capacities for L. smithii and R. esculenta were found to be 4594.9246456819 tons and 5855.9894242086 tons, respectively, and the population growth rates were estimated at 2.1800463754 and 1.144864086 per year, respectively. Hence, the estimated maximum sustainable yields for harvesting L. smithii and R. esculenta were 2504.2872047638 tons and 1676.0779949431 tons per year, respectively.
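The reported yields follow the standard logistic-harvesting result: for growth dN/dt = rN(1 − N/K), the sustainable harvest is maximized at N = K/2, giving

    \frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right) - H, \qquad
    \mathrm{MSY} = \frac{rK}{4} \quad \text{(attained at } N = K/2\text{)}.

As a check, 2.1800463754 × 4594.9246456819 / 4 ≈ 2504.287 tons per year for L. smithii and 1.144864086 × 5855.9894242086 / 4 ≈ 1676.078 tons per year for R. esculenta, matching the values above.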
Caffrey, Emily A; Johansen, Mathew P; Higley, Kathryn A
2015-10-01
Radiological dosimetry for nonhuman biota typically relies on calculations that utilize the Monte Carlo simulations of simple, ellipsoidal geometries with internal radioactivity distributed homogeneously throughout. In this manner it is quick and easy to estimate whole-body dose rates to biota. Voxel models are detailed anatomical phantoms that were first used for calculating radiation dose to humans, which are now being extended to nonhuman biota dose calculations. However, if simple ellipsoidal models provide conservative dose-rate estimates, then the additional labor involved in creating voxel models may be unnecessary for most scenarios. Here we show that the ellipsoidal method provides conservative estimates of organ dose rates to small mammals. Organ dose rates were calculated for environmental source terms from Maralinga, the Nevada Test Site, Hanford and Fukushima using both the ellipsoidal and voxel techniques, and in all cases the ellipsoidal method yielded more conservative dose rates by factors of 1.2-1.4 for photons and 5.3 for beta particles. Dose rates for alpha-emitting radionuclides are identical for each method as full energy absorption in source tissue is assumed. The voxel procedure includes contributions to dose from organ-to-organ irradiation (shown here to comprise 2-50% of total dose from photons and 0-93% of total dose from beta particles) that is not specifically quantified in the ellipsoidal approach. Overall, the voxel models provide robust dosimetry for the nonhuman mammals considered in this study, and though the level of detail is likely extraneous to demonstrating regulatory compliance today, voxel models may nevertheless be advantageous in resolving ongoing questions regarding the effects of ionizing radiation on wildlife.
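For orientation, the whole-body calculation underlying the ellipsoidal approach reduces to activity concentration times energy emitted per decay times an absorbed fraction; a generic sketch with illustrative values (not the authors' code):

    MEV_TO_J = 1.602e-13  # joules per MeV

    def internal_dose_rate(conc_bq_per_kg, energy_mev, absorbed_fraction=1.0):
        """Whole-body dose rate (Gy/s) for a uniformly distributed emitter.

        conc_bq_per_kg    : activity concentration in the organism (Bq/kg)
        energy_mev        : mean energy emitted per decay (MeV)
        absorbed_fraction : fraction of emitted energy absorbed in the body
                            (1.0 is the usual assumption for alphas, per the
                            abstract; <1 for photons escaping a small body)
        """
        return conc_bq_per_kg * energy_mev * MEV_TO_J * absorbed_fraction

    # Example: 1000 Bq/kg of a hypothetical 0.5 MeV beta emitter, fully absorbed.
    gy_per_s = internal_dose_rate(1000.0, 0.5)
    print(f"{gy_per_s * 3600 * 24 * 1e6:.2f} microGy/day")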
Multimedia Model for Polycyclic Aromatic Hydrocarbons (PAHs) and Nitro-PAHs in Lake Michigan
2015-01-01
Polycyclic aromatic hydrocarbon (PAH) contamination in the U.S. Great Lakes has long been of concern, but information regarding the current sources, distribution, and fate of PAH contamination is lacking, and very little information exists for the potentially more toxic nitro-derivatives of PAHs (NPAHs). This study uses fugacity, food web, and Monte Carlo models to examine 16 PAHs and five NPAHs in Lake Michigan, and to derive PAH and NPAH emission estimates. Good agreement was found between predicted and measured PAH concentrations in air, but concentrations in water and sediment were generally under-predicted, possibly due to incorrect parameter estimates for degradation rates, discharges to water, or inputs from tributaries. The food web model matched measurements of heavier PAHs (≥5 rings) in lake trout, but lighter PAHs (≤4 rings) were overpredicted, possibly due to overestimates of metabolic half-lives or gut/gill absorption efficiencies. Derived PAH emission rates peaked in the 1950s, and rates now approach those in the mid-19th century. The derived emission rates far exceed those in the source inventories, suggesting the need to reconcile differences and reduce uncertainties. Although additional measurements and physicochemical data are needed to reduce uncertainties and for validation purposes, the models illustrate the behavior of PAHs and NPAHs in Lake Michigan, and they provide useful and potentially diagnostic estimates of emission rates. PMID:25373871
A cloud model-radiative model combination for determining microwave TB-rain rate relations
NASA Technical Reports Server (NTRS)
Szejwach, Gerard; Adler, Robert F.; Jobard, Isabelle; Mack, Robert A.
1986-01-01
The development of a cloud model-radiative transfer model combination for computing average brightness temperature, T(B), is discussed. The cloud model and radiative transfer model used in this study are described. The relations between rain rate, cloud and rain water, cloud and precipitation ice, and upwelling radiance are investigated. The effects of the rain-rate relations on T(B) under different climatological conditions are examined. The model-derived T(B) results are compared to the 92 and 183 GHz aircraft observations of Hakkarinen and Adler (1984, 1986) and the radar-estimated rain rates of Hakkarinen and Adler (1986); good agreement between the datasets is found.
Nomographs for estimating surface fire behavior characteristics
Joe H. Scott
2007-01-01
A complete set of nomographs for estimating surface fire rate of spread and flame length for the original 13 and new 40 fire behavior fuel models is presented. The nomographs allow calculation of spread rate and flame length for wind in any direction with respect to slope and allow for nonheading spread directions. Basic instructions for use are included.
Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.
2015-12-01
This study describes complications introduced by angular direct-ionization events in space error-rate predictions. In particular, the prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error-estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data from a modern 28 nm SRAM-based device.
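The effective-LET approximation at issue treats the sensitive volume as a thin slab, so an ion track arriving at angle θ from normal is assigned

    \mathrm{LET}_{\mathrm{eff}}(\theta) \;=\; \frac{\mathrm{LET}}{\cos\theta};

the abstract's point is that this textbook scaling, long used to fold angular strikes into error-rate estimates, breaks down in modern-scale devices where multiple-cell upsets dominate.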
NASA Astrophysics Data System (ADS)
Jiang, Yan; Zemp, Roger
2018-01-01
The metabolic rate of oxygen consumption is an important metric of tissue oxygen metabolism and is especially critical in the brain, yet few methods are available for measuring it. We use a custom combined photoacoustic-microultrasound system and demonstrate cerebral oxygen consumption estimation in vivo. In particular, the cerebral metabolic rate of oxygen consumption was estimated in a murine model during variation of inhaled oxygen from hypoxia to hyperoxia. The hypothesis of brain autoregulation was confirmed with our method even though oxygen saturation and flow in vessels changed.
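The quantity being estimated is conventionally a Fick-principle calculation; one common form, assuming the combined system supplies hemoglobin concentration, flow, and oxygen saturations in the feeding and draining vessels (a generic construction, not necessarily the authors' exact estimator):

    \mathrm{CMRO_2} \;\approx\; \zeta\, C_{\mathrm{Hb}}\, F
    \left( sO_2^{\mathrm{in}} - sO_2^{\mathrm{out}} \right),

where ζ is the oxygen-binding capacity of hemoglobin, C_Hb the total hemoglobin concentration, F the blood flow, and the saturations are measured photoacoustically.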
USDA-ARS's Scientific Manuscript database
A theoretical model for the prediction of biomass concentration under real flue gas emission has been developed. The model considers the CO2 mass transfer rate, the critical SOx concentration and its role on pH based inter-conversion of bicarbonate in model building. The calibration and subsequent v...
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is presented. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
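A minimal sketch of such a reward computation for a semi-Markov model, with hypothetical states, holding times, and reward rates standing in for the measured data:

    import numpy as np

    # States: 0 = normal, 1 = degraded, 2 = recovery (illustrative only).
    P = np.array([[0.0, 0.9, 0.1],     # embedded-chain transition probabilities
                  [0.8, 0.0, 0.2],
                  [1.0, 0.0, 0.0]])
    hold = np.array([100.0, 5.0, 2.0])    # mean holding time per state (s)
    reward = np.array([1.0, 0.5, 0.0])    # reward rate per state (service rate)

    # Stationary distribution of the embedded chain: pi = pi P, sum(pi) = 1.
    A = np.vstack([P.T - np.eye(3), np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Time-stationary occupation probabilities weight pi by holding times.
    occ = pi * hold / np.dot(pi, hold)
    print("expected steady-state reward rate:", float(occ @ reward))

The occupation probabilities weight the embedded-chain stationary distribution by mean holding times, which is where the non-exponential holding-time information enters.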
NASA Astrophysics Data System (ADS)
Wang, James S.; Logan, Jennifer A.; McElroy, Michael B.; Duncan, Bryan N.; Megretskaia, Inna A.; Yantosca, Robert M.
2004-09-01
Methane has exhibited significant interannual variability with a slowdown in its growth rate beginning in the 1980s. We use a 3-D chemical transport model accounting for interannually varying emissions, transport, and sinks to analyze trends in CH4 from 1988 to 1997. Variations in CH4 sources were based on meteorological and country-level socioeconomic data. An inverse method was used to optimize the strengths of sources and sinks for a base year, 1994. We present a best-guess budget along with sensitivity tests. The analysis suggests that the sum of emissions from animals, fossil fuels, landfills, and wastewater estimated using Intergovernmental Panel on Climate Change default methodology is too high. Recent bottom-up estimates of the source from rice paddies appear to be too low. Previous top-down estimates of emissions from wetlands may be a factor of 2 higher than bottom-up estimates because of possible overestimates of OH. The model captures the general decrease in the CH4 growth rate observed from 1988 to 1997 and the anomalously low growth rates during 1992-1993. The slowdown in the growth rate is attributed to a combination of slower growth of sources and increases in OH. The economic downturn in the former Soviet Union and Eastern Europe made a significant contribution to the decrease in the growth rate of emissions. The 1992-1993 anomaly can be explained by fluctuations in wetland emissions and OH after the eruption of Mount Pinatubo. The results suggest that the recent slowdown of CH4 may be temporary.
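The budget logic can be miniaturized into a one-box model in which the CH4 growth rate is the imbalance between emissions and the OH-dominated sink; the sketch below uses round literature values (burden-to-mixing-ratio factor of about 2.78 Tg per ppb, lifetime of about 9 years), not the paper's optimized budget:

    # One-box model of the global CH4 budget: dC/dt = E/k - C/tau, where C is
    # the mixing ratio (ppb), E emissions (Tg/yr), k ~ 2.78 Tg per ppb, and
    # tau the atmospheric lifetime (yr), set mainly by OH. Illustrative only.
    k, tau = 2.78, 9.0
    C, E = 1700.0, 560.0          # rough late-1980s mixing ratio and sources

    for yr in range(1988, 1998):
        dC = E / k - C / tau      # growth rate in ppb/yr
        print(yr, f"{dC:5.1f} ppb/yr")
        C += dC
        E *= 1.001                # near-flat sources

With near-flat sources, the growth rate decays toward zero as C approaches the steady state E·tau/k, which is the qualitative slowdown analyzed above.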
Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter
NASA Astrophysics Data System (ADS)
Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao
2017-11-01
Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the safe operation of lithium-ion batteries. To improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track parameter variation under different scenarios.
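A sketch of the building blocks, assuming a joint state of SOC, RC-branch voltage, and the three parameters to be estimated (R0, R1, C1 modeled as random walks); only the cubature-point generation and time update are shown, with hypothetical values throughout and process noise omitted for brevity:

    import numpy as np

    # First-order RC equivalent-circuit model (terminal voltage measurement).
    def rc_step(x, i_amp, dt, q_cap=7200.0):
        """State x = [soc, v1, r0, r1, c1]; parameters appended for joint
        estimation and propagated as random walks (held constant here)."""
        soc, v1, r0, r1, c1 = x
        a = np.exp(-dt / (r1 * c1))
        return np.array([soc - i_amp * dt / q_cap,
                         a * v1 + r1 * (1.0 - a) * i_amp,
                         r0, r1, c1])

    def measure(x, i_amp, ocv=lambda s: 3.0 + 1.2 * s):
        soc, v1, r0, *_ = x
        return ocv(soc) - v1 - r0 * i_amp   # terminal voltage

    # Cubature rule: 2n equally weighted points at mean +/- sqrt(n)*S_i,
    # where S is a Cholesky square root of the covariance.
    def cubature_points(mean, cov):
        n = mean.size
        S = np.linalg.cholesky(cov)
        offsets = np.sqrt(n) * np.hstack([S, -S])   # n x 2n
        return mean[:, None] + offsets              # columns are the points

    # Time-update sketch: propagate points through the model, re-average.
    x = np.array([0.8, 0.0, 0.05, 0.02, 1500.0])
    P = np.diag([1e-4, 1e-6, 1e-5, 1e-6, 1e2])
    pts = cubature_points(x, P)
    prop = np.column_stack([rc_step(pts[:, j], i_amp=2.0, dt=1.0)
                            for j in range(pts.shape[1])])
    x_pred = prop.mean(axis=1)
    P_pred = ((prop - x_pred[:, None]) @ (prop - x_pred[:, None]).T
              / prop.shape[1])                      # add process noise Q here
    print("predicted terminal voltage:", measure(x_pred, 2.0))

The measurement update follows the same pattern: push the predicted points through measure(), form the innovation covariance and cross-covariance, and apply the Kalman gain.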
Updating the USGS seismic hazard maps for Alaska
Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.
2015-01-01
The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
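Such computations conventionally take the form of the classical (Cornell-type) hazard integral, combining each source's rate, magnitude and distance distributions, and a ground-motion model:

    \lambda(IM > x) \;=\; \sum_i \nu_i \iint
    P\!\left(IM > x \mid m, r\right)\, f_{M_i}(m)\, f_{R_i}(r)\,
    \mathrm{d}m\, \mathrm{d}r,

where ν_i is the rate of earthquakes on source i; under the usual Poisson assumption the probability of exceedance in t years is 1 − exp(−λ(IM > x)·t).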
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous-time estimate of tracer density using list-mode positron emission tomography data. The tracer arrival process in each voxel is modeled as an inhomogeneous Poisson process whose rate function is represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data, and the method is also demonstrated in a human study using 11C-raclopride.
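A one-voxel, temporal-only sketch of this construction (the spatial penalty requires neighboring voxels), with synthetic arrival times and illustrative penalty weights:

    import numpy as np
    from scipy.interpolate import BSpline
    from scipy.optimize import minimize

    # Penalized likelihood for an inhomogeneous Poisson process whose rate
    # is a cubic B-spline. Hypothetical event times on [0, T].
    rng = np.random.default_rng(2)
    T = 60.0
    events = np.sort(rng.uniform(0, T, 300))    # stand-in photon arrivals

    k = 3                                        # cubic spline
    knots = np.concatenate([[0]*k, np.linspace(0, T, 13), [T]*k])
    n_coef = len(knots) - k - 1
    tgrid = np.linspace(0, T, 600)               # quadrature grid

    def rate(c, t):
        return BSpline(knots, c, k)(t)

    def neg_penalized_loglik(c, w_smooth=1.0, w_pos=1e3):
        lam_ev = rate(c, events)
        lam_grid = rate(c, tgrid)
        # Poisson log-likelihood: sum of log-rates minus integrated rate.
        loglik = (np.sum(np.log(np.maximum(lam_ev, 1e-12)))
                  - np.trapz(lam_grid, tgrid))
        smooth = w_smooth * np.sum(np.diff(c, 2)**2)   # roughness penalty
        positivity = w_pos * np.sum(np.minimum(lam_grid, 0.0)**2)
        return -loglik + smooth + positivity

    c0 = np.full(n_coef, len(events) / T)        # start at the constant MLE
    fit = minimize(neg_penalized_loglik, c0, method="L-BFGS-B")
    print("fitted rate at t=30:", float(rate(fit.x, 30.0)))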
Gürtler, Ricardo E.; Cecere, María C.; Vázquez-Prokopec, Gonzalo M.; Ceballos, Leonardo A.; Gurevitz, Juan M.; Fernández, María del Pilar; Kitron, Uriel; Cohen, Joel E.
2014-01-01
Background: The host species composition in a household and their relative availability affect the host-feeding choices of blood-sucking insects and parasite transmission risks. We investigated four hypotheses regarding factors that affect blood-feeding rates, the proportion of human-fed bugs (human blood index), and daily human-feeding rates of Triatoma infestans, the main vector of Chagas disease.
Methods: A cross-sectional survey collected triatomines in human sleeping quarters (domiciles) of 49 of 270 rural houses in northwestern Argentina. We developed an improved way of estimating the human-feeding rate of domestic T. infestans populations. We fitted generalized linear mixed-effects models to a global model with six explanatory variables (chicken blood index, dog blood index, bug stage, number of human residents, bug abundance, and maximum temperature during the night preceding bug catch) and three response variables (daily blood-feeding rate, human blood index, and daily human-feeding rate). Coefficients were estimated via multimodel inference with model averaging.
Findings: Median blood-feeding intervals per late-stage bug were 4.1 days, with large variations among households. The main bloodmeal sources were humans (68%), chickens (22%), and dogs (9%). Blood-feeding rates decreased with increases in the chicken blood index. Both the human blood index and the daily human-feeding rate decreased substantially with increasing proportions of chicken- or dog-fed bugs, or the presence of chickens indoors. Improved calculations estimated the mean daily human-feeding rate per late-stage bug at 0.231 (95% confidence interval, 0.157-0.305).
Conclusions and Significance: Based on the changing availability of chickens in domiciles during spring-summer and the much larger infectivity of dogs compared with humans, we infer that the net effects of chickens in the presence of transmission-competent hosts may be more adequately described by zoopotentiation than by zooprophylaxis. Domestic animals in domiciles profoundly affect the host-feeding choices, human-vector contact rates and parasite transmission predicted by a model based on these estimates. PMID:24852606
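For scale, a back-of-envelope decomposition of the quoted figures (this naive product is not the paper's improved estimator):

    # Naive decomposition: daily human-feeding rate ~= daily blood-feeding
    # rate x human blood index, using the medians quoted above.
    feeding_interval_days = 4.1        # median interval per late-stage bug
    human_blood_index = 0.68           # proportion of bloodmeals on humans

    naive_rate = (1.0 / feeding_interval_days) * human_blood_index
    print(f"naive daily human-feeding rate: {naive_rate:.3f}")   # ~0.166

The paper's improved estimator gives 0.231 (95% CI 0.157-0.305), illustrating how much the naive product can understate the human-feeding rate.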