NASA Astrophysics Data System (ADS)
Meneghini, Robert
1998-09-01
A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, lead to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used to obtain the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this congruence depended largely on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
The models of gene regulatory networks are often derived from the principles of statistical thermodynamics or from Michaelis-Menten kinetics. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, despite the availability of traditional nonlinear estimation methods such as the Gauss-Newton iteration and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in these rates, both the numerator and the denominator are linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show its superior performance over the Gauss-Newton method.
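The core algebraic step lends itself to a short illustration. The sketch below uses a hypothetical one-input rate v = (a0 + a1·x)/(1 + b1·x) and a simple iterative reweighting, not the authors' exact weight matrix; it shows how rearranging the rational form makes the fit linear in all three parameters:

```python
import numpy as np

def fit_rational_rate(x, v, n_iter=3):
    """Fit v = (a0 + a1*x) / (1 + b1*x) by iteratively reweighted
    linear least squares on the rearranged, linear-in-parameters form
    v = a0 + a1*x - b1*(x*v)."""
    A = np.column_stack([np.ones_like(x), x, -x * v])
    w = np.ones_like(v)
    for _ in range(n_iter):
        theta, *_ = np.linalg.lstsq(A * w[:, None], w * v, rcond=None)
        a0, a1, b1 = theta
        w = 1.0 / (1.0 + b1 * x)   # approximate the denominator weighting
    return a0, a1, b1

# quick synthetic check
rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 50)
v = (0.2 + 1.5 * x) / (1.0 + 0.8 * x) + rng.normal(0, 0.01, x.size)
print(fit_rational_rate(x, v))
```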
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
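For intuition, the Bayes linear update at the heart of such a model can be written in a few lines. The following is a minimal sketch assuming known exposures and a Poisson noise term on the diagonal; it omits the paper's homogenization factors and empirical prior:

```python
import numpy as np

def bayes_linear_adjust(m_lam, V_lam, exposure, counts):
    """Bayes linear adjusted expectation/variance for correlated
    Poisson rates lambda given counts D over known exposures:
      E[D] = T m,  Cov(lam, D) = V T',  Var(D) = T V T' + diag(T m)."""
    T = np.diag(exposure)
    m_D = T @ m_lam
    C = V_lam @ T.T                       # Cov(lambda, D)
    V_D = T @ V_lam @ T.T + np.diag(m_D)  # adds the Poisson noise term
    K = C @ np.linalg.inv(V_D)
    return m_lam + K @ (counts - m_D), V_lam - K @ C.T

# two positively correlated event rates (per unit time), toy numbers
m = np.array([0.5, 0.8])
V = np.array([[0.04, 0.02], [0.02, 0.09]])
print(bayes_linear_adjust(m, V, exposure=np.array([10.0, 6.0]),
                          counts=np.array([7, 3])))
```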
DOT National Transportation Integrated Search
2001-09-01
In two recent studies by Miaou, he proposed a method to estimate vehicle roadside encroachment rates using accident-based models. He further illustrated the use of this method to estimate roadside encroachment rates for rural two-lane undivided roads...
Estimating respiratory rate from FBG optical sensors by using signal quality measurement.
Yongwei Zhu; Maniyeri, Jayachandran; Fook, Victor Foo Siang; Haihong Zhang
2015-08-01
Non-intrusiveness is one of the advantages of in-bed optical sensor device for monitoring vital signs, including heart rate and respiratory rate. Estimating respiratory rate reliably using such sensors, however, is challenging, due to body movement, signal variation according to different subjects or body positions, etc. This paper presents a method for reliable respiratory rate estimation for FBG optical sensors by introducing signal quality estimation. The method estimates the quality of the signal waveform by detecting regularly repetitive patterns using proposed spectrum and cepstrum analysis. Multiple window sizes are used to cater for a wide range of target respiratory rates. Furthermore, the readings of multiple sensors are fused to derive a final respiratory rate. Experiments with 12 subjects and 2 body positions were conducted using polysomnography belt signal as groundtruth. The results demonstrated the effectiveness of the method.
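A minimal sketch of a cepstrum-based regularity check of this kind follows; the band limits, windowing, and quality score are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def cepstral_quality(x, fs, f_lo=0.1, f_hi=0.7):
    """Return (breaths/min, quality score) from the real cepstrum.
    A strong cepstral peak at quefrency 1/f within the respiratory
    band signals a regularly repeating waveform; weak peaks flag
    movement-corrupted windows. Requires len(x)/fs > 1/f_lo seconds."""
    x = x - np.mean(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    ceps = np.fft.irfft(np.log(spec + 1e-12))   # real cepstrum
    q = np.arange(ceps.size) / fs               # quefrency in seconds
    band = (q >= 1.0 / f_hi) & (q <= 1.0 / f_lo)
    peak = np.argmax(ceps[band])
    rate_bpm = 60.0 / q[band][peak]
    score = ceps[band][peak] / (np.std(ceps[band]) + 1e-12)
    return rate_bpm, score
```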
Simplified Life-Cycle Cost Estimation
NASA Technical Reports Server (NTRS)
Remer, D. S.; Lorden, G.; Eisenberger, I.
1983-01-01
Simple method for life-cycle cost (LCC) estimation avoids pitfalls inherent in formulations requiring separate estimates of inflation and interest rates. Method depends for validity on the observation that interest and inflation rates closely track each other.
Meyer, Andreas L S; Wiens, John J
2018-01-01
Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree, and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
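For reference, the stem-age method-of-moments estimator evaluated here reduces to a one-line formula; the clade numbers below are hypothetical:

```python
import numpy as np

def mm_net_diversification_stem(n_extant, stem_age, eps=0.0):
    """Method-of-moments net diversification rate, stem-age form:
    r = ln(n*(1 - eps) + eps) / t, where eps is the assumed relative
    extinction fraction (mu/lambda); eps = 0 reduces to ln(n)/t."""
    return np.log(n_extant * (1.0 - eps) + eps) / stem_age

# hypothetical clade: 120 extant species, 30-Myr stem age
for eps in (0.0, 0.5, 0.9):
    print(f"eps={eps}: r={mm_net_diversification_stem(120, 30.0, eps):.4f}")
```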
Methods of adjusting the stable estimates of fertility for the effects of mortality decline.
Abou-Gamrah, H
1976-03-01
Summary: The paper shows how stable population methods, based on the age structure and the rate of increase, may be used to estimate the demographic measures of a quasi-stable population. After a discussion of known methods for adjusting the stable estimates to allow for the effects of mortality decline, two new methods are presented, the application of which requires less information. The first method does not need any supplementary information, and the second method requires an estimate of the difference between the last two five-year intercensal rates of increase, i.e. five times the annual change of the rate of increase during the last ten years. For these new methods we do not need to know the onset year of mortality decline, as in the Coale-Demeny method, or a long series of rates of increase, as in Zachariah's method.
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
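A rough sketch of the general idea with off-the-shelf tools: a GP regressor on variance-stabilized binned spike counts, with kernel hyperparameters fit by maximum marginal likelihood as the abstract describes. This Gaussian approximation stands in for the authors' point-process implementation, and all data here are simulated:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, (400, 2))       # visited positions in the arena
true_rate = 5 * np.exp(-((xy[:, 0] - 0.5)**2 + (xy[:, 1] - 0.5)**2) / 0.02)
counts = rng.poisson(true_rate)        # spike counts per visit

# square-root transform roughly stabilizes Poisson variance; the kernel
# hyperparameters (the prior smoothness) are fit by maximizing the
# marginal likelihood, which is sklearn's default behavior
gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.1), normalize_y=True)
gp.fit(xy, np.sqrt(counts))
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                            np.linspace(0, 1, 25)), -1).reshape(-1, 2)
rate_map, sd = gp.predict(grid, return_std=True)   # surface + error bars
print(f"peak rate ~ {rate_map.max()**2:.1f} spikes/visit")
```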
Fourment, Mathieu; Holmes, Edward C
2014-07-24
Early methods for estimating divergence times from gene sequence data relied on the assumption of a molecular clock. More sophisticated methods were created to model rate variation, using auto-correlation of rates, local clocks, or the so-called "uncorrelated relaxed clock", where substitution rates are assumed to be drawn from a parametric distribution. In the case of Bayesian inference methods, the impact of the prior on branching times is not clearly understood, and if the amount of data is limited the posterior can be strongly influenced by the prior. We develop a maximum likelihood method--Physher--that uses local or discrete clocks to estimate evolutionary rates and divergence times from heterochronous sequence data. Using two empirical data sets we show that our discrete clock estimates are similar to those obtained by other methods, and that Physher outperformed some methods in the estimation of the root age of an influenza virus data set. A simulation analysis suggests that Physher can outperform a Bayesian method when the true topology contains two long branches below the root node, even when evolution is strongly clock-like. These results suggest it is advisable to use a variety of methods to estimate evolutionary rates and divergence times from heterochronous sequence data. Physher and the associated data sets used here are available online at http://code.google.com/p/physher/.
Novel health monitoring method using an RGB camera.
Hassan, M A; Malik, A S; Fofi, D; Saad, N; Meriaudeau, F
2017-11-01
In this paper we present a novel health monitoring method for estimating the heart rate and respiratory rate using an RGB camera. The heart rate and the respiratory rate are estimated from the photoplethysmography (PPG) signal and the respiratory motion. The method operates mainly on the green spectrum of the RGB camera, generating a multivariate PPG signal and performing multivariate de-noising on the video signal to extract the resultant PPG signal. A periodicity-based voting scheme (PVS) was used to measure the heart rate and respiratory rate from the estimated PPG signal. We evaluated our proposed method against a state-of-the-art heart rate measurement method in two scenarios, using the MAHNOB-HCI database and a self-collected naturalistic-environment database. The methods were furthermore evaluated in various naturalistic-environment scenarios, such as a motion-variance session and a skin-tone-variance session. Our proposed method operated robustly during the experiments and outperformed the state-of-the-art heart rate measurement methods by compensating for the effects of the naturalistic environment.
Competing approaches to analysis of failure times with competing risks.
Farley, T M; Ali, M M; Slaymaker, E
2001-12-15
For the analysis of time to event data in contraceptive studies when individuals are subject to competing causes for discontinuation, some authors have recently advocated the use of the cumulative incidence rate as a more appropriate measure to summarize data than the complement of the Kaplan-Meier estimate of discontinuation. The former method estimates the rate of discontinuation in the presence of competing causes, while the latter is a hypothetical rate that would be observed if discontinuations for the other reasons could not occur. The difference between the two methods of analysis is the continuous time equivalent of a debate that took place in the contraceptive literature in the 1960s, when several authors advocated the use of net (adjusted or single decrement life table rates) rates in preference to crude rates (multiple decrement life table rates). A small simulation study illustrates the interpretation of the two types of estimate - the complement of the Kaplan-Meier estimate corresponds to a hypothetical rate where discontinuations for other reasons did not occur, while the cumulative incidence gives systematically lower estimates. The Kaplan-Meier estimates are more appropriate when estimating the effectiveness of a contraceptive method, but the cumulative incidence estimates are more appropriate when making programmatic decisions regarding contraceptive methods. Other areas of application, such as cancer studies, may prefer to use the cumulative incidence estimates, but their use should be determined according to the application. Copyright 2001 John Wiley & Sons, Ltd.
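The two estimands can be computed side by side with hand-rolled estimators (the cause codes and data below are invented for illustration): the complement of the Kaplan-Meier estimate treats competing discontinuations as censoring, while the cumulative incidence weights each event by all-cause survival and therefore comes out systematically lower.

```python
import numpy as np

def km_complement_and_cif(time, cause):
    """cause: 0 = censored, 1 = event of interest, 2 = competing event.
    Returns event times, 1 - KM (competing events treated as censored),
    and the cumulative incidence function (Aalen-Johansen form)."""
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    uniq = np.unique(time[cause > 0])
    s_km = s_all = 1.0
    one_minus_km, cif, c = [], [], 0.0
    for t in uniq:
        at_risk = np.sum(time >= t)
        d1 = np.sum((time == t) & (cause == 1))
        d_any = np.sum((time == t) & (cause > 0))
        c += s_all * d1 / at_risk        # uses all-cause survival S(t-)
        s_km *= 1 - d1 / at_risk         # competing events censored
        s_all *= 1 - d_any / at_risk
        one_minus_km.append(1 - s_km)
        cif.append(c)
    return uniq, np.array(one_minus_km), np.array(cif)

t = np.array([2, 3, 3, 5, 7, 8, 9, 12.0])
c = np.array([1, 2, 0, 1, 2, 1, 0, 1])
times, one_minus_km, cif = km_complement_and_cif(t, c)
print(times, one_minus_km.round(3), cif.round(3))  # CIF is lower throughout
```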
Estimating survival rates with time series of standing age‐structure data
Udevitz, Mark S.; Gogan, Peter J.
2012-01-01
It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
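In the stable-population special case the idea reduces to a simple ratio of successive standing age distributions. The sketch below ignores the paper's likelihood machinery and sampling variation, and the counts are hypothetical:

```python
import numpy as np

def naive_age_structure_survival(counts_t, counts_t1):
    """Point estimates s_x = n_{x+1, t+1} / n_{x, t} from two consecutive
    standing age distributions (absolute counts, or counts collected with
    equal sampling effort in both years)."""
    counts_t = np.asarray(counts_t, dtype=float)
    counts_t1 = np.asarray(counts_t1, dtype=float)
    return counts_t1[1:] / counts_t[:-1]

# hypothetical counts at ages 0-3 in year t and year t+1
print(naive_age_structure_survival([100, 80, 60, 35], [90, 82, 64, 40]))
```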
Harris, Keith M; Thandrayen, Joanne; Samphoas, Chien; Se, Pros; Lewchalermwongse, Boontriga; Ratanashevorn, Rattanakorn; Perry, Megan L; Britts, Choloe
2016-04-01
This study tested a low-cost method for estimating suicide rates in developing nations that lack adequate statistics. Data comprised reported suicides from Cambodia's 2 largest newspapers. Capture-recapture modeling estimated a suicide rate of 3.8/100 000 (95% CI = 2.5-6.7) for 2012. That compares to World Health Organization estimates of 1.3 to 9.4/100 000 and a Cambodian government estimate of 3.5/100 000. Suicide rates of males were twice those of females, and rates of those <40 years were twice those of people ≥40 years. Capture-recapture modeling with newspaper reports proved a reasonable method for estimating suicide rates for countries with inadequate official data. These methods are low-cost and can be applied to regions with at least 2 newspapers with overlapping reports. Means to further improve this approach are discussed. These methods are applicable to both recent and historical data, which can benefit epidemiological work, and may also be applicable to homicides and other statistics. © 2016 APJPH.
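The two-source capture-recapture calculation can be illustrated with Chapman's bias-corrected estimator; the counts and population size below are invented, not the study's data:

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-source capture-recapture estimate:
    n1, n2 = cases reported by each newspaper, m = cases found in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# hypothetical counts for illustration only
n_hat = chapman_estimate(n1=210, n2=180, m=95)
rate_per_100k = n_hat / 14.9e6 * 1e5   # assumed population of ~14.9 million
print(round(n_hat), round(rate_per_100k, 1))
```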
Prych, Edmund A.
1995-01-01
Long-term average deep-percolation rates of water from precipitation on the U.S. Department of Energy Hanford Site in semiarid south-central Washington, as estimated by a chloride mass-balance method, range from 0.008 to 0.30 mm/yr (millimeters per year) at nine locations covered by a variety of fine-grained soils and vegetated with sagebrush and other deep-rooted plants plus sparse shallow-rooted grasses. Deep-percolation rates estimated using a chlorine-36 bomb-pulse method at three of the nine locations range from 2.1 to 3.4 mm/yr. Because the mass-balance method may underestimate percolation rates and the bomb-pulse method probably overestimates percolation rates, estimates by the two methods probably bracket actual rates. These estimates, as well as estimates by previous investigators who used different methods, are a small fraction of mean annual precipitation, which ranges from about 160 to 210 mm/yr at the different test locations. Estimates by the mass-balance method at four locations in an area that is vegetated only with sparse shallow-rooted grasses range from 0.39 to 2.0 mm/yr. Chlorine-36 data at one location in this area were sufficient only to determine that the upper limit of deep percolation is more than 5.1 mm/yr. Although estimates for locations in this area are larger than the estimates for locations with deep-rooted plants, they are at the lower end of the range of estimates for this area made by previous investigators.
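The chloride mass-balance estimate itself is a one-line ratio, shown below with purely illustrative values (not the report's measurements):

```python
def chloride_mass_balance_recharge(precip_mm_yr, cl_precip_mg_l,
                                   cl_porewater_mg_l):
    """Steady-state chloride mass balance: chloride arrives only in
    precipitation and leaves only in deep percolation, so
    R = P * Cl_precip / Cl_porewater."""
    return precip_mm_yr * cl_precip_mg_l / cl_porewater_mg_l

# illustrative values only: 185 mm/yr precipitation, 0.35 mg/L chloride
# in precipitation, 800 mg/L in deep pore water -> ~0.08 mm/yr
print(chloride_mass_balance_recharge(185.0, 0.35, 800.0))
```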
Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar
NASA Technical Reports Server (NTRS)
Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area, both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by the light and heavy rain rates. This is useful in estimating the fraction of rainfall contributed by the rain rates that go undetected by the radar. The results at high rain rates provide a cross-check on the usual attenuation correction methods that are applied at the highest resolution of the instrument.
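A sketch of the single-threshold recovery described above, assuming a lognormal rain-rate distribution and a known mean-to-standard-deviation ratio; the threshold and CV values are illustrative:

```python
import numpy as np
from scipy.stats import norm

def rain_pdf_from_fractional_area(p_above, tau, cv):
    """Recover lognormal parameters from the fractional area with
    R > tau, assuming the ratio of standard deviation to mean (cv)
    is known: sigma^2 = ln(1 + cv^2), then solve
    P(R > tau) = 1 - Phi((ln tau - mu) / sigma) for mu."""
    s = np.sqrt(np.log(1.0 + cv**2))
    m = np.log(tau) - s * norm.ppf(1.0 - p_above)
    return m, s, np.exp(m + 0.5 * s**2)     # mu, sigma, mean rain rate

# e.g., 8% of the area exceeds 5 mm/h and cv is taken to be 1.5
print(rain_pdf_from_fractional_area(0.08, 5.0, 1.5))
```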
Oxygen transfer rate estimation in oxidation ditches from clean water measurements.
Abusam, A; Keesman, K J; Meinema, K; Van Straten, G
2001-06-01
Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method for estimating the oxygen transfer rate in oxidation ditches from clean water measurements. The new method models the oxidation ditch as a loop of CSTRs, a representation that can be easily incorporated within control algorithms, and assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. In application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
ERIC Educational Resources Information Center
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Robinson-Cimpian, Joseph P.
2014-01-01
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Enjalbert, J; David, J L
2000-01-01
Using multilocus individual heterozygosity, a method is developed to estimate the outcrossing rates of a population over a few previous generations. Considering that individuals originate either from outcrossing or from n successive selfing generations from an outbred ancestor, a maximum-likelihood (ML) estimator is described that gives estimates of past outcrossing rates in terms of proportions of individuals with different n values. Heterozygosities at several unlinked codominant loci are used to assign n values to each individual. This method also allows a test of whether populations are in inbreeding equilibrium. The estimator's reliability was checked using simulations for different mating histories. We show that this ML estimator can provide estimates of the final-generation outcrossing rate (t(0)) and of a mean of the preceding rates (t(p)), and can detect major temporal variation in the mating system. The method is most efficient for low to intermediate outcrossing levels. Applied to nine populations of wheat, this method gave estimates of t(0) and t(p). These estimates confirmed the absence of outcrossing (t(0) = 0) in the two populations subjected to manual selfing. For free-mating wheat populations, it detected lower final-generation outcrossing rates (t(0) = 0-0.06) than those expected from global heterozygosity (t = 0.02-0.09). This estimator appears to be a new and efficient way to describe the multilocus heterozygosity of a population, complementary to Fis and progeny-analysis approaches. PMID:11102388
Landsman, V; Lou, W Y W; Graubard, B I
2015-05-20
We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data constructed from a nationally representative complex survey with linked mortality records are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for the complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of a two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Cimpian, Joseph R.
2017-01-01
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Robust estimation of simulated urinary volume from camera images under bathroom illumination.
Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji
2016-08-01
General uroflowmetry methods involve a risk of nosocomial infection, or the time and effort of manual recording, so medical institutions need to measure voided volume simply and hygienically. An earlier study proposed a multiple-cylinder model that can estimate the fluid flow rate from images photographed with a camera. This study implemented a flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple-cylinder model. However, when measurements are performed in the bathroom, variation in illumination generates large amounts of noise in the extraction of the liquid region, so the estimation error becomes very large. In other words, the earlier study's camera specifications regarding shutter type and frame rate were too strict. In this study, we relax those specifications to achieve flow rate estimation with a general-purpose camera. To determine the appropriate approximating curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of our proposed method for flow rate estimation.
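The row-wise robust curve fit can be sketched with a stock RANSAC implementation; the data model, polynomial degree, and threshold below are invented for illustration, and the paper's binarization step is not reproduced:

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor, LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
rows = np.arange(200.0).reshape(-1, 1)              # scanning-row index
width = 12 - 0.04 * rows.ravel() + rng.normal(0, 0.3, 200)  # stream width
bad = rng.choice(200, 30, replace=False)            # illumination artifacts
width[bad] += rng.normal(0, 8, 30)

# quadratic trend fit that ignores rows corrupted by lighting noise
model = RANSACRegressor(make_pipeline(PolynomialFeatures(2), LinearRegression()),
                        residual_threshold=1.0)
model.fit(rows, width)
clean_width = model.predict(rows)                   # outlier-robust profile
print(int((~model.inlier_mask_).sum()), "rows rejected as outliers")
```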
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
Low-Cost 3-D Flow Estimation of Blood With Clutter.
Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali
2017-05-01
Volumetric flow rate estimation is an important ultrasound medical imaging modality that is used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods, but suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal; but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme to combine the low complexity sum-of-absolute-difference and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and replacing singular value decomposition with less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter for beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and standard deviation of 3.1% relative to the actual flow rate.
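The substitution of power iteration for a full SVD can be sketched compactly; the layout of the data matrix and the iteration count below are assumptions:

```python
import numpy as np

def power_iteration_clutter_filter(X, n_components=1, n_iter=30, seed=0):
    """Project the dominant (clutter) slow-time component(s) out of X
    (rows = spatial samples, cols = slow-time ensemble) using power
    iteration on the slow-time covariance instead of a full SVD."""
    rng = np.random.default_rng(seed)
    Xf = np.array(X, dtype=complex)
    for _ in range(n_components):
        C = Xf.conj().T @ Xf                      # slow-time covariance
        v = rng.standard_normal(C.shape[0]) + 0j
        for _ in range(n_iter):                   # converge to top eigvec
            v = C @ v
            v /= np.linalg.norm(v)
        Xf -= (Xf @ v[:, None]) @ v[None, :].conj()   # remove rank-1 part
    return Xf
```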
Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V
2015-09-01
Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options--from simple spatial smoothers to model-based methods--for estimating such rates exists, there are relatively few side-by-side comparisons, especially not with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial Empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear Empirical Bayes methods may represent a good alternative for non-specialist investigators. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
Estimation of uncertainty in tracer gas measurement of air change rates.
Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio
2010-12-01
Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone, which means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of <33%. Using this method, overestimation of the air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.
Hisano, Mizue; Connolly, Sean R; Robbins, William D
2011-01-01
Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.
Least squares estimation of avian molt rates
Johnson, D.H.
1989-01-01
A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.
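The heart of the method is an ordinary least-squares line through molt scores against capture dates. The sketch below pools all captures into a single regression (the paper's treatment of repeated captures of the same bird is more careful), with invented data:

```python
import numpy as np

def molt_rate_and_onset(dates, scores):
    """Pooled least-squares fit of molt score (0 = not started,
    1 = complete) against capture date: score = r * (date - onset),
    so the slope is the daily molt rate and -intercept/slope is the
    mean onset date."""
    r, b = np.polyfit(dates, scores, 1)
    return r, -b / r

# illustrative captures: (day of year, molt score)
dates = np.array([160, 170, 175, 182, 190, 201])
scores = np.array([0.05, 0.22, 0.30, 0.41, 0.55, 0.72])
rate, onset = molt_rate_and_onset(dates, scores)
print(f"molt rate {rate:.4f}/day, onset ~ day {onset:.0f}")
```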
Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Queen, Steven Z.; VanEepoel, John M.; Sanner, Robert M.
2005-01-01
During the Hubble Robotic Servicing Mission, the Hubble Space Telescope (HST) attitude and rates are necessary to achieve the capture of HST by the Hubble Robotic Vehicle (HRV). The attitude and rates must be determined without the HST gyros or HST attitude estimates. The HRV will be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. Second, a linearized approach is developed. The linearized approach is based on more traditional Extended Kalman filter techniques. Simulation test results for both methods are given.
Jugnia, Louis-B; Sime-Ngando, Télesphore; Gilbert, Daniel
2006-10-01
The growth rate and losses of bacterioplankton in the epilimnion of an oligo-mesotrophic reservoir were simultaneously estimated using three different methods for each process. Bacterial production was determined by means of the tritiated thymidine incorporation method, the dialysis bag method and the dilution method, while bacterial mortality was assessed with the dilution method, the disappearance of thymidine-labeled natural cells and the ingestion of fluorescent bacterial tracers by heterotrophic flagellates. The different methods used to estimate bacterial growth rates yielded similar results. On the other hand, the mortality rates obtained with the dilution method were significantly lower than those obtained with the use of thymidine-labeled natural cells. The bacterial ingestion rate by flagellates accounted on average for 39% of total bacterial mortality estimated by the dilution method, but this value fell to 5% when the total mortality was measured by the thymidine-labeling method. All this points to the critical importance of methodological aspects in the elaboration of quantitative models of matter and energy flows over time through microbial trophic networks in aquatic systems, and highlights the role of bacterioplankton as a source of carbon for higher trophic levels in the studied system.
Sakurai, Gen; Yonemura, Seiichiro; Kishimoto-Mo, Ayaka W.; Murayama, Shohei; Ohtsuka, Toshiyuki; Yokozawa, Masayuki
2015-01-01
Carbon dioxide (CO2) efflux from the soil surface, which is a major source of CO2 from terrestrial ecosystems, represents the total CO2 production at all soil depths. Although many studies have estimated the vertical profile of the CO2 production rate, one of the difficulties in estimating the vertical profile is measuring diffusion coefficients of CO2 at all soil depths in a nondestructive manner. In this study, we estimated the temporal variation in the vertical profile of the CO2 production rate using a data assimilation method, the particle filtering method, in which the diffusion coefficients of CO2 were simultaneously estimated. The CO2 concentrations at several soil depths and CO2 efflux from the soil surface (only during the snow-free period) were measured at two points in a broadleaf forest in Japan, and the data were assimilated into a simple model including a diffusion equation. We found that there were large variations in the pattern of the vertical profile of the CO2 production rate between experiment sites: the peak CO2 production rate was at soil depths around 10 cm during the snow-free period at one site, but the peak was at the soil surface at the other site. Using this method to estimate the CO2 production rate during snow-cover periods allowed us to estimate CO2 efflux during that period as well. We estimated that the CO2 efflux during the snow-cover period (about half the year) accounted for around 13% of the annual CO2 efflux at this site. Although the method proposed in this study does not ensure the validity of the estimated diffusion coefficients and CO2 production rates, the method enables us to more closely approach the “actual” values by decreasing the variance of the posterior distribution of the values. PMID:25793387
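The assimilation mechanics can be illustrated with a minimal bootstrap particle filter on a scalar stand-in state; the study's actual state includes the full vertical profile of production rates and the diffusion coefficients, and the noise levels here are invented:

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=1000,
                              sigma_state=0.05, sigma_obs=0.2):
    """Minimal bootstrap particle filter: a latent production rate
    follows a random walk and the surface efflux is observed with
    Gaussian noise. Returns the filtered posterior mean path."""
    rng = np.random.default_rng(0)
    particles = rng.normal(1.0, 0.5, n_particles)
    means = []
    for y in obs:
        particles += rng.normal(0.0, sigma_state, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)   # weight
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        means.append(particles.mean())                          # resampled
    return np.array(means)

rng = np.random.default_rng(1)
truth = 1.0 + 0.3 * np.sin(np.linspace(0, 6, 80))
est = bootstrap_particle_filter(truth + rng.normal(0, 0.2, 80))
print(f"mean abs error: {np.abs(est - truth).mean():.3f}")
```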
Prych, Edmund A.
1998-01-01
A chloride mass-balance method and a chlorine-36 isotope bomb-pulse method were used to estimate long-term average rates of deep percolation at the U.S. Department of Energy Hanford Site. Because the bomb-pulse method typically gives an upper limit and the mass-balance method may underestimate, estimates from both methods probably bracket actual rates.
[Estimation of survival rates: technics used (author's transl)].
Rodary, C; Laplanche, A; Comnougue, C; Flamant, R
1979-01-01
The direct method and the life-table methods (actuarial and Kaplan-Meier) for estimating survival rates are described here. The difference between the direct method and the life-table methods is the use of information about the patients who are still alive. Practical examples of calculation are given, with recommendations for graphical displays.
Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Queen, Steven Z.; VanEepoel, John M.; Sanner, Robert M.
2005-01-01
In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a non-linear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.
A computational method for estimating the PCR duplication rate in DNA and RNA-seq experiments.
Bansal, Vikas
2017-03-14
PCR amplification is an important step in the preparation of DNA sequencing libraries prior to high-throughput sequencing. PCR amplification introduces redundant reads in the sequence data and estimating the PCR duplication rate is important to assess the frequency of such reads. Existing computational methods do not distinguish PCR duplicates from "natural" read duplicates that represent independent DNA fragments and therefore, over-estimate the PCR duplication rate for DNA-seq and RNA-seq experiments. In this paper, we present a computational method to estimate the average PCR duplication rate of high-throughput sequence datasets that accounts for natural read duplicates by leveraging heterozygous variants in an individual genome. Analysis of simulated data and exome sequence data from the 1000 Genomes project demonstrated that our method can accurately estimate the PCR duplication rate on paired-end as well as single-end read datasets which contain a high proportion of natural read duplicates. Further, analysis of exome datasets prepared using the Nextera library preparation method indicated that 45-50% of read duplicates correspond to natural read duplicates likely due to fragmentation bias. Finally, analysis of RNA-seq datasets from individuals in the 1000 Genomes project demonstrated that 70-95% of read duplicates observed in such datasets correspond to natural duplicates sampled from genes with high expression and identified outlier samples with a 2-fold greater PCR duplication rate than other samples. The method described here is a useful tool for estimating the PCR duplication rate of high-throughput sequence datasets and for assessing the fraction of read duplicates that correspond to natural read duplicates. An implementation of the method is available at https://github.com/vibansal/PCRduplicates .
Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2005-01-01
In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. The development includes an analysis of the estimator stability given errors in the measured attitude. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given, including scenarios with erroneous measured attitudes. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.
Detection limit for rate fluctuations in inhomogeneous Poisson processes
NASA Astrophysics Data System (ADS)
Shintani, Toshiaki; Shinomoto, Shigeru
2012-04-01
Estimations of an underlying rate from data points are inevitably disturbed by the irregular occurrence of events. Proper estimation methods are designed to avoid overfitting by discounting the irregular occurrence of data, and to determine a constant rate from irregular data derived from a constant probability distribution. However, it can occur that rapid or small fluctuations in the underlying density are undetectable when the data are sparse. For an estimation method, the maximum degree of undetectable rate fluctuations is uniquely determined as a phase transition, when considering an infinitely long series of events drawn from a fluctuating density. In this study, we analytically examine an optimized histogram and a Bayesian rate estimator with respect to their detectability of rate fluctuation, and determine whether their detectable-undetectable phase transition points are given by an identical formula defining a degree of fluctuation in an underlying rate. In addition, we numerically examine the variational Bayes hidden Markov model in its detectability of rate fluctuation, and determine whether the numerically obtained transition point is comparable to those of the other two methods. Such consistency among these three principled methods suggests the presence of a theoretical limit for detecting rate fluctuations.
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
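The first (averaging-combiner) estimate has a particularly compact form: average the members' a posteriori estimates and take the plug-in Bayes error. A sketch with toy values follows; the array shapes and numbers are assumptions:

```python
import numpy as np

def ensemble_bayes_error_estimate(posteriors):
    """posteriors: array (n_classifiers, n_samples, n_classes) of each
    ensemble member's a posteriori class-probability estimates.
    Averaging across members reduces estimation variance; the plug-in
    Bayes error is then E[1 - max_c p_bar(c | x)]."""
    p_bar = posteriors.mean(axis=0)            # combine by averaging
    return float(np.mean(1.0 - p_bar.max(axis=1)))

# three members' posteriors for five samples, two classes (toy values)
p = np.array([[[.9, .1], [.6, .4], [.2, .8], [.55, .45], [.7, .3]],
              [[.8, .2], [.5, .5], [.3, .7], [.60, .40], [.75, .25]],
              [[.85, .15], [.55, .45], [.25, .75], [.5, .5], [.65, .35]]])
print(ensemble_bayes_error_estimate(p))
```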
Program CONTRAST--A general program for the analysis of several survival or recovery rate estimates
Hines, J.E.; Sauer, J.R.
1989-01-01
This manual describes the use of program CONTRAST, which implements a generalized procedure for the comparison of several rate estimates. This method can be used to test both simple and composite hypotheses about rate estimates, and we discuss its application to multiple comparisons of survival rate estimates. Several examples of the use of program CONTRAST are presented. Program CONTRAST will run on IBM-compatible computers, and requires estimates of the rates to be tested, along with associated variance and covariance estimates.
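The test CONTRAST implements is a chi-square statistic on linear contrasts of the rate estimates. A compact sketch follows, with illustrative estimates, an assumed diagonal covariance, and a contrast matrix testing equality of three survival rates:

```python
import numpy as np
from scipy.stats import chi2

def contrast_test(theta, sigma, C):
    """Chi-square test of H0: C @ theta = 0 for rate estimates theta
    with covariance matrix sigma; df equals the number of contrasts."""
    d = C @ theta
    stat = float(d @ np.linalg.inv(C @ sigma @ C.T) @ d)
    df = C.shape[0]
    return stat, df, chi2.sf(stat, df)

# do three annual survival estimates share a common value? (toy numbers)
theta = np.array([0.52, 0.61, 0.58])
sigma = np.diag([0.03, 0.025, 0.04]) ** 2     # variances on the diagonal
C = np.array([[1, -1, 0], [0, 1, -1]])        # pairwise differences
print(contrast_test(theta, sigma, C))
```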
Respiratory rate estimation from the built-in cameras of smartphones and tablets.
Nam, Yunyoung; Lee, Jinseok; Chon, Ki H
2014-04-01
This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half-VGA pixel region was the best choice only for the iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions provided signal quality better than or comparable to that of the larger pixel regions. Using the green signal and the optimal pixel regions derived for the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations, as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates higher than 26 breaths/min, but their accuracy degraded concomitantly with increased respiratory rates. Overall, the VFCDM method provided the best results for accuracy (smaller median error), consistency (smaller interquartile range of the median value), and computational efficiency (less than 0.5 s on 1 min of data using a MATLAB implementation) in extracting breathing rates that varied from 12 to 36 breaths/min. The AR method provided the least accurate respiratory rate estimation among the three methods. This work illustrates that both heart rates and normal breathing rates can be accurately derived from a video signal obtained from smartphones, an MP3 player and tablets with or without a flashlight.
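For intuition, a much-simplified sketch of the amplitude-modulation route to breathing rate (a stand-in for, not an implementation of, the AR, VFCDM, or CWT methods above); the resampling rate and frequency band are illustrative assumptions.

    import numpy as np
    from scipy.signal import find_peaks, periodogram

    def breathing_rate_from_ppg(ppg, fs):
        # pulse peaks carry a respiratory amplitude modulation (AM)
        peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # >= 0.4 s between beats
        t, amp = peaks / fs, ppg[peaks]
        grid = np.arange(t[0], t[-1], 0.25)                 # resample AM at 4 Hz
        am = np.interp(grid, t, amp)
        f, pxx = periodogram(am - am.mean(), fs=4.0)
        band = (f >= 0.1) & (f <= 0.7)                      # 6-42 breaths/min
        return 60.0 * f[band][np.argmax(pxx[band])]         # breaths per minute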
Different methods of hilar clamping during partial nephrectomy: Impact on renal function.
Lee, Jeong Woo; Kim, Hwanik; Choo, Minsoo; Park, Yong Hyun; Ku, Ja Hyeon; Kim, Hyeon Hoe; Kwak, Cheol
2014-03-01
To evaluate the impact of different hilar clamping methods on changes in renal function after partial nephrectomy. We analyzed the clinical data of 369 patients who underwent partial nephrectomy for a single renal tumor of size ≤4.0 cm and a normal contralateral kidney. Patients were separated into three groups depending on hilar clamping method: non-clamping, cold ischemia and warm ischemia. Estimated glomerular filtration rate was examined preoperatively, at nadir, and at 1 year postoperatively. Percent change in estimated glomerular filtration rate was used as the parameter to assess renal functional outcome. Percent change in nadir estimated glomerular filtration rate in the non-clamping group was significantly smaller than in the cold ischemia and warm ischemia groups (P < 0.001). However, no significant differences among the groups were noted in percent change of estimated glomerular filtration rate at 1 year (P = 0.348). The cold ischemia group had a serial change of postoperative renal function similar to that of the warm ischemia group. Percent change in 1-year estimated glomerular filtration rate increased with increasing ischemia time in the cold ischemia (P for trend = 0.073) and warm ischemia groups (P for trend = 0.010). On multivariate analysis, hilar clamping (both warm ischemia and cold ischemia) was significantly associated with percent change in nadir estimated glomerular filtration rate, but not in 1-year estimated glomerular filtration rate. Non-clamping partial nephrectomy results in a smaller percent change in nadir estimated glomerular filtration rate, whereas it carries an estimated glomerular filtration rate change at 1 year that is similar to partial nephrectomy with cold ischemia or warm ischemia. Cold ischemia and warm ischemia have a similar effect on renal function. Therefore, when hilar clamping is required, minimization of ischemia time is necessary. © 2013 The Japanese Urological Association.
Sampling studies to estimate the HIV prevalence rate in female commercial sex workers.
Pascom, Ana Roberta Pati; Szwarcwald, Célia Landmann; Barbosa Júnior, Aristides
2010-01-01
We investigated sampling methods used to estimate the HIV prevalence rate among female commercial sex workers. The studies were classified according to the adequacy of the sample size for estimating the HIV prevalence rate and according to the sampling method (probabilistic or convenience). We identified 75 studies that estimated the HIV prevalence rate among female sex workers. Most of the studies employed convenience samples, and in 35 studies the sample size was not adequate to estimate the HIV prevalence rate. The use of convenience samples limits statistical inference about the whole group. We observed an increase in the number of published studies since 2005, as well as in the number of studies that used probabilistic samples. This represents a large advance in the monitoring of risk behavior practices and HIV prevalence rate in this group.
NASA Astrophysics Data System (ADS)
Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi
2015-02-01
Recently, small satellites have been employed in various missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate the angular rate from blurred star images taken by a mission telescope, thereby achieving precise attitude stabilization. In this method, the angular velocity is estimated by assessing how blurred a star image appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes precise stabilization easier to achieve under the strict constraints faced by small satellites. We studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimate with a precision better than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors that use optics systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems expected to arise with real small satellites by performing numerical simulations.
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rates in gene expression are a key measurement of the stability of gene products and give important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from the discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimates of kinetic rates with good accuracy.
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
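A minimal sketch of the bias-corrected, transformed-linear estimator discussed above, assuming a power-law rating curve C = a·Q^b with lognormal scatter; the exp(s²/2) back-transform factor is the standard correction for retransformation bias.

    import numpy as np

    def rating_curve_fit(q, c):
        # fit ln(c) = a + b*ln(q) by ordinary least squares
        X = np.column_stack([np.ones_like(q), np.log(q)])
        coef, res, *_ = np.linalg.lstsq(X, np.log(c), rcond=None)
        a, b = coef
        s2 = res[0] / (len(q) - 2)                  # residual variance
        # bias-corrected predictor for suspended-sediment concentration
        return lambda qq: np.exp(a + s2 / 2.0) * qq ** b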
The relationship between species detection probability and local extinction probability
Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.
2004-01-01
In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rates.
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.
What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries
Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B
2011-01-01
Background: The 'lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality (the 'current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with 'gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated 'lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual 'current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
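The arithmetic of the corrected method can be sketched in a few lines: age-specific incidence of a first cancer is applied to the probability of still being alive without a previous cancer. The flat toy rates below are hypothetical, not registry values.

    def lifetime_risk(incidence, mortality):
        # incidence[a], mortality[a]: annual probabilities at age a
        alive_cancer_free, risk = 1.0, 0.0
        for inc, mort in zip(incidence, mortality):
            risk += alive_cancer_free * inc               # first cancers this year
            alive_cancer_free *= (1.0 - inc) * (1.0 - mort)
        return risk

    # toy example: flat 0.5% incidence and 1% all-cause mortality over 85 years
    print(lifetime_risk([0.005] * 85, [0.01] * 85))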
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-14
..., or Partially-Exclusive Licensing of an Invention Concerning Method for Estimating Core Body... Serial No. 61/572,677, entitled ``Method for Estimating Core Body Temperature from Heart Rate,'' filed on... core temperature from heart rate. The invention further relates to a method of determining impending...
Demographic estimation methods for plants with unobservable life-states
Kery, M.; Gregg, K.B.; Schaub, M.
2005-01-01
Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10-year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or some states, and estimates of demographic rates sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77 to 0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16% to 47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous computer algebra methods to identify the parameters that are estimable in principle. As life-states are a prominent feature in plant life cycles, multistate capture-recapture models are a natural framework for analysing population dynamics of plants with dormancy.
Nunes, Sheila Elke Araujo; Minamisava, Ruth; Vieira, Maria Aparecida da Silva; Itria, Alexander; Pessoa, Vicente Porfirio; de Andrade, Ana Lúcia Sampaio Sgambatti; Toscano, Cristiana Maria
2017-01-01
ABSTRACT Objective To determine and compare hospitalization costs of bacterial community-acquired pneumonia cases via different costing methods under the Brazilian Public Unified Health System perspective. Methods Cost-of-illness study based on primary data collected from a sample of 59 children aged between 28 days and 35 months and hospitalized due to bacterial pneumonia. Direct medical and non-medical costs were considered and three costing methods employed: micro-costing based on medical record review, micro-costing based on therapeutic guidelines and gross-costing based on the Brazilian Public Unified Health System reimbursement rates. Cost estimates obtained via the different methods were compared using the Friedman test. Results Cost estimates of inpatient cases of severe pneumonia amounted to R$ 780,70/$Int. 858.7 (medical record review), R$ 641,90/$Int. 706.90 (therapeutic guidelines) and R$ 594,80/$Int. 654.28 (Brazilian Public Unified Health System reimbursement rates). Costs estimated via micro-costing (medical record review or therapeutic guidelines) did not differ significantly (p=0.405), while estimates based on reimbursement rates were significantly lower compared to estimates based on therapeutic guidelines (p<0.001) or record review (p=0.006). Conclusion Brazilian Public Unified Health System costs estimated via different costing methods differ significantly, with gross-costing yielding lower cost estimates. Given that cost estimates obtained by the different micro-costing methods are similar, and that costing based on therapeutic guidelines is easier and less expensive to apply, this method may be a valuable alternative for estimating hospitalization costs of bacterial community-acquired pneumonia in children. PMID:28767921
Cunningham, Marc; Bock, Ariella; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana
2015-09-01
Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. © Cunningham et al.
Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina
2017-07-01
A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
Estimating Arrhenius parameters using temperature programmed molecular dynamics.
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-21
Kinetic rates at different temperatures, and the associated Arrhenius parameters whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
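A sketch of the maximum-likelihood step, assuming exponentially distributed waiting times with an Arrhenius rate k(T) = nu·exp(-Ea/(kB·T)); the initial guess and units (eV, seconds) are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    KB = 8.617e-5  # Boltzmann constant in eV/K

    def fit_arrhenius(waiting_times, temps):
        t, T = np.asarray(waiting_times), np.asarray(temps)

        def nll(p):                                  # negative log-likelihood
            ln_nu, ea = p
            ln_k = ln_nu - ea / (KB * T)             # ln k(T_i) for each event
            return -(ln_k - np.exp(ln_k) * t).sum()  # exponential waiting times

        res = minimize(nll, x0=[25.0, 0.5])          # guess: nu ~ 7e10 /s, Ea = 0.5 eV
        ln_nu, ea = res.x
        return np.exp(ln_nu), ea                     # prefactor (1/s), barrier (eV)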
Mahmood, Iftekhar
2004-01-01
The objective of this study was to evaluate the performance of the Wagner-Nelson, Loo-Riegelman, and statistical moments methods in determining the absorption rate constant(s) in the presence of a secondary peak. These methods were also evaluated when there were two absorption rates without a secondary peak. Different sets of plasma concentration versus time data for a hypothetical drug following one- or two-compartment models were generated by simulation. The true ka was compared with the ka estimated by the Wagner-Nelson, Loo-Riegelman and statistical moments methods. The results of this study indicate that the Wagner-Nelson, Loo-Riegelman and statistical moments methods may not be suitable for the estimation of absorption rate constants in the presence of a secondary peak or when absorption takes place with two absorption rates.
Estimation of time-varying growth, uptake and excretion rates from dynamic metabolomics data.
Cinquemani, Eugenio; Laroute, Valérie; Cocaign-Bousquet, Muriel; de Jong, Hidde; Ropers, Delphine
2017-07-15
Technological advances in metabolomics have made it possible to monitor the concentration of extracellular metabolites over time. From these data, it is possible to compute the rates of uptake and excretion of the metabolites by a growing cell population, providing precious information on the functioning of intracellular metabolism. The computation of the rates of these exchange reactions, however, is difficult to achieve in practice for a number of reasons, notably noisy measurements, correlations between the concentration profiles of the different extracellular metabolites, and discontinuities in the profiles due to sudden changes in metabolic regime. We present a method for precisely estimating time-varying uptake and excretion rates from time-series measurements of extracellular metabolite concentrations, specifically addressing all of the above issues. The estimation problem is formulated in a regularized Bayesian framework and solved by a combination of extended Kalman filtering and smoothing. The method is shown to improve upon methods based on spline smoothing of the data. Moreover, when applied to two actual datasets, the method recovers known features of overflow metabolism in Escherichia coli and Lactococcus lactis, and provides evidence for acetate uptake by L. lactis after glucose exhaustion. The results raise interesting perspectives for further work on rate estimation from measurements of intracellular metabolites. The Matlab code for the estimation method is available for download at https://team.inria.fr/ibis/rate-estimation-software/, together with the datasets. eugenio.cinquemani@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
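For intuition, a minimal linear sketch of rate tracking by Kalman filtering: the state holds the metabolite concentration and its time-varying exchange rate, with the rate modeled as a random walk. The paper's actual method is an extended Kalman filter with smoothing in a regularized Bayesian setting; the noise variances below are illustrative assumptions.

    import numpy as np

    def kalman_rate(y, dt, q_rate=1e-3, r_obs=1e-2):
        F = np.array([[1.0, dt], [0.0, 1.0]])   # concentration integrates the rate
        H = np.array([[1.0, 0.0]])              # only concentration is observed
        Q = np.diag([0.0, q_rate])              # process noise drives the rate
        x, P, rates = np.zeros(2), np.eye(2), []
        for obs in y:
            x, P = F @ x, F @ P @ F.T + Q                   # predict
            s = H @ P @ H.T + r_obs                         # innovation variance
            k = (P @ H.T) / s                               # Kalman gain
            x = x + (k * (obs - H @ x)).ravel()             # update state
            P = P - k @ H @ P                               # update covariance
            rates.append(x[1])
        return np.array(rates)   # estimated uptake/excretion rate over time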
A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.
2006-06-01
Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
Heart rate estimation from FBG sensors using cepstrum analysis and sensor fusion.
Zhu, Yongwei; Fook, Victor Foo Siang; Jianzhong, Emily Hao; Maniyeri, Jayachandran; Guan, Cuntai; Zhang, Haihong; Jiliang, Eugene Phua; Biswas, Jit
2014-01-01
This paper presents a method of estimating heart rate from arrays of fiber Bragg grating (FBG) sensors embedded in a mat. A cepstral-domain signal analysis technique is proposed to characterize ballistocardiogram (BCG) signals. With this technique, the average heart beat intervals can be estimated by detecting the dominant peaks in the cepstrum, and the signals of multiple sensors can be fused together to obtain a higher signal-to-noise ratio than that of each individual sensor. Experiments were conducted with 10 human subjects lying in 2 different postures on a bed. The estimated heart rate from BCG was compared with heart rate ground truth from ECG, and the mean estimation error obtained was below 1 beat per minute (BPM). The results show that the proposed fusion method can achieve promising heart rate measurement accuracy and robustness against various sensor contact conditions.
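A compact sketch of the cepstral fusion idea: each sensor's power cepstrum is computed, the cepstra are averaged across the array, and the dominant quefrency peak in a physiological range gives the mean beat interval. The quefrency band is an illustrative assumption.

    import numpy as np

    def heart_rate_cepstrum(signals, fs):
        ceps = []
        for x in signals:                        # one row per FBG sensor
            spec = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
            ceps.append(np.abs(np.fft.irfft(np.log(spec + 1e-12))))
        c = np.mean(ceps, axis=0)                # sensor fusion: average cepstra
        q = np.arange(c.size) / fs               # quefrency axis in seconds
        band = (q >= 0.4) & (q <= 1.5)           # beat intervals for 40-150 bpm
        return 60.0 / q[band][np.argmax(c[band])]   # beats per minute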
Zhonggang, Liang; Hong, Yan
2006-10-01
A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented, based on the wavelet transform and filter banks. The implementation is as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectrum of the fractal component is estimated using an autoregressive model, and the spectral exponent γ is estimated by the least squares method; finally, the fractal dimension of the HRV signal is obtained from the formula D = 2 − (γ − 1)/2. To validate the stability and reliability of the proposed method, 24 fractal signals with a known fractal dimension of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
Duryea, Elaine L; McIntire, Donald D; Leveno, Kenneth J
2015-08-01
The objective of the study was to examine the rate of preterm birth in the United States using 2 different methods of gestational age assignment and to determine which method more closely correlates with the known morbidities associated with preterm birth. Using National Center for Health Statistics data from 2012 United States birth certificates, we computed the rate of preterm birth, defined as a birth at 36 or fewer completed weeks, with gestational age assigned using the obstetric estimate as specified in the revised birth certificate. This rate was then compared with the rate when gestational age is calculated using the last menstrual period alone. The rates of neonatal morbidities associated with preterm birth were examined for each method of assigning gestational age. The rate of preterm birth was 9.7% when the obstetric estimate is used to calculate gestational age, significantly different from the rate of 11.5% when gestational age is calculated using the last menstrual period alone. In addition, the neonates identified as preterm by the obstetric estimate were more likely to qualify as low birthweight (54% vs 42%; P < .001) and to suffer morbidities such as need for assisted ventilation and surfactant use than those identified with the last menstrual period alone. That is to say, the obstetric estimate is more sensitive and specific for preterm birth by all available markers of prematurity. The preterm birth rate is 9.7% vs 11.5% and correlates more closely with the adverse neonatal outcomes associated with preterm birth when gestational age is assigned using the obstetric estimate. This method of gestational age assignment is currently used by most industrialized nations and should be considered for future reporting of US outcomes. Copyright © 2015 Elsevier Inc. All rights reserved.
Chong, K C; Fong, H F; Zee, C Y
2014-05-01
During the surveillance of influenza pandemics, underreported data are a public health challenge that complicates the understanding of pandemic threats and can undermine mitigation efforts. We propose a method to estimate incidence reporting rates at early stages of new influenza pandemics, using the 2009 pandemic H1N1 as an example. Routine surveillance data and statistics of travellers arriving from Mexico were used. Our method incorporates changes in reporting rates, such as linearly increasing trends due to enhanced surveillance. From our results, the reporting rate was estimated at 0.46% during early stages of the pandemic in Mexico. We estimated cumulative incidence in the Mexican population to be 0.7%, compared to 0.003% reported by officials in Mexico at the end of April. This method could be useful for estimating actual case numbers during new influenza pandemics, helping policy makers to better determine appropriate control measures.
Dorazio, R.M.; Rago, P.J.
1991-01-01
We simulated mark–recapture experiments to evaluate a method for estimating fishing mortality and migration rates of populations stratified at release and recovery. When fish released in two or more strata were recovered from different recapture strata in nearly the same proportions, conditional recapture probabilities were estimated outside the [0, 1] interval. The maximum likelihood estimates tended to be biased and imprecise when the patterns of recaptures produced extremely "flat" likelihood surfaces. Absence of bias was not guaranteed, however, in experiments where recapture rates could be estimated within the [0, 1] interval. Inadequate numbers of tag releases and recoveries also produced biased estimates, although the bias was easily detected by the high sampling variability of the estimates. A stratified tag–recapture experiment with sockeye salmon (Oncorhynchus nerka) was used to demonstrate procedures for analyzing data that produce biased estimates of recapture probabilities. An estimator was derived to examine the sensitivity of recapture rate estimates to assumed differences in natural and tagging mortality, tag loss, and incomplete reporting of tag recoveries.
Lincoln estimates of mallard (Anas platyrhynchos) abundance in North America.
Alisauskas, Ray T; Arnold, Todd W; Leafloor, James O; Otis, David L; Sedinger, James S
2014-01-01
Estimates of range-wide abundance, harvest, and harvest rate are fundamental for sound inferences about the role of exploitation in the dynamics of free-ranging wildlife populations, but the reliability of existing survey methods for abundance estimation is rarely assessed using alternative approaches. North American mallard populations have been surveyed each spring since 1955 using internationally coordinated aerial surveys, but population size can also be estimated with Lincoln's method using banding and harvest data. We estimated late summer population size of adult and juvenile male and female mallards in western, midcontinent, and eastern North America using Lincoln's method of dividing (i) the total estimated harvest, Ĥ, by the estimated harvest rate, ĥ, calculated as (ii) the direct band recovery rate, f̂, divided by (iii) the band reporting rate, ρ̂. Our goal was to compare estimates based on Lincoln's method with traditional estimates based on aerial surveys. Lincoln estimates of adult males and females alive in the period June-September were 4.0 (range: 2.5-5.9), 1.8 (range: 0.6-3.0), and 1.8 (range: 1.3-2.7) times larger than the respective aerial survey estimates for the western, midcontinent, and eastern mallard populations, and the two population estimates were only modestly correlated with each other (western: r = 0.70, 1993-2011; midcontinent: r = 0.54, 1961-2011; eastern: r = 0.50, 1993-2011). Higher Lincoln estimates are predictable given that the geographic scope of inference from Lincoln estimates is the entire population range, whereas sampling frames for aerial surveys are incomplete. Although each estimation method has a number of important potential biases, our review suggests that underestimation of total population size by aerial surveys is the most likely explanation. In addition to providing measures of total abundance, Lincoln's method provides estimates of fecundity and population sex ratio and could be used in integrated population models to provide greater insights about population dynamics and management of North American mallards and most other harvested species.
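The estimator itself is simple arithmetic, sketched below with hypothetical numbers: the population estimate is N = H / h, with harvest rate h = f / ρ.

    def lincoln_estimate(total_harvest, recoveries, banded, reporting_rate):
        f = recoveries / banded            # direct band recovery rate
        h = f / reporting_rate             # harvest rate
        return total_harvest / h           # Lincoln population estimate

    # toy inputs: 1.2M harvested, 2,000 of 20,000 bands recovered, 80% reported
    print(lincoln_estimate(1_200_000, 2_000, 20_000, 0.80))   # -> 9.6 million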
Estimating the effect of a rare time-dependent treatment on the recurrent event rate.
Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E
2018-05-30
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (e.g., adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as-yet-untreated controls based on the prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on the rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
King, Adam C.; Raiber, Matthias; Cox, Malcolm E.; Cendón, Dioni I.
2017-09-01
This study demonstrates the importance of the conceptual hydrogeological model for the estimation of groundwater recharge rates in an alluvial system interconnected with an ephemeral or intermittent stream in south-east Queensland, Australia. The losing/gaining condition of these streams is typically subject to temporal and spatial variability, and knowledge of these hydrological processes is critical for the interpretation of recharge estimates. Recharge rate estimates of 76-182 mm/year were determined using the water budget method. The water budget method provides useful broad approximations of recharge and discharge fluxes. The chloride mass balance (CMB) method and the tritium method were used on 17 and 13 sites respectively, yielding recharge rates of 1-43 mm/year (CMB) and 4-553 mm/year (tritium method). However, the conceptual hydrogeological model confirms that the results from the CMB method at some sites are not applicable in this setting because of overland flow and channel leakage. The tritium method was appropriate here and could be applied to other alluvial systems, provided that channel leakage and diffuse infiltration of rainfall can be accurately estimated. The water-table fluctuation (WTF) method was also applied to data from 16 bores; recharge estimates ranged from 0 to 721 mm/year. The WTF method was not suitable where bank storage processes occurred.
Lidar method to estimate emission rates from extended sources
USDA-ARS?s Scientific Manuscript database
Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...
Statistical methods for thermonuclear reaction rates and nucleosynthesis simulations
NASA Astrophysics Data System (ADS)
Iliadis, Christian; Longland, Richard; Coc, Alain; Timmes, F. X.; Champagne, Art E.
2015-03-01
Rigorous statistical methods for estimating thermonuclear reaction rates and nucleosynthesis are becoming increasingly established in nuclear astrophysics. The main challenge being faced is that experimental reaction rates are highly complex quantities derived from a multitude of different measured nuclear parameters (e.g., astrophysical S-factors, resonance energies and strengths, particle and γ-ray partial widths). We discuss the application of the Monte Carlo method to two distinct, but related, questions. First, given a set of measured nuclear parameters, how can one best estimate the resulting thermonuclear reaction rates and associated uncertainties? Second, given a set of appropriate reaction rates, how can one best estimate the abundances from nucleosynthesis (i.e., reaction network) calculations? The techniques described here provide probability density functions that can be used to derive statistically meaningful reaction rates and final abundances for any desired coverage probability. Examples are given for applications to s-process neutron sources, core-collapse supernovae, classical novae, and Big Bang nucleosynthesis.
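A minimal sketch of the Monte Carlo step for a single reaction rate, assuming the common lognormal parameterization in which a rate has a median value and a factor uncertainty (f.u.); the 16th/50th/84th percentiles then define statistically meaningful low/median/high rates for 68% coverage. The numbers are hypothetical.

    import numpy as np

    def rate_percentiles(median_rate, factor_uncertainty, n=100_000, seed=1):
        rng = np.random.default_rng(seed)
        # lognormal with mu = ln(median) and sigma = ln(f.u.)
        samples = rng.lognormal(np.log(median_rate),
                                np.log(factor_uncertainty), n)
        return np.percentile(samples, [16, 50, 84])   # low, median, high rate

    print(rate_percentiles(3.2e-5, 1.4))   # hypothetical rate with f.u. = 1.4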
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
NASA Astrophysics Data System (ADS)
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch-curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require Z to be constant over the entire period; instead, Z is assumed constant within each window of n consecutive years, and the Z values for different windows are estimated from the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimates of Z and the trend of Z. The most appropriate value of n can differ depending on the effects of different factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis, as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if either is subject to error, but the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
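The classical building block being windowed here is the catch-curve regression, in which Z is the negative slope of ln(CPUE) against age for fully recruited ages; a minimal sketch with synthetic data follows (the windowing over n-year blocks is omitted).

    import numpy as np

    def total_mortality(ages, cpue):
        slope, _ = np.polyfit(ages, np.log(cpue), 1)   # ln CPUE vs age
        return -slope                                   # Z estimate

    # noise-free cohort decaying with true Z = 0.5
    ages = np.arange(3, 10)
    print(total_mortality(ages, 100 * np.exp(-0.5 * ages)))   # -> 0.5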
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Kawado, Miyuki; Hashimoto, Shuji; Murakami, Yoshitaka; Izumida, Michiko; Ohta, Akiko; Tada, Yuki; Shigematsu, Mika; Yasui, Yoshinori; Taniguchi, Kiyosu; Nagai, Masaki
2008-01-01
BACKGROUND A method for estimating the incidence of infectious diseases from sentinel surveillance data has been proposed. In Japan, although the annual incidence rates of influenza and pediatric diseases estimated using this method have been reported, the weekly incidence rates have not. METHODS The weekly sex- and age-specific numbers of cases in the sentinel medical institutions in the National Epidemiological Surveillance of Infectious Diseases in Japan in 2002-2005 were used. Annual and weekly incidence rates of influenza and 12 pediatric diseases were estimated by the above-mentioned method, under the assumption that sentinels are randomly selected from all medical institutions. RESULTS The annual incidence rate of influenza in 2002-2005 was 57.7-142.6 per 1,000 population. The highest weekly incidence rate was 7.4 at week 8 in 2002, 14.9 at week 4 in 2003, 14.1 at week 5 in 2004, and 21.2 at week 9 in 2005. The annual incidence rate per 1,000 population aged 0-14 years in 2002-2005 was less than 5.0 for pertussis, rubella and measles, 293.2-320.8 for infectious gastroenteritis, and 5.3-89.6 for the 8 other diseases. The highest weekly incidence rate was less than 1.0 for exanthem subitum, and more than 5.0 for infectious gastroenteritis, hand-foot-mouth disease and herpangina. CONCLUSION We estimated annual and weekly incidence rates of influenza and pediatric diseases in Japan in 2002-2005, and described their temporal variation. PMID:18239340
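Under the stated random-sentinel assumption, the scaling behind such estimates reduces to simple proportions; the sketch below uses hypothetical inputs, not the surveillance figures above.

    def weekly_incidence_rate(sentinel_cases, n_sentinels, n_institutions,
                              population):
        # scale sentinel counts up to all institutions, then per 1,000 people
        estimated_cases = sentinel_cases * (n_institutions / n_sentinels)
        return 1000.0 * estimated_cases / population

    # e.g., 60,000 cases at 3,000 of 30,000 institutions, population 127 million
    print(weekly_incidence_rate(60_000, 3_000, 30_000, 127_000_000))  # ~4.7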
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
Determining Mutation Rates in Bacterial Populations
Rosche, William A.; Foster, Patricia L.
2010-01-01
When properly determined, spontaneous mutation rates are a more accurate and biologically meaningful reflection of the underlying mutagenic mechanism than are mutation frequencies. Because bacteria grow exponentially and mutations arise stochastically, methods to estimate mutation rates depend on theoretical models that describe the distribution of mutant numbers among parallel cultures, as in the original Luria-Delbrück fluctuation analysis. An accurate determination of mutation rate depends on understanding the strengths and limitations of these methods, and how to design fluctuation assays to optimize a given method. In this paper we describe a number of methods to estimate mutation rates, give brief accounts of their derivations, and discuss how they behave under various experimental conditions. PMID:10610800
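Among the simplest of these estimators is the p0 (null-fraction) method: because mutations arise as a Poisson process during growth, the fraction of parallel cultures with no mutants is p0 = exp(-m), giving m = -ln(p0) expected mutations per culture and a rate of m/Nt per cell division. A sketch with made-up assay counts:

    import numpy as np

    def mutation_rate_p0(mutant_counts, final_cells):
        counts = np.asarray(mutant_counts)
        p0 = np.mean(counts == 0)        # fraction of cultures with no mutants
        m = -np.log(p0)                  # expected mutations per culture
        return m / final_cells           # mutation rate per cell division

    # toy fluctuation assay: 12 of 40 parallel cultures had no mutants, Nt = 2e8
    counts = [0] * 12 + [3, 7, 1, 25, 2] * 5 + [110, 4, 9]
    print(mutation_rate_p0(counts, 2e8))   # ~6e-9 per cell division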
Tracking variable sedimentation rates in orbitally forced paleoclimate proxy series
NASA Astrophysics Data System (ADS)
Li, M.; Kump, L. R.; Hinnov, L.
2017-12-01
This study addresses two fundamental issues in cyclostratigraphy: quantitative testing of orbital forcing in cyclic sedimentary sequences and tracking of variable sedimentation rates. The methodology proposed here addresses these issues as an inverse problem, estimating the product-moment correlation coefficient between the frequency spectra of orbital solutions and paleoclimate proxy series over a range of "test" sedimentation rates. It is inspired by the ASM method (1). The number of orbital parameters involved in the estimation is also considered. The method relies on the hypothesis that orbital forcing had a significant impact on the paleoclimate proxy variations, and this hypothesis is itself tested. The null hypothesis of no astronomical forcing is evaluated using the Beta distribution, for which the shape parameters are estimated using a Monte Carlo simulation approach. We introduce a metric for the most likely sedimentation rate that combines the product-moment correlation coefficient, the H0 significance level, and the number of contributing orbital parameters, i.e., the CHO value. The CHO metric is applied with a sliding window to track variable sedimentation rates along the paleoclimate proxy series. Two forward models with uniform and variable sedimentation rates are evaluated to demonstrate the robustness of the method. The CHO method is applied to the classical Late Triassic Newark depth rank series; the estimated sedimentation rates match closely with previously published sedimentation rates and provide a more highly time-resolved estimate (2,3). References: (1) Meyers, S.R., Sageman, B.B., Amer. J. Sci., 307, 773-792, 2007; (2) Kent, D.V., Olsen, P.E., Muttoni, G., Earth-Sci. Rev., 166, 153-180, 2017; (3) Li, M., Zhang, Y., Huang, C., Ogg, J., Hinnov, L., Wang, Y., Zou, Z., Li, L., Earth Planet. Sci. Lett., 2017, doi:10.1016/j.epsl.2017.07.015
Cross, Alan; Collard, Mark; Nelson, Andrew
2008-01-01
The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached. PMID:18560580
Bekessy, A.; Molineaux, L.; Storey, J.
1976-01-01
A method is described of estimating the malaria incidence rate h and the recovery rate r from longitudinal data. The method is based on the assumption that the phenomenon of patent parasitaemia can be represented by a reversible two-state catalytic model; it is applicable to all problems that can be represented by such a model. The method was applied to data on falciparum malaria from the West African savanna and the findings suggested that immunity increases the rate of recovery from patent parasitaemia by a factor of up to 10, and also reduces the number of episodes of patent parasitaemia resulting from one inoculation. Under the effect of propoxur, ĥ varies with the estimated man-biting rate of the vector while r̂ increases, possibly owing to reduced super-infection. PMID:800968
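To make the catalytic model concrete, here is a minimal sketch (not the paper's original fitting procedure) of how h and r could be recovered by maximum likelihood from transition counts between two surveys, using the closed-form transition probabilities of a reversible two-state process; the counts and survey interval below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def transition_matrix(h, r, dt):
    """Transition probabilities of the reversible two-state model
    (state 0 = negative, state 1 = patent parasitaemia) over time dt."""
    s = h + r
    e = np.exp(-s * dt)
    return np.array([[(r + h * e) / s, h * (1 - e) / s],
                     [r * (1 - e) / s, (h + r * e) / s]])

def neg_log_lik(theta, counts, dt):
    """counts[i, j] = number of observed i -> j transitions between surveys."""
    h, r = np.exp(theta)                 # log-parameters keep h, r > 0
    P = transition_matrix(h, r, dt)
    return -np.sum(counts * np.log(P))

# Hypothetical transition counts between two surveys dt = 0.25 years apart.
counts = np.array([[830., 120.],
                   [ 90., 310.]])
fit = minimize(neg_log_lik, x0=np.log([0.5, 0.5]), args=(counts, 0.25))
h_hat, r_hat = np.exp(fit.x)
print(f"incidence h = {h_hat:.2f}/yr, recovery r = {r_hat:.2f}/yr")
```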
A semiparametric separation curve approach for comparing correlated ROC data from multiple markers
Tang, Liansheng Larry; Zhou, Xiao-Hua
2012-01-01
In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360
Hone, J.; Pech, R.; Yip, P.
1992-01-01
Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. To estimate whether establishment will occur requires extensive experience or a mathematical model of disease dynamics and estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed and sources of bias and alternative methods discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed. PMID:1582476
Sundaram, Aparna; Vaughan, Barbara; Kost, Kathryn; Bankole, Akinrinola; Finer, Lawrence; Singh, Susheela; Trussell, James
2017-03-01
Contraceptive failure rates measure a woman's probability of becoming pregnant while using a contraceptive. Information about these rates enables couples to make informed contraceptive choices. Failure rates were last estimated for 2002, and social and economic changes that have occurred since then necessitate a reestimation. To estimate failure rates for the most commonly used reversible methods in the United States, data from the 2006-2010 National Survey of Family Growth were used; some 15,728 contraceptive use intervals, contributed by 6,683 women, were analyzed. Data from the Guttmacher Institute's 2008 Abortion Patient Survey were used to adjust for abortion underreporting. Kaplan-Meier methods were used to estimate the associated single-decrement probability of failure by duration of use. Failure rates were compared with those from 1995 and 2002. Long-acting reversible contraceptives (the IUD and the implant) had the lowest failure rates of all methods (1%), while condoms and withdrawal carried the highest probabilities of failure (13% and 20%, respectively). However, the failure rate for the condom had declined significantly since 1995 (from 18%), as had the failure rate for all hormonal methods combined (from 8% to 6%). The failure rate for all reversible methods combined declined from 12% in 2002 to 10% in 2006-2010. These broad-based declines in failure rates reverse a long-term pattern of minimal change. Future research should explore what lies behind these trends, as well as possibilities for further improvements. © 2017 The Authors. Perspectives on Sexual and Reproductive Health published by Wiley Periodicals, Inc., on behalf of the Guttmacher Institute.
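As a hedged illustration of the survival-analysis step, the sketch below estimates a single-decrement 12-month failure probability from use intervals with a Kaplan-Meier fit; it assumes the lifelines package, and the toy data frame and column names are hypothetical, not the NSFG layout.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical data: one row per contraceptive use interval.
# 'months' = duration of use; 'failed' = 1 if the interval ended in an
# unintended pregnancy, 0 if censored (method switch, survey cutoff, etc.).
intervals = pd.DataFrame({
    "months": [3, 12, 24, 7, 12, 36, 5, 12, 18, 9],
    "failed": [0,  1,  0, 0,  0,  0, 1,  0,  1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(intervals["months"], event_observed=intervals["failed"])

# Single-decrement probability of failure within the first 12 months of use.
p_fail_12 = 1.0 - kmf.survival_function_at_times(12).iloc[0]
print(f"12-month failure probability: {p_fail_12:.1%}")
```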
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm that minimizes Wahba's loss function, are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
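For readers unfamiliar with Wahba's problem, the following sketch solves it by Davenport's q-method, a direct eigendecomposition that yields the same optimal quaternion QUEST obtains via its characteristic-equation shortcut; this is a reference implementation, not the flight algorithm described above, and the example vectors are hypothetical.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Davenport's q-method: the quaternion minimizing Wahba's loss
    L(A) = 0.5 * sum_i w_i ||b_i - A r_i||^2 is the eigenvector of the
    Davenport K matrix with the largest eigenvalue."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)    # eigh sorts eigenvalues ascending
    q = eigvecs[:, -1]                      # [qx, qy, qz, qw]
    return q / np.linalg.norm(q)

# Two noise-free observations consistent with a 45-degree rotation about x.
b = [np.array([1., 0., 0.]), np.array([0., 0.707, 0.707])]
r = [np.array([1., 0., 0.]), np.array([0., 1., 0.])]
print(q_method(b, r, weights=[1.0, 1.0]))
```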
Quadratic semiparametric Von Mises calculus
Robins, James; Li, Lingling; Tchetgen, Eric
2009-01-01
We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect the method leads to a bias-variance trade-off, and results in estimators that converge at a rate slower than n^(-1/2). In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^(-1/2) rate. PMID:23087487
NASA Astrophysics Data System (ADS)
Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong
2018-02-01
Accurate estimation of longitudinal force, lateral vehicle speed and yaw rate is of great significance to torque allocation and stability control for four-wheel independently driven electric vehicles (4WID-EVs). A fusion method is proposed to estimate the longitudinal force, lateral vehicle speed and yaw rate for 4WID-EVs. The electric driving wheel model (EDWM) is introduced into the longitudinal force estimation: a longitudinal force observer (LFO) is first designed based on an adaptive high-order sliding-mode observer (HSMO), and the convergence of the LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which a strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via CarSim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method. The performance of the LFO in practice is verified by an experiment on a chassis dynamometer bench.
Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...
The problem of estimating recent genetic connectivity in a changing world.
Samarasin, Pasan; Shuter, Brian J; Wright, Stephen I; Rodd, F Helen
2017-02-01
Accurate understanding of population connectivity is important to conservation because dispersal can play an important role in population dynamics, microevolution, and assessments of extirpation risk and population rescue. Genetic methods are increasingly used to infer population connectivity because advances in technology have made them more advantageous (e.g., cost effective) relative to ecological methods. Given the reductions in wildlife population connectivity since the Industrial Revolution and more recent drastic reductions from habitat loss, it is important to know the accuracy of and biases in genetic connectivity estimators when connectivity has declined recently. Using simulated data, we investigated the accuracy and bias of 2 common estimators of migration (movement of individuals among populations) rate. We focused on the timing of the connectivity change and the magnitude of that change on the estimates of migration by using a coalescent-based method (Migrate-n) and a disequilibrium-based method (BayesAss). Contrary to expectations, when historically high connectivity had declined recently: (i) both methods over-estimated recent migration rates; (ii) the coalescent-based method (Migrate-n) provided better estimates of recent migration rate than the disequilibrium-based method (BayesAss); (iii) the coalescent-based method did not accurately reflect long-term genetic connectivity. Overall, our results highlight the problems with comparing coalescent and disequilibrium estimates to make inferences about the effects of recent landscape change on genetic connectivity among populations. We found that contrasting these 2 estimates to make inferences about genetic-connectivity changes over time could lead to inaccurate conclusions. © 2016 Society for Conservation Biology.
Blaizot, Stéphanie; Kim, Andrea A; Zeh, Clement; Riche, Benjamin; Maman, David; De Cock, Kevin M; Etard, Jean-François; Ecochard, René
2017-05-01
Estimating HIV incidence is critical for identifying groups at risk for HIV infection, planning and targeting interventions, and evaluating these interventions over time. The use of reliable estimation methods for HIV incidence is thus of high importance. The aim of this study was to compare methods for estimating HIV incidence in a population-based cross-sectional survey. The incidence estimation methods evaluated included assay-derived methods, a testing history-derived method, and a probability-based method applied to data from the Ndhiwa HIV Impact in Population Survey (NHIPS). Incidence rates by sex and age and cumulative incidence as a function of age were presented. HIV incidence ranged from 1.38 [95% confidence interval (CI) 0.67-2.09] to 3.30 [95% CI 2.78-3.82] per 100 person-years overall; 0.59 [95% CI 0.00-1.34] to 2.89 [95% CI 0.86-6.45] in men; and 1.62 [95% CI 0.16-6.04] to 4.03 [95% CI 3.30-4.77] per 100 person-years in women. Women had higher incidence rates than men for all methods. Incidence rates were highest among women aged 15-24 and 25-34 years and highest among men aged 25-34 years. Comparison of different methods showed variations in incidence estimates, but they were in agreement to identify most-at-risk groups. The use and comparison of several distinct approaches for estimating incidence are important to provide the best-supported estimate of HIV incidence in the population.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model, including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and to have a shorter memory lag than the QT interval.
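The following sketch illustrates only the general idea of folding a physiological restriction into the parameterization so that unconstrained descent methods apply; the exponential-memory QT model, parameter names, and data are illustrative stand-ins, not the exact model of the article.

```python
import numpy as np
from scipy.optimize import minimize

def qt_model(theta, rr):
    """Illustrative rate-adaptation model: QT depends linearly on an
    exponentially weighted memory of past RR intervals. tau > 0 is
    enforced by optimizing log(tau), so plain descent methods apply."""
    a, b, log_tau = theta
    tau = np.exp(log_tau)                   # memory time constant, in beats
    alpha = np.exp(-1.0 / tau)
    rr_mem = np.empty_like(rr)
    rr_mem[0] = rr[0]
    for k in range(1, len(rr)):             # first-order memory filter
        rr_mem[k] = alpha * rr_mem[k - 1] + (1 - alpha) * rr[k]
    return a + b * rr_mem

def cost(theta, rr, qt):
    return np.sum((qt - qt_model(theta, rr)) ** 2)

# Hypothetical beat-by-beat RR and QT series, in seconds.
rr = 0.8 + 0.1 * np.sin(np.linspace(0, 6, 300))
qt = 0.30 + 0.12 * rr + 0.005 * np.random.default_rng(0).standard_normal(300)
fit = minimize(cost, x0=[0.3, 0.1, np.log(30.0)], args=(rr, qt))
print(fit.x)
```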
Luo, Lola; Small, Dylan; Stewart, Walter F.; Roy, Jason A.
2013-01-01
Chronic diseases are often described by stages of severity. Clinical decisions about what to do are influenced by the stage, whether a patient is progressing, and the rate of progression. For chronic kidney disease (CKD), relatively little is known about the transition rates between stages. To address this, we used electronic health records (EHR) data on a large primary care population, which should have the advantage of having both sufficient follow-up time and sample size to reliably estimate transition rates for CKD. However, EHR data have some features that threaten the validity of any analysis. In particular, the timing and frequency of laboratory values and clinical measurements are not determined a priori by research investigators, but rather, depend on many factors, including the current health of the patient. We developed an approach for estimating CKD stage transition rates using hidden Markov models (HMMs), when the level of information and observation time vary among individuals. To estimate the HMMs in a computationally manageable way, we used a “discretization” method to transform daily data into intervals of 30 days, 90 days, or 180 days. We assessed the accuracy and computation time of this method via simulation studies. We also used simulations to study the effect of informative observation times on the estimated transition rates. Our simulation results showed good performance of the method, even when missing data are non-ignorable. We applied the methods to EHR data from over 60,000 primary care patients who have chronic kidney disease (stage 2 and above). We estimated transition rates between six underlying disease states. The results were similar for men and women. PMID:25848580
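A minimal sketch of the discretization idea: given a continuous-time transition rate matrix Q for the disease states, the transition probability matrix over a 30-, 90-, or 180-day interval is the matrix exponential expm(Q·Δt); the three-state rate matrix below is hypothetical, not estimated from the EHR data.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical transition rate matrix (per day) for three illustrative
# disease states; rows sum to zero, off-diagonals are transition rates.
Q = np.array([[-0.004,  0.004,  0.000],
              [ 0.001, -0.003,  0.002],
              [ 0.000,  0.000,  0.000]])   # third state absorbing

for dt in (30, 90, 180):                    # candidate discretization widths
    P = expm(Q * dt)                        # P[i, j] = Pr(state j at t+dt | state i at t)
    print(f"dt = {dt:3d} days, P(state 1 -> state 2) = {P[0, 1]:.3f}")
```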
Method for Estimating Annual Atrazine Use for Counties in the Conterminous United States, 1992-2007
Thelin, Gail P.; Stone, Wesley W.
2010-01-01
A method was developed to estimate annual atrazine use during 1992 to 2007 on sixteen crops and four agricultural land uses. For each year, atrazine use was estimated for all counties in the conterminous United States (except California) by combining (1) proprietary data from the Doane Marketing Research-Kynetec (DMRK) AgroTrak database on the mass of atrazine applied to agricultural crops, (2) harvested crop acreage, by county, from the 1992, 1997, 2002, and 2007 Censuses of Agriculture, and (3) annual harvested crop acreage from the National Agricultural Statistics Service (NASS) for non-Census years. DMRK estimates of pesticide use on individual crops were derived from surveys of major field crops and selected specialty crops in multicounty areas referred to as Crop Reporting Districts (CRD). The CRD-level atrazine-use estimates were disaggregated to obtain county-level application rates by dividing the mass (pounds) of pesticides applied to a crop by the acreage of that crop in the CRD to yield a rate per harvested acre. When atrazine-use estimates were not available for a CRD, crop, or year, an estimated rate was developed following a hierarchy of decision rules that checked first for the availability of a crop application rate from surveyed atrazine application rate(s) for adjacent CRDs for a specific year, and second, the rates from surveyed CRDs within U.S. Department of Agriculture Farm Production Regions for a specific year or multiple years. The estimation method applied linear interpolation to estimate crop acreage for years when harvested acres for a crop and county were not reported in either the Census of Agriculture or the NASS database, but were reported by these data sources for other years for that crop and county. Data on atrazine use for the counties in California were obtained from farmers' reports of pesticide use collected and published by the California Department of Pesticide Regulation-Pesticide Use Reporting (DPR-PUR) because these data are more complete than the DMRK survey data. National and state annual atrazine-use totals derived by this method were compared with other published pesticide-use estimates and were highly correlated. The method developed is designed to be applicable to other pesticides for which there are similar data; however, for some pesticides that are applied to specialty crops, fewer surveys are usually available to estimate application rates and there are a greater number of years with unreported crop acreage, potentially resulting in greater uncertainty in use estimates.
A new radial strain and strain rate estimation method using autocorrelation for carotid artery
NASA Astrophysics Data System (ADS)
Ye, Jihui; Kim, Hoonmin; Park, Jongho; Yeo, Sunmi; Shim, Hwan; Lim, Hyungjoon; Yoo, Yangmo
2014-03-01
Atherosclerosis is a leading cause of cardiovascular disease. The early diagnosis of atherosclerosis is of clinical interest since it can prevent any adverse effects of atherosclerotic vascular diseases. In this paper, a new carotid artery radial strain estimation method based on autocorrelation is presented. In the proposed method, the strain is first estimated by the autocorrelation of two complex signals from consecutive frames. Then, the angular phase from the autocorrelation is converted to strain and strain rate, which are analyzed over time. In addition, a 2D strain image over a region of interest in a carotid artery can be displayed. To evaluate the feasibility of the proposed radial strain estimation method, radiofrequency (RF) data of 408 frames in the carotid artery of a volunteer were acquired by a commercial ultrasound system equipped with a research package (V10, Samsung Medison, Korea) using an L5-13IS linear array transducer. From the in vivo carotid artery data, the mean strain estimate was -0.1372 while its minimum and maximum values were -2.961 and 0.909, respectively. Moreover, the overall strain estimates are highly correlated with the reconstructed M-mode trace. Similar results were obtained from the estimation of the strain rate change over time. These results indicate that the proposed carotid artery radial strain estimation method is useful for assessing the arterial wall's stiffness noninvasively without increasing the computational complexity.
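A hedged sketch of the core autocorrelation step, in the spirit of Kasai-type phase-shift estimators (not necessarily the authors' exact pipeline): the phase of the inter-frame autocorrelation of complex baseband echoes gives axial displacement, and strain is its axial gradient. The center frequency, sound speed, sample spacing, and window length are assumed values.

```python
import numpy as np

def radial_strain(iq_prev, iq_curr, f0=7.5e6, c=1540.0, dz=0.1e-3, win=16):
    """Kasai-style sketch: iq_prev/iq_curr are 1-D complex (IQ) A-lines from
    consecutive frames; dz is the axial sample spacing in metres; win is an
    axial averaging window (in samples) to reduce phase noise."""
    ac = iq_curr * np.conj(iq_prev)                 # lag-one frame autocorrelation
    kernel = np.ones(win) / win
    ac_avg = np.convolve(ac, kernel, mode="same")   # axial averaging
    phase = np.angle(ac_avg)
    displacement = phase * c / (4.0 * np.pi * f0)   # metres, per frame pair
    strain = np.gradient(displacement, dz)          # dimensionless
    return strain
```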
Accuracy Rates of Ancestry Estimation by Forensic Anthropologists Using Identified Forensic Cases.
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2017-07-01
A common task in forensic anthropology involves the estimation of the ancestry of a decedent by comparing their skeletal morphology and measurements to skeletons of individuals from known geographic groups. However, the accuracy rates of ancestry estimation methods in actual forensic casework have rarely been studied. This article uses 99 forensic cases with identified skeletal remains to develop accuracy rates for ancestry estimations conducted by forensic anthropologists. The overall rate of correct ancestry estimation from these cases is 90.9%, which is comparable to most research-derived rates and those reported by individual practitioners. Statistical tests showed no significant difference in accuracy rates depending on examiner education level or on the estimated or identified ancestry. More recent cases showed a significantly higher accuracy rate. The incorporation of metric analyses into the ancestry estimate in these cases led to a higher accuracy rate. © 2017 American Academy of Forensic Sciences.
Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.
2011-01-01
Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
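As a minimal illustration of the simplest model in the candidate set, the sketch below maximizes the constant-psi, constant-p occupancy likelihood from detection histories; the detection counts are hypothetical, not LTRMP data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(theta, det, visits):
    """Constant-psi, constant-p site occupancy likelihood. det[i] = number
    of visits with detections at site i out of visits[i] visits."""
    psi, p = expit(theta)                  # logit scale -> probabilities
    lik = np.where(
        det > 0,
        psi * p**det * (1 - p)**(visits - det),    # occupied and detected
        psi * (1 - p)**visits + (1 - psi))         # never detected
    return -np.sum(np.log(lik))

# Hypothetical data: 3 visits to each of 8 sites.
det = np.array([2, 0, 1, 0, 3, 0, 1, 0])
visits = np.full(8, 3)
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(det, visits))
psi_hat, p_hat = expit(fit.x)
print(f"occupancy = {psi_hat:.2f}, detection = {p_hat:.2f}")
```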
An estimate for the thermal photon rate from lattice QCD
NASA Astrophysics Data System (ADS)
Brandt, Bastian B.; Francis, Anthony; Harris, Tim; Meyer, Harvey B.; Steinberg, Aman
2018-03-01
We estimate the production rate of photons by the quark-gluon plasma in lattice QCD. We propose a new correlation function which provides better control over the systematic uncertainty in estimating the photon production rate at photon momenta in the range πT/2 to 2πT. The relevant Euclidean vector current correlation functions are computed with Nf = 2 Wilson clover fermions in the chirally-symmetric phase. In order to estimate the photon rate, an ill-posed problem for the vector-channel spectral function must be regularized. We use both a direct model for the spectral function and a model-independent estimate from the Backus-Gilbert method to give an estimate for the photon rate.
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
2017-08-19
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
Estimating Arrhenius parameters using temperature programmed molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in
2016-07-21
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
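A hedged sketch of the estimation idea: if waiting times at temperature T are exponential with Arrhenius rate k(T) = ν·exp(−Ea/kB·T), the two Arrhenius parameters can be recovered by maximizing the joint likelihood of waiting times pooled across sampled temperatures. The synthetic data below stand in for a temperature programmed run; parameter values are assumed.

```python
import numpy as np
from scipy.optimize import minimize

KB = 8.617e-5                      # Boltzmann constant, eV/K

def neg_log_lik(theta, temps, waits):
    """Exponential waiting times with Arrhenius rate k(T) = nu*exp(-Ea/(kB*T));
    theta = [log(nu), Ea]; temps[i] is the temperature of waiting time waits[i]."""
    log_nu, ea = theta
    log_k = log_nu - ea / (KB * temps)
    return -np.sum(log_k - np.exp(log_k) * waits)   # log f(t) = log k - k t

# Synthetic waiting times at three sampled temperatures.
rng = np.random.default_rng(1)
temps = np.repeat([500., 600., 700.], 300)
true_k = 1e13 * np.exp(-1.0 / (KB * temps))         # nu = 1e13/s, Ea = 1 eV
waits = rng.exponential(1.0 / true_k)

fit = minimize(neg_log_lik, x0=[np.log(1e12), 0.8], args=(temps, waits))
print(f"nu = {np.exp(fit.x[0]):.2e} /s, Ea = {fit.x[1]:.2f} eV")
```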
Bounded Kalman filter method for motion-robust, non-contact heart rate estimation
Prakash, Sakthi Kumar Arul; Tucker, Conrad S.
2018-01-01
The authors of this work present a real-time measurement of heart rate across different lighting conditions and motion categories. This is an advancement over existing remote photoplethysmography (rPPG) methods that require a static, controlled environment for heart rate detection, making them impractical for real-world scenarios wherein a patient may be in motion, or remotely connected to a healthcare provider through telehealth technologies. The algorithm aims to minimize motion artifacts such as blurring and noise due to head movements (uniform, random) by employing (i) a blur identification and denoising algorithm for each frame and (ii) a bounded Kalman filter technique for motion estimation and feature tracking. A case study is presented that demonstrates the feasibility of the algorithm in non-contact estimation of the pulse rate of subjects performing everyday head and body movements. The method in this paper outperforms state-of-the-art rPPG methods in heart rate detection, as revealed by the benchmarked results. PMID:29552419
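To illustrate the bounding idea only (this is not the authors' tracking algorithm, which bounds feature motion between frames), here is a toy Kalman filter on the heart rate itself, with the estimate clamped to a physiological range; all noise settings are assumed.

```python
import numpy as np

def bounded_kf(hr_meas, dt=1.0, q=0.5, r=9.0, lo=40.0, hi=220.0):
    """Toy bounded Kalman filter: constant-velocity model for heart rate,
    with the rate estimate clamped to physiological bounds [lo, hi] bpm
    after each update. hr_meas: per-second heart rate measurements (bpm)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state: [rate, rate change]
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = np.array([hr_meas[0], 0.0])
    P = np.eye(2) * 100.0
    out = []
    for z in hr_meas:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                        # update
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        x[0] = np.clip(x[0], lo, hi)               # enforce physiological bounds
        out.append(x[0])
    return np.array(out)
```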
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, Landis
1998-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
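A minimal sketch of the exponentially autocorrelated (first-order Gauss-Markov) process used above to model angular acceleration in place of an uncertain spacecraft dynamic model: the discrete form is x(k+1) = exp(−Δt/τ)·x(k) + w(k) with Var(w) = σ²(1 − exp(−2Δt/τ)). The correlation time and steady-state standard deviation are assumed values.

```python
import numpy as np

def gauss_markov_step(x, dt, tau, sigma, rng):
    """One step of a first-order Gauss-Markov (exponentially autocorrelated)
    process with correlation time tau and steady-state std sigma."""
    phi = np.exp(-dt / tau)
    w = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2))
    return phi * x + w

rng = np.random.default_rng(0)
x, traj = 0.0, []
for _ in range(600):                 # simulate 600 s at 1 Hz
    x = gauss_markov_step(x, dt=1.0, tau=60.0, sigma=1e-4, rng=rng)
    traj.append(x)
```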
The composite dynamic method as evidence for age-specific waterfowl mortality
Burnham, Kenneth P.; Anderson, David R.
1979-01-01
For the past 25 years estimation of mortality rates for waterfowl has been based almost entirely on the composite dynamic life table. We examined the specific assumptions for this method and derived a valid goodness-of-fit test. We performed this test on 45 data sets representing a cross section of banding samples for various waterfowl species, geographic areas, banding periods, and age/sex classes. We found that: (1) the composite dynamic method was rejected (P < 0.001) in 37 of the 45 data sets (in fact, 29 were rejected at P < 0.00001) and (2) recovery and harvest rates are year-specific (a critical violation of the necessary assumptions). We conclude that the restrictive assumptions required for the composite dynamic method to produce valid estimates of mortality rates are not met in waterfowl data. Also we demonstrate that even when the required assumptions are met, the method produces very biased estimates of age-specific mortality rates. We believe the composite dynamic method should not be used in the analysis of waterfowl banding data. Furthermore, the composite dynamic method does not provide valid evidence for age-specific mortality rates in waterfowl.
On the estimation of spread rate for a biological population
Jim Clark; Lajos Horváth; Mark Lewis
2001-01-01
We propose a nonparametric estimator for the rate of spread of an introduced population. We prove that the limit distribution of the estimator is normal or stable, depending on the behavior of the moment generating function. We show that resampling methods can also be used to approximate the distribution of the estimators.
A comparative review of estimates of the proportion unchanged genes and the false discovery rate
Broberg, Per
2005-01-01
Background In the analysis of microarray data one generally produces a vector of p-values that for each gene give the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR) and how to make inferences based on these concepts. Six published methods for estimating the proportion unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value is illustrated with examples. Five published estimates of the FDR and one new are presented and tested. Implementations in R code are available. Results A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed a varying performance, and were sometimes misleading. The new method had a very low error. Conclusion The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information regarding false positive and negative rates as well as the proportion unchanged when identifying changed genes. PMID:16086831
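For concreteness, here is a hedged sketch of one common nonparametric estimator of the proportion unchanged (a Storey-type threshold estimator) combined with Benjamini-Hochberg-style q-values; it is representative of, but not necessarily identical to, the estimators compared in the article.

```python
import numpy as np

def pi0_estimate(pvals, lam=0.5):
    """Storey-type estimate of the proportion of unchanged genes: p-values
    above lam are assumed to come from unchanged genes, whose p-values
    are uniform on (0, 1)."""
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

def q_values(pvals, pi0=1.0):
    """Benjamini-Hochberg style q-values, scaled by pi0."""
    m = len(pvals)
    order = np.argsort(pvals)
    q = np.empty(m)
    prev = 1.0
    for rank, i in enumerate(order[::-1]):          # largest p-value first
        prev = min(prev, pi0 * pvals[i] * m / (m - rank))
        q[i] = prev
    return q

# Toy p-values with some enrichment near zero.
pvals = np.random.default_rng(0).uniform(size=1000) ** 1.5
pi0 = pi0_estimate(pvals)
q = q_values(pvals, pi0)
print(f"estimated proportion unchanged: {pi0:.2f}, min q-value: {q.min():.3f}")
```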
Estimating 1 min rain rate distributions from numerical weather prediction
NASA Astrophysics Data System (ADS)
Paulson, Kevin S.
2017-01-01
Internationally recognized prognostic models of rain fade on terrestrial and Earth-space EHF links rely fundamentally on distributions of 1 min rain rates. Currently, in Rec. ITU-R P.837-6, these distributions are generated using the Salonen-Poiares Baptista method where 1 min rain rate distributions are estimated from long-term average annual accumulations provided by numerical weather prediction (NWP). This paper investigates an alternative to this method based on the distribution of 6 h accumulations available from the same NWPs. Rain rate fields covering the UK, produced by the Nimrod network of radars, are integrated to estimate the accumulations provided by NWP, and these are linked to distributions of fine-scale rain rates. The proposed method makes better use of the available data. It is verified on 15 NWP regions spanning the UK, and the extension to other regions is discussed.
NASA Technical Reports Server (NTRS)
Zahorian, Stephen A. (Inventor); Livingston, David L. (Inventor); Pretlow, III, Robert A. (Inventor)
1996-01-01
An apparatus is presented for acquiring signals emitted by a fetus, identifying fetal heart beats, and determining a fetal heart rate. Multiple sensor signals are outputted by a passive fetal heart rate monitoring sensor. Multiple parallel nonlinear filters filter these multiple sensor signals to identify fetal heart beats in the signal data. A processor determines a fetal heart rate based on these identified fetal heart beats. The processor includes the use of a figure of merit weighting of heart rate estimates based on the identified heart beats from each filter for each signal. The fetal heart rate thus determined is outputted to a display, storage, or communications channel. A method for enhanced fetal heart beat discrimination includes acquiring signals from a fetus, identifying fetal heart beats from the signals by multiple parallel nonlinear filtering, and determining a fetal heart rate based on the identified fetal heart beats. A figure of merit operation in this method provides for weighting a plurality of fetal heart rate estimates based on the identified fetal heart beats and selecting the highest ranking fetal heart rate estimate.
Characterizing source-sink dynamics with genetic parentage assignments
M. Zachariah Peery; Steven R. Beissinger; Roger F. House; Martine Berube; Laurie A. Hall; Anna Sellas; Per J. Palsboll
2008-01-01
Source-sink dynamics have been suggested to characterize the population structure of many species, but the prevalence of source-sink systems in nature is uncertain because of inherent challenges in estimating migration rates among populations. Migration rates are often difficult to estimate directly with demographic methods, and indirect genetic methods are subject to...
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi
2014-03-01
We compared prevalence estimates of self-rated health (SRH) derived indirectly, using four different small area estimation methods, for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors and comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimate the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Salem, Jonathan A.; Holland, Frederic A.
1997-01-01
The two estimation methods, the individual data and arithmetic mean methods, were used to determine the slow crack growth (SCG) parameters (n and D) of advanced ceramics and glass from a large number of room- and elevated-temperature constant stress-rate ('dynamic fatigue') test data. For ceramic materials with a Weibull modulus greater than 10, the difference in the SCG parameters between the two estimation methods was negligible; whereas, for glass specimens exhibiting a Weibull modulus of about 3, the difference was amplified, resulting in maximum differences of 16% and 13%, respectively, in n and D. Of the two SCG parameters, n was more sensitive to the estimation method than D. The coefficient of variation in n was found to be somewhat greater in the individual data method than in the arithmetic mean method.
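A brief sketch of the standard constant stress-rate analysis that underlies both estimation methods: individual failure strengths obey log σf = [1/(n+1)]·log σ̇ + log D, so n and D follow from a linear fit in log-log coordinates. The specimen data below are hypothetical.

```python
import numpy as np

# Hypothetical dynamic-fatigue data: applied stress rates (MPa/s) and
# failure strengths (MPa) for individual specimens.
stress_rate = np.array([0.03, 0.3, 3.0, 30.0, 300.0])
strength    = np.array([410., 455., 505., 560., 620.])

# log(sigma_f) = 1/(n+1) * log(sigma_dot) + log(D)
slope, intercept = np.polyfit(np.log10(stress_rate), np.log10(strength), 1)
n = 1.0 / slope - 1.0
D = 10.0 ** intercept
print(f"SCG parameters: n = {n:.1f}, D = {D:.0f}")
```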
Constructing stage-structured matrix population models from life tables: comparison of methods.
Fujiwara, Masami; Diaz-Lopez, Jasmin
2017-01-01
A matrix population model is a convenient tool for summarizing per capita survival and reproduction rates (collectively vital rates) of a population and can be used for calculating an asymptotic finite population growth rate (λ) and generation time. These two pieces of information can be used for determining the status of a threatened species. The use of stage-structured population models has increased in recent years, and the vital rates in such models are often estimated using a life table analysis. However, potential bias introduced when converting age-structured vital rates estimated from a life table into parameters for a stage-structured population model has not been assessed comprehensively. The objective of this study was to investigate the performance of methods for such conversions using simulated life histories of organisms. The underlying models incorporate various types of life history and true population growth rates of varying levels. The performance was measured by comparing differences in λ and the generation time calculated using the Euler-Lotka equation, age-structured population matrices, and several stage-structured population matrices that were obtained by applying different conversion methods. The results show that the discretization of age introduces only small bias in λ or generation time. Similarly, assuming a fixed age of maturation at the mean age of maturation does not introduce much bias. However, aggregating age-specific survival rates into a stage-specific survival rate and estimating a stage-transition rate can introduce substantial bias depending on the organism's life history type and the true values of λ. In order to aggregate survival rates, the use of the weighted arithmetic mean was the most robust method for estimating λ. Here, the weights are given by the survivorship curve after discounting with λ. To estimate a stage-transition rate, matching the proportion of individuals transitioning, with λ used for discounting the rate, was the best approach. However, stage-structured models performed poorly in estimating generation time, regardless of the methods used for constructing the models. Based on the results, we recommend using an age-structured matrix population model or the Euler-Lotka equation for calculating λ and generation time when life table data are available. Then, these age-structured vital rates can be converted into a stage-structured model for further analyses.
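A minimal sketch of the Euler-Lotka calculation recommended above: solve Σx λ^(−x)·l_x·m_x = 1 for λ by root finding, and take the generation time as the λ-discounted mean age of offspring production; the life table below is hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def euler_lotka(ages, lx, mx):
    """Solve sum_x lambda^(-x) * l_x * m_x = 1 for the growth rate lambda,
    and return the generation time as the lambda-discounted mean age of
    offspring production."""
    f = lambda lam: np.sum(lam ** (-ages) * lx * mx) - 1.0
    lam = brentq(f, 0.5, 5.0)               # bracket assumed to contain the root
    w = lam ** (-ages) * lx * mx            # these weights sum to 1 at the root
    gen_time = np.sum(ages * w)
    return lam, gen_time

# Hypothetical life table: survivorship l_x and fecundity m_x by age x.
ages = np.arange(1, 6)
lx = np.array([0.8, 0.6, 0.4, 0.2, 0.1])
mx = np.array([0.0, 1.0, 2.0, 2.0, 1.0])
lam, T = euler_lotka(ages, lx, mx)
print(f"lambda = {lam:.3f}, generation time = {T:.2f}")
```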
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The emission process in each voxel is modeled as an inhomogeneous Poisson process whose rate function is represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
Morita, K; Uchiyama, Y; Tominaga, S
1987-06-01
In order to evaluate the treatment results of radiotherapy, it is important to estimate the degree of complications of the surrounding normal tissues as well as the frequency of tumor control. In this report, the cumulative incidence rate of late radiation injuries of the normal tissues was calculated using the modified actuarial method (Cutler-Ederer's method) or Kaplan-Meier's method, which are usually applied to the calculation of survival rates. With this method of calculation, an accurate cumulative incidence rate over time can be easily obtained and applied to the statistical evaluation of late radiation injuries.
Mohammadi, Younes; Parsaeian, Mahboubeh; Farzadfar, Farshad; Kasaeian, Amir; Mehdipour, Parinaz; Sheidaei, Ali; Mansouri, Anita; Saeedi Moghaddam, Sahar; Djalalinia, Shirin; Mahmoudi, Mahmood; Khosravi, Ardeshir; Yazdani, Kamran
2014-03-01
Calculation of the burden of diseases and risk factors is crucial for setting priorities in health care systems. Nevertheless, the reliable measurement of mortality rates is the main barrier to reaching this goal. Unfortunately, in many developing countries the vital registration system (VRS) is either defective or does not exist at all. Consequently, alternative methods have been developed to measure mortality. This study is a subcomponent of the NASBOD project, which is currently being conducted in Iran. In this study, we aim to calculate the incompleteness of the Death Registration System (DRS) and then to estimate levels and trends of child and adult mortality using reliable methods. In order to estimate mortality rates, first, we identify all possible data sources. Then, we calculate the incompleteness of child and adult mortality separately. For incompleteness of child mortality, we analyze summary birth history data using maternal age cohort and maternal age period methods. Then, we combine these two methods using LOESS regression. However, these estimates are not plausible for some provinces. We use additional information from covariates such as wealth index and years of schooling to make predictions for these provinces using a spatio-temporal model. We generate yearly estimates of mortality using Gaussian process regression that covers both sampling and non-sampling errors within uncertainty intervals. By comparing the resulting estimates with mortality rates from the DRS, we calculate child mortality incompleteness. For incompleteness of adult mortality, the Generalized Growth Balance and Synthetic Extinct Generation methods, and a hybrid of the two, are used. Afterwards, we combine the incompleteness estimates of the three methods using GPR and apply the result to correct and adjust the number of deaths. In this study, we develop a conceptual framework to overcome the existing challenges to accurate measurement of mortality rates. The resulting estimates can be used to inform policy-makers about past, current and future mortality rates as a major indicator of the health status of a population.
Estimating spread rates of non-native species: the gypsy moth as a case study
Patrick Tobin; Andrew M. Liebhold; E. Anderson Roberts; Laura M. Blackburn
2015-01-01
Estimating rates of spread and generating projections of future range expansion for invasive alien species is a key process in the development of management guidelines and policy. Critical needs to estimate spread rates include the availability of surveys to characterize the spatial distribution of an invading species and the application of analytical methods to...
NASA Astrophysics Data System (ADS)
Jiang, Yan; Zemp, Roger
2018-01-01
The metabolic rate of oxygen consumption is an important metric of tissue oxygen metabolism and is especially critical in the brain, yet few methods are available for measuring it. We use a custom combined photoacoustic-microultrasound system and demonstrate cerebral oxygen consumption estimation in vivo. In particular, the cerebral metabolic rate of oxygen consumption was estimated in a murine model during variation of inhaled oxygen from hypoxia to hyperoxia. The hypothesis of brain autoregulation was confirmed with our method even though oxygen saturation and flow in vessels changed.
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. The ARIMA approach is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation based on White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model yielded much narrower confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia, decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This demonstrates the seasonal occurrence of childhood leukaemia in Hungary.
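A hedged sketch of the ARIMA step using statsmodels (the article's own software is not stated); the monthly mortality series below is synthetic.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly mortality rate series (deaths per 100,000).
rng = np.random.default_rng(0)
rates = pd.Series(
    120 + np.cumsum(rng.normal(0, 1.5, 120)),
    index=pd.date_range("1990-01", periods=120, freq="MS"),
)

model = ARIMA(rates, order=(1, 1, 1))      # AR(1), first difference, MA(1)
result = model.fit()
print(result.summary())
print(result.forecast(steps=12))           # one-year-ahead forecast
```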
Estimation of particulate nutrient load using turbidity meter.
Yamamoto, K; Suetsugi, T
2006-01-01
The "Nutrient Load Hysteresis Coefficient" was proposed to evaluate the hysteresis of the nutrient loads to flow rate quantitatively. This could classify the runoff patterns of nutrient load into 15 patterns. Linear relationships between the turbidity and the concentrations of particulate nutrients were observed. It was clarified that the linearity was caused by the influence of the particle size on turbidity output and accumulation of nutrients on smaller particles (diameter < 23 microm). The L-Q-Turb method, which is a new method for the estimation of runoff loads of nutrients using a regression curve between the turbidity and the concentrations of particulate nutrients, was developed. This method could raise the precision of the estimation of nutrient loads even if they had strong hysteresis to flow rate. For example, as for the runoff load of total phosphorus load on flood events in a total of eight cases, the averaged error of estimation of total phosphorus load by the L-Q-Turb method was 11%, whereas the averaged estimation error by the regression curve between flow rate and nutrient load was 28%.
Inferring invasive species abundance using removal data from management actions
Davis, Amy J.; Hooten, Mevin B.; Miller, Ryan S.; Farnsworth, Matthew L.; Lewis, Jesse S.; Moxcey, Michael; Pepin, Kim M.
2016-01-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480–19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50) the effective removal rate needed to accurately estimates abundances was considerably higher (0.70). Based on our post-validation method, 78% of our site/time frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates.
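A hedged, non-Bayesian sketch of the underlying removal likelihood (the article's model is hierarchical and accounts for varying effort): with a closed population N and constant per-pass capture probability p, removals in each pass are binomial draws from the animals remaining, and N and p can be estimated by maximizing the joint likelihood. The removal counts below are hypothetical.

```python
import numpy as np
from scipy.stats import binom

def removal_mle(removals, n_max=5000):
    """Grid-search MLE for a closed-population removal model with constant
    per-pass capture probability p: c_k ~ Binomial(N - removed so far, p)."""
    total = int(np.sum(removals))
    best = (None, None, -np.inf)
    for N in range(total, n_max + 1):
        remaining = N - np.concatenate(([0], np.cumsum(removals[:-1])))
        for p in np.linspace(0.01, 0.99, 99):
            ll = np.sum(binom.logpmf(removals, remaining, p))
            if ll > best[2]:
                best = (N, p, ll)
    return best

# Hypothetical removals from four aerial-gunning passes at one site.
removals = np.array([52, 31, 17, 9])
N_hat, p_hat, _ = removal_mle(removals, n_max=400)
print(f"abundance = {N_hat}, per-pass removal rate = {p_hat:.2f}")
```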
NASA Technical Reports Server (NTRS)
Nichols, Jonathan E.; Peteet, Dorothy M.; Frolking, Steve; Karavias, John
2017-01-01
Arctic peatlands are an important part of the global carbon cycle, accumulating atmospheric carbon as organic matter since the Late Glacial. Current methods for understanding the changing efficiency of the peatland carbon sink rely on peatlands with an undisturbed stratigraphy. Here we present a method of estimating primary carbon accumulation rate from a site where permafrost processes have either vertically or horizontally translocated nearby carbon-rich sediment out of stratigraphic order. Briefly, our new algorithm estimates the probability of the age of deposition of a random increment of sediment in the core. The method assumes that if sediment age is measured at even depth increments, dates are more likely to occur during intervals of higher accumulation rate and vice versa. Multiplying estimated sedimentation rate by measured carbon density yields carbon accumulation rate. We perform this analysis at the Imnavait Creek Peatland, near the Arctic Long Term Ecological Research network site at Toolik Lake, Alaska. Using classical radiocarbon age modeling, we find unreasonably high rates of carbon accumulation at various Holocene intervals. With our new method, we find accumulation rate changes that are in better agreement with other sites throughout Alaska and the rest of the Circum-Arctic region.
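The final step of the method reduces to arithmetic on dated increments. A deterministic sketch follows; the authors' algorithm treats the date-to-depth assignment probabilistically, so the piecewise version below only illustrates how sedimentation rate times carbon density yields carbon accumulation rate, with hypothetical numbers.

```python
import numpy as np

# Hypothetical core: depths (cm) of evenly spaced radiocarbon dates, their
# calibrated ages (yr BP), and mean carbon density (g C cm^-3) per increment.
depths    = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
ages      = np.array([0.0, 1200.0, 3500.0, 4100.0, 7800.0])
c_density = np.array([0.045, 0.052, 0.060, 0.058])

sed_rate = np.diff(depths) / np.diff(ages)   # cm yr^-1; high where dates crowd together
cacc = sed_rate * c_density * 1e4            # g C m^-2 yr^-1 (1 m^2 = 1e4 cm^2)
for (a0, a1), r in zip(zip(ages[:-1], ages[1:]), cacc):
    print(f"{a0:6.0f}-{a1:6.0f} yr BP: {r:6.1f} g C m-2 yr-1")
```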
The limits on the usefulness of erosion hazard ratings
R. M. Rice; P. D. Gradek
1984-01-01
Although erosion-hazard ratings are often used to guide forest practices, those used in California from 1974 to 1982 have been inadequate for estimating erosion potential. To improve the erosion-hazard rating procedure, separate estimating equations were used for different situations. The ratings were partitioned according to yarding method, erosional process, and...
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method on uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus a more precise estimation by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
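A random-walk Metropolis-Hastings sampler is the core machinery here and fits in a few lines. The sketch below is generic rather than the paper's hydrological model: the log-posterior shown is a toy Gaussian example with an assumed unit observation variance and a vague normal prior.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=10000, step=0.1, seed=0):
    """Random-walk Metropolis sampler: a minimal version of the MCMC machinery
    used for Bayesian parameter estimation and credible intervals."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy example: posterior for the mean of Gaussian flow-rate residuals
# (unit observation variance, N(0, 10^2) prior); data are hypothetical.
data = np.array([3.1, 2.8, 3.4, 3.0, 2.9])
log_post = lambda th: -0.5 * np.sum((data - th[0])**2) - 0.5 * th[0]**2 / 100.0
chain = metropolis_hastings(log_post, [0.0])
print(chain[2000:].mean(), np.percentile(chain[2000:], [2.5, 97.5]))
```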
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance underestimation of conventional Cox-based standard errors. However, the two-step bootstrap method overestimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
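The cluster bootstrap itself is model-agnostic and easy to sketch: resample whole clusters with replacement, refit, and take the standard deviation of the refitted coefficients. In the sketch below the Cox fit is left abstract as fit_fn (any implementation can be plugged in); the function names are mine.

```python
import numpy as np
import pandas as pd

def cluster_bootstrap_se(df, cluster_col, fit_fn, n_boot=500, seed=0):
    """Cluster bootstrap: resample clusters with replacement, refit the model
    on each resample, and report the SD of the coefficients as the SE."""
    rng = np.random.default_rng(seed)
    ids = df[cluster_col].unique()
    coefs = []
    for _ in range(n_boot):
        sample_ids = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat([df[df[cluster_col] == i] for i in sample_ids],
                         ignore_index=True)    # a cluster may appear several times
        coefs.append(fit_fn(boot))             # e.g. Cox log-hazard ratio estimates
    return np.std(np.asarray(coefs), axis=0, ddof=1)
```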
Yang, Ping; Dumont, Guy A; Ansermino, J Mark
2009-04-01
Intraoperative heart rate is routinely measured independently from the ECG monitor, pulse oximeter, and the invasive blood pressure monitor if available. The presence of artifacts in one or more of these signals, especially sustained artifacts, represents a critical challenge for physiological monitoring. When temporal filters are used to suppress sustained artifacts, unwanted delays or signal distortion are often introduced. The aim of this study was to remove artifacts and derive accurate estimates of the heart rate signal by using measurement redundancy. Heart rate measurements from multiple sensors and previous estimates that fall in a short moving window were treated as samples of the same heart rate. A hybrid median filter was used to align these samples into one ordinal series and to select the median as the fused estimate. This method can successfully remove artifacts that are sustained for shorter than half the length of the filter window, or artifacts that are sustained for a longer duration but present in no more than half of the sensors. The method was tested on both simulated and clinical cases. The performance of the hybrid median filter in the simulated study was compared with that of a two-step estimation process, comprising a threshold-controlled artifact-removal module and a Kalman filter. The estimation accuracy of the hybrid median filter is better than that of the Kalman filter in the presence of artifacts. The hybrid median filter combines the structural and temporal information from two or more sensors and generates a robust estimate of heart rate without requiring strict assumptions about the signal's characteristics. This method is intuitive, computationally simple, and its performance can be easily adjusted. These considerable benefits make this method highly suitable for clinical use.
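The fusion rule is a pooled running median, which is simple to sketch. In the toy example below (all sensor values hypothetical), a sustained artifact in one of three monitors is rejected because it never occupies more than half of the pooled window.

```python
import numpy as np

def hybrid_median_hr(sensor_hr, prev_estimates, window=5):
    """Fuse heart-rate readings from several monitors by pooling, within a
    short moving window, the samples from every sensor together with recent
    fused estimates, and taking the median of the pooled series."""
    pooled = np.concatenate([np.asarray(s[-window:]) for s in sensor_hr] +
                            [np.asarray(prev_estimates[-window:])])
    return float(np.median(pooled))

# ECG, pulse oximeter, and arterial-line rates; the oximeter carries a
# sustained motion artifact, yet the fused estimate stays near 72 bpm.
ecg  = [72, 73, 71, 72, 74]
spo2 = [72, 130, 131, 129, 132]   # sustained artifact
art  = [71, 72, 73, 72, 73]
print(hybrid_median_hr([ecg, spo2, art], prev_estimates=[72, 72, 73, 72, 72]))
```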
NASA Astrophysics Data System (ADS)
Cullen, J. J.
2016-02-01
During the 1980s, estimates of primary productivity and the growth rates of phytoplankton in oligotrophic waters were controversial, in part because rates based on seasonal accumulations of oxygen in the shallow oxygen maximum were reported to be much higher than could be accounted for with measurements of photosynthesis based on incubations with C-14. Since then, much has changed: tested and standardized methods have been employed to collect comprehensive time-series observations of primary production and related oceanographic properties in oligotrophic waters of the North Pacific subtropical gyre and the Sargasso Sea; technical and theoretical advances have led to new tracer-based estimates of photosynthesis (e.g., oxygen/argon and triple isotopes of dissolved oxygen); and biogeochemical sensor systems on ocean gliders and profiling floats can describe with unprecedented resolution the dynamics of phytoplankton, oxygen and nitrate as driven by growth, loss processes including grazing, and vertical migration for nutrient acquisition. Meanwhile, the estimation of primary productivity, phytoplankton biomass and phytoplankton growth rates from remote sensing of ocean color has matured, complementing biogeochemical models that describe and predict these key properties of plankton dynamics. In a selective review focused on well-studied oligotrophic waters, I compare methods for estimating the primary productivity and growth rates of phytoplankton to see if they are converging on agreement, not only in the estimated rates, but also in the underlying assumptions, such as the ratio of gross- to net primary production — and how this relates to the measurement — and the ratio of chlorophyll to carbon in phytoplankton. Examples of agreement are encouraging, but some stark contrasts illustrate the need for improved mechanistic understanding of exactly what each method is measuring.
Variability in nest survival rates and implications to nesting studies
Klett, A.T.; Johnson, D.H.
1982-01-01
We used four reasonably large samples (83-213) of Mallard (Anas platyrhynchos) and Blue-winged Teal (A. discors) nests on an interstate highway right-of-way in southcentral North Dakota to evaluate potential biases in hatch-rate estimates. Twelve consecutive, weekly searches for nests were conducted with a cable-chain drag in 1976 and 1977. Nests were revisited at weekly intervals. Four methods were used to estimate hatch rates for the four data sets: the Traditional Method, the Mayfield Method, and two modifications of the Mayfield Method that are sometimes appropriate when daily mortality rates of nests are not constant. Hatch rates and the average age of nests at discovery declined as the interval between searches decreased, suggesting that mortality rates were not constant in our samples. An analysis of variance indicated that daily mortality rates varied with the age of nests in all four samples. Mortality was generally highest during the early laying period, moderately high during the late laying period, and lowest during incubation. We speculate that this relationship of mortality to nest age might be due to the presence of hens at nests or to differences in the vulnerability of nest sites to predation. A modification of the Mayfield Method that accounts for age-related variation in nest mortality was most appropriate for our samples. We suggest methods for conducting nesting studies and estimating nest success for species possessing similar nesting habits.
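As a reminder of the arithmetic being compared, the basic Mayfield calculation divides observed losses by nest-days of exposure; the numbers below are hypothetical and ignore the age-dependence of mortality that the study found important.

```python
# Mayfield daily survival: DSR = 1 - (failures / exposure-days); success over
# a d-day nesting period is DSR**d. All numbers are hypothetical.
failures = 40            # nests lost while under observation
exposure_days = 1000.0   # summed days each nest was known to be active
dsr = 1.0 - failures / exposure_days            # 0.96 daily survival rate
period = 35                                     # laying + incubation days
print(f"Mayfield nest success: {dsr**period:.3f}")   # ~0.240
```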
Construction and demolition waste generation rates for high-rise buildings in Malaysia.
Mah, Chooi Mei; Fujiwara, Takeshi; Ho, Chin Siong
2016-12-01
Construction and demolition waste continues to sharply increase in step with the economic growth of less developed countries. Though the construction industry is large, it is composed of small firms with individual waste management practices, often leading to deleterious environmental outcomes. Quantifying construction and demolition waste generation allows policy makers and stakeholders to understand the true internal and external costs of construction, providing a necessary foundation for waste management planning that may overcome deleterious environmental outcomes and may be both economically and environmentally optimal. This study offers a theoretical method for estimating the construction and demolition project waste generation rate by utilising available data, including waste disposal truck size and number, and waste volume and composition. This method is proposed as a less burdensome and more broadly applicable alternative, in contrast to waste estimation by on-site hand sorting and weighing. The developed method is applied to 11 projects across Malaysia as the case study. This study quantifies the waste generation rate and illustrates the influence of the construction method on that rate, estimating that the conventional construction method has a waste generation rate of 9.88 t 100 m⁻², the mixed-construction method has a waste generation rate of 3.29 t 100 m⁻², and demolition projects have a waste generation rate of 104.28 t 100 m⁻². © The Author(s) 2016.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
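For concreteness, the IPTW weights themselves are a one-line transform of the estimated propensity score; the sketch below (function names mine) also notes where the recommended bootstrap enters.

```python
import numpy as np

def iptw_weights(treated, propensity, estimand="ATE"):
    """Inverse probability of treatment weights. For the ATE: 1/p for treated
    subjects and 1/(1-p) for controls; for the ATT: 1 and p/(1-p)."""
    treated = np.asarray(treated, dtype=bool)
    p = np.asarray(propensity, dtype=float)
    if estimand == "ATE":
        return np.where(treated, 1.0 / p, 1.0 / (1.0 - p))
    return np.where(treated, 1.0, p / (1.0 - p))

# The bootstrap variance estimator favored above repeats, on each resample of
# subjects: (i) refit the propensity model, (ii) recompute these weights,
# (iii) refit the weighted Cox model; the SD of the log-hazard ratios across
# resamples is the bootstrap SE.
print(iptw_weights([1, 1, 0, 0], [0.8, 0.6, 0.3, 0.2]))  # [1.25 1.667 1.429 1.25]
```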
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Kong, Weidong; Brundage, Michael
Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models, with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT_1Y) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RT_LIFETIME) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT_1Y, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RT_LIFETIME was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RT_LIFETIME for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RT_LIFETIME for 5 selected sites differed widely. RT_LIFETIME in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario.
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented mathematically as a system of nonlinear differential equations. In this model the transmission of influenza is determined by the contact rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. New infection data for influenza from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the contact rate, the proportion of the transmission probability in a day, which can influence the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
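A minimal version of this fitting loop can be sketched with Euler integration and a grid search over the contact rate with a Tikhonov penalty; all values (gamma, population sizes, the regularization weight) are hypothetical, and the paper's finite element machinery is not reproduced.

```python
import numpy as np

def sir_euler(beta, gamma, S0, I0, days, dt=0.1):
    """Euler integration of the SIR model; returns daily new infections."""
    S, I = S0, I0
    new_cases = []
    for _ in range(days):
        daily = 0.0
        for _ in range(int(1 / dt)):
            inf = beta * S * I * dt
            S -= inf
            I += inf - gamma * I * dt
            daily += inf
        new_cases.append(daily)
    return np.array(new_cases)

def fit_beta(obs, gamma, S0, I0, lam=1e-3,
             betas=np.linspace(1e-9, 1e-6, 200)):
    """Regularized least squares: penalized misfit between observed and
    modeled incidence, with a Tikhonov term lam * beta**2."""
    cost = [np.sum((sir_euler(b, gamma, S0, I0, len(obs)) - obs) ** 2)
            + lam * b ** 2 for b in betas]
    return betas[int(np.argmin(cost))]

# Synthetic check: recover a known contact rate from simulated incidence.
true_beta, gamma, S0, I0 = 5.0e-7, 0.25, 1.0e6, 10.0
obs = sir_euler(true_beta, gamma, S0, I0, days=60)
print(fit_beta(obs, gamma, S0, I0))   # close to 5e-7
```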
2017-01-01
OBJECTIVES Rates of attempted deliberate self-poisoning (DSP) are subject to undercounting, underreporting, and denial of the suicide attempt. In this study, we estimated the rate of underreported DSP, which is the most common method of attempted suicide in Iran. METHODS We estimated the rate and number of unaccounted individuals who attempted DSP in western Iran in 2015 using a truncated count model. In this method, the number of people who attempted DSP but were not referred to any health care centers, n0, was calculated through integrating hospital and forensic data. The crude and age-adjusted rates of attempted DSP were estimated directly using the average population size of the city of Kermanshah and the World Health Organization (WHO) world standard population with and without accounting for underreporting. The Monte Carlo method was used to determine the confidence level. RESULTS The recorded number of people who attempted DSP was estimated by different methods to be in the range of 46.6 to 53.2% of the actual number of individuals who attempted DSP. The rate of underreported cases was higher among women than men and decreased as age increased. The rate of underreported cases decreased as the potency and intensity of toxic factors increased. The highest underreporting rates of 69.9, 51.2, and 21.5% were observed when oil and detergents (International Classification of Diseases, 10th revision [ICD-10] code: X66), medications (ICD-10 code: X60-X64), and agricultural toxins (ICD-10 codes: X68, X69) were used for poisoning, respectively. Crude rates, with and without accounting for underreporting, were estimated by the mixture method as 167.5 per 100,000 persons and 331.7 per 100,000 persons, respectively, which decreased to 129.8 per 100,000 persons and 253.1 per 100,000 persons after adjusting for age on the basis of the WHO world standard population. CONCLUSIONS Nearly half of individuals who attempted DSP were not referred to a hospital for treatment or denied the suicide attempt for political or sociocultural reasons. Individuals with no access to counseling services are at a higher risk for repeated suicide attempts and fatal suicides. PMID:28728353
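The authors' truncated count model is not reproduced here, but the idea of integrating two record systems to count the unobserved can be illustrated with the standard two-source (Chapman) capture-recapture estimator; all counts below are hypothetical.

```python
# Two-source capture-recapture (Chapman estimator): hospital records and
# forensic records serve as the two "captures" of DSP attempts.
n_hospital = 820   # attempts appearing in hospital records
n_forensic = 140   # attempts appearing in forensic records
m_both     = 60    # attempts appearing in both sources

N_hat = (n_hospital + 1) * (n_forensic + 1) / (m_both + 1) - 1
observed = n_hospital + n_forensic - m_both
n0 = N_hat - observed            # attempts never referred to either system
print(round(N_hat), round(n0))   # ~1897 total, ~997 unaccounted (about half)
```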
A method of estimating the knock rating of hydrocarbon fuel blends
NASA Technical Reports Server (NTRS)
Sanders, Newell D
1943-01-01
The usefulness of the knock ratings of pure hydrocarbon compounds would be increased if some reliable method of calculating the knock ratings of fuel blends was known. The purpose of this study was to investigate the possibility of developing a method of predicting the knock ratings of fuel blends.
Estimating the Rate of Contaminated Soils of Displaced Persons Camp Using Interpolation Method
NASA Astrophysics Data System (ADS)
Tawfiq, Luma Naji Mohammed; Najm Abood, Israa
2018-05-01
The aim of this paper is to estimate the rate of contaminated soils by using a suitable interpolation method as an accurate alternative tool to evaluate the concentrations of heavy metals in soil, which are then compared with standard universal values to determine the rate of contamination in the soil. In particular, interpolation methods are extensively applied in models of different phenomena where experimental data must be used in computational studies that require expressions of those data. In this paper the extended divided-difference method in two dimensions is used to solve the suggested problem. The modified method is then applied to estimate the rate of contaminated soils of a displaced persons camp in Diyala Governorate, Iraq.
Conroy, M.J.; Nichols, J.D.
1984-01-01
Several important questions in evolutionary biology and paleobiology involve sources of variation in extinction rates. In all cases of which we are aware, extinction rates have been estimated from data in which the probability that an observation (e.g., a fossil taxon) will occur is related both to extinction rates and to what we term encounter probabilities. Any statistical method for analyzing fossil data should at a minimum permit separate inferences on these two components. We develop a method for estimating taxonomic extinction rates from stratigraphic range data and for testing hypotheses about variability in these rates. We use this method to estimate extinction rates and to test the hypothesis of constant extinction rates for several sets of stratigraphic range data. The results of our tests support the hypothesis that extinction rates varied over the geologic time periods examined. We also present a test that can be used to identify periods of high or low extinction probabilities and provide an example using Phanerozoic invertebrate data. Extinction rates should be analyzed using stochastic models, in which it is recognized that stratigraphic samples are random variates and that sampling is imperfect.
NASA Technical Reports Server (NTRS)
Mack, R. A.; Wylie, D. P.
1982-01-01
A technique was developed for estimating the condensation rates of convective storms using satellite measurements of cirrus anvil expansion rates and radiosonde measurements of environmental water vapor. Three cases of severe convection in Oklahoma were studied and a diagnostic model was developed for integrating radiosonde data with satellite data. Two methods were used to measure the anvil expansion rates - the expansion of isotherm contours on infrared images, and the divergent motions of small brightness anomalies tracked on the visible images. The differences between the two methods were large as the storms developed, but these differences became small in the latter stage of all three storms. A comparison between the three storms indicated that the available moisture in the lowest levels greatly affected the rain rates of the storms. This was evident from both the measured rain rates of the storms and the condensation rates estimated by the model. The possibility of using this diagnostic model for estimating the intensities of convective storms also is discussed.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the beginning of the zero accident campaign through quantitative time series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, the zero accident time, and the achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
Park, Soon-Ung; Lee, In-Hye; Ju, Jae-Won; Joo, Seung Jin
2016-10-01
A methodology for estimating the emission rate of 137Cs with the Lagrangian Particle Dispersion Model (LPDM), using monitored 137Cs concentrations around a nuclear power plant, has been developed. This method has been employed with the MM5 meteorological model in a 600 km × 600 km model domain with a horizontal grid scale of 3 km × 3 km centered at the Fukushima nuclear power plant, to estimate the 137Cs emission rate for the accident period from 00 UTC 12 March to 00 UTC 6 April 2011. The Lagrangian particles are released continuously at a rate of one particle per minute at the first model level, about 15 m above the power plant site. The presently developed method reproduced quite reasonably the 137Cs emission rates estimated in other studies, suggesting the potential usefulness of the present method for estimating the emission rate from the damaged power plant without detailed inventories of reactors, fuel assemblies, and spent fuels. The method is not complicated and can be applied on the basis of a single forward LPDM simulation with monitored concentrations around the power plant, in contrast to other inverse models. It was also found that continuously monitored radionuclide concentrations from as many sites as possible, located in all directions around the power plant, are required to obtain accurate continuous emission rates from the damaged power plant. The current methodology can also be used to verify the radionuclide emission estimates used by other modeling groups for cases of intermittent or discontinuous sampling. Copyright © 2016. Published by Elsevier Ltd.
Liu, Xuenan; Yang, Xuezhi; Jin, Jing; Li, Jiangshan
2018-06-05
Recent research indicates that facial epidermis color varies with the rhythm of heartbeats. This variation can be captured by consumer-level cameras and, astonishingly, be adopted to estimate heart rate (HR). Although numerous methods have been proposed in the last few years, the estimated HR remains less precise than required in practical environments where illumination interference, facial expressions, or motion artifacts are involved. A novel algorithm is proposed to make the non-contact HR estimation technique more robust. First, the face of the subject is detected and tracked to follow head movement. The facial region is then divided into several blocks, and the chrominance feature of each block is extracted to establish a raw HR sub-signal. Self-adaptive signals separation (SASS) is performed to separate the noiseless HR sub-signals from the raw sub-signals. On that basis, the noiseless sub-signals full of HR information are selected using a weight-based scheme to establish the holistic HR signal, from which the average HR is computed using the wavelet transform and a data filter. Forty subjects took part in our experiments; their facial videos were recorded by a normal webcam with a frame rate of 30 fps under ambient lighting conditions. The average HR estimated by our method correlates strongly with ground truth measurements, as indicated by experimental results measured in a static scenario (Pearson's correlation r=0.980) and a dynamic scenario (Pearson's correlation r=0.897). Our method, compared to the newest method, decreases the error rate by 38.63% and increases the Pearson's correlation by 15.59%, indicating that it evidently outperforms state-of-the-art non-contact HR estimation methods in realistic environments. © 2018 Institute of Physics and Engineering in Medicine.
Introduction to State Estimation of High-Rate System Dynamics.
Hong, Jonathan; Laflamme, Simon; Dodson, Jacob; Joyce, Bryan
2018-01-13
Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer's convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model.
Reconstruction and analysis of 137Cs fallout deposition patterns in the Marshall Islands.
Whitcomb, Robert C
2002-03-01
Estimates of 137Cs deposition caused by fallout originating from nuclear weapons testing have been made for several locations in the Marshall Islands. These retrospective estimates are based primarily on historical exposure rate and gummed film measurements. The methods used to reconstruct these deposition estimates are similar to those used in the National Cancer Institute study for reconstructing 131I deposition from the Nevada Test Site. Reconstructed cumulative deposition estimates are validated against contemporary measurements of 137Cs concentration in soil, with account taken of estimated global fallout contributions. These validations show that the overall geometric bias in predicted-to-observed (P:O) ratios is 1.0 (indicating excellent agreement). The 5th to 95th percentile range of this distribution is 0.35-2.95. The P:O ratios for estimates using historical gummed film measurements tend to overpredict slightly more than estimates based on exposure rate measurements. The deposition estimation methods, supported by the agreement between estimates and measurements, suggest that these methods can be used with confidence for other weapons testing fallout radionuclides.
NASA Technical Reports Server (NTRS)
Williams, R. E.; Kruger, R.
1980-01-01
Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
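The abstract leaves the estimators unstated; a standard construction consistent with its aims treats failures as Poisson events in pooled operating time, giving a point rate and a chi-square confidence interval (all numbers hypothetical).

```python
from scipy.stats import chi2

def failure_rate_ci(failures, unit_hours, conf=0.90):
    """Point estimate and two-sided chi-square confidence interval for a
    constant (exponential) failure rate from pooled component test time."""
    rate = failures / unit_hours
    lo = chi2.ppf((1 - conf) / 2, 2 * failures) / (2 * unit_hours)
    hi = chi2.ppf(1 - (1 - conf) / 2, 2 * failures + 2) / (2 * unit_hours)
    return rate, (lo, hi)

# 8 failures over 200,000 pooled component-hours (hypothetical):
print(failure_rate_ci(failures=8, unit_hours=2.0e5))
```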
Robust PV Degradation Methodology and Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Dirk; Deline, Christopher A; Kurtz, Sarah
The degradation rate plays an important role in predicting and assessing the long-term energy generation of PV systems. Many methods have been proposed for extracting the degradation rate from operational data of PV systems, but most of the published approaches are susceptible to bias due to inverter clipping, module soiling, temporary outages, seasonality, and sensor degradation. In this manuscript, we propose a methodology for determining PV degradation leveraging available modeled clear-sky irradiance data rather than site sensor data, and a robust year-over-year (YOY) rate calculation. We show the method to provide reliable degradation rate estimates even in the case of sensor drift, data shifts, and soiling. Compared with alternate methods, we demonstrate that the proposed method delivers the lowest uncertainty in degradation rate estimates for a fleet of 486 PV systems.
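The robust year-over-year calculation is straightforward to sketch: pair each daily performance-index value with the value one year earlier and take the median percent change. The sketch below assumes a regular daily series and uses synthetic data with a true rate of -0.5%/yr; it omits the clear-sky filtering step described above.

```python
import numpy as np
import pandas as pd

def yoy_degradation(daily_perf):
    """Median year-over-year rate: percent change between performance-index
    values 365 days apart; the median across all pairs resists outliers from
    soiling, outages, and drifting sensors."""
    prev = daily_perf.shift(365)                  # value one year earlier
    yoy = 100.0 * (daily_perf - prev) / prev      # % change for each pair
    return float(yoy.dropna().median())           # %/yr; negative = degradation

# Synthetic 5-year daily performance index degrading at 0.5 %/yr with noise.
idx = pd.date_range("2015-01-01", periods=5 * 365, freq="D")
rng = np.random.default_rng(1)
perf = pd.Series(0.995 ** (np.arange(idx.size) / 365.0)
                 * (1 + 0.01 * rng.standard_normal(idx.size)), index=idx)
print(yoy_degradation(perf))   # close to -0.5 %/yr
```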
Cadwell, Betsy L; Boyle, James P; Tierney, Edward F; Thompson, Theodore J
2007-09-01
Some states' death certificate forms include a diabetes yes/no check box that enables policy makers to investigate the change in heart disease mortality rates by diabetes status. Because the check boxes are sometimes unmarked, a method accounting for missing data is needed when estimating heart disease mortality rates by diabetes status. Using North Dakota's data (1992-2003), we generate the posterior distribution of diabetes status to estimate diabetes status among those with heart disease and an unmarked check box using Monte Carlo methods. Combining this estimate with the number of death certificates with known diabetes status provides a numerator for heart disease mortality rates. Denominators for the rates were estimated from the North Dakota Behavioral Risk Factor Surveillance System. Accounting for missing data, age-adjusted heart disease mortality rates (per 1,000) among women with diabetes were 8.6 during 1992-1998 and 6.7 during 1999-2003. Among men with diabetes, the rates were 13.0 during 1992-1998 and 10.0 during 1999-2003. The Bayesian approach accounted for the uncertainty due to missing diabetes status as well as the uncertainty in estimating the populations with diabetes.
NASA Astrophysics Data System (ADS)
Grismer, Mark E.; Bachman, S.; Powers, T.
2000-10-01
We assess the relative merits of application of the most commonly used field methods (soil-water balance (SWB), chloride mass balance (CMB) and soil moisture monitoring (NP)) to determine recharge rates in micro-irrigated and non-irrigated areas of a semi-arid coastal orchard located in a relatively complex geological environment. Application of the CMB method to estimate recharge rates was difficult owing to the unusually high, variable soil-water chloride concentrations. In addition, contrary to that expected, the chloride concentration distribution at depths below the root zone in the non-irrigated soil profiles was greater than that in the irrigated profiles. The CMB method severely underestimated recharge rates in the non-irrigated areas when compared with the other methods, although the CMB method estimated recharge rates for the irrigated areas that were similar to those from the other methods, ranging from 42 to 141 mm/year. The SWB method, constructed for a 15-year period, provided insight into the recharge process being driven by winter rains rather than summer irrigation and indicated average rates of 75 mm/year and 164 mm/year for the 1984-98 and 1996-98 periods, respectively. Assuming similar soil-water holding capacity, these recharge rates applied to both irrigated and non-irrigated areas. Use of the long period of record was important because it encompassed both drought and heavy rainfall years. Successful application of the SWB method, however, required considerable additional field measurements of orchard ETc, soil-water holding capacity and estimation of rainfall interception-runoff losses. Continuous soil moisture monitoring (NP) was necessary to identify both daily and seasonal seepage processes to corroborate the other recharge estimates. Measured recharge rates during the 1996-1998 period in both the orchards and the non-irrigated site averaged 180 mm/year. The pattern of soil profile drying during the summer irrigation season, followed by progressive wetting during the winter rainy season, was observed in both irrigated and non-irrigated soil profiles, confirming that groundwater recharge was rainfall driven and that micro-irrigation did not predispose the soil profile to excess rainfall recharge. The ability to make this recharge assessment, however, depended on making multiple field measurements associated with all three methods, suggesting that any one should not be used alone.
Lin, Tin-Chi; Marucci-Wellman, Helen R; Willetts, Joanna L; Brennan, Melanye J; Verma, Santosh K
2016-12-01
A common issue in descriptive injury epidemiology is that in order to calculate injury rates that account for the time spent in an activity, both injury cases and exposure time of specific activities need to be collected. In reality, few national surveys have this capacity. To address this issue, we combined statistics from two different national complex surveys as inputs for the numerator and denominator to estimate injury rate, accounting for the time spent in specific activities and included a procedure to estimate variance using the combined surveys. The 2010 National Health Interview Survey (NHIS) was used to quantify injuries, and the 2010 American Time Use Survey (ATUS) was used to quantify time of exposure to specific activities. The injury rate was estimated by dividing the average number of injuries (from NHIS) by average exposure hours (from ATUS), both measured for specific activities. The variance was calculated using the 'delta method', a general method for variance estimation with complex surveys. Among the five types of injuries examined, 'sport and exercise' had the highest rate (12.64 injuries per 100 000 h), followed by 'working around house/yard' (6.14), driving/riding a motor vehicle (2.98), working (1.45) and sleeping/resting/eating/drinking (0.23). The results show a ranking of injury rate by activity quite different from estimates using population as the denominator. Our approach produces an estimate of injury risk which includes activity exposure time and may more reliably reflect the underlying injury risks, offering an alternative method for injury surveillance and research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
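The variance calculation combines the relative errors of the two survey means. A sketch of the first-order delta method for the ratio follows, assuming the two surveys are independent and using hypothetical inputs.

```python
import numpy as np

def ratio_rate_and_se(x_mean, x_se, y_mean, y_se):
    """Rate = mean injuries / mean exposure hours, the two means coming from
    independent surveys (numerator and denominator sources). First-order
    delta method: Var(X/Y) ~ (X/Y)^2 * (Var(X)/X^2 + Var(Y)/Y^2)."""
    rate = x_mean / y_mean
    se = abs(rate) * np.sqrt((x_se / x_mean) ** 2 + (y_se / y_mean) ** 2)
    return rate, se

# Hypothetical survey estimates: injuries and exposure hours per person-year.
rate, se = ratio_rate_and_se(x_mean=0.012, x_se=0.001, y_mean=95.0, y_se=4.0)
print(f"{1e5 * rate:.2f} per 100,000 h (SE {1e5 * se:.2f})")
```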
The augmented Lagrangian method for parameter estimation in elliptic systems
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Kunisch, Karl
1990-01-01
In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.
NASA Technical Reports Server (NTRS)
Choi, Sung H.; Salem, J. A.; Nemeth, N. N.
1998-01-01
The high-temperature slow-crack-growth behaviour of hot-pressed silicon carbide was determined using both constant-stress-rate ("dynamic fatigue") and constant-stress ("static fatigue") testing in flexure at 1300 C in air. Slow crack growth was found to be the governing mechanism associated with failure of the material. Four estimation methods, the individual data, Weibull median, arithmetic mean, and median deviation methods, were used to determine the slow-crack-growth parameters. The four estimation methods were in good agreement for the constant-stress-rate testing, with a small variation in the slow-crack-growth parameter, n, ranging from 28 to 36. By contrast, the variation in n between the four estimation methods was significant in the constant-stress testing, with a somewhat wide range of n = 16 to 32.
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
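The basic reward-band logic, before the stratified likelihood modeling, is a ratio of recovery rates; a worked example with hypothetical counts:

```python
# Reward rings are assumed to be reported with probability ~1, so the ratio
# of the ordinary-ring recovery rate to the reward-ring recovery rate
# estimates the ordinary-ring reporting rate. Counts are hypothetical.
reward_recovered,   reward_banded   = 90,  1000
standard_recovered, standard_banded = 288, 8000

reporting_rate = ((standard_recovered / standard_banded)
                  / (reward_recovered / reward_banded))
print(f"estimated reporting rate: {reporting_rate:.2f}")   # 0.40
```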
Growth of saprotrophic fungi and bacteria in soil.
Rousk, Johannes; Bååth, Erland
2011-10-01
Bacterial and fungal growth rate measurements are sensitive variables to detect changes in environmental conditions. However, while considerable progress has been made in methods to assess the species composition and biomass of fungi and bacteria, information about growth rates remains surprisingly rudimentary. We review the recent history of approaches to assess bacterial and fungal growth rates, leading up to current methods, especially focusing on leucine/thymidine incorporation to estimate bacterial growth and acetate incorporation into ergosterol to estimate fungal growth. We present the underlying assumptions for these methods, compare estimates of turnover times for fungi and bacteria based on them, and discuss issues, including for example elusive conversion factors. We review what the application of fungal and bacterial growth rate methods has revealed regarding the influence of the environmental factors of temperature, moisture (including drying/rewetting), pH, as well as the influence of substrate additions, the presence of plants and toxins. We highlight experiments exploring the competitive and facilitative interaction between bacteria and fungi enabled using growth rate methods. Finally, we predict that growth methods will be an important complement to molecular approaches to elucidate fungal and bacterial ecology, and we identify methodological concerns and how they should be addressed. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
Spacecraft Angular Rates Estimation with Gyrowheel Based on Extended High Gain Observer.
Liu, Xiaokun; Yao, Yu; Ma, Kemao; Zhao, Hui; He, Fenghua
2016-04-14
A gyrowheel (GW) is a kind of electronic electric-mechanical servo system, which can be applied to a spacecraft attitude control system (ACS) as both an actuator and a sensor simultaneously. In order to solve the problem of two-dimensional spacecraft angular rate sensing as a GW outputting three-dimensional control torque, this paper proposed a method of an extended high gain observer (EHGO) with the derived GW mathematical model to implement the spacecraft angular rate estimation when the GW rotor is working at large angles. For this purpose, the GW dynamic equation is firstly derived with the second kind Lagrange method, and the relationship between the measurable and unmeasurable variables is built. Then, the EHGO is designed to estimate and calculate spacecraft angular rates with the GW, and the stability of the designed EHGO is proven by the Lyapunov function. Moreover, considering the engineering application, the effect of measurement noise in the tilt angle sensors on the estimation accuracy of the EHGO is analyzed. Finally, the numerical simulation is performed to illustrate the validity of the method proposed in this paper.
Measuring zebrafish turning rate.
Mwaffo, Violet; Butail, Sachit; di Bernardo, Mario; Porfiri, Maurizio
2015-06-01
Zebrafish is becoming a popular animal model in preclinical research, and zebrafish turning rate has been proposed for the analysis of activity in several domains. The turning rate is often estimated from the trajectory of the fish centroid that is output by commercial or custom-made target tracking software run on overhead videos of fish swimming. However, the accuracy of such indirect methods with respect to the turning rate associated with changes in heading during zebrafish locomotion is largely untested. Here, we compare two indirect methods for the turning rate estimation using the centroid velocity or position data, with full shape tracking for three different video sampling rates. We use tracking data from the overhead video recorded at 60, 30, and 15 frames per second of zebrafish swimming in a shallow water tank. Statistical comparisons of absolute turning rate across methods and sampling rates indicate that, while indirect methods are indistinguishable from full shape tracking, the video sampling rate significantly influences the turning rate measurement. The results of this study can aid in the selection of the video capture frame rate, an experimental design parameter in zebrafish behavioral experiments where activity is an important measure.
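An indirect centroid-based estimate of the kind compared here can be sketched in a few lines: differentiate the centroid track, take the heading of the velocity vector, unwrap it, and differentiate again at the chosen frame rate. The synthetic quarter-circle check below recovers the known turn rate.

```python
import numpy as np

def turning_rate(x, y, fps):
    """Indirect turning-rate estimate from centroid positions: heading from
    the velocity vector, unwrapped so jumps across +/-pi do not register as
    spurious turns, then differentiated at the video sampling rate."""
    vx, vy = np.gradient(np.asarray(x)), np.gradient(np.asarray(y))
    heading = np.unwrap(np.arctan2(vy, vx))
    return np.gradient(heading) * fps          # rad s^-1 per frame

# Quarter-circle trajectory sampled at 30 fps: constant turn of pi/2 rad/s.
t = np.linspace(0, 1, 31)
print(turning_rate(np.cos(t * np.pi / 2), np.sin(t * np.pi / 2), 30).mean())
```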
Brady, Eoghan; Hill, Kenneth
2017-01-01
Under-five mortality estimates are increasingly used in low and middle income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). Mean absolute relative error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD Surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates. Mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates based on the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.
Aramendi, E; Ayala, U; Irusta, U; Alonso, E; Eftestøl, T; Kramer-Johansen, J
2012-06-01
To demonstrate that the instantaneous chest compression rate can be accurately estimated from the transthoracic impedance (TTI), and that this estimated rate can be used in a method to suppress cardiopulmonary resuscitation (CPR) artefacts. A database of 372 records, 87 shockable and 285 non-shockable, from out-of-hospital cardiac arrest episodes, corrupted by CPR artefacts, was analysed. Each record contained the ECG and TTI obtained from the defibrillation pads and the compression depth (CD) obtained from a sternal CPR pad. The chest compression rates estimated using TTI and CD were compared. The CPR artefacts were then filtered using the instantaneous chest compression rates estimated from the TTI or CD signals. The filtering results were assessed in terms of the sensitivity and specificity of the shock advice algorithm of a commercial automated external defibrillator. The correlation between the mean chest compression rates estimated using TTI or CD was r=0.98 (95% confidence interval, 0.97-0.98). The sensitivity and specificity after filtering using CD were 95.4% (88.4-98.6%) and 87.0% (82.6-90.5%), respectively. The sensitivity and specificity after filtering using TTI were 95.4% (88.4-98.6%) and 86.3% (81.8-89.9%), respectively. The instantaneous chest compression rate can be accurately estimated from TTI. The sensitivity and specificity after filtering are similar to those obtained using the CD signal. Our CPR suppression method based exclusively on signals acquired through the defibrillation pads is as accurate as methods based on signals obtained from CPR feedback devices. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
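The paper estimates the instantaneous compression rate from TTI; the simplified spectral sketch below instead recovers the average rate over a short window by locating the dominant impedance frequency within a plausible compression band (the sampling rate and band limits are assumptions).

```python
import numpy as np

def compression_rate_from_tti(tti, fs):
    """Dominant frequency of the impedance fluctuation within the chest-
    compression band (~1-3 Hz, i.e. 60-180 min^-1), located by an FFT peak."""
    tti = np.asarray(tti, float) - np.mean(tti)
    freqs = np.fft.rfftfreq(len(tti), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(tti))
    band = (freqs >= 1.0) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(spec[band])]   # compressions per minute

fs = 250.0                               # assumed defibrillator sampling rate
t = np.arange(0, 8, 1 / fs)
tti = 0.5 * np.sin(2 * np.pi * 2.0 * t)  # synthetic 2 Hz compression artifact
print(compression_rate_from_tti(tti, fs))   # 120.0 compressions/min
```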
NASA Astrophysics Data System (ADS)
Fujita, Kazuhiko; Otomaru, Maki; Lopati, Paeniu; Hosono, Takashi; Kayanne, Hajime
2016-03-01
Carbonate production by large benthic foraminifers is sometimes comparable to that of corals and coralline algae, and contributes to sedimentation on reef islands and beaches in the tropical Pacific. Population dynamic data, such as population density and size structure (size-frequency distribution), are vital for an accurate estimation of shell production of foraminifers. However, previous production estimates in tropical environments were based on a limited sampling period with no consideration of seasonality. In addition, no comparisons were made of various estimation methods to determine more accurate estimates. Here we present the annual gross shell production rate of Baculogypsina sphaerulata, estimated based on population dynamics studied over a 2-yr period on an ocean reef flat of Funafuti Atoll (Tuvalu, tropical South Pacific). The population density of B. sphaerulata increased from January to March, when northwest winds predominated and the study site was on the leeward side of reef islands, compared to other seasons when southeast trade winds predominated and the study site was on the windward side. This result suggested that wind-driven flows controlled the population density at the study site. The B. sphaerulata population had a relatively stationary size-frequency distribution throughout the study period, indicating no definite intensive reproductive period in the tropical population. Four methods were applied to estimate the annual gross shell production rates of B. sphaerulata. The production rates estimated by three of the four methods (using monthly biomass, life tables and growth increment rates) were in the order of hundreds of g CaCO3 m⁻² yr⁻¹ or cm³ m⁻² yr⁻¹, and the simple method using turnover rates overestimated the values. This study suggests that seasonal surveys should be undertaken of population density and size structure as these can produce more accurate estimates of shell productivity of large benthic foraminifers.
Estimating mortality rates of adult fish from entrainment through the propellers of river towboats
Gutreuter, S.; Dettmers, J.M.; Wahl, David H.
2003-01-01
We developed a method to estimate mortality rates of adult fish caused by entrainment through the propellers of commercial towboats operating in river channels. The method combines trawling while following towboats (to recover a fraction of the kills) and application of a hydrodynamic model of diffusion (to estimate the fraction of the total kills collected in the trawls). The sampling problem is unusual and required quantifying relatively rare events. We first examined key statistical properties of the entrainment mortality rate estimators using Monte Carlo simulation, which demonstrated that a design-based estimator and a new ad hoc estimator are both unbiased and converge to the true value as the sample size becomes large. Next, we estimated the entrainment mortality rates of adult fishes in Pool 26 of the Mississippi River and the Alton Pool of the Illinois River, where we observed kills that we attributed to entrainment. Our estimates of entrainment mortality rates were 2.52 fish/km of towboat travel (80% confidence interval, 1.00-6.09 fish/km) for gizzard shad Dorosoma cepedianum, 0.13 fish/km (0.00-0.41) for skipjack herring Alosa chrysochloris, and 0.53 fish/km (0.00-1.33) for both shovelnose sturgeon Scaphirhynchus platorynchus and smallmouth buffalo Ictiobus bubalus. Our approach applies more broadly to commercial vessels operating in confined channels, including other large rivers and intracoastal waterways.
Estimation of heart rate variability using a compact radiofrequency motion sensor.
Sugita, Norihiro; Matsuoka, Narumi; Yoshizawa, Makoto; Abe, Makoto; Homma, Noriyasu; Otake, Hideharu; Kim, Junghyun; Ohtaki, Yukio
2015-12-01
Physiological indices that reflect autonomic nervous activity are considered useful for monitoring people's health on a daily basis. A number of such indices are derived from heart rate variability, which can be obtained by a radiofrequency (RF) motion sensor without physical contact with the user's body. However, the bulkiness of the RF motion sensors used in previous studies makes them unsuitable for home use. In this study, a new method is proposed to measure heart rate variability using a compact RF motion sensor small enough to fit in a user's shirt pocket. To extract the heart-rate-related component from the sensor signal, an algorithm that optimizes a digital filter based on the power spectral density of the signal is proposed. Signals from the RF motion sensor were measured for 29 subjects in the resting state, and their heart rate variability was estimated from the measured signals using the proposed method and a conventional method. The correlation coefficient between the true heart rate and the heart rate estimated by the proposed method was 0.69. Further, the experimental results showed the viability of the RF sensor for monitoring autonomic nervous activity, although some improvements, such as controlling the direction of sensing, are necessary for stable measurement. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Vector Observation-Aided Attitude/Attitude-Rate Estimation Using Global Positioning System Signals
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, F. Landis
1997-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
A novel technique for fetal heart rate estimation from Doppler ultrasound signal
2011-01-01
Background The currently used fetal monitoring instrumentation that is based on Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. It is particularly noticeable as significant decrease of clinically important feature - the variability of FHR signal. The aim of our work was to develop a novel efficient technique for processing of the ultrasound signal, which could estimate the cardiac cycle duration with accuracy comparable to a direct electrocardiography. Methods We have proposed a new technique which provides the true beat-to-beat values of the FHR signal through multiple measurement of a given cardiac cycle in the ultrasound signal. The method consists in three steps: the dynamic adjustment of autocorrelation window, the adaptive autocorrelation peak detection and determination of beat-to-beat intervals. The estimated fetal heart rate values and calculated indices describing variability of FHR, were compared to the reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables to calculate reliable parameters describing the variability of FHR. Relating these results to the other method for FHR estimation we showed that in our approach a much lower number of measured cardiac cycles was rejected as being invalid. Conclusions The proposed method for fetal heart rate determination on a beat-to-beat basis offers a high accuracy of the heart interval measurement enabling reliable quantitative assessment of the FHR variability, at the same time reducing the number of invalid cardiac cycle measurements. PMID:21999764
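The autocorrelation step at the heart of such a method can be sketched as below. The window length is fixed here rather than dynamically adjusted as in the paper, and the synthetic Doppler envelope and lag bounds are assumptions.

```python
import numpy as np

fs = 200  # envelope sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)

# Synthetic Doppler envelope: fetal beats at 140 bpm (~0.43 s period) plus noise.
period = 60.0 / 140.0
env = np.maximum(0, np.sin(2 * np.pi * t / period)) ** 3 + 0.1 * np.random.randn(t.size)

# Autocorrelation of a 2-s analysis window.
w = env[: 2 * fs] - env[: 2 * fs].mean()
ac = np.correlate(w, w, mode="full")[w.size - 1 :]
ac /= ac[0]

# Search the lag range corresponding to plausible fetal heart rates (100-180 bpm).
lo, hi = int(fs * 60 / 180), int(fs * 60 / 100)
lag = lo + np.argmax(ac[lo:hi])
print("beat-to-beat interval: %.3f s -> FHR %.1f bpm" % (lag / fs, 60 * fs / lag))
```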
Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.
2016-01-01
Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. The alternative is based on image processing techniques: specifically, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results show that the consistency checking method provides an upper bound on the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a deformation field from which useful SRT fields can be estimated. A classification of regional contraction patterns as normal or dysfunctional, compared against expert diagnosis, indicates that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
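For a velocity field obtained from registration, the strain rate tensor is the symmetric part of the spatial gradient of that field. A minimal two-dimensional sketch, assuming a dense per-pixel velocity field and unit pixel spacing (both assumptions), is:

```python
import numpy as np

# Synthetic per-pixel velocity field on a 64x64 grid (axis 0 = y, axis 1 = x).
y, x = np.mgrid[0:64, 0:64].astype(float)
vx = 0.01 * x          # stretching along x
vy = -0.005 * y        # compression along y

# Spatial gradients; np.gradient returns derivatives along (y, x).
dvx_dy, dvx_dx = np.gradient(vx)
dvy_dy, dvy_dx = np.gradient(vy)

# Strain rate tensor E = 0.5 * (grad v + grad v^T), per pixel.
E = np.empty((64, 64, 2, 2))
E[..., 0, 0] = dvx_dx
E[..., 1, 1] = dvy_dy
E[..., 0, 1] = E[..., 1, 0] = 0.5 * (dvx_dy + dvy_dx)

# Principal strain rates (eigenvalues of the symmetric 2x2 tensor per pixel).
eigvals = np.linalg.eigvalsh(E)
print("principal strain rates at the center:", eigvals[32, 32])
```

The eigenvalues give the local principal stretching and shortening rates, which is the kind of quantity compared against expert assessment of contraction patterns.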
Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo
2017-01-01
This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
Respiratory rate extraction from pulse oximeter and electrocardiographic recordings.
Lee, Jinseok; Florian, John P; Chon, Ki H
2011-11-01
We present an algorithm for respiratory rate extraction using a particle filter (PF), applicable to both photoplethysmogram (PPG) and electrocardiogram (ECG) signals. For respiratory rate estimation, 1-min data segments are analyzed with a combination of a PF method and an autoregressive model; among the resulting poles, the angle of the pole with the highest magnitude is selected, since it gives the closest approximation to the true breathing rate. The PPG data were collected from 15 subjects with metronome breathing rates ranging from 24 to 36 breaths per minute in the supine and upright positions. The ECG data were collected from 11 subjects with spontaneous breathing ranging from 36 to 60 breaths per minute during treadmill exercises. Our method was able to accurately extract respiratory rates for both metronome and spontaneous breathing, even during strenuous exercise. More importantly, despite slow increases in breathing rates concomitant with greater exercise vigor over time, our method was able to accurately track these progressive increases in respiratory rate. We quantified the accuracy of our method using the mean, standard deviation and interquartile range of the error rates, all of which reflected high accuracy in estimating the true breathing rates. We are not aware of any other algorithms that are able to provide accurate respiratory rates directly from either ECG or PPG signals with spontaneous breathing during strenuous exercise. Our method is near-real-time realizable because the computation on a 1-min data segment takes only 10 ms on a 2.66 GHz Intel Core2 microprocessor; the data window is subsequently shifted every 10 s to obtain near-continuous breathing rates. This is an attractive feature, since most other techniques require offline data analysis to estimate breathing rates.
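The pole-angle step can be sketched as follows, omitting the particle filter entirely; the AR order, resampling rate and synthetic respiratory-modulated series are assumptions.

```python
import numpy as np

fs = 4.0  # Hz; a typical resampling rate for a respiratory-modulated series (assumed)
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.randn(t.size)  # 30 breaths/min + noise

# Fit an AR(p) model by least squares: x[n] = sum_k a_k x[n-k] + e[n].
p = 8
X = np.column_stack([x[p - k - 1 : -k - 1] for k in range(p)])
a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)

# Poles are the roots of z^p - a_1 z^(p-1) - ... - a_p.
poles = np.roots(np.concatenate(([1.0], -a)))
poles = poles[poles.imag > 0]                      # keep one of each conjugate pair
dominant = poles[np.argmax(np.abs(poles))]         # highest-magnitude pole
f_resp = np.angle(dominant) * fs / (2 * np.pi)     # pole angle -> frequency
print("estimated respiratory rate: %.1f breaths/min" % (60 * f_resp))
```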
Reconstruction and analysis of cesium-137 fallout deposition patterns in the Marshall Islands
NASA Astrophysics Data System (ADS)
Whitcomb, Robert Cleckley, Jr.
Estimates of 137Cs deposition due to fallout originating from nuclear weapons testing in the Marshall Islands have been made for several locations in the Marshall Islands. These retrospective estimates were based primarily on historical exposure rate and gummed film measurements. The methods used to reconstruct these deposition estimates are specific to six of the Pacific tests and are similar to those used in the National Cancer Institute study for reconstructing 131I deposition from the Nevada Test Site. Reconstructed cumulative deposition estimates are validated against contemporary measurements of 137Cs concentration in soil; this validation also accounts for estimated global fallout contributions. The validations show that the overall geometric bias in predicted-to-observed (P/O) ratios is 1.0 (indicating excellent agreement), with a 5th to 95th percentile range of 0.35-2.95. The P/O ratios for estimates based on historical gummed film measurements tend to over-predict slightly more than those based on exposure rate measurements. The methods produce reasonable estimates of deposition, confirming that radioactive fallout occurred at atolls farther south than the four northern atolls recognized by the Department of Energy as being affected by fallout. The very good agreement between estimates and measurements suggests that these methods can be used with confidence for other fallout radionuclides from weapons testing.
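The geometric bias and percentile range quoted for the P/O ratios are summary statistics of the log-transformed ratios; a small worked sketch, with invented ratio values, is:

```python
import numpy as np

# Hypothetical predicted-to-observed (P/O) deposition ratios for validation sites.
po = np.array([0.4, 0.7, 0.9, 1.0, 1.1, 1.3, 1.6, 2.4])

logs = np.log(po)
gm = np.exp(logs.mean())          # geometric bias (geometric mean of P/O ratios)
gsd = np.exp(logs.std(ddof=1))    # geometric standard deviation

# 5th-95th percentile range under an assumed lognormal distribution of ratios.
lo, hi = gm * gsd ** -1.645, gm * gsd ** 1.645
print("geometric bias %.2f, 5th-95th range %.2f-%.2f" % (gm, lo, hi))
```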
Pacini, Clare; Ajioka, James W; Micklem, Gos
2017-04-12
Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, thereby enabling inference of the causal and hierarchical structure of the networks.
New insight in spiral drawing analysis methods - Application to action tremor quantification.
Legrand, André Pierre; Rivals, Isabelle; Richard, Aliénor; Apartis, Emmanuelle; Roze, Emmanuel; Vidailhet, Marie; Meunier, Sabine; Hainque, Elodie
2017-10-01
Spiral drawing is one of the standard tests used to assess tremor severity in the clinical evaluation of medical treatments. Tremor severity is usually estimated through visual rating of the drawings by movement disorders experts. Different approaches based on mathematical signal analysis of recorded spiral drawings have been proposed to replace this rater-dependent estimate. The objective of the present study is to propose new numerical methods and to evaluate them in terms of agreement with visual rating and reproducibility. Series of spiral drawings of patients with essential tremor were visually rated by a board of experts. In addition to the usual velocity analysis, three new numerical methods were tested and compared: static unraveling, dynamic unraveling, and empirical mode decomposition. The reproducibility of both visual and numerical ratings was estimated, and their agreement was evaluated. The statistical analysis demonstrated excellent agreement between visual and numerical ratings, and more reproducible results with numerical methods than with visual ratings. The velocity method and the new numerical methods are in good agreement; among the latter, static and dynamic unraveling both display a smaller dispersion and are easier to automate. The reliable scores obtained with the proposed numerical methods suggest that their implementation on a digitizing tablet, whether connected to a computer or standalone, provides an efficient automatic tool for tremor severity assessment. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
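The abstract does not give formulas, but "unraveling" is commonly understood as mapping the drawing to radius as a function of unwound angle and treating the deviation from the smooth spiral as the tremor component. A sketch under that assumption, with a synthetic Archimedean spiral and invented tremor parameters:

```python
import numpy as np

# Synthetic spiral drawn at constant angular speed, with radial tremor added.
n = 4000
theta = np.linspace(0, 6 * np.pi, n)               # three turns
r = 2.0 * theta + 0.8 * np.sin(12 * theta)         # Archimedean spiral + tremor
x, y = r * np.cos(theta), r * np.sin(theta)

# "Unravel": recover the unwound angle and radius from the (x, y) trace.
ang = np.unwrap(np.arctan2(y, x))
rad = np.hypot(x, y)

# Remove the smooth spiral (linear trend of radius vs. angle); the residual
# is the tremor component, summarized here by its RMS amplitude.
coef = np.polyfit(ang, rad, 1)
residual = rad - np.polyval(coef, ang)
print("tremor severity (RMS radial deviation): %.2f" % np.sqrt((residual ** 2).mean()))
```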
US County-Level Trends in Mortality Rates for Major Causes of Death, 1980-2014.
Dwyer-Lindgren, Laura; Bertozzi-Villa, Amelia; Stubbs, Rebecca W; Morozoff, Chloe; Kutz, Michael J; Huynh, Chantal; Barber, Ryan M; Shackelford, Katya A; Mackenbach, Johan P; van Lenthe, Frank J; Flaxman, Abraham D; Naghavi, Mohsen; Mokdad, Ali H; Murray, Christopher J L
2016-12-13
County-level patterns in mortality rates by cause have not been systematically described but are potentially useful for public health officials, clinicians, and researchers seeking to improve health and reduce geographic disparities. The aims of this study were to demonstrate the use of a novel method for county-level estimation and to estimate annual mortality rates by US county for 21 mutually exclusive causes of death from 1980 through 2014. Redistribution methods for garbage codes (implausible or insufficiently specific cause of death codes) and small area estimation methods (statistical methods for estimating rates in small subpopulations) were applied to death registration data from the National Vital Statistics System to estimate annual county-level mortality rates for 21 causes of death. These estimates were raked (scaled along multiple dimensions) to ensure consistency between causes and with existing national-level estimates. Geographic patterns in the age-standardized mortality rates in 2014, and in the change in the age-standardized mortality rates between 1980 and 2014, were determined for the 10 highest-burden causes. The exposure was county of residence; the main outcomes were cause-specific age-standardized mortality rates. A total of 80 412 524 deaths were recorded from January 1, 1980, through December 31, 2014, in the United States. Of these, 19.4 million deaths were assigned garbage codes. Mortality rates were analyzed for 3110 counties or groups of counties. Large between-county disparities were evident for every cause, with the gap in age-standardized mortality rates between counties in the 90th and 10th percentiles varying from 14.0 deaths per 100 000 population (cirrhosis and chronic liver diseases) to 147.0 deaths per 100 000 population (cardiovascular diseases). Geographic regions with elevated mortality rates differed among causes: for example, cardiovascular disease mortality tended to be highest along the southern half of the Mississippi River, while mortality rates from self-harm and interpersonal violence were elevated in southwestern counties, and mortality rates from chronic respiratory disease were highest in counties in eastern Kentucky and western West Virginia. Counties also varied widely in terms of the change in cause-specific mortality rates between 1980 and 2014. For most causes (eg, neoplasms, neurological disorders, and self-harm and interpersonal violence), both increases and decreases in county-level mortality rates were observed. In this analysis of US cause-specific county-level mortality rates from 1980 through 2014, there were large between-county differences for every cause of death, although geographic patterns varied substantially by cause of death. The approach to county-level analyses with small area models used in this study has the potential to provide novel insights into US disease-specific mortality time trends and their differences across geographic regions.
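Raking ("scaling along multiple dimensions") can be illustrated with iterative proportional fitting on a toy county-by-cause table. All numbers below are invented, and the real analysis rakes over more dimensions than the two shown here.

```python
import numpy as np

# Toy county-by-cause death counts (3 counties x 2 causes) before raking.
m = np.array([[120.0, 40.0],
              [200.0, 90.0],
              [ 60.0, 30.0]])

county_totals = np.array([150.0, 300.0, 90.0])  # known all-cause totals per county
cause_totals = np.array([390.0, 150.0])         # known national totals per cause
assert np.isclose(county_totals.sum(), cause_totals.sum())

# Iterative proportional fitting: alternately rescale rows and columns.
for _ in range(100):
    m *= (county_totals / m.sum(axis=1))[:, None]
    m *= (cause_totals / m.sum(axis=0))[None, :]
    if (np.allclose(m.sum(axis=1), county_totals)
            and np.allclose(m.sum(axis=0), cause_totals)):
        break

print(np.round(m, 1))  # raked table consistent with both sets of margins
```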
Inferring invasive species abundance using removal data from management actions.
Davis, Amy J; Hooten, Mevin B; Miller, Ryan S; Farnsworth, Matthew L; Lewis, Jesse; Moxcey, Michael; Pepin, Kim M
2016-10-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and for strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480-19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rates accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to estimate abundance accurately was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve the accuracy of removal rates and hence abundance estimates. © 2016 by the Ecological Society of America.
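A much simplified maximum-likelihood cousin of such a removal model (constant catchability, closed population, catch on each pass binomial in the animals remaining) can be written as below. The effort values and catches are invented, and the paper's actual model is Bayesian and hierarchical rather than this profile-likelihood sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

catches = np.array([38, 24, 14, 8])       # animals removed on each pass (invented)
effort = np.array([2.0, 2.0, 1.5, 1.5])   # flight hours per pass (invented)

def neg_loglik(c, N):
    """-log L for removal data: catch_i ~ Binomial(remaining_i, 1 - exp(-c * E_i))."""
    remaining = N - np.concatenate(([0], np.cumsum(catches)[:-1]))
    p = 1.0 - np.exp(-c * effort)
    ll = (gammaln(remaining + 1) - gammaln(catches + 1)
          - gammaln(remaining - catches + 1)
          + catches * np.log(p) + (remaining - catches) * np.log1p(-p))
    return -ll.sum()

# Profile over integer abundance N, optimizing catchability c for each N.
best = min(
    ((minimize_scalar(neg_loglik, args=(N,), bounds=(1e-4, 2.0),
                      method="bounded").fun, N)
     for N in range(catches.sum(), 400)),
    key=lambda t: t[0],
)
print("ML abundance estimate: N =", best[1])
```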
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
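LoLinR itself is an R package; the core idea, fitting a linear regression to every sufficiently wide contiguous window and ranking the fits objectively, translates to Python as below. The ranking criterion used here (standard error of the slope) is a simplification of the package's combined weighting metrics, and the oxygen trace is synthetic.

```python
import numpy as np

# Synthetic O2 trace: an initial nonlinear transient, then steady linear decline.
t = np.arange(0, 30, 0.5)                       # minutes
o2 = 250 - 1.2 * t + 8 * np.exp(-t / 3) + np.random.normal(0, 0.4, t.size)

def slope_and_se(x, y):
    """OLS slope and its standard error."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s2 = (resid ** 2).sum() / (x.size - 2)
    return b, np.sqrt(s2 / ((x - x.mean()) ** 2).sum())

# Score every contiguous window spanning at least half of the series.
best = None
min_width = t.size // 2
for i in range(t.size - min_width):
    for j in range(i + min_width, t.size + 1):
        b, se = slope_and_se(t[i:j], o2[i:j])
        if best is None or se < best[0]:
            best = (se, b, i, j)

se, b, i, j = best
print("rate estimate %.2f units/min from window [%d, %d)" % (b, i, j))
```

The selected window should skip the early transient and recover a slope near the true decline, which is the kind of objective truncation the package automates.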
Introduction to State Estimation of High-Rate System Dynamics
Dodson, Jacob; Joyce, Bryan
2018-01-01
Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer’s convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model. PMID:29342855
Dependence of paracentric inversion rate on tract length.
York, Thomas L; Durrett, Rick; Nielsen, Rasmus
2007-04-03
We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted.
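The null expectation mentioned (uniform, independent breakpoints on an arm yield more short tracts than long) follows from the density f(l) = 2(L - l)/L² for the distance between two uniform points on [0, L], and can be checked by simulation; the arm length below is illustrative.

```python
import numpy as np

L = 25.0                      # chromosome arm length in Mb (illustrative)
rng = np.random.default_rng(1)
b1, b2 = rng.uniform(0, L, (2, 100_000))
lengths = np.abs(b1 - b2)     # inversion tract lengths under the uniform null

# Theory: f(l) = 2 (L - l) / L^2, so E[l] = L / 3 and short tracts dominate.
print("simulated mean tract length: %.2f Mb (theory %.2f Mb)"
      % (lengths.mean(), L / 3))
print("fraction shorter than L/4 : %.3f (theory %.3f)"
      % ((lengths < L / 4).mean(), 2 * (L / 4) / L - (L / 4) ** 2 / L ** 2))
```

The paper's finding is that observed tract lengths are even more skewed toward short tracts than this null predicts.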
NASA Astrophysics Data System (ADS)
Demmers, T. G. M.; Burgess, L. R.; Short, J. L.; Phillips, V. R.; Clark, J. A.; Wathes, C. M.
A method has been developed to measure the emission rate of ammonia from naturally ventilated U.K. livestock buildings. The method is based on measurements of ammonia concentration and estimates of the ventilation rate of the building by continuous release of carbon monoxide tracer within the building. The tracer concentration is measured at nine positions in openings around the perimeter of the building, as well as around a ring sampling line. Two criteria were evaluated to decide whether, at any given time, a given opening in the building acted as an air inlet or as an air outlet. Carbon monoxide concentration difference across an opening was found to be a better criterion than the temperature difference across the opening. Ammonia concentrations were measured continuously at the sampling points using a chemiluminescence analyser. The method was applied to a straw-bedded beef unit and to a slurry-based dairy unit. Both buildings were of space-boarded construction. Ventilation rates estimated by the ring line sample were consistently higher than by the perimeter samples. During calm weather, the ventilation estimates by both samples were similar (10-20 air changes h⁻¹). However, during windy conditions (>5 m s⁻¹) the ventilation rate was overestimated by the ring line sample (average 100 air changes h⁻¹) compared to the perimeter samples (average 50 air changes h⁻¹). The difference was caused by incomplete mixing of the tracer within the building. The ventilation rate estimated from the perimeter samples was used for the calculation of the emission rate. Preliminary estimates of the ammonia emission factor were 6.0 kg NH₃ (500 kg live-weight)⁻¹ (190 d)⁻¹ for the slurry-based dairy unit and 3.7 for the straw-bedded beef unit.
The evolutionary rate dynamically tracks changes in HIV-1 epidemics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maljkovic-berry, Irina; Athreya, Gayathri; Daniels, Marcus
Large sequence datasets provide an opportunity to investigate the dynamics of pathogen epidemics, so a fast method to estimate the evolutionary rate from large and numerous phylogenetic trees becomes necessary. Based on minimizing tip height variances, we optimize the root in a given phylogenetic tree to estimate the most homogeneous evolutionary rate between samples from at least two different time points. Simulations showed that the method had no bias in the estimation of evolutionary rates and that it was robust to tree rooting and topological errors. We show that the evolutionary rates of HIV-1 subtype B and C epidemics have changed over time, with the rate of evolution inversely correlated to the rate of virus spread. For subtype B, the evolutionary rate slowed down and tracked the start of the HAART era in 1996. Subtype C in Ethiopia showed an increase in the evolutionary rate when the increase in prevalence slowed markedly in 1995. Thus, we show that the evolutionary rate of HIV-1 at the population level dynamically tracks epidemic events.
Evaporation estimates from the Dead Sea and their implications on its water balance
NASA Astrophysics Data System (ADS)
Oroud, Ibrahim M.
2011-12-01
The Dead Sea (DS) is a terminal hypersaline water body situated in the deepest part of the Jordan Valley. There is growing interest in linking the DS to the open seas due to severe water shortages in the area and the serious geological and environmental hazards to its vicinity caused by the rapid drop in the DS level. A key issue in linking the DS with the open seas is an accurate determination of evaporation rates. Large uncertainties remain in evaporation estimates from the DS due to the complex feedback mechanisms between meteorological forcings and the thermophysical properties of hypersaline solutions. Numerous methods have been used to estimate current and historical (pre-1960) evaporation rates, with estimates differing by ~100%. Evaporation from the DS is usually deduced indirectly using energy balance, water balance, or pan methods, with uncertainty in many parameters; the errors resulting from these uncertainties accumulate in the estimates of evaporation rates. In this paper, a physically based method with minimal empirical parameters is used to evaluate historical and current evaporation estimates from the DS. The most likely figures for historical and current evaporation rates from the DS were 1,500-1,600 and 1,200-1,250 mm per annum, respectively. The results are congruent with field observations and with more elaborate procedures.
A comment on "Novel scavenger removal trials increase wind turbine-caused avian fatality estimates"
Huso, Manuela M.P.; Erickson, Wallace P.
2013-01-01
In a recent paper, Smallwood et al. (2010) conducted a study to compare their “novel” approach to conducting carcass removal trials with what they term the “conventional” approach and to evaluate the effects of the different methods on estimated avian fatality at a wind power facility in California. A quick glance at Table 3 that succinctly summarizes their results and provides estimated fatality rates and 80% confidence intervals calculated using the 2 methods reveals a surprising result. The confidence intervals of all of their estimates and most of the conventional estimates extend below 0. These results imply that wind turbines may have the capacity to create live birds. But a more likely interpretation is that a serious error occurred in the calculation of either the average fatality rate or its standard error or both. Further evaluation of their methods reveals that the scientific basis for concluding that “many estimates of scavenger removal rates prior to [their] study were likely biased low due to scavenger swamping” and “previously reported estimates of avian fatality rates … should be adjusted upwards” was not evident in their analysis and results. Their comparison to conventional approaches was not applicable, their statistical models were questionable, and the conclusions they drew were unsupported.
Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.
Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald
2014-11-25
In order to capture the short-term clock variations of GNSS satellites, clock corrections must be estimated and updated at a high rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation alone, as a great number of ambiguities must be estimated simultaneously. However, better estimates are expected when more stations are included, and satellites from different GNSS systems must be processed together for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are often employed, in which the ambiguities are eliminated. However, the epoch-differenced method can only derive temporal clock changes, which then have to be aligned to the absolute clocks in a rather complicated way. In this paper, an efficient method for high-rate clock estimation is proposed using the concept of "carrier-range", realized by means of PPP with integer ambiguity resolution. Processing procedures for both post-processing and real-time processing are developed. The experimental validation shows that the computation time can be reduced to about one sixth of that of existing methods in post-processing, and to less than 1 s for processing a single epoch of a network of about 200 stations in real-time mode once all ambiguities are fixed. This confirms that the proposed processing strategy will enable high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.
Estimation of glycaemic control in the past month using ratio of glycated albumin to HbA1c.
Musha, I; Mochizuki, M; Kikuchi, T; Akatsuka, J; Ohtake, A; Kobayashi, K; Kikuchi, N; Kawamura, T; Yokota, I; Urakami, T; Sugihara, S; Amemiya, S
2018-04-13
To evaluate comprehensively the use of the glycated albumin to HbA1c ratio for estimation of glycaemic control in the previous month. A total of 306 children with Type 1 diabetes mellitus underwent ≥10 simultaneous measurements of glycated albumin and HbA1c. Correlation and concordance rates were examined between HbA1c measurements taken 1 month apart (ΔHbA1c) and glycated albumin/HbA1c ratio fluctuations, calculated as Z-scores from the cohort value at enrolment (method A) or as the percent difference from the individual mean over time (method B). Fluctuations in the glycated albumin/HbA1c ratio (using both methods) were weakly but significantly correlated with ΔHbA1c, whereas concordance rates were significant for glycaemic deterioration but not for glycaemic improvement. Concordance rates were higher using method B than method A. The glycated albumin/HbA1c ratio was able to estimate glycaemic deterioration in the previous month, while estimation of glycaemic improvement in the preceding month was limited. Because method B provided a better estimate of recent glycaemic control than method A, the individual mean of several measurements of the glycated albumin/HbA1c ratio over time may also identify individuals with high or low haemoglobin glycation phenotypes in a given population, such as Japanese children with Type 1 diabetes, thereby allowing more effective diabetes management. © 2018 Diabetes UK.
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
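The stage-by-stage idea can be illustrated on the Rössler system itself: given time series of all three components, the parameter a appears only in dy/dt = x + a y, so it can be estimated from that single component, and b, c follow from the z equation. The sketch below uses finite-difference derivatives and linear least squares in place of the paper's evolutionary algorithm, so it only conveys the decomposition, not the EA itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

a_true, b_true, c_true = 0.2, 0.2, 5.7

def rossler(t, s):
    x, y, z = s
    return [-y - z, x + a_true * y, b_true + z * (x - c_true)]

t = np.linspace(0, 100, 20001)
sol = solve_ivp(rossler, (0, 100), [1.0, 1.0, 1.0], t_eval=t, rtol=1e-9, atol=1e-9)
x, y, z = sol.y
dydt = np.gradient(y, t)
dzdt = np.gradient(z, t)

# Stage 1: dy/dt = x + a*y  ->  a from one linear regression.
a_hat = np.dot(y, dydt - x) / np.dot(y, y)

# Stage 2: dz/dt - x*z = b - c*z  ->  b, c from a second regression.
A = np.column_stack([np.ones_like(z), -z])
(b_hat, c_hat), *_ = np.linalg.lstsq(A, dzdt - x * z, rcond=None)

print("a=%.4f b=%.4f c=%.4f (true 0.2, 0.2, 5.7)" % (a_hat, b_hat, c_hat))
```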
NASA Astrophysics Data System (ADS)
Jin, Honglin; Kato, Teruyuki; Hori, Muneo
2007-07-01
An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
Xing, Jian; Burkom, Howard; Moniz, Linda; Edgerton, James; Leuze, Michael; Tokars, Jerome
2009-01-01
Background: The Centers for Disease Control and Prevention's (CDC's) BioSense system provides near-real-time situational awareness for public health monitoring through analysis of electronic health data. Determination of anomalous spatial and temporal disease clusters is a crucial part of the daily disease monitoring task. Our study focused on finding useful anomalies at manageable alert rates according to available BioSense data history. Methods: The study dataset included more than 3 years of daily counts of military outpatient clinic visits for respiratory and rash syndrome groupings. We applied four spatial estimation methods in implementations of space-time scan statistics cross-checked in Matlab and C. We compared the utility of these methods according to the resultant background cluster rate (a false alarm surrogate) and sensitivity to injected cluster signals. The comparison runs used a spatial resolution based on the facility zip code in the patient record and a finer resolution based on the residence zip code. Results: Simple estimation methods that account for day-of-week (DOW) data patterns yielded a clear advantage both in background cluster rate and in signal sensitivity. A 28-day baseline gave the most robust results for this estimation; the preferred baseline is long enough to remove daily fluctuations but short enough to reflect recent disease trends and data representation. Background cluster rates were lower for the rash syndrome counts than for the respiratory counts, likely because of seasonality and the large scale of the respiratory counts. Conclusion: The spatial estimation method should be chosen according to characteristics of the selected data streams. In this dataset with strong day-of-week effects, the overall best detection performance was achieved using subregion averages over a 28-day baseline stratified by weekday or weekend/holiday behavior. Changing the estimation method for particular scenarios involving different spatial resolution or other syndromes can yield further improvement. PMID:19615075
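The preferred estimator, subregion averages over a 28-day baseline stratified by weekday versus weekend/holiday behavior, reduces to a few lines. Holidays are ignored in this sketch, the anomaly score is a simple standardized Poisson residual rather than a scan statistic, and the counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(400)
weekend = (days % 7) >= 5                   # day 0 taken as a Monday (assumption)
mean = np.where(weekend, 20, 60)            # strong day-of-week effect
counts = rng.poisson(mean)

def expected(t, counts, weekend, baseline=28):
    """Average of same-stratum counts in the trailing `baseline` days."""
    window = np.arange(max(0, t - baseline), t)
    same = window[weekend[window] == weekend[t]]
    return counts[same].mean()

t = 365
exp_t = expected(t, counts, weekend)
score = (counts[t] - exp_t) / np.sqrt(exp_t)
print("day %d: observed %d, expected %.1f, score %.2f" % (t, counts[t], exp_t, score))
```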
Davis, Gregory B; Laslett, Dean; Patterson, Bradley M; Johnston, Colin D
2013-03-15
Accurate estimation of biodegradation rates during remediation of petroleum impacted soil and groundwater is critical to avoid excessive costs and to ensure remedial effectiveness. Oxygen depth profiles or oxygen consumption over time are often used separately to estimate the magnitude and timeframe for biodegradation of petroleum hydrocarbons in soil and subsurface environments. Each method has limitations. Here we integrate spatial and temporal oxygen concentration data from a field experiment to develop better estimates and more reliably quantify biodegradation rates. During a nine-month bioremediation trial, 84 sets of respiration rate data (where aeration was halted and oxygen consumption was measured over time) were collected from in situ oxygen sensors at multiple locations and depths across a diesel non-aqueous phase liquid (NAPL) contaminated subsurface. Additionally, detailed vertical soil moisture (air-filled porosity) and NAPL content profiles were determined. The spatial and temporal oxygen concentration (respiration) data were modeled assuming one-dimensional diffusion of oxygen through the soil profile, which was open to the atmosphere. Point and vertically averaged biodegradation rates were determined and compared to modeled data from a previous field trial. Point estimates of biodegradation rates assuming no diffusion ranged up to 58 mg kg⁻¹ day⁻¹, while rates accounting for diffusion ranged up to 87 mg kg⁻¹ day⁻¹. Typically, accounting for diffusion increased point biodegradation rate estimates by 15-75% and vertically averaged rates by 60-80%, depending on the averaging method adopted. Importantly, ignoring diffusion led to overestimation of biodegradation rates where the location of measurement was outside the zone of NAPL contamination. Over- or underestimation of biodegradation rates has cost implications for successful remediation of petroleum impacted sites. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
2016-12-01
[List of figures from the report: a simplified example of estimating metabolic rate from core temperature using the SCENARIO thermoregulatory model; Edgewood training site, Days 1 and 2, core temperature (TC) and metabolic rate (Ṁ); Hayward training site, Days 1 and 2, core temperature (TC) and metabolic rate (Ṁ).]
Yanagawa, T; Tokudome, S
1990-01-01
We developed methods to assess cancer risks on the basis of screening tests. These methods estimate the size of the high-risk group and the incidence rates of cancer within it, both adjusted for the characteristics of the screening tests. A method was also developed for selecting the cut-off point of a screening test. Finally, the methods were applied to estimate the risk of adult T-cell leukemia/lymphoma. PMID:2269244
Appendix H of KABAM Version 1.0 documentation related to estimating the metabolism rate constant. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.
Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang
2018-04-05
We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
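A sketch of the stump-regression idea: compute a p-value for each small time interval against the hypothesis of the late constant hazard, then fit a one-split piecewise-constant (stump) regression to those p-values and take the best split as the change point. The hazard shape, interval width and interval-wise test below are all invented for illustration and do not reproduce the authors' exact procedure.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
days = np.arange(120)
true_cp = 30
# Daily hazard: decreasing before the change point, constant after it.
haz = np.where(days < true_cp, 0.08 - 0.002 * days, 0.02)

# Simulate daily deaths in a closed cohort and one-sided per-day p-values
# against the late constant rate.
at_risk, events = 2000, []
for h in haz:
    d = rng.binomial(at_risk, h)
    events.append((d, at_risk))
    at_risk -= d
pvals = np.array([poisson.sf(d - 1, n * 0.02) for d, n in events])

# Stump regression: choose the split minimizing the residual sum of squares.
def sse(v):
    return ((v - v.mean()) ** 2).sum() if v.size else 0.0

splits = range(5, days.size - 5)
cp = min(splits, key=lambda s: sse(pvals[:s]) + sse(pvals[s:]))
print("estimated change point: day %d (true %d)" % (cp, true_cp))
```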
Winterdahl, Michael; Sørensen, Michael; Keiding, Susanne; Mortensen, Frank V.; Alstrup, Aage K. O.; Hansen, Søren B.; Munk, Ole L.
2012-01-01
Objective: To determine whether dynamic contrast-enhanced computed tomography (DCE-CT) and the slope method can provide absolute measures of hepatic blood perfusion from the hepatic artery (HA) and portal vein (PV) at experimentally varied blood flow rates. Materials and Methods: Ten anesthetized 40-kg pigs underwent DCE-CT during periods of normocapnia (normal flow), hypocapnia (decreased flow), and hypercapnia (increased flow), induced by adjusting the ventilation. Reference blood flows in the HA and PV were measured continuously by surgically placed ultrasound transit-time flowmeters. For each capnic condition, the DCE-CT estimates of absolute hepatic blood perfusion from the HA and PV were calculated using the slope method and compared with flowmeter-based absolute measurements of hepatic perfusion, and relative errors were analyzed. Results: The relative errors (mean±SEM) of the DCE-CT based perfusion estimates were −21±23% for HA and 81±31% for PV (normocapnia), 9±23% for HA and 92±42% for PV (hypocapnia), and 64±28% for HA and −2±20% for PV (hypercapnia). The mean relative errors for HA were not significantly different from zero during hypo- and normocapnia, and the DCE-CT slope method could detect relative changes in HA perfusion between scans. Infusion of contrast agent led to significantly increased hepatic blood perfusion, which biased the PV perfusion estimates. Conclusions: Using the DCE-CT slope method, HA perfusion estimates were accurate at low and normal flow rates, whereas PV perfusion estimates were inaccurate and imprecise. At high flow rate, both HA and PV perfusion estimates were significantly biased. PMID:22836307
Non-contact estimation of heart rate and oxygen saturation using ambient light.
Bal, Ufuk
2015-01-01
We propose a robust method for automated computation of heart rate (HR) from digital color video recordings of the human face. In order to extract photoplethysmographic signals, two orthogonal vectors of RGB color space are used. We used a dual tree complex wavelet transform based denoising algorithm to reduce artifacts (e.g. artificial lighting, movement, etc.). Most of the previous work on skin color based HR estimation performed experiments with healthy volunteers and focused to solve motion artifacts. In addition to healthy volunteers we performed experiments with child patients in pediatric intensive care units. In order to investigate the possible factors that affect the non-contact HR monitoring in a clinical environment, we studied the relation between hemoglobin levels and HR estimation errors. Low hemoglobin causes underestimation of HR. Nevertheless, we conclude that our method can provide acceptable accuracy to estimate mean HR of patients in a clinical environment, where the measurements can be performed remotely. In addition to mean heart rate estimation, we performed experiments to estimate oxygen saturation. We observed strong correlations between our SpO2 estimations and the commercial oximeter readings.
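The spectral core of such a pipeline can be sketched as follows: project the mean RGB traces onto two orthogonal chrominance directions, combine them, and take the dominant spectral peak in the physiological band as the heart rate. The specific projection vectors below are the plane-orthogonal-to-skin (POS) pair, which is an assumption (the paper does not specify its vectors), and its wavelet denoising step is omitted.

```python
import numpy as np

fs = 30.0  # video frame rate (assumed)
t = np.arange(0, 20, 1 / fs)

# Synthetic mean RGB traces of a face ROI: pulse at 1.2 Hz (72 bpm) + drift/noise.
pulse = 0.002 * np.sin(2 * np.pi * 1.2 * t)
drift = 0.01 * np.sin(2 * np.pi * 0.1 * t)
rgb = np.vstack([
    0.6 + 0.3 * pulse + drift,
    0.5 + 1.0 * pulse + drift,
    0.4 + 0.5 * pulse + drift,
]) + 0.001 * np.random.randn(3, t.size)

# Normalize each channel and project onto two orthogonal chrominance vectors.
cn = rgb / rgb.mean(axis=1, keepdims=True)
s1 = np.array([0.0, 1.0, -1.0]) @ cn
s2 = np.array([-2.0, 1.0, 1.0]) @ cn
h = s1 + (s1.std() / s2.std()) * s2          # alpha-tuned combination
h -= h.mean()

# Dominant spectral peak within the plausible heart-rate band (0.7-4 Hz).
freqs = np.fft.rfftfreq(h.size, 1 / fs)
power = np.abs(np.fft.rfft(h)) ** 2
band = (freqs >= 0.7) & (freqs <= 4.0)
hr = 60 * freqs[band][np.argmax(power[band])]
print("estimated heart rate: %.1f bpm" % hr)
```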
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas with zinc or uranium metal to determine the D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt) equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained with the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained with the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
A Continuous Method for Gene Flow
Palczewski, Michal; Beerli, Peter
2013-01-01
Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937
Cros, David; Sánchez, Leopoldo; Cochard, Benoit; Samper, Patrick; Denis, Marie; Bouvet, Jean-Marc; Fernández, Jesús
2014-04-01
Explicit pedigree reconstruction by simulated annealing gave reliable estimates of genealogical coancestry in plant species, especially when selfing rate was lower than 0.6, using a realistic number of markers. Genealogical coancestry information is crucial in plant breeding to estimate genetic parameters and breeding values. The approach of Fernández and Toro (Mol Ecol 15:1657-1667, 2006) to estimate genealogical coancestries from molecular data through pedigree reconstruction was limited to species with separate sexes. In this study it was extended to plants, allowing hermaphroditism and monoecy, with possible selfing. Moreover, some improvements were made to take previous knowledge on the population demographic history into account. The new method was validated using simulated and real datasets. Simulations showed that accuracy of estimates was high with 30 microsatellites, with the best results obtained for selfing rates below 0.6. In these conditions, the root mean square error (RMSE) between the true and estimated genealogical coancestry was small (<0.07), although the number of ancestors was overestimated and the selfing rate could be biased. Simulations also showed that linkage disequilibrium between markers and departure from the Hardy-Weinberg equilibrium in the founder population did not affect the efficiency of the method. Real oil palm data confirmed the simulation results, with a high correlation between the true and estimated genealogical coancestry (>0.9) and a low RMSE (<0.08) using 38 markers. The method was applied to the Deli oil palm population for which pedigree data were scarce. The estimated genealogical coancestries were highly correlated (>0.9) with the molecular coancestries using 100 markers. Reconstructed pedigrees were used to estimate effective population sizes. In conclusion, this method gave reliable genealogical coancestry estimates. The strategy was implemented in the software MOLCOANC 3.0.
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Estimating evapotranspiration in natural and constructed wetlands
Lott, R. Brandon; Hunt, Randall J.
2001-01-01
Difficulties in accurately calculating evapotranspiration (ET) in wetlands can lead to inaccurate water balances—information important for many compensatory mitigation projects. Simple meteorological methods or off-site ET data often are used to estimate ET, but these approaches do not include potentially important site-specific factors such as plant community, root-zone water levels, and soil properties. The objective of this study was to compare a commonly used meteorological estimate of potential evapotranspiration (PET) with direct measurements of ET (lysimeters and water-table fluctuations) and small-scale root-zone geochemistry in a natural and constructed wetland system. Unlike what has been commonly noted, the results of the study demonstrated that the commonly used Penman combination method of estimating PET underestimated the ET that was measured directly in the natural wetland over most of the growing season. This result is likely due to surface heterogeneity and related roughness effects not included in the simple PET estimate. The meteorological method more closely approximated season-long measured ET rates in the constructed wetland but may overestimate the ET rate late in the growing season. ET rates also were temporally variable in wetlands over a range of time scales because they can be influenced by the relation of the water table to the root zone and the timing of plant senescence. Small-scale geochemical sampling of the shallow root zone was able to provide an independent evaluation of ET rates, supporting the identification of higher ET rates in the natural wetlands and differences in temporal ET rates due to the timing of senescence. These discrepancies illustrate potential problems with extrapolating off-site estimates of ET or single measurements of ET from a site over space or time.
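The Penman combination method referenced above weights a radiation term and an aerodynamic term by the slope of the saturation vapor pressure curve and the psychrometric constant. A minimal sketch follows; the constants and the wind function are textbook approximations rather than values from this study, and the example inputs are invented.

```python
# Hedged sketch of the Penman combination form
# PET = (Delta * Rn / lambda + gamma * Ea) / (Delta + gamma).
import math

def penman_pet(rn_mj, t_c, u2_ms, es_kpa, ea_kpa):
    """Daily PET (mm/day) from net radiation, air temperature, wind, vapor pressures."""
    lam = 2.45                                   # latent heat of vaporization, MJ/kg
    gamma = 0.066                                # psychrometric constant, kPa/degC
    delta = 4098.0 * (0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))) \
            / (t_c + 237.3) ** 2                 # slope of saturation vapor curve
    aero = 2.6 * (1.0 + 0.54 * u2_ms) * (es_kpa - ea_kpa)   # aerodynamic term, mm/day
    return (delta * rn_mj / lam + gamma * aero) / (delta + gamma)

print(round(penman_pet(rn_mj=15.0, t_c=22.0, u2_ms=2.0,
                       es_kpa=2.64, ea_kpa=1.80), 2), "mm/day")
```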
Chesson, Harrell W; Ludovic, Jennifer A; Berruti, Andrés A; Gift, Thomas L
2018-01-01
The purpose of this article was to describe methods that sexually transmitted disease (STD) programs can use to estimate the potential effects of changes in their budgets in terms of disease burden and direct medical costs. We proposed 2 distinct approaches to estimate the potential effect of changes in funding on subsequent STD burden, one based on an analysis of state-level STD prevention funding and gonorrhea case rates and one based on analyses of the effect of Disease Intervention Specialist (DIS) activities on gonorrhea case rates. We also illustrated how programs can estimate the impact of budget changes on intermediate outcomes, such as partner services. Finally, we provided an example of the application of these methods for a hypothetical state STD prevention program. The methods we proposed can provide general approximations of how a change in STD prevention funding might affect the level of STD prevention services provided, STD incidence rates, and the direct medical cost burden of STDs. In applying these methods to a hypothetical state, a reduction in annual funding of US $200,000 was estimated to lead to subsequent increases in STDs of 1.6% to 3.6%. Over 10 years, the reduction in funding totaled US $2.0 million, whereas the cumulative, additional direct medical costs of the increase in STDs totaled US $3.7 to US $8.4 million. The methods we proposed, though subject to important limitations, can allow STD prevention personnel to calculate evidence-based estimates of the effects of changes in their budget.
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data set on melanoma with the model and the inferential method developed here.
Estimates of population change in selected species of tropical birds using mark-recapture data
Brawn, J.; Nichols, J.D.; Hines, J.E.; Nesbitt, J.
2000-01-01
The population biology of tropical birds is known for only a small sample of species, especially in the Neotropics. Robust estimates of parameters such as survival rate and finite rate of population change (λ) are crucial for conservation purposes and useful for studies of avian life histories. We used methods developed by Pradel (1996, Biometrics 52:703-709) to estimate λ for 10 species of tropical forest lowland birds using data from a long-term (>20 yr) banding study in Panama. These species constitute an ecologically and phylogenetically diverse sample. We present these estimates and explore whether they are consistent with what we know from selected studies of banded birds and from 5 yr of estimating nesting success (i.e., an important component of λ). A major goal of these analyses is to assess whether the mark-recapture methods generate reliable and reasonably precise estimates of population change compared with traditional methods that require more sampling effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyldesley, Scott, E-mail: styldesl@bccancer.bc.c; Delaney, Geoff; Foroudi, Farshad
Purpose: Estimates of the need for radiotherapy (RT) using different methods (criterion-based benchmarking [CBB] and the Canadian [C-EBEST] and Australian [A-EBEST] epidemiologically based estimates) exist for various cancer sites. We compared these model estimates to actual RT rates for lung, breast, and prostate cancers in British Columbia (BC). Methods and Materials: All cases of lung, breast, and prostate cancers in BC from 1997 to 2004 and all patients receiving RT within 1 year (RT1Y) and within 5 years (RT5Y) of diagnosis were identified. The RT1Y and RT5Y proportions in health regions with a cancer center for the most recent year were then calculated. RT rates were compared with CBB and EBEST estimates of RT needs. Variation was assessed by time and region. Results: The RT1Y rates in regions with a cancer center for lung, breast, and prostate cancers were 51%, 58%, and 33% compared with 45%, 57%, and 32% for C-EBEST and 41%, 61%, and 37% for CBB models. The RT5Y rates in regions with a cancer center for lung, breast, and prostate cancers were 59%, 61%, and 40% compared with 61%, 66%, and 61% for C-EBEST and 75%, 83%, and 60% for A-EBEST models. The RT1Y rates increased for breast and prostate cancers. Conclusions: C-EBEST and CBB model estimates are closer to the actual RT rates than the A-EBEST estimates. Application of these model estimates by health care decision makers should be undertaken with an understanding of the methods used and the assumptions on which they were based.
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
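The matrix-selection method mentioned above is straightforward to simulate: draw one whole observed matrix at random per time step and average the one-step log growth rates. A minimal sketch, with invented transition matrices rather than the study's data:

```python
# Minimal sketch of the matrix-selection method: whole observed matrices are
# drawn at random each time step, and the stochastic log growth rate is the
# long-run average of the one-step log growth. Matrices here are invented.
import numpy as np

rng = np.random.default_rng(0)
observed = [np.array([[0.0, 1.5], [0.6, 0.80]]),    # hypothetical annual
            np.array([[0.0, 0.9], [0.4, 0.70]]),    # transition matrices
            np.array([[0.0, 2.1], [0.7, 0.85]])]

n = np.array([10.0, 10.0])
log_lambdas = []
for _ in range(50_000):
    A = observed[rng.integers(len(observed))]       # matrix selection
    n_next = A @ n
    log_lambdas.append(np.log(n_next.sum() / n.sum()))
    n = n_next / n_next.sum() * 20.0                # rescale to avoid overflow
print("stochastic log growth rate:", np.mean(log_lambdas))
```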
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
The use of earthquake rate changes as a stress meter at Kilauea volcano.
Dieterich, J; Cayol, V; Okubo, P
2000-11-23
Stress changes in the Earth's crust are generally estimated from model calculations that use near-surface deformation as an observational constraint. But the widespread correlation of changes of earthquake activity with stress has led to suggestions that stress changes might be calculated from earthquake occurrence rates obtained from seismicity catalogues. Although this possibility has considerable appeal, because seismicity data are routinely collected and have good spatial and temporal resolution, the method has not yet proven successful, owing to the non-linearity of earthquake rate changes with respect to both stress and time. Here, however, we present two methods for inverting earthquake rate data to infer stress changes, using a formulation for the stress- and time-dependence of earthquake rates. Application of these methods at Kilauea volcano, in Hawaii, yields good agreement with independent estimates, indicating that earthquake rates can provide a practical remote-sensing stress meter.
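As a hedged illustration of the rate-state formulation such inversions build on, the Dieterich (1994) expression for the seismicity rate following a sudden stress step Δτ under a constant background stressing rate can be evaluated directly; the parameter values below are illustrative, not those estimated at Kilauea.

```python
# Hedged sketch: rate-state seismicity response R(t) to a stress step dtau,
# R(t) = r / (1 + (exp(-dtau/(a*sigma)) - 1) * exp(-t/t_a)), t_a = a*sigma/tau_dot.
# All parameter values are illustrative assumptions.
import numpy as np

a_sigma = 0.1e6        # constitutive parameter a times normal stress, Pa
tau_dot = 0.01e6       # background stressing rate, Pa/yr
r = 100.0              # background seismicity rate, events/yr
dtau = 0.2e6           # stress step, Pa
t_a = a_sigma / tau_dot                 # characteristic decay time, yr

t = np.linspace(0.0, 50.0, 6)
rate = r / (1.0 + (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a))
for ti, ri in zip(t, rate):
    print(f"t = {ti:5.1f} yr  rate = {ri:8.2f} events/yr")
# At t = 0 the rate jumps to r*exp(dtau/(a*sigma)); it decays back toward r.
```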
USDA-ARS?s Scientific Manuscript database
Rill detachment is an important process in rill erosion. The rill detachment rate is the fundamental basis for determination of the parameters of a rill erosion model. In this paper, an analytical method was proposed to estimate the rill detachment rate. The method is based on the exact analytical s...
Controls on gas transfer velocities in a large river
The emission of biogenic gases from large rivers can be an important component of regional greenhouse gas budgets. However, emission rate estimates are often poorly constrained due to uncertainties in the air-water gas exchange rate. We used the floating chamber method to estim...
Shaila, Mulki; Pai, G Prakash; Shetty, Pushparaj
2013-01-01
To evaluate the salivary protein concentration in gingivitis and periodontitis patients and to compare parameters such as salivary total protein, salivary albumin, pH, buffer capacity, and flow rate in both young and elderly patients using simple methods. One hundred and twenty subjects were grouped based on their age as young and elderly. Each group was subgrouped (20 subjects) as controls, gingivitis and periodontitis. Unstimulated whole saliva was collected from patients and the flow rate was noted during collection of the sample. Salivary protein estimation was done using the Biuret method and salivary albumin was assessed using the Bromocresol green method. pH was estimated with a pH meter and buffering capacity was analyzed with the titration method. Student's t-test, Fisher's test (ANOVA) and Tukey HSD (ANOVA) tests were used for statistical analysis. A very highly significant rise in the salivary total protein and albumin concentration was noted in gingivitis and periodontitis subjects of both young and elderly. An overall decrease in salivary flow rate was observed among the elderly, and the salivary flow rate of women was significantly lower than that of men. Significant associations between salivary total protein and albumin in gingivitis and periodontitis were found with simple biochemical tests. A decrease in salivary flow rate among the elderly and among women was noted.
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
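As an illustration of the simplest estimator in that comparison, a plug-in sketch for a binary series is shown below: build the empirical distribution of words of length k and report H_k/k. The simulated i.i.d. source and the choices of k are ours; note how larger k undersamples the word distribution, echoing the bias discussed above.

```python
# Minimal plug-in entropy-rate sketch for a binary series: estimate the
# empirical distribution of words of length k and report H_k / k in bits
# per symbol. The data are simulated for illustration.
import math
import random
from collections import Counter

random.seed(0)
x = [random.random() < 0.3 for _ in range(10_000)]   # i.i.d. Bernoulli(0.3)

def plugin_entropy_rate(seq, k):
    words = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    total = sum(words.values())
    h_k = -sum(c / total * math.log2(c / total) for c in words.values())
    return h_k / k

for k in (1, 2, 4, 8):
    print(k, round(plugin_entropy_rate(x, k), 4))
# True entropy rate is H(0.3) ~ 0.8813 bits/symbol for this i.i.d. source.
```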
Detilleux, J
2017-06-08
In most infectious diseases, including bovine mastitis, the promptness of the recruitment of inflammatory cells (mainly neutrophils) into inflamed tissues has been shown to be of prime importance in the resolution of the infection. Although this information should aid in designing efficient control strategies, it has never been quantified in field studies. Here, a system of ordinary differential equations is proposed that describes the dynamic process of the inflammatory response to mammary pathogens. The system was tested, by principal differential analysis, on 1947 test-day somatic cell counts collected on 756 infected cows, from 50 days before to 50 days after the diagnosis of clinical mastitis. Cell counts were log-transformed before estimating recruitment rates. The daily rate of cellular recruitment was estimated at 0.052 (st. err. = 0.005) during health. During disease, an additional recruitment rate was estimated at 0.004 (st. err. = 0.001) per day and per bacteria. These estimates are in agreement with analogous measurements of in vitro neutrophil functions. Results suggest the method is adequate to estimate one of the components of innate resistance to mammary pathogens at the individual level and in field studies. Extension of the method to estimate components of innate tolerance and limits of the study are discussed.
Lopes, J S; Arenas, M; Posada, D; Beaumont, M A
2014-03-01
The estimation of parameters in molecular evolution may be biased when some processes are not considered. For example, the estimation of selection at the molecular level using codon-substitution models can have an upward bias when recombination is ignored. Here we address the joint estimation of recombination, molecular adaptation and substitution rates from coding sequences using approximate Bayesian computation (ABC). We describe the implementation of a regression-based strategy for choosing subsets of summary statistics for coding data, and show that this approach can accurately infer recombination allowing for intracodon recombination breakpoints, molecular adaptation and codon substitution rates. We demonstrate that our ABC approach can outperform other analytical methods under a variety of evolutionary scenarios. We also show that although the choice of the codon-substitution model is important, our inferences are robust to a moderate degree of model misspecification. In addition, we demonstrate that our approach can accurately choose the evolutionary model that best fits the data, providing an alternative for when the use of full-likelihood methods is impracticable. Finally, we applied our ABC method to co-estimate recombination, substitution and molecular adaptation rates from 24 published human immunodeficiency virus 1 coding data sets.
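The core accept/reject step of ABC is compact enough to sketch. Below, a toy Poisson model stands in for the coding-sequence simulator, and the prior, tolerance, and summary statistic are invented; only the rejection logic itself is the point.

```python
# Minimal ABC rejection sketch: draw parameters from the prior, simulate data,
# keep draws whose summary statistics fall close to the observed ones.
import numpy as np

rng = np.random.default_rng(42)
observed = rng.poisson(lam=5.0, size=200)            # stand-in "data"
s_obs = observed.mean()                              # summary statistic

accepted = []
for _ in range(100_000):
    rate = rng.uniform(0.0, 20.0)                    # prior on the rate
    sim = rng.poisson(lam=rate, size=200)            # simulate under the model
    if abs(sim.mean() - s_obs) < 0.1:                # tolerance on summaries
        accepted.append(rate)
print("posterior mean ~", np.mean(accepted), "from", len(accepted), "draws")
```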
Estimation of the Dose and Dose Rate Effectiveness Factor
NASA Technical Reports Server (NTRS)
Chappell, L.; Cucinotta, F. A.
2013-01-01
Current models to estimate radiation risk use the Life Span Study (LSS) cohort that received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore, animal experiments that directly compare acute to chronic exposures show smaller increases in tumor induction for chronic than for acute exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as from missions in space. The BEIR VII committee [1] combined DDREF estimates using the LSS cohort and animal experiments using Bayesian methods for their recommendation of a DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate for DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
Comparing two survey methods for estimating maternal and perinatal mortality in rural Cambodia.
Chandy, Hoeuy; Heng, Yang Van; Samol, Ha; Husum, Hans
2008-03-01
We need solid estimates of maternal mortality rates (MMR) to monitor the impact of maternal care programs. Cambodian health authorities and WHO report the MMR in Cambodia at 450 per 100,000 live births. The figure is drawn from surveys where information is obtained by interviewing respondents about the survival of all their adult sisters (sisterhood method). The estimate is statistically imprecise, with 95% confidence intervals ranging from 260 to 620/100,000. The MMR estimate is also uncertain due to under-reporting; where 80-90% of women deliver at home, maternal fatalities may go undetected, especially where mortality is highest, in remote rural areas. The aim of this study was to attain more reliable MMR estimates by using survey methods other than the sisterhood method prior to an intervention targeting rural obstetric emergencies. The study was carried out in rural Northwestern Cambodia, where access to health services is poor and poverty, disease, and land mines are endemic. Two survey methods were applied in two separate sectors: a community-based survey gathering data from public sources and a household survey gathering data directly from primary sources. There was no statistically significant difference between the two survey results for maternal deaths; both types of survey reported mortality rates around the published figure. The household survey reported a significantly higher perinatal mortality rate as compared to the community-based survey, 8.6% versus 5.0%. The household survey also gave qualitative data important for a better understanding of the many problems faced by mothers giving birth in remote villages. There are detection failures in both surveys; the failure rate may be as high as 30-40%. PRINCIPAL CONCLUSION: Both survey methods are inaccurate and therefore inappropriate for evaluating short-term changes in mortality rates. Surveys based on primary informants yield qualitative information about mothers' hardships that is important for the design of future maternal care interventions.
Piecewise SALT sampling for estimating suspended sediment yields
Robert B. Thomas
1989-01-01
A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
Estimating watershed level nonagricultural pesticide use from golf courses using geospatial methods
Fox, G.A.; Thelin, G.P.; Sabbagh, G.J.; Fuchs, J.W.; Kelly, I.D.
2008-01-01
Limited information exists on pesticide use for nonagricultural purposes, making it difficult to estimate pesticide loadings from nonagricultural sources to surface water and to conduct environmental risk assessments. A method was developed to estimate the amount of pesticide use on recreational turf grasses, specifically golf course turf grasses, for watersheds located throughout the conterminous United States (U.S.). The approach estimates pesticide use: (1) based on the area of recreational turf grasses (used as a surrogate for turf associated with golf courses) within the watershed, which was derived from maps of land cover, and (2) from data on the location and average treatable area of golf courses. The area of golf course turf grasses determined from these two methods was used to calculate the percentage of each watershed planted in golf course turf grass (percent crop area, or PCA). Turf-grass PCAs derived from the two methods were used with recommended application rates provided on pesticide labels to estimate total pesticide use on recreational turf within 1,606 watersheds associated with surface-water sources of drinking water. These pesticide use estimates made from label rates and PCAs were compared to use estimates from industry sales data on the amount of each pesticide sold for use within the watershed. The PCAs derived from the land-cover data had an average value of 0.4% of a watershed with minimum of 0.01% and a maximum of 9.8%, whereas the PCA values that are based on the number of golf courses in a watershed had an average of 0.3% of a watershed with a minimum of <0.01% and a maximum of 14.2%. Both the land-cover method and the number of golf courses method produced similar PCA distributions, suggesting that either technique may be used to provide a PCA estimate for recreational turf. The average and maximum PCAs generally correlated to watershed size, with the highest PCAs estimated for small watersheds. Using watershed specific PCAs, combined with label rates, resulted in greater than two orders of magnitude over-estimation of the pesticide use compared to estimates from sales data. © 2008 American Water Resources Association.
Schoonover, Jon E; Crim, Jackie F; Williard, Karl W J; Groninger, John W; Zaczek, James J; Pattumma, Klairoong
2015-09-01
Sedimentation dynamics were assessed in sinkholes within training areas at Ft. Knox Military Installation, a karst landscape subjected to decades of tracked vehicle use and extreme soil disturbance. Sinkholes sampled were sediment-laden and behaved as intermittent ponds. Dendrogeomorphic analyses were conducted using willow trees (Salix spp.) located around the edge of 18 sinkholes to estimate historical sedimentation rates, and buried bottles were installed in 20 sinkholes at the center, outer edge, and at the midpoint between the center and edge to estimate annual sedimentation rates. Sedimentation data were coupled with vegetation characteristics of sinkhole buffers to determine relationships among these variables. The dendrogeomorphic method estimated an average accumulation rate of 1.27 cm year(-1) translating to a sediment loss rate of 46.1 metric ton year(-1) from the training areas. However, sediment export to sinkholes was estimated to be much greater (118.6 metric ton year(-1)) via the bottle method. These data suggest that the latter method provided a more accurate estimate since accumulation was greater in the center of sinkholes compared to the periphery where dendrogeomorphic data were collected. Vegetation data were not tightly correlated with sedimentation rates, suggesting that further research is needed to identify a viable proxy for direct measures of sediment accumulation in this extreme deposition environment. Mitigation activities for the sinkholes at Ft. Knox's tank training area, and other heavily disturbed karst environments where extreme sedimentation exists, should consider focusing on flow path and splay area management.
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
Estimating site occupancy rates when detection probabilities are less than one
MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.
2002-01-01
Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1. We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
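A hedged sketch of the likelihood behind such occupancy models: with occupancy probability psi and per-visit detection probability p, a site with y detections in K visits contributes psi·Binom(y; K, p), plus (1−psi) if y = 0. The simulated data and optimizer choice below are ours, not the authors' code.

```python
# Minimal zero-inflated binomial occupancy likelihood; data are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(1)
K, n_sites, psi_true, p_true = 5, 200, 0.6, 0.4
occupied = rng.random(n_sites) < psi_true
y = rng.binomial(K, p_true * occupied)               # detection histories

def negloglik(theta):
    psi, p = 1 / (1 + np.exp(-np.asarray(theta)))    # logit -> (0, 1)
    lik = psi * binom.pmf(y, K, p) + (1 - psi) * (y == 0)
    return -np.sum(np.log(lik))

fit = minimize(negloglik, x0=[0.0, 0.0])
print("psi, p =", 1 / (1 + np.exp(-fit.x)))          # should approach 0.6, 0.4
```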
Toni Antikainen; Anti Rohumaa; Christopher G. Hunt; Mari Levirinne; Mark Hughes
2015-01-01
In plywood production, human operators find it difficult to precisely monitor the spread rate of adhesive in real-time. In this study, macroscopic fluorescence was used to estimate spread rate (SR) of urea formaldehyde adhesive on birch (Betula pendula Roth) veneer. This method could be an option when developing automated real-time SR measurement for...
Rating curve uncertainty: A comparison of estimation methods
Mason, Jr., Robert R.; Kiang, Julie E.; Cohn, Timothy A.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
The USGS is engaged in both internal development and collaborative efforts to evaluate existing methods for characterizing the uncertainty of streamflow measurements (gaugings), stage-discharge relations (ratings), and, ultimately, the streamflow records derived from them. This paper provides a brief overview of two candidate methods that may be used to characterize the uncertainty of ratings, and illustrates the results of their application to the ratings of the two USGS streamgages.
Treberg, Jason R; Killen, Shaun S; MacCormack, Tyson J; Lamarre, Simon G; Enders, Eva C
2016-12-01
Metabolic costs are central to individual energy budgets, making estimates of metabolic rate vital to understanding how an organism interacts with its environment as well as the role of species in their ecosystem. Despite the ecological and commercial importance of fishes, there are currently no widely adopted means of measuring field metabolic rate (FMR) in fishes. The lack of recognized methods is in part due to the logistical difficulties of measuring metabolic rates in free swimming fishes. However, further development and refinement of techniques applicable for field-based studies on free swimming animals would greatly enhance the capacity to study fish under environmentally relevant conditions. In an effort to foster discussion in this area, from field ecologists to biochemists alike, we review aspects of energy metabolism and give details on approaches that have been used to estimate energetic parameters in fishes. In some cases, the techniques have been applied to field conditions; in others, the methods have been primarily used on laboratory-held fishes but should be applicable, with validation, to fishes in their natural environment. Limitations, experimental considerations and caveats of these measurements and the study of metabolism in wild fishes in general are also discussed. Potential novel approaches to FMR estimates are also presented for consideration. The innovation of methods for measuring field metabolic rate in free-ranging wild fish would revolutionize the study of physiological ecology. Copyright © 2016 Elsevier Inc. All rights reserved.
Analysing malaria drug trials on a per-individual or per-clone basis: a comparison of methods.
Jaki, Thomas; Parry, Alice; Winter, Katherine; Hastings, Ian
2013-07-30
There are a variety of methods used to estimate the effectiveness of antimalarial drugs in clinical trials, invariably on a per-person basis. A person, however, may have more than one malaria infection present at the time of treatment. We evaluate currently used methods for analysing malaria trials on a per-individual basis and introduce a novel method to estimate the cure rate on a per-infection (clone) basis. We used simulated and real data to highlight the differences of the various methods. We give special attention to classifying outcomes as cured, recrudescent (infections that never fully cleared) or ambiguous on the basis of genetic markers at three loci. To estimate cure rates on a per-clone basis, we used the genetic information within an individual before treatment to determine the number of clones present. We used the genetic information obtained at the time of treatment failure to classify clones as recrudescence or new infections. On the per-individual level, we find that the most accurate methods of classification label an individual as newly infected if all alleles are different at the beginning and at the time of failure and as a recrudescence if all or some alleles were the same. The most appropriate analysis method is survival analysis or alternatively for complete data/per-protocol analysis a proportion estimate that treats new infections as successes. We show that the analysis of drug effectiveness on a per-clone basis estimates the cure rate accurately and allows more detailed evaluation of the performance of the treatment. Copyright © 2012 John Wiley & Sons, Ltd.
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
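The bootstrap-percentile idea in the methodology can be sketched with a deliberately simplified slope estimator standing in for the universal-kriging step; the synthetic decay data and resampling settings below are assumptions for illustration.

```python
# Minimal bootstrap-percentile sketch for an inactivation-rate coefficient:
# resample (time, log-survival) pairs, refit a slope each time, and take
# percentiles of the refits. Data are synthetic first-order decay plus noise.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 20)
log_c = -0.35 * t + rng.normal(0, 0.15, t.size)      # noisy log concentrations

def slope(ts, ys):
    return np.polyfit(ts, ys, 1)[0]

boots = []
for _ in range(5_000):
    idx = rng.integers(0, t.size, t.size)            # resample with replacement
    boots.append(slope(t[idx], log_c[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"rate ~ {-slope(t, log_c):.3f} per unit time, 95% CI [{-hi:.3f}, {-lo:.3f}]")
```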
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
Jha, Ashish Kumar
2015-01-01
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of availability of user-friendly software. The routinely used Serum Creatinine method (SrCrM) of GFR estimation also requires the use of online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We have used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We have used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. This software also enables storage and easy retrieval of the raw data, patient information and calculated GFR for further processing and comparison. This is user-friendly software to calculate the GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis.
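Of the serum-creatinine options listed, the Cockcroft-Gault formula is simple enough to show inline; this standalone version is a sketch rather than the authors' software.

```python
# Cockcroft-Gault creatinine clearance: ((140 - age) * weight) / (72 * SCr),
# multiplied by 0.85 for female patients. Inputs here are illustrative.
def cockcroft_gault(age_yr, weight_kg, scr_mg_dl, female):
    crcl = (140 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl           # mL/min

print(round(cockcroft_gault(60, 70, 1.1, female=False), 1), "mL/min")
```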
Several key issues on using 137Cs method for soil erosion estimation
USDA-ARS?s Scientific Manuscript database
This work was to examine several key issues of using the cesium-137 method to estimate soil erosion rates in order to improve and standardize the method. Based on the comprehensive review and synthesis of a large body of published literature and the author’s extensive research experience, several k...
NASA Astrophysics Data System (ADS)
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
VET Program Completion Rates: An Evaluation of the Current Method. Occasional Paper
ERIC Educational Resources Information Center
National Centre for Vocational Education Research (NCVER), 2016
2016-01-01
This work asks one simple question: "how reliable is the method used by the National Centre for Vocational Education Research (NCVER) to estimate projected rates of VET program completion?" In other words, how well do early projections align with actual completion rates some years later? Completion rates are simple to calculate with a…
Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B
2018-01-01
Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity within the presence of noise. Estimating inhaler inhalation flow profiles using audio based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence. PMID:29346430
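The single-recording calibration step lends itself to a short sketch: fit flow = a·envelope^b by least squares in log-log space, then apply the fitted model to new envelopes. The synthetic signals and coefficients below are invented; real envelopes would come from the inhalation audio described above.

```python
# Minimal power-law calibration sketch: fit flow = a * env**b in log-log space.
import numpy as np

rng = np.random.default_rng(3)
env = rng.uniform(0.05, 1.0, 100)                    # acoustic envelope (toy)
flow = 120.0 * env ** 0.55 * rng.lognormal(0, 0.05, env.size)  # L/min (toy)

b, log_a = np.polyfit(np.log(env), np.log(flow), 1)  # log-log least squares
a = np.exp(log_a)
print(f"flow ~ {a:.1f} * env^{b:.2f}")

new_env = np.array([0.2, 0.6])                       # envelopes from new audio
print("estimated flows:", a * new_env ** b)
```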
Small area estimation for estimating the number of infant mortality in West Java, Indonesia
NASA Astrophysics Data System (ADS)
Anggreyani, Arie; Indahwati, Kurnia, Anang
2016-02-01
Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey to provide information regarding birth rate, mortality rate, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the period of five years before the survey was conducted is 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in districts of West Java. SAE is a special model of Generalized Linear Mixed Models (GLMM). In this case, the incidence of infant mortality follows a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. The small area estimation used the basic area-level model. Mean square error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.
Hansson, S.; Rudstam, L. G.; Kitchell, J.F.; Hilden, M.; Johnson, B.L.; Peppard, P.E.
1996-01-01
We compared four different methods for estimating predation rates by North Sea cod (Gadus morhua). Three estimates, based on gastric evacuation rates, came from an ICES multispecies working group and the fourth from a bioenergetics model. The bioenergetics model was developed from a review of literature on cod physiology. The three gastric evacuation rate models produced very different prey consumption estimates for small (2 kg) fish. For most size and age classes, the bioenergetics model predicted food consumption rates intermediate to those predicted by the gastric evacuation models. Using the standard ICES model and the average population abundance and age structure for 1974-1989, annual prey consumption by the North Sea cod population (age greater than or equal to 1) was 840 kilotons. The other two evacuation rate models produced estimates of 1020 and 1640 kilotons, respectively. The bioenergetics model estimate was 1420 kilotons. The major differences between models were due to consumption rate estimates for younger age groups of cod. © 1996 International Council for the Exploration of the Sea
Detecting aseismic strain transients from seismicity data
Llenos, A.L.; McGuire, J.J.
2011-01-01
Aseismic deformation transients such as fluid flow, magma migration, and slow slip can trigger changes in seismicity rate. We present a method that can detect these seismicity rate variations and utilize these anomalies to constrain the underlying variations in stressing rate. Because ordinary aftershock sequences often obscure changes in the background seismicity caused by aseismic processes, we combine the stochastic Epidemic Type Aftershock Sequence model that describes aftershock sequences well and the physically based rate- and state-dependent friction seismicity model into a single seismicity rate model that models both aftershock activity and changes in background seismicity rate. We implement this model into a data assimilation algorithm that inverts seismicity catalogs to estimate space-time variations in stressing rate. We evaluate the method using a synthetic catalog, and then apply it to a catalog of M≥1.5 events that occurred in the Salton Trough from 1990 to 2009. We validate our stressing rate estimates by comparing them to estimates from a geodetically derived slip model for a large creep event on the Obsidian Buttes fault. The results demonstrate that our approach can identify large aseismic deformation transients in a multidecade long earthquake catalog and roughly constrain the absolute magnitude of the stressing rate transients. Our method can therefore provide a way to detect aseismic transients in regions where geodetic resolution in space or time is poor. Copyright 2011 by the American Geophysical Union.
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data as well as a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates and convergence times as little as 400 sec.
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
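A minimal sketch of the modeling step: regress a per-pixel exceedance indicator on covariates with logistic regression and read off P(rain rate > threshold). The covariates and coefficients below are synthetic stand-ins for the radiometer data, and the partial-likelihood treatment of dependence is not reproduced.

```python
# Minimal logistic-regression sketch for P(rain rate > threshold | covariates).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(2_000, 3))                      # per-pixel covariates (toy)
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.3          # assumed true relationship
y = rng.random(2_000) < 1 / (1 + np.exp(-logit))     # exceedance indicator

model = LogisticRegression().fit(X, y)
p_exceed = model.predict_proba(X)[:, 1]              # P(R > threshold) per pixel
print("estimated fractional rainy area:", p_exceed.mean())
```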
Estimation of Stormwater Interception Rate for various LID Facilities
NASA Astrophysics Data System (ADS)
Kim, S.; Lee, O.; Choi, J.
2017-12-01
In this study, a stormwater interception rate is proposed for application in the design of LID facilities. For this purpose, an EPA-SWMM model is built for some areas of the Noksan National Industrial Complex, where long-term stormwater observations were available, and stormwater interception rates for various design capacities of various LID facilities are estimated. While the sensitivity of the stormwater interception rate to the design specifications of bio-retention and infiltration trench facilities is not large, the sensitivity of the stormwater interception rate to local rainfall characteristics is relatively large. Comparing the rainfall interception rate estimation method currently in official use in Korea with the one proposed in this study shows that the current method is highly likely to overestimate the performance of bio-retention and infiltration trench facilities. Finally, new stormwater interception rate formulas for bio-retention and infiltration trench LID facilities are proposed. Acknowledgement This research was supported by a grant (2016000200002) from the Public Welfare Technology Development Program funded by the Ministry of Environment of the Korean government.
Precision and recall estimates for two-hybrid screens
Huang, Hailiang; Bader, Joel S.
2009-01-01
Motivation: Yeast two-hybrid screens are an important method to map pairwise protein interactions. This method can generate spurious interactions (false discoveries), and true interactions can be missed (false negatives). Previously, we reported a capture–recapture estimator for bait-specific precision and recall. Here, we present an improved method that better accounts for heterogeneity in bait-specific error rates. Result: For yeast, worm and fly screens, we estimate the overall false discovery rates (FDRs) to be 9.9%, 13.2% and 17.0% and the false negative rates (FNRs) to be 51%, 42% and 28%. Bait-specific FDRs and the estimated protein degrees are then used to identify protein categories that yield more (or fewer) false positive interactions and more (or fewer) interaction partners. While membrane proteins have been suggested to have elevated FDRs, the current analysis suggests that intrinsic membrane proteins may actually have reduced FDRs. Hydrophobicity is positively correlated with decreased error rates and fewer interaction partners. These methods will be useful for future two-hybrid screens, which could use ultra-high-throughput sequencing for deeper sampling of interacting bait–prey pairs. Availability: All software (C source) and datasets are available as supplemental files and at http://www.baderzone.org under the Lesser GPL v. 3 license. Contact: joel.bader@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19091773
Geng, Xiaobing; Xie, Zhenghui; Zhang, Lijun; Xu, Mei; Jia, Binghao
2018-03-01
An inverse source estimation method is proposed to reconstruct emission rates using local air concentration sampling data. It involves the nonlinear least squares-based ensemble four-dimensional variational data assimilation (NLS-4DVar) algorithm and a transfer coefficient matrix (TCM) created using FLEXPART, a Lagrangian atmospheric dispersion model. The method was tested by twin experiments and experiments with actual Cs-137 concentrations measured around the Fukushima Daiichi Nuclear Power Plant (FDNPP). Emission rates can be reconstructed sequentially with the progression of a nuclear accident, which is important in the response to a nuclear emergency. With pseudo observations generated continuously, most of the emission rates were estimated accurately, except under conditions when the wind blew off land toward the sea and at extremely slow wind speeds near the FDNPP. Because of the long duration of accidents and variability in meteorological fields, monitoring networks composed of land stations only in a local area are unable to provide enough information to support an emergency response. The errors in the estimation compared to the real observations from the FDNPP nuclear accident stemmed from a shortage of observations, lack of data control, and an inadequate atmospheric dispersion model without improvement and appropriate meteorological data. The proposed method should be developed further to meet the requirements of a nuclear emergency response. Copyright © 2017 Elsevier Ltd. All rights reserved.
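The transfer-coefficient-matrix idea reduces, in its simplest form, to a linear inverse problem c = Mq linking sampled concentrations to time-varying emission rates. The sketch below recovers q by nonnegative least squares with a random stand-in for the FLEXPART-derived matrix; the NLS-4DVar machinery itself is not reproduced.

```python
# Hedged sketch of TCM-based source estimation: concentrations c = M @ q, with
# M a stand-in transfer coefficient matrix; recover q by nonnegative LSQ.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(11)
n_obs, n_times = 60, 10
M = rng.random((n_obs, n_times)) * 1e-6              # stand-in TCM (not FLEXPART)
q_true = np.array([0, 1, 5, 20, 12, 8, 3, 1, 0, 0], float)  # emission rates (toy)
c = M @ q_true + rng.normal(0, 1e-6, n_obs)          # noisy sampled concentrations

q_hat, _ = nnls(M, c)                                # nonnegative least squares
print(np.round(q_hat, 1))                            # approximate recovery of q_true
```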
Rainfall Measurement with a Ground Based Dual Frequency Radar
NASA Technical Reports Server (NTRS)
Takahashi, Nobuhiro; Horie, Hiroaki; Meneghini, Robert
1997-01-01
Dual-frequency methods are among the most useful ways to estimate precise rainfall rates. However, there are some difficulties in applying this method to ground-based radars because of the existence of a blind zone and possible error in the radar calibration. Because of these problems, supplemental observations such as rain gauges or satellite-link estimates of path-integrated attenuation (PIA) are needed. This study shows how to estimate rainfall rate with a ground-based dual-frequency radar using rain gauge and satellite-link data. Application of this method to stratiform rainfall is also shown, and the method is compared with the single-wavelength method. Data were obtained from a dual-frequency (10 GHz and 35 GHz) multiparameter radar-radiometer built by the Communications Research Laboratory (CRL), Japan, and located at NASA/GSFC during the spring of 1997. Optical rain gauge (ORG) data and broadcasting-satellite signal data near the radar location were also utilized for the calculation.
NASA Astrophysics Data System (ADS)
Sicot, G.; Lennon, M.; Miegebielle, V.; Dubucq, D.
2015-08-01
The thickness and the emulsion rate of an oil spill are two key parameters for designing a tailored response to an oil discharge. If estimated on a per-pixel basis at high spatial resolution, the oil thickness allows the volume of pollutant to be estimated; that volume is needed to evaluate the magnitude of the pollution and to determine the most suitable recovery means. The estimated spatial distribution of thicknesses also allows recovery operations at sea to be guided. The emulsion rate can guide the strategy for dealing with an offshore oil spill: the efficiency of dispersants, for example, differs between pure oil and an emulsion. Moreover, the thickness and emulsion rate together allow the amount of oil that has been discharged to be estimated. The shape of the reflectance spectrum of oil in the SWIR range (1000-2500 nm) varies according to the emulsion rate and the layer thickness. That shape still varies when the oil layer reaches a few millimetres, which is not the case in the visible range (400-700 nm), where the spectral variation saturates around 200 μm (the upper limit of the Bonn Agreement oil appearance code). In that context, hyperspectral imagery in the SWIR range shows a high potential to describe and characterize oil spills. Previous methods for estimating these two parameters are based on the use of a spectral library. In this paper, we present a method based on the inversion of a simple radiative transfer model in the oil layer. We show that the proposed method is robust against another parameter that affects the reflectance spectrum: the size of water droplets in the emulsion. The method gives relevant results on laboratory measurements, equivalent to those obtained with spectral-library methods. It has the advantage of removing the need for a spectral library and of providing per-pixel maps of thickness and emulsion rate. The maps obtained are not composed of discrete regions of thickness ranges, such as those produced from the discretized levels of a spectral library or from visual observations following the Bonn Agreement oil appearance code.
The estimation of galactic cosmic ray penetration and dose rates
NASA Technical Reports Server (NTRS)
Burrell, M. O.; Wright, J. J.
1972-01-01
This study is concerned with approximation methods that can be readily applied to estimate the absorbed dose rate from cosmic rays, in rads (tissue) or rems, inside simple aluminum geometries. The present work is limited to finding the dose rate at the center of spherical shells or behind plane slabs. The dose rate is calculated at tissue-point detectors or for thin layers of tissue. The study considers cosmic-ray dose rates for both free-space and earth-orbiting missions.
[Proposed method to estimate underreporting of induced abortion in Spain].
Rodríguez Blas, C; Sendra Gutiérrez, J M; Regidor Poyatos, E; Gutiérrez Fisac, J L; Iñigo Martínez, J
1994-01-01
In Spain, from 1987 to 1990 the rate of legal abortion reported to the health authorities doubled; nevertheless, the observed geographical differences suggest underreporting of the number of voluntary pregnancy terminations. Based on information on several sociodemographic, economic, and cultural characteristics, contraceptive use, availability of abortion services, fertility indices, and maternal and child health status, five homogeneous groups of autonomous regions were identified by applying factor and cluster analysis techniques. To estimate the level of underreporting, we assumed that all regions forming a cluster should have the same abortion rate as the region with the highest rate in that group. We estimate that about 18,463 abortions (33.2%) were not reported during 1990. The proposed method can be used to assess notification, since it identifies geographical areas where very similar rates of legal abortion are expected.
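The cluster-based correction reduces to a simple computation: within each cluster, the highest reported rate is taken as the expected rate for every member region, and the shortfall is attributed to underreporting. A minimal sketch with invented regions, clusters, rates, and population counts:

```python
# Hedged sketch of the max-rate-per-cluster underreporting estimate.
import pandas as pd

df = pd.DataFrame({
    "region":  ["A", "B", "C", "D", "E", "F"],
    "cluster": [1, 1, 1, 2, 2, 2],
    "rate":    [4.1, 5.3, 6.0, 2.2, 3.5, 3.1],      # reported abortions per 1000 women
    "women":   [120e3, 90e3, 150e3, 80e3, 60e3, 110e3],
})
df["expected_rate"] = df.groupby("cluster")["rate"].transform("max")
df["underreported"] = (df["expected_rate"] - df["rate"]) * df["women"] / 1000.0
print(df[["region", "underreported"]])
print("total underreported:", round(df["underreported"].sum()))
```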
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate the density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
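The single-sensor detection-probability step can be illustrated with a hedged Monte Carlo sketch built around the passive sonar equation. The source-level distribution, transmission-loss model, noise level, and logistic detector curve below are illustrative assumptions, not the calibrated inputs used in the study:

```python
# Monte Carlo sketch: average probability of detecting a click at one sensor.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
SL = rng.normal(200.0, 10.0, n)              # source level (dB re 1 uPa @ 1 m), assumed
r = rng.uniform(100.0, 6000.0, n)            # slant range to the animal (m), assumed
TL = 20.0 * np.log10(r) + 0.03 * r / 1000.0  # spherical spreading + absorption (assumed)
NL = 70.0                                    # noise level (dB), assumed
snr = SL - TL - NL                           # passive sonar equation (directivity omitted)

# Detector characterization: detection probability vs. SNR, here a logistic
# curve with an assumed 12 dB half-detection point.
p_det = 1.0 / (1.0 + np.exp(-(snr - 12.0) / 2.0))
print("mean detection probability:", p_det.mean())
```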
England, M L; Broderick, G A; Shaver, R D; Combs, D K
1997-11-01
Ruminally undegraded protein (RUP) values of blood meal (n = 2), hydrolyzed feather meal (n = 2), fish meal (n = 2), meat and bone meal, and soybean meal were estimated using an in situ method, an inhibitor in vitro method, and an inhibitor in vitro technique applying Michaelis-Menten saturation kinetics. Degradation rates for the in situ and inhibitor in vitro methods were calculated by regression of the natural log of the proportion of crude protein (CP) remaining undegraded versus time. Nonlinear regression analysis of the integrated Michaelis-Menten equation was used to determine the maximum velocity, the Michaelis constant, and the degradation rate (the ratio of maximum velocity to the Michaelis constant). A ruminal passage rate of 0.06/h was assumed in the calculation of RUP. The in situ and inhibitor in vitro techniques yielded similar estimates of ruminal degradation. Mean RUP values estimated for soybean meal, blood meal, hydrolyzed feather meal, fish meal, and meat and bone meal were, respectively, 28.6, 86.0, 77.4, 52.9, and 52.6% of CP by the in situ method and 26.4, 86.1, 76.0, 59.6, and 49.5% of CP by the inhibitor in vitro technique. The Michaelis-Menten inhibitor in vitro technique yielded more rapid CP degradation rates and lower estimates of RUP. The inhibitor in vitro method required less time and labor than the other two techniques to estimate the RUP values of animal by-product proteins. Results from in vitro incubations with pepsin-HCl suggested that low postruminal digestibility of hydrolyzed feather meal may impair its value as a source of RUP.
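The Michaelis-Menten treatment can be sketched as follows: fit the maximum velocity and Michaelis constant to the integrated MM equation, take the degradation rate as their ratio, and combine it with the assumed 0.06/h passage rate to obtain RUP. The time and CP-remaining data below are fabricated for illustration:

```python
# Sketch of fitting the integrated Michaelis-Menten equation to degradation data.
import numpy as np
from scipy.optimize import curve_fit

t_obs = np.array([0.0, 3.9, 8.2, 13.2, 20.1, 25.6])   # incubation time (h), fabricated
S_obs = np.array([100.0, 80.0, 60.0, 40.0, 20.0, 10.0])  # % CP remaining, fabricated
S0 = S_obs[0]

def t_of_S(S, Vmax, Km):
    # integrated MM equation, solved for time: Vmax*t = S0 - S + Km*ln(S0/S)
    return (S0 - S + Km * np.log(S0 / S)) / Vmax

(Vmax, Km), _ = curve_fit(t_of_S, S_obs, t_obs, p0=(5.0, 30.0))
kd = Vmax / Km              # degradation rate = ratio named in the abstract (1/h)
kp = 0.06                   # assumed ruminal passage rate (1/h)
rup = 100.0 * kp / (kp + kd)
print(f"Vmax = {Vmax:.1f}, Km = {Km:.1f}, kd = {kd:.3f}/h, RUP = {rup:.1f}% of CP")
```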
Sparse Covariance Matrix Estimation With Eigenvalue Constraints
LIU, Han; WANG, Lie; ZHAO, Tuo
2014-01-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
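A minimal sketch of the estimator's two ingredients, under stated assumptions: an ADMM-style loop that alternates entrywise soft-thresholding (for sparsity) with a projection clipping eigenvalues at a floor (for positive definiteness). The penalty lam, floor eps, and step rho are illustrative, and this simplified loop is not the paper's exact algorithm:

```python
# Soft-thresholding + eigenvalue projection via a simplified ADMM loop.
import numpy as np

def soft(A, t):
    # entrywise soft-thresholding; diagonal left untouched, as is common
    S = np.sign(A) * np.maximum(np.abs(A) - t, 0.0)
    np.fill_diagonal(S, np.diag(A))
    return S

def eig_clip(A, eps):
    # Frobenius projection onto symmetric matrices with eigenvalues >= eps
    w, V = np.linalg.eigh((A + A.T) / 2.0)
    return (V * np.maximum(w, eps)) @ V.T

def sparse_cov(S_hat, lam=0.1, eps=1e-3, rho=1.0, n_iter=300):
    Z = S_hat.copy()
    U = np.zeros_like(S_hat)
    for _ in range(n_iter):
        Sigma = eig_clip((S_hat + rho * (Z - U)) / (1.0 + rho), eps)  # constraint step
        Z = soft(Sigma + U, lam / rho)                                # sparsity step
        U = U + Sigma - Z                                             # dual update
    return Z   # at convergence Sigma and Z coincide

X = np.random.default_rng(0).normal(size=(100, 20))
Sig = sparse_cov(np.cov(X.T))
print("min eigenvalue:", np.linalg.eigvalsh(Sig).min())
```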
STIR Version 1.0 User's Guide for Pesticide Inhalation Risk
STIR estimates inhalation-type exposure based on pesticide-specific information. It also estimates spray droplet exposure using the application method and rate and then compares these exposure estimates to avian and mammalian toxicity data.
Olariu, Victor; Manesso, Erica; Peterson, Carsten
2017-06-01
Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis-Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming.
Estimation of Blood Flow Rates in Large Microvascular Networks
Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.
2012-01-01
Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
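The indeterminacy-resolving idea can be sketched as a stacked least-squares problem: conservation at nodes and the known boundary flows are combined with a weak prior pulling each segment flow toward a target derived from typical wall shear stresses. The tiny network, target values, and weight below are invented:

```python
# Hedged sketch: flow estimation in a small network with incomplete boundary data.
import numpy as np

# 3 interior nodes, 5 segments; signs encode assumed flow directions.
A = np.array([[ 1, -1, -1,  0,  0],    # node 1: seg 1 in, segs 2 and 3 out
              [ 0,  1,  0, -1,  0],    # node 2
              [ 0,  0,  1,  0, -1]])   # node 3
b = np.zeros(3)                        # mass conservation at each node
B = np.array([[1, 0, 0, 0, 0]])        # only one boundary flow is known
qb = np.array([10.0])                  # known inflow (nl/s), invented

q_tgt = np.full(5, 2.0)   # target flows implied by assumed shear stresses (nl/s)
lam = 0.1                 # weight of the shear/pressure prior

M = np.vstack([A, B, np.sqrt(lam) * np.eye(5)])
rhs = np.concatenate([b, qb, np.sqrt(lam) * q_tgt])
q, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # resolves the underdetermined system
print("estimated segment flows:", q.round(2))
```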
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness
2015-01-01
Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation rely on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge of the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. Using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how another existing popular QSR method, ShoRAH, can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5%, respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073
Sanaka, Masaki; Yamamoto, Takatsugu; Ishii, Tarou; Kuyama, Yasushi
2004-01-01
In pharmacokinetics, the Wagner-Nelson (W-N) method can accurately estimate the rate of drug absorption from its urinary elimination rate. A stable isotope (13C) breath test attempts to estimate the rate of absorption of 13C, as an index of gastric emptying rate, from the rate of pulmonary elimination of 13CO2. The time-gastric emptying curve determined by the breath test is quite different from that determined by scintigraphy or ultrasonography. In this report, we have shown that the W-N method can adjust the difference. The W-N equation to estimate gastric emptying from breath data is as follows: the fractional cumulative amount of gastric contents emptied by time t = A_breath(t)/A_breath(∞) + (1/0.65)·d[A_breath(t)/A_breath(∞)]/dt, where A_breath(t) is the cumulative recovery of 13CO2 in breath by time t and A_breath(∞) is the ultimate cumulative 13CO2 recovery. The emptying flow curve generated by ultrasonography was compared with that generated by the W-N method-adjusted breath test in 6 volunteers. The emptying curves by the W-N method were almost identical to those by ultrasound. The W-N method can generate an accurate emptying flow curve from 13CO2 data, and it can adjust the difference between ultrasonography and the breath test. Copyright 2004 S. Karger AG, Basel
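The quoted equation translates directly into code. A minimal sketch, assuming the constant 0.65 is a rate in h^-1 and substituting a fabricated cumulative 13CO2 recovery curve for measured breath data:

```python
# W-N adjusted gastric emptying fraction from cumulative 13CO2 recovery.
import numpy as np

t = np.arange(0.0, 4.01, 0.05)           # time after the test meal (h)
A = 1.0 - np.exp(-t / 1.0)               # fabricated cumulative 13CO2 recovery
F = A / A[-1]                            # approximates A_breath(t)/A_breath(inf)
emptied = F + (1.0 / 0.65) * np.gradient(F, t)   # the W-N equation quoted above
print(emptied[:5].round(3))
```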
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan
2014-09-01
A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curve, the quality of fit of the curve to these measurements, and the constant changes in river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating the associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of the gaugings, and the increase of the uncertainty of discharge estimates with the age of the rating curve, computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km2) is used as an example throughout the paper. Other stations are used to illustrate specific points.
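One building block of the dynamic method, fitting a rating curve to the gaugings available at a given time, can be sketched with an assumed power-law form Q = a(h - h0)^b; the gaugings are invented, and the paper's variographic ageing of the uncertainty is not reproduced:

```python
# Sketch: power-law rating curve fitted to a handful of gaugings.
import numpy as np
from scipy.optimize import curve_fit

h = np.array([0.6, 0.9, 1.3, 1.8, 2.4, 3.1])       # stage (m), invented
Q = np.array([3.0, 9.0, 22.0, 48.0, 95.0, 170.0])  # gauged discharge (m3/s), invented

def rating(h, a, h0, b):
    return a * np.clip(h - h0, 1e-6, None) ** b    # clip keeps the power well-defined

(a, h0, b), cov = curve_fit(rating, h, Q, p0=(10.0, 0.2, 2.0))
sigma = np.sqrt(np.diag(cov))                      # per-parameter 1-sigma uncertainties
print(f"Q = {a:.1f} * (h - {h0:.2f})^{b:.2f}")
```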
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Jones, Jeffrey A.
1997-01-01
One of the TRMM radar products of interest is the monthly-averaged rain rate over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain-rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of converting the individual reflectivity factors to rain rates and then calculating the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so the negative bias of the high-resolution rain-rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
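The fractional-area idea is easy to demonstrate on synthetic fields: for lognormally distributed rain rates, the fraction of pixels above a threshold near 5 mm/h tracks the area-average rate closely. A hedged sketch with fabricated scenes:

```python
# Correlation between fractional area above threshold and area-average rain rate.
import numpy as np

rng = np.random.default_rng(2)
thresh = 5.0                                    # mm/h, the threshold discussed above
frac_area, mean_rain = [], []
for _ in range(200):                            # 200 synthetic lognormal rain scenes
    rain = rng.lognormal(mean=rng.uniform(0.0, 1.5), sigma=1.0, size=2500)
    frac_area.append((rain > thresh).mean())
    mean_rain.append(rain.mean())
r = np.corrcoef(frac_area, mean_rain)[0, 1]
print(f"correlation between fractional area and area-average rate: {r:.3f}")
```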
NASA Astrophysics Data System (ADS)
Yokoi, Toshiyuki; Itoh, Michimasa; Oguri, Koji
Most traffic accidents are caused by an inappropriate mental state of the driver. Driver monitoring is therefore one of the most important challenges in preventing traffic accidents. Several studies on evaluating the driver's mental state while driving have been reported; in the future, however, the mental state should be estimated in real time. This paper proposes a way to quantitatively estimate driver mental workload using heart rate variability. It is assumed that tolerance to mental workload differs between individuals, so we classify people based on their individual tolerance to mental workload. Our estimation method is multiple linear regression analysis, and we compare it with NASA-TLX, which is used as an evaluation method for subjective mental workload. As a result, the coefficient of correlation improved from 0.83 to 0.91, and the standard deviation of the error also improved. Our proposed method therefore demonstrates the possibility of estimating mental workload in real time.
Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick
2009-08-01
Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of three ground-level PICs and two above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the PIC and meteorological data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving-average PICs were used; it decreased to 11% when 10-cycle moving-average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.
An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Ferrari, A. J.
1971-01-01
A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
Hightower, Joseph E.; Pollock, Kenneth H.
2013-01-01
Striped bass Morone saxatilis in inland reservoirs play an important role ecologically and in supporting recreational fishing. To manage these populations, biologists need information about abundance and mortality. Abundance estimates can be used to assess the effectiveness of stocking programs that maintain most reservoir striped bass populations. Mortality estimates can indicate the relative impact of fishing versus natural mortality and the need for harvest regulation. The purpose of this chapter is to evaluate tagging studies as a way of obtaining information about abundance and mortality. These approaches can be grouped into three broad categories: tag recapture, tag return, and telemetry. Tag-recapture methods are typically used to estimate population size and other demographic parameters but are often difficult to apply in large systems. A fishing tournament can be an effective way of generating tagging or recapture effort in large systems, compared to using research sampling only. Tag-return methods that rely on angler harvest and catch and release can be used to estimate fishing (F) and natural (M) mortality rates and are a practical approach in large reservoirs. The key to success in tag-return studies is to build in auxiliary studies to estimate short-term tagging mortality, short- and long-term tag loss, reporting rate, and mortality associated with catch and release. F and M can also be estimated using telemetry tags. Advantages of this approach are that angler nonreporting does not bias estimates and fish with transmitters provide useful ecological data. Cost can be a disadvantage of telemetry studies; thus, combining telemetry tags with conventional tag returns in an integrated analysis is often the optimal approach. In summary, tagging methods can be a powerful tool for assessing the effectiveness of inland striped bass stocking programs and the relative impact of fishing versus natural mortality.
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
Hatch, Christine E; Fisher, Andrew T.; Revenaugh, Justin S.; Constantz, Jim; Ruehl, Chris
2006-01-01
We present a method for determining streambed seepage rates using time series thermal data. The new method is based on quantifying changes in phase and amplitude of temperature variations between pairs of subsurface sensors. For a reasonable range of streambed thermal properties and sensor spacings, the time series method should allow reliable estimation of seepage rates over a range of at least ±10 m d−1 (±1.2 × 10−4 m s−1), with amplitude variations being most sensitive at low flow rates and phase variations retaining sensitivity out to much higher rates. Compared to forward modeling, the new method requires less observational data and less setup and data handling and is faster, particularly when interpreting many long data sets. The time series method is insensitive to streambed scour and sedimentation, which allows for application under a wide range of flow conditions and allows time series estimation of variable streambed hydraulic conductivity. This new approach should facilitate wider use of thermal methods and improve understanding of the complex spatial and temporal dynamics of surface water–groundwater interactions.
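Extracting the two quantities the method inverts, amplitude ratio and phase shift at the diurnal frequency between a sensor pair, can be sketched with an FFT; the temperature records below are synthetic, and the final conversion to a seepage rate is not shown:

```python
# Amplitude ratio and phase lag between paired streambed temperature records.
import numpy as np

dt = 0.25                                    # sampling interval (h)
t = np.arange(0.0, 240.0, dt)                # ten days of records
f_d = 1.0 / 24.0                             # diurnal frequency (1/h)
shallow = 15.0 + 3.0 * np.sin(2 * np.pi * f_d * t)
deep = 15.0 + 1.2 * np.sin(2 * np.pi * f_d * t - 0.9)   # damped and lagged at depth

def diurnal(x):
    # complex Fourier coefficient at the diurnal frequency bin
    X = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return X[np.argmin(np.abs(freqs - f_d))]

Xs, Xd = diurnal(shallow), diurnal(deep)
print("amplitude ratio:", abs(Xd) / abs(Xs))                     # ~0.4
print("phase lag (h):", -np.angle(Xd / Xs) / (2 * np.pi * f_d))  # ~3.4
```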
Hallisey, Elaine; Tai, Eric; Berens, Andrew; Wilt, Grete; Peipins, Lucy; Lewis, Brian; Graham, Shannon; Flanagan, Barry; Lunsford, Natasha Buchanan
2017-08-07
Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates. The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland-Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates. This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
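The winning scheme, combined population and areal weighting, can be sketched compactly: a county's deaths are allocated to a zone in proportion to the share of the county's population that falls inside the zone, with tract populations areally weighted by their fractional overlap. All numbers below are invented:

```python
# Sketch of combined population and areal weighting for zone mortality counts.

# tracts: (county, tract population, fraction of tract area inside the zone)
tracts = [("X", 4000, 1.0), ("X", 6000, 0.25), ("Y", 5000, 0.0), ("Y", 3000, 0.8)]
deaths = {"X": 12, "Y": 9}   # county-level mortality counts, invented

zone_estimate = 0.0
for county, count in deaths.items():
    pop_in = sum(p * f for c, p, f in tracts if c == county)   # areally weighted pop in zone
    pop_all = sum(p for c, p, f in tracts if c == county)      # total county population
    zone_estimate += count * pop_in / pop_all                  # population-share allocation
print(f"estimated deaths in zone: {zone_estimate:.1f}")
```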
Compare diagnostic tests using transformation-invariant smoothed ROC curves
Tang, Liansheng; Du, Pang; Wu, Chengqing
2012-01-01
The receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results. It is also often a curve with a certain level of smoothness when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates, which makes the derivation of asymptotic properties very difficult. In this paper we propose a penalized weighted least squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI
2012-05-29
A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
A comparative study of clock rate and drift estimation
NASA Technical Reports Server (NTRS)
Breakiron, Lee A.
1994-01-01
Five different methods of drift determination and four different methods of rate determination were compared using months of hourly phase and frequency data from a sample of cesium clocks and active hydrogen masers. Linear least squares on frequency is selected as the optimal method of determining both drift and rate, more on the basis of parameter parsimony and confidence measures than on random and systematic errors.
Haji-Maghsoudi, Saiedeh; Haghdoost, Ali-akbar; Rastegari, Azam; Baneshi, Mohammad Reza
2013-01-01
Background: Policy makers need models to be able to detect groups at high risk of HIV infection. Incomplete records and dirty data are frequently seen in national data sets. The presence of missing data challenges model development. Several studies have suggested that the performance of imputation methods is acceptable when the missing rate is moderate. One issue that has received less attention, addressed here, is the role of the pattern of missing data. Methods: We used information on 2720 prisoners. Results derived from fitting a regression model to the complete data served as the gold standard. Missing data were then generated so that 10%, 20% and 50% of data were lost. In scenario 1, we generated missing values, at the above rates, in one variable that was significant in the gold-standard model (age). In scenario 2, a small proportion of each independent variable was dropped. Four imputation methods, under different Events Per Variable (EPV) values, were compared in terms of selection of important variables and parameter estimation. Results: In scenario 2, bias in the estimates was low and the performances of all methods for handling missing data were similar. All methods at all missing rates were able to detect the significance of age. In scenario 1, biases in the estimates increased, in particular at the 50% missing rate. Here, at EPVs of 10 and 5, imputation methods failed to capture the effect of age. Conclusion: In scenario 2, all imputation methods at all missing rates were able to detect age as being significant. This was not the case in scenario 1. Our results show that the performance of imputation methods depends on the pattern of missing data. PMID:24596839
Nucleation Rate Analysis of Methane Hydrate from Molecular Dynamics Simulations
Yuhara, Daisuke; Barnes, Brian C.; Suh, Donguk; ...
2015-01-06
Clathrate hydrates are solid crystalline structures most commonly formed from solutions that have nucleated to form a mixed solid composed of water and gas. Understanding the mechanism of clathrate hydrate nucleation is essential to grasp the fundamental chemistry of these complex structures and their applications. Molecular dynamics (MD) simulation is an ideal method to study nucleation at the molecular level because the size of the critical nucleus and the formation rate occur on the nanoscale. Various methods have been developed to analyze nucleation in MD simulations; in particular, the mean first-passage time (MFPT) and survival probability (SP) methods have proven effective in procuring the nucleation rate and critical nucleus size for monatomic systems. This study assesses the MFPT and SP methods, previously used for monatomic systems, when applied to analyzing clathrate hydrate nucleation. Because clathrate hydrate nucleation is relatively difficult to observe in MD simulations (due to its high free energy barrier), these methods had not previously been applied to clathrate hydrate systems. In this study, we have analyzed the nucleation rate and critical nucleus size of methane hydrate using the MFPT and SP methods from data generated by MD simulations at 255 K and 50 MPa. MFPT was modified from the original version for clathrate hydrate by adding the maximum likelihood estimate and a growth effect term. The nucleation rates calculated by the MFPT and SP methods agree within 5%; the critical nucleus size estimated by the MFPT method was 50% higher than values obtained through other more rigorous but computationally expensive estimates. These methods can also be extended to the analysis of other clathrate hydrates.
NASA Astrophysics Data System (ADS)
Barbetta, Silvia; Moramarco, Tommaso; Perumal, Muthiah
2017-11-01
Quite often the discharge at a site is estimated using the rating curve developed for that site, and its development requires river flow measurements, which are costly, tedious, and dangerous during severe floods. To circumvent the conventional rating curve development approach, Perumal et al. in 2007 and 2010 applied the Variable Parameter Muskingum Stage-hydrograph (VPMS) routing method to develop stage-discharge relationships, especially at ungauged river sites where stage measurements and details of section geometry are available but discharge measurements are not made. The VPMS method enables rating curves at ungauged river sites to be estimated with acceptable accuracy, but its application is subject to the limitation of negligible lateral flow within the routing reach. To overcome this limitation, this study proposes an extension of the VPMS method, henceforth known as the VPMS-Lin method, enabling streamflow assessment even when significant lateral inflow occurs along the river reach considered for routing. The lateral inflow is estimated through the continuity equation expressed in characteristic form, as advocated by Barbetta et al. in 2012. The VPMS-Lin method is tested on two rivers characterized by different geometric and hydraulic properties: (1) a 50 km reach of the Tiber River in central Italy and (2) a 73 km reach of the Godavari River in peninsular India. The study demonstrates that both the upstream and downstream discharge hydrographs are well reproduced, with a root mean square error equal on average to about 35 and 1700 m3 s-1 for the Tiber River and the Godavari River case studies, respectively. Moreover, simulation studies carried out on a stretch of the Tiber River using the one-dimensional hydraulic model MIKE11 and the VPMS-Lin model demonstrate the accuracy of the VPMS-Lin model, which, besides enabling the estimation of streamflow, also enables the estimation of reach-averaged optimal roughness coefficients for the considered routing events.
Quantifying the flow rate of the Deepwater Horizon Macondo Well oil spill
NASA Astrophysics Data System (ADS)
Camilli, R.; Bowen, A.; Yoerger, D. R.; Whitcomb, L. L.; Techet, A. H.; Reddy, C. M.; Sylva, S.; Seewald, J.; di Iorio, D.; Whoi Flow Rate Measurement Group
2010-12-01
The Deepwater Horizon blowout in the Mississippi Canyon block 252 of the Gulf of Mexico created the largest recorded offshore oil spill. The well outflow’s multiple leak sources, turbulent multiphase flow, tendency for hydrate formation, and extreme source depth of 1500 m below the sea surface complicated the quantitative estimation of oil and gas leakage rates. We present methods and results from a U.S. Coast Guard sponsored flow assessment study of the Deepwater Horizon’s damaged blowout preventer and riser. This study utilized a remotely operated vehicle equipped with in-situ acoustic sensors (a Doppler sonar and an imaging multibeam sonar) and isobaric gas-tight fluid samplers to directly measure outflow from the damaged well. Findings from this study indicate oil release rates and total release volume estimates that corroborate estimates made by the federal government’s Flow Rate Technical Group using non-acoustic techniques. The acoustic survey methods reported here provide a means for estimating fluid flow rates in subsurface environments and are potentially useful for a diverse range of oceanographic applications. [Figure caption: Photograph of the Discoverer Enterprise burning natural gas collected from the Macondo well blowout preventer during flow measurement operations. Copyright Woods Hole Oceanographic Institution.]
NASA Technical Reports Server (NTRS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.;
2016-01-01
This article provides supplemental information for a Letter reporting the rate of binary black hole (BBH) coalescences inferred from 16 days of coincident Advanced LIGO observations surrounding the transient gravitational-wave (GW) signal GW150914. In that work we reported various rate estimates whose 90% confidence intervals fell in the range 2-600 Gpc^-3 yr^-1. Here we give details on our method and computations, including information about our search pipelines, a derivation of our likelihood function for the analysis, a description of the astrophysical search trigger distribution expected from merging BBHs, details on our computational methods, a description of the effects of and our model for calibration uncertainty, and an analytic method for estimating our detector sensitivity, which is calibrated to our measurements.
Heart rate detection from an electronic weighing scale.
González-Landaeta, R; Casas, O; Pallàs-Areny, R
2007-01-01
We propose a novel technique for heart rate detection in a subject standing on a common electronic weighing scale. The detection relies on sensing force variations related to blood acceleration in the aorta, works even if the subject is wearing footwear, and does not require any sensors attached to the body. We applied our method to three different weighing scales and assessed whether their sensitivity and frequency response suited heart rate detection. Scale sensitivities ranged from 490 nV/V/N to 1670 nV/V/N; all had an underdamped transient response, and their dynamic gain error was below 19% at 10 Hz, which are acceptable values for heart rate estimation. We also designed a pulse detection system based on off-the-shelf integrated circuits, with a gain of about 70×10^3, able to sense force variations of about 240 mN. The signal-to-noise ratio (SNR) of the main peaks of the detected pulse signal was higher than 48 dB, which is large enough to estimate the heart rate by simple signal processing methods. To validate the method, the ECG and the force signal were simultaneously recorded in 12 volunteers. The maximal error between heart rates determined from these two signals was +/-0.6 beats/minute.
Robust Characterization of Loss Rates
NASA Astrophysics Data System (ADS)
Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph
2015-08-01
Many physical implementations of qubits—including ion traps, optical lattices and linear optics—suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on estimated parameters from randomized benchmarking that improves the reliability of the estimated error rate and provides a new indicator for non-Markovian signatures in the experimental data. We also derive a bound for the state-dependent loss rate in terms of the average loss rate.
A simplified method for assessing particle deposition rate in aircraft cabins
NASA Astrophysics Data System (ADS)
You, Ruoyu; Zhao, Bin
2013-03-01
Particle deposition in aircraft cabins is important for the exposure of passengers to particulate matter, as well as for the transmission of airborne infectious diseases. In this study, a simplified method is proposed for an initial, quick assessment of the particle deposition rate in aircraft cabins. The method includes: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity based on the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully occupied, half-occupied, quarter-occupied, and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, a simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison results show that the particle deposition rates of aircraft cabins and indoor environments are quite similar.
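A hedged sketch of the four-step assessment follows. The flat-plate friction-velocity relation and the deposition-velocity curve are generic placeholders for the paper's empirical equations, which the abstract does not reproduce:

```python
# Sketch of the four-step deposition-rate assessment with placeholder relations.
import numpy as np

surfaces = [   # (orientation, area m2, characteristic length m, freestream U m/s), invented
    ("floor",   20.0, 5.0, 0.15),
    ("ceiling", 20.0, 5.0, 0.20),
    ("walls",   35.0, 2.0, 0.10),
]
V = 50.0   # cabin air volume (m3), invented

def friction_velocity(U, L, nu=1.5e-5):
    Re = U * L / nu                        # flat-plate style estimate (assumed form)
    cf = 0.074 * Re ** -0.2
    return U * np.sqrt(cf / 2.0)

def deposition_velocity(u_star, orientation):
    # placeholder empirical curve: gravity helps on floors, hinders on ceilings
    factor = {"floor": 3.0, "walls": 1.0, "ceiling": 0.3}[orientation]
    return 0.01 * u_star * factor          # m/s

k = sum(deposition_velocity(friction_velocity(U, L), o) * A
        for o, A, L, U in surfaces) / V    # deposition rate = sum(v_d * A) / V
print(f"deposition rate: {3600 * k:.3f} 1/h")
```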
Density estimation in wildlife surveys
Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John
2004-01-01
Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
Real-time estimation of BDS/GPS high-rate satellite clock offsets using sequential least squares
NASA Astrophysics Data System (ADS)
Fu, Wenju; Yang, Yuanxi; Zhang, Qin; Huang, Guanwen
2018-07-01
The real-time precise satellite clock product is one of the key prerequisites for real-time Precise Point Positioning (PPP). The accuracy of the 24-hour predicted satellite clock product with a 15 min sampling interval and an update interval of 6 h provided by the International GNSS Service (IGS) is only 3 ns, which cannot meet the needs of all real-time PPP applications. Real-time estimation of high-rate satellite clock offsets is an efficient way to improve the accuracy. In this paper, a sequential least squares method for estimating real-time satellite clock offsets at a high sample rate is proposed; it improves the computational speed by applying an optimized sparse matrix operation to compute the normal equation and by using special measures to take full advantage of modern computer power. The method is first applied to the BeiDou Navigation Satellite System (BDS) and provides real-time estimation at a 1 s sample rate. The results show that the time taken to process a single epoch is about 0.12 s using 28 stations. The Standard Deviation (STD) and Root Mean Square (RMS) of the real-time estimated BDS satellite clock offsets are 0.17 ns and 0.44 ns, respectively, when compared with German Research Center for Geosciences (GFZ) final clock products. The positioning performance of the real-time estimated satellite clock offsets is evaluated. The RMSs of the real-time BDS kinematic PPP in the east, north, and vertical components are 7.6 cm, 6.4 cm and 19.6 cm, respectively. The method is also applied to the Global Positioning System (GPS) with a 10 s sample rate, and the computational time of most epochs is less than 1.5 s with 75 stations. The STD and RMS of the real-time estimated GPS satellite clocks are 0.11 ns and 0.27 ns, respectively. Accuracies of 5.6 cm, 2.6 cm and 7.9 cm in the east, north, and vertical components are achieved for the real-time GPS kinematic PPP.
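The sequential aspect can be sketched generically: the normal equations are accumulated epoch by epoch, so each update costs one small solve rather than a full batch refit. The design matrix below is a random placeholder for the paper's linearized BDS/GPS clock model, and the sparse-matrix optimization is omitted:

```python
# Sequential least squares via accumulated normal equations (generic sketch).
import numpy as np

rng = np.random.default_rng(3)
n_par = 8                                  # e.g., clock parameters to estimate
x_true = rng.normal(size=n_par)
N = np.zeros((n_par, n_par))               # accumulated normal matrix
u = np.zeros(n_par)                        # accumulated right-hand side

for epoch in range(100):                   # one iteration per sampling epoch
    A = rng.normal(size=(20, n_par))       # placeholder epoch design matrix
    y = A @ x_true + rng.normal(0.0, 0.1, 20)
    N += A.T @ A                           # cheap incremental update
    u += A.T @ y
    x_hat = np.linalg.solve(N, u)          # current-epoch estimate

print("final max error:", np.abs(x_hat - x_true).max())
```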
Improved arrival-date estimates of Arctic-breeding Dunlin (Calidris alpina arcticola)
Doll, Andrew C.; Lanctot, Richard B.; Stricker, Craig A.; Yezerinac, Stephen M.; Wunder, Michael B.
2015-01-01
The use of stable isotopes in animal ecology depends on accurate descriptions of isotope dynamics within individuals. The prevailing assumption that laboratory-derived isotopic parameters apply to free-living animals is largely untested. We used stable carbon isotopes (δ13C) in whole blood from migratory Dunlin (Calidris alpina arcticola) to estimate an in situ turnover rate and individual diet-switch dates. Our in situ results indicated that turnover rates were higher in free-living birds, in comparison to the results of an experimental study on captive Dunlin and estimates derived from a theoretical allometric model. Diet-switch dates from all 3 methods were then used to estimate arrival dates to the Arctic; arrival dates calculated with the in situ turnover rate were later than those with the other turnover-rate estimates, substantially so in some cases. These later arrival dates matched dates when local snow conditions would have allowed Dunlin to settle, and agreed with anticipated arrival dates of Dunlin tracked with light-level geolocators. Our study presents a novel method for accurately estimating arrival dates for individuals of migratory species in which return dates are difficult to document. This may be particularly appropriate for species in which extrinsic tracking devices cannot easily be employed because of cost, body size, or behavioral constraints, and in habitats that do not allow individuals to be detected easily upon first arrival. Thus, this isotopic method offers an exciting alternative approach to better understand how species may be altering their arrival dates in response to changing climatic conditions.
Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.
2012-01-01
Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliadis, C.; Anderson, K. S.; Coc, A.
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
NASA Astrophysics Data System (ADS)
Zapata, D.; Salazar, M.; Chaves, B.; Keller, M.; Hoogenboom, G.
2015-12-01
Thermal time models have been used to predict the development of many different species, including grapevine (Vitis vinifera L.). These models normally assume that there is a linear relationship between temperature and plant development. The goal of this study was to estimate the base temperature and duration in terms of thermal time for predicting veraison for four grapevine cultivars. Historical phenological data for four cultivars that were collected in the Pacific Northwest were used to develop the thermal time model. Base temperatures (Tb) of 0 and 10 °C and the best estimated Tb using three different methods were evaluated for predicting veraison in grapevine. Thermal time requirements for each individual cultivar were evaluated through analysis of variance, and means were compared using Fisher's test. The methods that were applied to estimate Tb for the development of wine grapes included the least standard deviation in heat units, the regression coefficient, and the development rate method. The estimated Tb varied among methods and cultivars. The development rate method provided the lowest Tb values for all cultivars. For the three methods, Chardonnay had the lowest Tb, ranging from 8.7 to 10.7 °C, while the highest Tb values were obtained for Riesling and Cabernet Sauvignon with 11.8 and 12.8 °C, respectively. Thermal time also differed among cultivars, whether the fixed or estimated Tb was used. Predictions of the beginning of ripening with the estimated Tb resulted in the lowest variation in real days when compared with predictions using Tb = 0 or 10 °C, regardless of the method that was used to estimate Tb.
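Of the three criteria, the least-standard-deviation-in-heat-units method is the simplest to illustrate. The Python sketch below assumes daily mean temperatures and an observed veraison day for each season; the candidate grid and data structures are illustrative assumptions, not the study's implementation.

    import numpy as np

    def thermal_time(daily_tmean, veraison_doy, t_base):
        # cumulative heat units from day 1 to veraison for one season
        gdd = np.clip(daily_tmean[:veraison_doy] - t_base, 0.0, None)
        return gdd.sum()

    def best_base_temperature(seasons, candidates=np.arange(0.0, 15.1, 0.1)):
        # seasons: list of (daily_tmean_array, veraison_day_of_year), one per year.
        # Returns the Tb that minimizes the across-year SD of thermal time,
        # i.e., the Tb making the heat-unit requirement most stable over years.
        sds = [np.std([thermal_time(t, doy, tb) for t, doy in seasons])
               for tb in candidates]
        return candidates[int(np.argmin(sds))]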
A parallel implementation of a multisensor feature-based range-estimation method
NASA Technical Reports Server (NTRS)
Suorsa, Raymond E.; Sridhar, Banavar
1993-01-01
Many vision-based methods have been proposed to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second, depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and a shared-memory parallel computer.
Guo, Xiaoting; Sun, Changku; Wang, Peng
2017-08-01
This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high-frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed to handle the sampling-rate discrepancy. Between samples of the slow observation data, the observation noise covariance can be regarded as infinite, the Kalman gain is unknown and approaches zero, and the residual is unknown, so the filter's estimated state cannot be compensated. To obtain compensation at these moments, the state-error and residual formulas are modified relative to the moments when observation data are available. A self-propagation equation for the state error is established to propagate this quantity from the moments with observations to the moments without them. In addition, a multiplicative adjustment factor is introduced as the Kalman gain, which acts on the residual. The filter's estimated state can then be compensated even when no visual observation data are available. The proposed method is tested and verified in a practical setup. Compared with a multi-rate CKF without residual compensation and a single-rate CKF, a significant improvement in attitude measurement is obtained by using the proposed multi-rate CKF with inter-sample residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
U.S. broiler housing ammonia emissions inventory
NASA Astrophysics Data System (ADS)
Gates, R. S.; Casey, K. D.; Wheeler, E. F.; Xin, H.; Pescatore, A. J.
Using recently published baseline ammonia emissions data for U.S. broiler chicken housing, we present a method of estimating their contribution to an annual ammonia budget that differs from that used by the USEPA. The emission rate increases linearly with flock age, from near zero at the start of the flock to a maximum at the end of the flock, 28-65 days later. Market weight of chickens raised for meat varies from "broilers" weighing about 2 kg to "roasters" weighing about 3 kg. Multiple flocks of birds are grown in a single house annually, with variable downtime to prepare the house between flocks. The method takes into account the weight and number of chickens marketed. Uncertainty in baseline emissions estimates is used so that inventory estimates are provided with error estimates. The method also incorporates the condition of the litter that birds are raised upon and the varying market weight of birds grown. Using 2003 USDA data on broiler production numbers, broiler housing is estimated to contribute 8.8-11.7 kT ammonia for new and built-up litter, respectively, in Kentucky and 240-324 kT ammonia for new and built-up litter, respectively, nationally. Results suggest that a 10% uncertainty in annual emission rate is expected for the market weight categories of broilers, heavy broilers, and roasters. A 27-47% reduction in annual housing emission rate is predicted if new rather than built-up litter were used for every flock. The estimating method can be adapted to other meat bird building emissions and future ammonia emission strategies, with suitable insertion of an age-dependent emission factor or slope into a predictive model equation. The method can be readily applied and is an alternative to that used by the USEPA.
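The core of the budget is simple arithmetic: integrate a linear age-dependent emission rate over each flock, then scale by birds per flock and flocks per year. The sketch below assumes a slope-style emission factor in g NH3 per bird per day of age; the numerical slope is an illustrative placeholder, not the published baseline value, and the uncertainty and litter-condition terms of the paper are omitted.

    def annual_nh3_emission_kg(birds_per_flock, flock_days, flocks_per_year, slope_g):
        # slope_g: age-dependent emission factor (g NH3 per bird per day of age),
        # an illustrative stand-in for the published baseline slope
        per_bird_per_flock = slope_g * flock_days**2 / 2.0   # integral of slope*t dt
        total_g = per_bird_per_flock * birds_per_flock * flocks_per_year
        return total_g / 1.0e3                               # kg NH3 per house-year

    # e.g. 25,000 birds/flock, 49-day flocks, 5.5 flocks/yr, slope 0.02 g/bird/day^2
    print(annual_nh3_emission_kg(25000, 49, 5.5, 0.02))      # ~3.3e3 kg NH3 per year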
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
Estimation of evapotranspiration rate in irrigated lands using stable isotopes
NASA Astrophysics Data System (ADS)
Umirzakov, Gulomjon; Windhorst, David; Forkutsa, Irina; Brauer, Lutz; Frede, Hans-Georg
2013-04-01
Agriculture in the Aral Sea basin is the main consumer of water resources, and under current agricultural management practices inefficient water usage causes huge losses of freshwater. There is therefore considerable potential to save water and reach a more efficient water use in irrigated areas, and research is required to reveal the mechanisms of hydrological fluxes in these areas. This paper focuses on the estimation of evapotranspiration, which is one of the crucial components in the water balance of irrigated lands. Our main objective is to estimate the rate of evapotranspiration on irrigated lands and to partition evapotranspiration into evaporation and transpiration using stable isotope measurements. Experiments were conducted on irrigated fields with two different soil types (sandy and sandy loam) in the Ferghana Valley (Uzbekistan). Soil samples were collected during the vegetation period. The soil water was extracted from these samples via a cryogenic extraction method and analyzed for the isotopic ratios of the water isotopes (2H and 18O) using a laser spectroscopy method (DLT-100, Los Gatos, USA). Evapotranspiration rates were estimated with the isotope mass balance method. The evapotranspiration results obtained using the isotope mass balance method are compared with the results of a Catchment Modelling Framework 1D model applied in the same area over the same period.
Empirical evaluation of the market price of risk using the CIR model
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Torosantucci, L.; Uboldi, A.
2007-03-01
We describe a simple but effective method for the estimation of the market price of risk. The basic idea is to compare the results obtained by following two different approaches in the application of the Cox-Ingersoll-Ross (CIR) model. In the first case, we apply the non-linear least squares method to cross-sectional data (i.e., all rates of a single day). In the second case, we consider the short rate obtained by means of the first procedure as a proxy of the real market short rate. Starting from this new proxy, we evaluate the parameters of the CIR model by means of martingale estimation techniques. The estimate of the market price of risk is provided by comparing the results obtained with these two techniques, since this approach makes it possible to isolate the market price of risk and evaluate, under the Local Expectations Hypothesis, the risk premium given by the market for different maturities. As a test case, we apply the method to data from the European Fixed Income Market.
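As a sketch of the first, cross-sectional step, the CIR zero-coupon yield curve can be fitted to one day's observed rates by non-linear least squares. The closed-form CIR bond-price coefficients below are the standard ones; the starting values, bounds, and example data are illustrative assumptions, and the martingale-estimation second step is not reproduced.

    import numpy as np
    from scipy.optimize import least_squares

    def cir_yield(tau, r0, kappa, theta, sigma):
        # zero-coupon yield implied by the CIR model for maturity tau (years):
        # P(tau) = A(tau) * exp(-B(tau) * r0),  y = -ln P / tau
        g = np.sqrt(kappa**2 + 2.0*sigma**2)
        denom = (g + kappa)*(np.exp(g*tau) - 1.0) + 2.0*g
        B = 2.0*(np.exp(g*tau) - 1.0) / denom
        A = (2.0*g*np.exp((kappa + g)*tau/2.0) / denom)**(2.0*kappa*theta/sigma**2)
        return (B*r0 - np.log(A)) / tau

    def fit_cir_cross_section(taus, yields):
        # non-linear least squares over a single day's term structure;
        # x0 and bounds are illustrative choices
        resid = lambda x: cir_yield(taus, *x) - yields
        sol = least_squares(resid, x0=[0.03, 0.5, 0.04, 0.1],
                            bounds=([1e-4]*4, [1.0, 5.0, 0.5, 1.0]))
        return sol.x

    # one day's term structure: maturities in years and observed yields
    taus = np.array([0.25, 1.0, 2.0, 5.0, 10.0])
    yields = np.array([0.028, 0.031, 0.033, 0.037, 0.040])
    r0, kappa, theta, sigma = fit_cir_cross_section(taus, yields)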
Accounting for rate variation among lineages in comparative demographic analyses
Hope, Andrew G.; Ho, Simon Y. W.; Malaney, Jason L.; Cook, Joseph A.; Talbot, Sandra L.
2014-01-01
Genetic analyses of contemporary populations can be used to estimate the demographic histories of species within an ecological community. Comparison of these demographic histories can shed light on community responses to past climatic events. However, species experience different rates of molecular evolution, and this presents a major obstacle to comparative demographic analyses. We address this problem by using a Bayesian relaxed-clock method to estimate the relative evolutionary rates of 22 small mammal taxa distributed across northwestern North America. We found that estimates of the relative molecular substitution rate for each taxon were consistent across the range of sampling schemes that we compared. Using three different reference rates, we rescaled the relative rates so that they could be used to estimate absolute evolutionary timescales. Accounting for rate variation among taxa led to temporal shifts in our skyline-plot estimates of demographic history, highlighting both uniform and idiosyncratic evolutionary responses to directional climate trends for distinct ecological subsets of the small mammal community. Our approach can be used in evolutionary analyses of populations from multiple species, including comparative demographic studies.
Effect of delayed reporting of band recoveries on survival estimates
Anderson, David R.; Burnham, Kenneth P.
1980-01-01
Brownie et al. (U.S. Fish and Wildl. Serv., Resource Publ. 131, 1978) presented 14 models based on an array of explicit assumptions for the study of survival in avian populations. These methods are replacing the life table methods previously used to estimate survival rates (e.g., Burnham and Anderson, J. Wildl. Manage., 43: 356-366, 1979). The new methods allow survival or recovery rates, or both, to be constant, time-specific, or time- and age-specific. In studies to estimate survival rates for birds, the data are often from recoveries of birds shot or found dead during the hunting season and reported to the Bird Banding Laboratory by sportsmen, conservation agency employees, or the general public. This note examines the bias in estimating annual survival due to a proportion of the recoveries being incorrectly reported a year late. Specifically, a few recoveries each year of, for example, adult male American Widgeon (Anas americana) banded in California are reported as being recovered in year i + 1 when in fact they were actually recovered the previous year i. Delayed reporting might typically occur when people find a band in their hunting clothing in the fall of the year and, being embarrassed about their failure to report the band when it was taken, report it a year late without mentioning the actual year of recovery. Heuristically, delayed reporting should bias estimated annual survival rates upwards, because it appears from the data that the birds corresponding to the "delayed" recoveries lived an additional year.
Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J
2012-07-01
1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and times of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area, which characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
A Novel Estimator for the Rate of Information Transfer by Continuous Signals
Takalo, Jouni; Ignatova, Irina; Weckström, Matti; Vähäsöyrinki, Mikko
2011-01-01
The information transfer rate provides an objective and rigorous way to quantify how much information is being transmitted through a communications channel whose input and output consist of time-varying signals. However, current estimators of information content in continuous signals are typically based on assumptions about the system's linearity and signal statistics, or they require prohibitive amounts of data. Here we present a novel information rate estimator without these limitations that is also optimized for computational efficiency. We validate the method with a simulated Gaussian information channel and demonstrate its performance with two example applications. Information transfer between the input and output signals of a nonlinear system is analyzed using a sensory receptor neuron as the model system. Then, a climate data set is analyzed to demonstrate that the method can be applied to a system based on two outputs generated by interrelated random processes. These analyses also demonstrate that the new method offers consistent performance in situations where classical methods fail. In addition to these examples, the method is applicable to a wide range of continuous time series commonly observed in the natural sciences, economics and engineering. PMID:21494562
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
Multiple imputation for cure rate quantile regression with censored data.
Wu, Yuanshan; Yin, Guosheng
2017-03-01
The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first-generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second-generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third-generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth-generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth-generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second-generation studies are similar to those of third- and fourth-generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third- and fourth-generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data sets. © The Author(s) 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Towards Photoplethysmography-Based Estimation of Instantaneous Heart Rate During Physical Activity.
Jarchi, Delaram; Casson, Alexander J
2017-09-01
Recently numerous methods have been proposed for estimating average heart rate using photoplethysmography (PPG) during physical activity, overcoming the significant interference that motion causes in PPG traces. We propose a new algorithmic framework for extracting instantaneous heart rate from wearable PPG and electrocardiogram (ECG) signals to provide an estimate of heart rate variability during exercise. For ECG signals, we propose a new spectral masking approach which modifies a particle filter tracking algorithm, and for PPG signals we constrain the instantaneous frequency obtained from the Hilbert transform to a region of interest around a candidate heart rate measure. Performance is verified using accelerometry and wearable ECG and PPG data from subjects while biking and running on a treadmill. Instantaneous heart rate provides more information than average heart rate alone. The instantaneous heart rate can be extracted during motion to an accuracy of 1.75 beats per min (bpm) from PPG signals and 0.27 bpm from ECG signals. Estimates of instantaneous heart rate can now be generated from PPG signals during motion. These estimates can provide more information on the human body during exercise. Instantaneous heart rate provides a direct measure of vagal nerve and sympathetic nervous system activity and is of substantial use in a number of analyses and applications. Previously it has not been possible to estimate instantaneous heart rate from wrist-wearable PPG signals.
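For the PPG branch, the Hilbert-transform step can be sketched as follows. The band-pass pre-filtering, the externally supplied candidate rate, and the ±10 bpm region of interest are simplifying assumptions, and the particle-filter machinery used for ECG is not shown.

    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_hr(ppg, fs, hr_candidate_bpm, half_window_bpm=10.0):
        # instantaneous heart rate (bpm) from a band-pass filtered PPG trace,
        # constrained to a region of interest around a candidate heart rate
        analytic = hilbert(ppg)                          # analytic signal
        phase = np.unwrap(np.angle(analytic))            # instantaneous phase
        inst_freq_hz = np.diff(phase) * fs / (2.0*np.pi) # instantaneous frequency
        hr = inst_freq_hz * 60.0                         # Hz -> beats per minute
        lo = hr_candidate_bpm - half_window_bpm
        hi = hr_candidate_bpm + half_window_bpm
        return np.clip(hr, lo, hi)                       # confine to the ROI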
Peña, Carlos; Espeland, Marianne
2015-01-01
The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910
Rocket Based Combined Cycle Exchange Inlet Performance Estimation at Supersonic Speeds
NASA Astrophysics Data System (ADS)
Murzionak, Aliaksandr
A method to estimate the performance of an exchange inlet for a Rocket Based Combined Cycle engine is developed. This method is to be used for exchange inlet geometry optimization and as such should be able to predict properties that can be used in the design process within a reasonable amount of time, allowing multiple configurations to be evaluated. The method is based on a curve fit of the shocks developed around the major components of the inlet, using solutions for shocks around sharp cones and 2D estimations of the shocks around wedges with blunt leading edges. The total pressure drop across the estimated shocks as well as the mass flow rate through the exchange inlet are calculated. The estimations for a selected range of free-stream Mach numbers between 1.1 and 7 are compared against numerical finite volume method simulations performed using available commercial software (Ansys-CFX). The total pressure difference between the two methods is within 10% for the tested Mach numbers of 5 and below, while for the Mach 7 test case the difference is 30%. The mass flow rate on average differs by less than 5% for all tested cases, with the maximum difference not exceeding 10%. The estimation method takes less than 3 seconds on a 3.0 GHz single-core processor to complete the calculations for a single flight condition, as opposed to over 5 days on an 8-core 2.4 GHz system for a 3D finite volume method simulation with a 1.5-million-element mesh. This makes the estimation method suitable for use with an exchange inlet geometry optimization algorithm.
Renninger, Heidi J.; Schäfer, Karina V. R.
2012-01-01
Sap flow measurements have become integral in many physiological and ecological investigations. A number of methods are used to estimate sap flow rates in trees, but probably the most popular is the thermal dissipation (TD) method because of its affordability, relatively low power consumption, and ease of use. However, there have been questions about the use of this method in ring-porous species and whether individual species and site calibrations are needed. We made concurrent measurements of sap flow rates using TD sensors and the tissue heat balance (THB) method in two oak species (Quercus prinus Willd. and Quercus velutina Lam.) and one pine (Pinus echinata Mill.). We also made concurrent measurements of sap flow rates using both 1 and 2-cm long TD sensors in both oak species. We found that both the TD and THB systems tended to match well in the pine individual, but sap flow rates were underestimated by 2-cm long TD sensors in five individuals of the two ring-porous oak species. Underestimations of 20–35% occurred in Q. prinus even when a “Clearwater” correction was applied to account for the shallowness of the sapwood depth relative to the sensor length and flow rates were underestimated by up to 50% in Q. velutina. Two centimeter long TD sensors also underestimated flow rates compared with 1-cm long sensors in Q. prinus, but only at large flow rates. When 2-cm long sensor data in Q. prinus were scaled using the regression with 1-cm long data, daily flow rates matched well with the rates measured by the THB system. Daily plot level transpiration estimated using TD sap flow rates and scaled 1 cm sensor data averaged about 15% lower than those estimated by the THB method. Therefore, these results suggest that 1-cm long sensors are appropriate in species with shallow sapwood, however more corrections may be necessary in ring-porous species. PMID:22661978
Ju, Lining; Wang, Yijie Dylan; Hung, Ying; Wu, Chien-Fu Jeff; Zhu, Cheng
2013-01-01
Motivation: Abrupt reduction/resumption of thermal fluctuations of a force probe has been used to identify association/dissociation events of protein–ligand bonds. We show that off-rate of molecular dissociation can be estimated by the analysis of the bond lifetime, while the on-rate of molecular association can be estimated by the analysis of the waiting time between two neighboring bond events. However, the analysis relies heavily on subjective judgments and is time-consuming. To automate the process of mapping out bond events from thermal fluctuation data, we develop a hidden Markov model (HMM)-based method. Results: The HMM method represents the bond state by a hidden variable with two values: bound and unbound. The bond association/dissociation is visualized and pinpointed. We apply the method to analyze a key receptor–ligand interaction in the early stage of hemostasis and thrombosis: the von Willebrand factor (VWF) binding to platelet glycoprotein Ibα (GPIbα). The numbers of bond lifetime and waiting time events estimated by the HMM are much more than those estimated by a descriptive statistical method from the same set of raw data. The kinetic parameters estimated by the HMM are in excellent agreement with those by a descriptive statistical analysis, but have much smaller errors for both wild-type and two mutant VWF-A1 domains. Thus, the computerized analysis allows us to speed up the analysis and improve the quality of estimates of receptor–ligand binding kinetics. Contact: jeffwu@isye.gatech.edu or cheng.zhu@bme.gatech.edu PMID:23599504
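A two-state HMM of this kind can be sketched with the hmmlearn package, using the local variance of the probe signal as the observed feature, since binding suppresses the thermal fluctuations. The window length, the variance feature, and the Gaussian emissions are simplifying assumptions rather than the authors' exact model.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def bond_events(thermal_signal, fs, w=50):
        # label each window of a force-probe trace as bound/unbound with a
        # two-state Gaussian HMM, then extract dwell times in seconds
        var = np.array([thermal_signal[i:i+w].var()
                        for i in range(0, len(thermal_signal) - w, w)])
        X = var.reshape(-1, 1)                       # local variance as feature
        model = GaussianHMM(n_components=2, n_iter=200).fit(X)
        states = model.predict(X)
        bound = np.argmin(model.means_.ravel())      # bound state = lower variance
        # run lengths of consecutive identical states -> lifetimes/waiting times
        runs, labels, start = [], [], 0
        for i in range(1, len(states) + 1):
            if i == len(states) or states[i] != states[start]:
                runs.append((i - start) * w / fs)
                labels.append(states[start] == bound)
                start = i
        lifetimes = [t for t, b in zip(runs, labels) if b]       # bond lifetimes
        waits = [t for t, b in zip(runs, labels) if not b]       # waiting times
        return lifetimes, waits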
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
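The beta-binomial version of this shrinkage is easy to sketch. Below, the beta prior's hyperparameters are estimated by a crude method of moments, one simple choice among several, and each species' parasitism rate is replaced by its posterior mean; the paper's subgroup analysis and more careful hyperparameter estimation are not reproduced, and the example counts are invented.

    import numpy as np

    def eb_parasitism(y, n):
        # y, n: parasitized counts and nest totals per species
        y, n = np.asarray(y, float), np.asarray(n, float)
        p = y / n
        m = np.average(p, weights=n)               # overall mean parasitism rate
        v = np.average((p - m)**2, weights=n)      # crude between-species variance
        # method-of-moments hyperparameters for a Beta(alpha, beta) prior
        common = m*(1.0 - m)/v - 1.0
        alpha, beta = m*common, (1.0 - m)*common
        # posterior means: small-sample species are pulled toward the overall mean
        return (y + alpha) / (n + alpha + beta)

    # species observed in 4, 40 and 8 nests; note the differential shrinkage
    print(eb_parasitism([1, 20, 7], [4, 40, 8]))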
A method for estimating fall adult sex ratios from production and survival data
Wight, H.M.; Heath, R.G.; Geis, A.D.
1965-01-01
This paper presents a method of utilizing data relating to the production and survival of a bird population to estimate a basic fall adult sex ratio. This basic adult sex ratio is an average value derived from average production and survival rates. It is an estimate of the average sex ratio about which the fall adult ratios will fluctuate according to annual variations in production and survival. The basic fall adult sex ratio has been calculated as an asymptotic value which is the limit of an infinite series wherein average population characteristics are used as constants. Graphs are provided that allow the determination of basic sex ratios from production and survival data of a population. Where the respective asymptote has been determined, it may be possible to estimate various production and survival rates by use of variations of the formula for estimating the asymptote.
A Novel Uncertainty Framework for Improving Discharge Data Quality Using Hydraulic Modelling.
NASA Astrophysics Data System (ADS)
Mansanarez, V.; Westerberg, I.; Lyon, S. W.; Lam, N.
2017-12-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable stage-discharge (SD) rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult, as high flows occur rarely and are often practically difficult to gauge. Hydraulically modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. We introduce an uncertainty framework using hydraulic modelling for developing SD rating curves and estimating their uncertainties. The proposed framework incorporates information from both the hydraulic configuration (bed slope, roughness, vegetation) and the information available in the stage-discharge observation data (gaugings). This method provides a direct estimation of the hydraulic configuration (slope, bed roughness and vegetation roughness). Discharge time series are estimated by propagating stage records through the posterior rating curve results. We applied this novel method to two Swedish hydrometric stations, accounting for uncertainties in the gaugings in the hydraulic model. Results from these applications were compared to discharge measurements and official discharge estimates, and a sensitivity analysis was performed. We focused the analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken.
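For contrast with the hydraulically modelled curve, the traditional ingredient, a power-law rating curve fitted directly to a handful of gaugings, can be sketched as below. The gauging values, starting parameters, and single-segment power law are illustrative assumptions; the paper's Bayesian treatment of the hydraulic configuration and its uncertainty propagation are not reproduced.

    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(h, a, b, h0):
        # classical power-law stage-discharge relation Q = a * (h - h0)^b;
        # the clip guards against negative depths during optimization
        return a * np.clip(h - h0, 1e-9, None)**b

    # stage (m) and gauged discharge (m3/s) pairs; illustrative numbers only
    h_obs = np.array([0.8, 1.1, 1.6, 2.3, 3.0])
    q_obs = np.array([2.1, 5.0, 14.0, 38.0, 75.0])
    popt, pcov = curve_fit(rating_curve, h_obs, q_obs, p0=[5.0, 2.0, 0.5],
                           maxfev=10000)
    # propagate a stage record through the fitted curve to get discharge
    q_series = rating_curve(np.linspace(0.8, 3.0, 100), *popt)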
Determination of water use in Rockford and Kankakee areas, Illinois
LaTour, John K.
1991-01-01
Amounts of water withdrawn, delivered, consumed, released, returned, and lost or gained during conveyance were determined for six communities--Rockford, Loves Park, North Park, Kankakee, Bourbonnais, and Bradley--served by the public-water systems in the Rockford and the Kankakee areas of Illinois. Water-use categories studied were commercial, industrial, domestic, and municipal uses; public supply; and sewage treatment. The availability and accuracy of water-use data are described, and water-use coefficients and methods of estimating water use are provided to improve the collection and the analysis of water-use information. Water-use data were obtained from all the water utilities and from 30 major water users in the Rockford and the Kankakee areas. Data were available for water withdrawals by water suppliers; deliveries by water suppliers to water users; returns by sewage-treatment plants and water users; releases by water users to sewers; and sewer-conveyance losses. Accuracy of the water-use data was determined from discharge measurements or reliability tests of water meters, or was estimated according to the completeness of the data. Accuracy of withdrawal and sewage-treatment-return data for the Rockford area and of withdrawal, delivery, industrial release, and sewage-treatment-return data for the Kankakee area was considered to be at least 90 percent. Where water-use data were inadequate or unavailable, various methods were used to estimate consumptive uses; releases; returns by commercial, domestic, and municipal users; and conveyance losses and gains. The methods focused on water budgeting to assure that water uses balanced. Consumptive uses were estimated by use of the consumption-budget method, the types-of-use method, consumptive-use ratios, the winter base-rate method, and the maximum lawn-watering method. The winter base-rate method provided the best domestic consumptive-use estimates, whose ratios (consumptive use from the winter base-rate method divided by deliveries and self-supply withdrawals), by community, ranged from 0.03 to 0.136 and averaged 0.068. The consumption-budget and types-of-use methods, as well as consumptive-use ratios, were used to estimate consumptive use for the commercial, industrial, and municipal categories. Water budgeting was generally used to estimate releases and conveyance losses and gains. Estimates of nonconsumptive uses by cooling systems, boilers, and lawn watering; data on deliveries to septic-system owners; and (or) water budgeting were used to estimate commercial, domestic, industrial, and municipal returns. Proportions of water use were similar in the Rockford and the Kankakee areas. Of the public-supply withdrawals in each area, about one-half was delivered for commercial and industrial uses; about one-third for domestic use; and about one-sixth for municipal use and public-supply conveyance losses. Consumptive use by all water users in the Rockford and the Kankakee areas was 13 +/- 1 percent, releases were 78 +/- 2 percent, and returns were 9 +/- 2 percent of deliveries and self-supply withdrawals. Total returns were greater than total withdrawals in the two areas because of sewer-conveyance gains, which amounted to about 34 percent of the sewage-treatment returns for each area. Delivery rates (deliveries divided by the number of users [establishments or households]) and domestic per capita use were similar for all six communities.
At a 95-percent confidence level, domestic delivery rates for each community range from 0.067 to 0.075 million gallons per household per year. Commercial delivery rates range from 0.277 to 0.535 million gallons per establishment per year. Delivery rates for all categories combined range from 0.100 to 0.192 million gallons per user per year. Domestic per capita use, which ranged from 67.2 to 71.0 gallons per day, averaged 69.2 +/- 1.1 gallons per day.
Estimating malaria transmission from humans to mosquitoes in a noisy landscape
Reiner, Robert C.; Guerra, Carlos; Donnelly, Martin J.; Bousema, Teun; Drakeley, Chris; Smith, David L.
2015-01-01
A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the 'colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. PMID:26400195
Lewicki, Jennifer L.; Bergfeld, Deborah; Cardellini, Carlo; Chiodini, Giovanni; Granieri, Domenico; Varley, Nick; Werner, Cynthia A.
2005-01-01
We present a comparative study of soil CO2 flux (FCO2) measured by five groups (Groups 1–5) at the IAVCEI-CCVG Eighth Workshop on Volcanic Gases on Masaya volcano, Nicaragua. Groups 1–5 measured FCO2 using the accumulation chamber method at 5-m spacing within a 900 m2 grid during a morning (AM) period. These measurements were repeated by Groups 1–3 during an afternoon (PM) period. Measured FCO2 ranged from 218 to 14,719 g m−2 day−1. The variability of the five measurements made at each grid point ranged from ±5 to 167%. However, the arithmetic means of fluxes measured over the entire grid and associated total CO2 emission rate estimates varied between groups by only ±22%. All three groups that made PM measurements reported an 8–19% increase in total emissions over the AM results. Based on a comparison of measurements made during AM and PM times, we argue that this change is due in large part to natural temporal variability of gas flow, rather than to measurement error. In order to estimate the mean and associated CO2 emission rate of one data set and to map the spatial FCO2 distribution, we compared six geostatistical methods: arithmetic and minimum variance unbiased estimator means of uninterpolated data, and arithmetic means of data interpolated by the multiquadric radial basis function, ordinary kriging, multi-Gaussian kriging, and sequential Gaussian simulation methods. While the total CO2 emission rates estimated using the different techniques only varied by ±4.4%, the FCO2 maps showed important differences. We suggest that the sequential Gaussian simulation method yields the most realistic representation of the spatial distribution of FCO2, but a variety of geostatistical methods are appropriate to estimate the total CO2 emission rate from a study area, which is a primary goal in volcano monitoring research.
Comparison of different methods for gender estimation from face image of various poses
NASA Astrophysics Data System (ADS)
Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko
2003-04-01
Recently, gender estimation from face images has been studied for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance and marketing research. To build such systems, a method is required that estimates gender from images of various facial poses. In this paper, three different classifiers are compared for appearance-based gender estimation using four directional features (FDF): linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varying +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel showed the best performance (86.0%) over the facial images from all 35 viewpoints. These results suggest that the SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each individual viewpoint was quite close to the average rate over the 35 viewpoints. This suggests that the method can reasonably estimate gender within the range of tested viewpoints by learning face images from multiple directions with a single classifier.
A method for estimating radioactive cesium concentrations in cattle blood using urine samples.
Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji
2017-12-01
In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urine 137Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured, and various estimation methods for blood 137Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood 137Cs] = [urinary 137Cs]/([specific gravity] − 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher 137Cs concentration than blood. These advantages of urine, together with the estimation precision demonstrated in our study, indicate that estimation of blood 137Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
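The recommended correction is a one-line calculation; the helper below simply encodes the published formula, with activities in the units reported by the study.

    def blood_cs137_from_urine(urinary_cs137, specific_gravity):
        # specific-gravity correction from the study:
        # [blood 137Cs] = [urinary 137Cs] / (SG - 1) / 329
        return urinary_cs137 / (specific_gravity - 1.0) / 329.0

    # e.g. urine at 250 Bq/kg with specific gravity 1.020 -> ~38 Bq/kg in blood
    print(blood_cs137_from_urine(250.0, 1.020))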
Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.
2015-01-01
Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature and methods based just on K. It is possible to combine two independent methods by computing a weighted mean but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^(−0.916), prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^(0.73) L∞^(−0.33), prediction error = 0.6, length in cm) otherwise.
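The two recommended estimators reduce to one-line formulas; the helpers below encode them directly, with tmax in years, L∞ in cm, and M per year.

    def m_from_tmax(tmax):
        # longevity-based estimator recommended by the study
        return 4.899 * tmax**-0.916

    def m_from_growth(K, L_inf_cm):
        # growth-based fallback when maximum age is unavailable
        return 4.118 * K**0.73 * L_inf_cm**-0.33

    print(m_from_tmax(20.0))   # e.g. tmax = 20 yr -> M ~ 0.32 per yr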
The total rate of mass return to the interstellar medium from red giants and planetary nebulae
NASA Technical Reports Server (NTRS)
Knapp, G. R.; Rauch, K. P.; Wilcots, E. M.
1990-01-01
High-luminosity post-main-sequence stars are observed to be losing mass in large amounts into the interstellar medium. The various methods used to estimate individual and total mass loss rates are summarized. Current estimates give a total mass return rate of 0.3-0.6 solar masses per year for the whole Galaxy.
Estimating carnivoran diets using a combination of carcass observations and scats from GPS clusters
Tambling, C.J.; Laurence, S.D.; Bellan, S.E.; Cameron, E.Z.; du Toit, J.T.; Getz, W.M.
2011-01-01
Scat analysis is one of the most frequently used methods to assess carnivoran diets and Global Positioning System (GPS) cluster methods are increasingly being used to locate feeding sites for large carnivorans. However, both methods have inherent biases that limit their use. GPS methods to locate kill sites are biased towards large carcasses, while scat analysis over-estimates the biomass consumed from smaller prey. We combined carcass observations and scats collected along known movement routes, assessed using GPS data from four African lion (Panthera leo) prides in the Kruger National Park, South Africa, to determine how a combination of these two datasets change diet estimates. As expected, using carcasses alone under-estimated the number of feeding events on small species, primarily impala (Aepyceros melampus) and warthog (Phacochoerus africanus), in our case by more than 50% and thus significantly under-estimated the biomass consumed per pride per day in comparison to when the diet was assessed using carcass observations alone. We show that an approach that supplements carcass observations with scats that enables the identification of potentially missed feeding events increases the estimates of food intake rates for large carnivorans, with possible ramifications for predator-prey interaction studies dealing with biomass intake rate. PMID:22408290
Ganusov, Vitaly V
2018-02-27
The ability of human immunodeficiency virus (HIV) to avoid recognition by humoral and cellular immunity (viral escape) is well documented, but the strength of the immune response needed to cause such viral escape remains poorly quantified. Several previous studies observed a more rapid escape of HIV from CD8 T cell responses in the acute phase of infection compared to chronic infection. The rate of HIV escape was estimated with the help of simple mathematical models, and the results were interpreted to suggest that CD8 T cell responses causing escape in acute HIV infection may be more efficient at killing virus-infected cells than responses that cause escape in chronic infection, or alternatively, that early escapes occur in epitopes in which mutations carry minimal fitness cost to the virus. However, these conclusions were challenged on several grounds, including linkage and interference of multiple escape mutations due to a low population size and potential issues associated with modifying the data to estimate escape rates. Here we use a sampling method which does not require data modification to show that previous results on the decline of the viral escape rate with time since infection remain unchanged. However, using this method we also show that estimates of the escape rate are highly sensitive to the time interval between measurements, with longer intervals biasing estimates of the escape rate downwards. Our results thus suggest that data modifications for early and late escapes were not the primary reason for the observed decline in the escape rate with time since infection. However, longer sampling periods for escapes in chronic infection strongly influence estimates of the escape rate. More frequent sampling of viral sequences in chronic infection may improve our understanding of factors influencing the rate of HIV escape from CD8 T cell responses.
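A common formulation in this literature models the escape variant's frequency as growing logistically, so that the escape rate follows from frequencies measured at two time points. The sketch below encodes that standard two-point estimator; it is a generic illustration, not necessarily the exact sampling method of the paper. The sensitivity discussed above is visible here: widening (t2 - t1) while the measured frequencies sit near 0 or 1 drags the estimate downwards.

    import numpy as np

    def escape_rate(f1, t1, f2, t2):
        # escape rate k from mutant frequencies f1, f2 at times t1, t2 (days),
        # under logistic growth of the escape variant:
        # df/dt = k f (1 - f)  =>  k = [logit(f2) - logit(f1)] / (t2 - t1)
        logit = lambda f: np.log(f / (1.0 - f))
        return (logit(f2) - logit(f1)) / (t2 - t1)

    # escape variant rising from 10% to 90% over 30 days -> k ~ 0.15 per day
    print(escape_rate(0.10, 0.0, 0.90, 30.0))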
Optimizing hidden layer node number of BP network to estimate fetal weight
NASA Astrophysics Data System (ADS)
Su, Juan; Zou, Yuanwen; Lin, Jiangli; Wang, Tianfu; Li, Deyu; Xie, Tao
2007-12-01
The ultrasonic estimation of fetal weight before delivery is of great significance in obstetric practice. Estimating fetal weight more accurately is crucial for prenatal care, obstetrical treatment, choosing appropriate delivery methods, monitoring fetal growth and reducing the risk of newborn complications. In this paper, we introduce a method which combines golden section search and an artificial neural network (ANN) to estimate fetal weight. The golden section search is employed to optimize the hidden layer node number of the back propagation (BP) neural network. The method greatly improves the accuracy of fetal weight estimation, and simultaneously avoids choosing the hidden layer node number by subjective experience. The estimation coincidence rate achieves 74.19%, and the mean absolute error is 185.83 g.
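A minimal sketch of how a golden-section search over the hidden-node count might look, assuming the validation error is roughly unimodal in the node number; the `validation_error` stand-in and the search bounds are hypothetical, not taken from the paper.

```python
import math

def validation_error(n_hidden):
    """Placeholder for training a BP network with n_hidden nodes and
    returning its validation error; assumed unimodal in n_hidden."""
    return (n_hidden - 13) ** 2 + 5.0   # toy stand-in error surface

def golden_section_nodes(lo, hi, tol=1.0):
    """Golden-section search over the hidden-node count, rounding the
    continuous probe points to the nearest integer."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    a, b = float(lo), float(hi)
    while b - a > tol:
        x1 = b - phi * (b - a)
        x2 = a + phi * (b - a)
        if validation_error(round(x1)) < validation_error(round(x2)):
            b = x2   # minimum lies in [a, x2]
        else:
            a = x1   # minimum lies in [x1, b]
    return round((a + b) / 2.0)

print(golden_section_nodes(2, 40))  # -> 13 for the toy error surface
```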
Calibrating recruitment estimates for mourning doves from harvest age ratios
Miller, David A.; Otis, David L.
2010-01-01
We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.
Hicks, Olivia; Burthe, Sarah; Daunt, Francis; Butler, Adam; Bishop, Charles; Green, Jonathan A
2017-05-15
Two main techniques have dominated the field of ecological energetics: the heart rate and doubly labelled water methods. Although well established, they are not without their weaknesses, namely expense, intrusiveness and lack of temporal resolution. A new technique has been developed using accelerometers; it uses the overall dynamic body acceleration (ODBA) of an animal as a calibrated proxy for energy expenditure. This method provides high-resolution data without the need for surgery. Significant relationships exist between the rate of oxygen consumption (V̇O2) and ODBA in controlled conditions across a number of taxa; however, it is not known whether ODBA represents a robust proxy for energy expenditure consistently in all natural behaviours, and there have been specific questions over its validity during diving in diving endotherms. Here, we simultaneously deployed accelerometers and heart rate loggers in a wild population of European shags (Phalacrocorax aristotelis). Existing calibration relationships were then used to make behaviour-specific estimates of energy expenditure for each of these two techniques. Compared with heart rate-derived estimates, the ODBA method predicts energy expenditure well during flight and diving behaviour, but overestimates the cost of resting behaviour. We then combined these two datasets to generate a new calibration relationship between ODBA and V̇O2 that accounts for this by being informed by heart rate-derived estimates. Across behaviours we found a good relationship between ODBA and V̇O2. Within individual behaviours, we found useable relationships between ODBA and V̇O2 for flight and resting, and a poor relationship during diving. The error associated with these new calibration relationships mostly originates from the previous heart rate calibration rather than from the error associated with the ODBA method. The equations provide tools for understanding how energy constrains ecology across the complex behaviour of free-living diving birds. © 2017. Published by The Company of Biologists Ltd.
Automatable Measurement of Gas Exchange Rate in Streams: Oxygen-Carbon Method
NASA Astrophysics Data System (ADS)
Pennington, R.; Haggerty, R.; Argerich, A.; Wondzell, S. M.
2015-12-01
Gas exchange rates between streams and the atmosphere are critically important to the measurement of in-stream ecological processes, as well as the fate and transport of hazardous pollutants such as mercury and PCBs. Methods to estimate gas exchange rates include empirical relations to hydraulics and direct injection of a tracer gas such as propane or SF6. Empirical relations are inconsistent and inaccurate, particularly for lower-order, high-roughness streams. Gas injections are labor-intensive, and measured gas exchange rates are difficult to extrapolate in time since they change with discharge and stream geometry. We propose a novel method for calculating gas exchange rates utilizing O2, pCO2, pH, and temperature data. Measurements, which can be automated using data loggers and probes, are made at the upstream and downstream ends of the study reach. Gas exchange rates are then calculated from a solution to the transport equations for oxygen and dissolved inorganic carbon. Field tests in steep, low-order, high-roughness streams of the HJ Andrews Experimental Forest indicate the method to be viable along stream reaches with high downstream gas concentration gradients and high gas transfer velocities. Automated and continuous collection of oxygen and carbonate chemistry data is increasingly common, thus the method may be used to estimate gas exchange rates through time, and is well suited for interactivity with databases.
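The sketch below illustrates the simplest version of the two-station idea under strong assumptions (steady state, in-stream metabolism neglected, a single gas): the exchange rate is inferred from how far the downstream concentration has relaxed toward saturation. The full method solves coupled oxygen and dissolved inorganic carbon transport equations, which are not reproduced here; all numbers are illustrative.

```python
import numpy as np

def gas_transfer_velocity(c_up, c_down, c_sat, travel_time_s, depth_m):
    """First-order estimate of the gas exchange rate k (1/s) from paired
    upstream/downstream concentrations, neglecting in-stream metabolism:
        C_down = C_sat + (C_up - C_sat) * exp(-k * tau).
    Returns k and the gas transfer velocity k*depth (m/s)."""
    k = -np.log((c_down - c_sat) / (c_up - c_sat)) / travel_time_s
    return k, k * depth_m

# Hypothetical reach: O2 in mg/L, 1200 s travel time, 0.3 m mean depth.
k, k_vel = gas_transfer_velocity(7.0, 8.2, 9.0, 1200.0, 0.3)
print(f"k = {k*86400:.1f} 1/day, transfer velocity = {k_vel*86400:.1f} m/day")
```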
A comparison of the weights-of-evidence method and probabilistic neural networks
Singer, Donald A.; Kouda, Ryoichi
1999-01-01
The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake–Anderson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Although these data contain few deposits, these tests demonstrate the neural network's ability to make unbiased probability estimates and achieve lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias where most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake–Anderson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits, as can correlations of -1.0. Studies done in the 1970s on methods that use Bayes rule show that moderate correlations among attributes seriously affect estimates, and even small correlations lead to increases in misclassifications. Adverse effects have been observed with small to moderate correlations when only six to eight variables were used. Consistent evidence of upwardly biased probability estimates from multivariate methods founded on Bayes rule must be of considerable concern to institutions and governmental agencies where unbiased estimates are required. In addition to increasing the misclassification rate, biased probability estimates make classification into deposit and nondeposit classes an arbitrary subjective decision. The probabilistic neural network has no problem dealing with correlated variables; its performance depends strongly on having a thoroughly representative training set. Probabilistic neural networks or logistic regression should receive serious consideration where unbiased estimates are required. The weights-of-evidence method would serve to estimate thresholds between anomalies and background and for exploratory data analysis.
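For reference, a minimal sketch of the weights-of-evidence calculation for one binary evidence layer; the counts are hypothetical, and the closing comment marks where the conditional-independence assumption criticized above enters.

```python
import numpy as np

def weights_of_evidence(n_dep_b, n_dep, n_nondep_b, n_nondep):
    """W+ and W- for a binary evidence layer B given counts of deposit (D)
    and non-deposit cells that show the evidence pattern."""
    p_b_d = n_dep_b / n_dep              # P(B | D)
    p_b_nd = n_nondep_b / n_nondep       # P(B | ~D)
    w_plus = np.log(p_b_d / p_b_nd)
    w_minus = np.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus

# Hypothetical counts: 30 of 40 deposit cells and 2000 of 10000
# non-deposit cells show the evidence pattern.
w_plus, w_minus = weights_of_evidence(30, 40, 2000, 10000)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast = {w_plus - w_minus:.2f}")
# Posterior log-odds = prior log-odds + sum of weights over all layers;
# correlated layers violate the independence assumption and bias this sum,
# which is the source of the inflated deposit estimates discussed above.
```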
Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network
NASA Astrophysics Data System (ADS)
Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea
Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers, because it leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows/underflows. In the proposed method, the client estimates the AB, and the estimated AB is fed back to the server through real-time transport control protocol (RTCP) packets. The server then adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.
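A toy sketch of the kind of feedback rate control described above, assuming the server nudges its sending rate so the client buffer drifts toward a target occupancy; the control law, the cap, and all numbers are illustrative and are not the authors' algorithm.

```python
def transmission_rate(ab_estimate, buffer_level, buffer_target, horizon_s):
    """Toy server-side rate control: start from the client's reported
    available-bandwidth (AB) estimate and add a correction that drains or
    fills the client buffer toward its target over the control horizon,
    guarding against both overflow and underflow. Units: bits, seconds."""
    correction = (buffer_target - buffer_level) / horizon_s
    rate = ab_estimate + correction
    return max(0.0, min(rate, ab_estimate * 1.2))  # cap to limit loss risk

# Client reports 800 kbit/s AB; the buffer holds 2.0 s of media against
# a 3.0 s target, so the server sends slightly faster than AB.
print(transmission_rate(800e3, 2.0 * 800e3, 3.0 * 800e3, 4.0))
```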
Exact Bayesian Inference for Phylogenetic Birth-Death Models.
Parag, K V; Pybus, O G
2018-04-26
Inferring the rates of change of a population from a reconstructed phylogeny of genetic sequences is a central problem in macro-evolutionary biology, epidemiology, and many other disciplines. A popular solution involves estimating the parameters of a birth-death process (BDP), which links the shape of the phylogeny to its birth and death rates. Modern BDP estimators rely on random Markov chain Monte Carlo (MCMC) sampling to infer these rates. Such methods, while powerful and scalable, cannot be guaranteed to converge, leading to results that may be hard to replicate or difficult to validate. We present a conceptually and computationally different parametric BDP inference approach using flexible and easy-to-implement Snyder filter (SF) algorithms. This method is deterministic, so its results are provable, guaranteed, and reproducible. We validate the SF on constant-rate BDPs and find that it solves BDP likelihoods known to produce robust estimates. We then examine more complex BDPs with time-varying rates. Our estimates compare well with a recently developed parametric MCMC inference method. Lastly, we perform model selection on an empirical Agamid species phylogeny, obtaining results consistent with the literature. The SF makes no approximations, beyond those required for parameter quantisation and numerical integration, and directly computes the posterior distribution of model parameters. It is a promising alternative inference algorithm that may serve either as a standalone Bayesian estimator or as a useful diagnostic reference for validating more involved MCMC strategies. The Snyder filter is implemented in Matlab and the time-varying BDP models are simulated in R. The source code and data are freely available at https://github.com/kpzoo/snyder-birth-death-code. kris.parag@zoo.ox.ac.uk. Supplementary material is available at Bioinformatics online.
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-05-18
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
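A minimal sketch of inverting a simple exponential attenuation model for burial depth; the attenuation coefficient and count rates below are illustrative rather than taken from the paper, though they give depths of the same order as those reported.

```python
import numpy as np

def depth_from_count_rate(c0_cps, c_cps, mu_per_cm):
    """Invert a simple exponential attenuation model C = C0*exp(-mu*d)
    for burial depth d (cm). mu is the effective linear attenuation
    coefficient of the covering material (here sand) at the photon
    energy of interest; all values are illustrative."""
    return np.log(c0_cps / c_cps) / mu_per_cm

# E.g. unburied count rate 100 cps, measured 14 cps, mu = 0.11 /cm:
print(f"estimated depth: {depth_from_count_rate(100.0, 14.0, 0.11):.1f} cm")
```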
Estimating removal rates of bacteria from poultry carcasses using two whole-carcass rinse volumes
USDA-ARS?s Scientific Manuscript database
Rinse sampling is a common method for determining the level of microbial contamination on poultry carcasses. One of the advantages of rinse sampling, over other carcass sampling methods, is that the results can be used for both process control applications and to estimate the total microbial level o...
2013-01-01
Background National smoking-specific lung cancer mortality rates are unavailable, and studies presenting estimates are limited, particularly by histology. This hinders interpretation. We attempted to rectify this by deriving estimates indirectly, combining data from national rates and epidemiological studies. Methods We estimated study-specific absolute mortality rates and variances by histology and smoking habit (never/ever/current/former) based on relative risk estimates derived from studies published in the 20th century, coupled with WHO mortality data for age 70–74 for the relevant country and period. Studies with populations grossly unrepresentative nationally were excluded. The age range 70–74 was chosen based on analyses of large cohort studies presenting rates by smoking and age. Variations by sex, period and region were assessed by meta-analysis and meta-regression. Results 148 studies provided estimates (Europe 59, America 54, China 22, other Asia 13), 54 providing estimates by histology (squamous cell carcinoma, adenocarcinoma). For all smoking habits and lung cancer types, mortality rates were higher in males, the excess less evident for never smokers. Never smoker rates were clearly highest in China, and showed some increasing time trend, particularly for adenocarcinoma. Ever smoker rates were higher in parts of Europe and America than in China, with the time trend very clear, especially for adenocarcinoma. Variations by time trend and continent were clear for current smokers (rates being higher in Europe and America than Asia), but less clear for former smokers. Models involving continent and trend explained much variability, but non-linearity was sometimes seen (with rates lower in 1991–99 than 1981–90), and there was regional variation within continent (with rates in Europe often high in UK and low in Scandinavia, and higher in North than South America). Conclusions The indirect method may be questioned, because of variations in definition of smoking and lung cancer type in the epidemiological database, changes over time in diagnosis of lung cancer types, lack of national representativeness of some studies, and regional variation in smoking misclassification. However, the results seem consistent with the literature, and provide additional information on variability by time and region, including evidence of a rise in never smoker adenocarcinoma rates relative to squamous cell carcinoma rates. PMID:23570286
van Sighem, Ard; Sabin, Caroline A.; Phillips, Andrew N.
2015-01-01
Background It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). Methods The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms which lead to presentation for care, and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on numbers of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop is estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. Results For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with CD4 count at diagnosis between 150–199 cells/mm3, then since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person years lived in people with undiagnosed HIV with CD4 count 150–199 cells/mm3 is 13/0.216 = 60 (95% confidence interval: 29–100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. Conclusions The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place and is most suitable for estimating the number of undiagnosed people with CD4 count <200 cells/mm3 due to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations as with all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART. PMID:25768925
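The worked example in the abstract can be reproduced in a few lines; the simulation interval below simply treats the observed diagnosis count as Poisson, which is analogous to, but not necessarily identical to, the paper's simulation method.

```python
import numpy as np

rng = np.random.default_rng(1)

def undiagnosed_estimate(n_symptomatic_dx, symptom_rate_py, n_sim=100_000):
    """Point estimate of person-years lived undiagnosed in a CD4 stratum
    as diagnoses / rate, with a simple simulation interval that treats
    the observed diagnosis count as Poisson."""
    point = n_symptomatic_dx / symptom_rate_py
    sims = rng.poisson(n_symptomatic_dx, n_sim) / symptom_rate_py
    lo, hi = np.percentile(sims, [2.5, 97.5])
    return point, lo, hi

# Worked example from the abstract: 13 symptomatic diagnoses and a rate
# of 0.216 per person-year in the 150-199 cells/mm3 stratum.
point, lo, hi = undiagnosed_estimate(13, 0.216)
print(f"{point:.0f} undiagnosed (95% interval ~{lo:.0f}-{hi:.0f})")
```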
Estimating the Attack Rate of Pregnancy-Associated Listeriosis during a Large Outbreak
Imanishi, Maho; Routh, Janell A.; Klaber, Marigny; Gu, Weidong; Vanselow, Michelle S.; Jackson, Kelly A.; Sullivan-Chang, Loretta; Heinrichs, Gretchen; Jain, Neena; Albanese, Bernadette; Callaghan, William M.; Mahon, Barbara E.; Silk, Benjamin J.
2015-01-01
Background. In 2011, a multistate outbreak of listeriosis linked to contaminated cantaloupes raised concerns that many pregnant women might have been exposed to Listeria monocytogenes. Listeriosis during pregnancy can cause fetal death, premature delivery, and neonatal sepsis and meningitis. Little information is available to guide healthcare providers who care for asymptomatic pregnant women with suspected L. monocytogenes exposure. Methods. We tracked pregnancy-associated listeriosis cases using reportable diseases surveillance and enhanced surveillance for fetal death using vital records and inpatient fetal deaths data in Colorado. We surveyed 1,060 pregnant women about symptoms and exposures. We developed three methods to estimate how many pregnant women in Colorado ate the implicated cantaloupes, and we calculated attack rates. Results. One laboratory-confirmed case of listeriosis was associated with pregnancy. The fetal death rate did not increase significantly compared to preoutbreak periods. Approximately 6,500–12,000 pregnant women in Colorado might have eaten the contaminated cantaloupes, an attack rate of ~1 per 10,000 exposed pregnant women. Conclusions. Despite many exposures, the risk of pregnancy-associated listeriosis was low. Our methods for estimating attack rates may help during future outbreaks and product recalls. Our findings offer relevant considerations for management of asymptomatic pregnant women with possible L. monocytogenes exposure. PMID:25784782
Barfod, I H; Barfod, N M
1980-01-01
A new method for the evaluation of cell production rates combining flow cytometry (FCM) and the stathmokinetic method using vincristine sulphate (VS) has been used for the analysis of three aneuploid ascites tumours at different stages of growth. Using this technique it was possible to estimate the well-known decrease in cell production rates of ageing ascites tumours. The percentage of normal host cells in the aneuploid tumours studied was easily determined by FCM prior to the calculation of the tumour cell-production rates. A correlation was found between the percentage of tumour cells in the S phase and the tumour cell-production rate. This correlation is probably explained by the gradual transfer of proliferating cells in S phase to resting G1 and G2 phases with increasing tumour age.
Bioaccumulation Study at Puffer Pond
1994-10-01
…organism if the rate of intake of the pollutant is greater than the rate of excretion and/or metabolism. The result is an increase in body burden… [Table fragment: estimated method detection limits (pg/g) for fish tissue samples, organophosphorus pesticides: atrazine 2.5, Vapona 4.5, malathion …]
A Bayesian framework to estimate diversification rates and their variation through time and space
2011-01-01
Background Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling. PMID:22013891
Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis
Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin
2016-01-01
This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
Comparison of three techniques for estimating the forage intake of lactating dairy cows on pasture.
Macoon, B; Sollenberger, L E; Moore, J E; Staples, C R; Fike, J H; Portier, K M
2003-09-01
Quantifying dry matter intake (DMI) is necessary for estimating nutrient consumption by ruminants, but it is inherently difficult on grazed pastures and even more so when supplements are fed. Our objectives were to compare three methods of estimating forage DMI (inference from animal performance, evaluation from fecal output using a pulse-dose marker, and estimation from herbage disappearance methods) and to identify the most useful approach or combination of approaches for estimating pasture intake by lactating dairy cows. During three continuous 28-d periods in the winter season, Holstein cows (Bos taurus; n = 32) grazed a cool-season grass or a cool-season grass-clover mixture at two stocking rates (SR; 5 vs. 2.5 cows/ha) and were fed two rates of concentrate supplementation (CS; 1 kg of concentrate [as-fed] per 2.5 or 3.5 kg of milk produced). Animal response data used in computations for the animal performance method were obtained from the latter 14 d of each period. For the pulse-dose marker method, chromium-mordanted fiber was used. Pasture sampling to determine herbage disappearance was done weekly throughout the study. Forage DMI estimated by the animal performance method differed among periods (P < 0.001; 6.5, 6.4, and 9.6 kg/d for Periods 1, 2, and 3, respectively), between SR (P < 0.001; 8.7 [low SR] vs. 6.3 kg/d [high SR]), and between CS (P < 0.01; 8.4 [low CS] vs. 6.6 kg/d [high CS]). The period and SR effects seemed to be related to forage mass. The pulse-dose marker method generally provided greater estimates of forage DMI (as much as 11.0 kg/d more than the animal performance method) and was not correlated with the other methods. Estimates of forage DMI by the herbage disappearance method were correlated with those from the animal performance method. The differences between estimates from these two methods, ranging from -4.7 to 5.4 kg/d, were much smaller than their differences from the pulse-dose marker estimates. The results of this study suggest that, when appropriate for the research objectives, the animal performance or herbage disappearance methods may be useful and less costly alternatives to the pulse-dose method.
Novaes, Renan M L; Pazianotto, Ricardo A A; Brandão, Miguel; Alves, Bruno J R; May, André; Folegatti-Matsuura, Marília I S
2017-09-01
Land-use change (LUC) in Brazil has important implications for global climate change, ecosystem services and biodiversity, and agricultural expansion plays a critical role in this process. Concerns over these issues have led to the need to estimate the magnitude and impacts associated with LUC, which are increasingly reported in the environmental assessment of products. Currently, there is an extensive debate on which methods are more appropriate for estimating LUC and related emissions, and regionalized estimates are lacking for Brazil, which is a world leader in agricultural production (e.g. food, fibres and bioenergy). We developed a method for estimating scenarios of past 20-year LUC and derived CO2 emission rates associated with 64 crops, pasture and forestry in Brazil as a whole and in each of its 27 states, based on time-series statistics and in accordance with the most used carbon-footprinting standards. The scenarios adopted provide a range between minimum and maximum rates of CO2 emissions from LUC according to different possibilities of land-use transitions, which can have large impacts on the results. Specificities of Brazil, like multiple cropping and highly heterogeneous carbon stocks, are also addressed. The highest CO2 emission rates are observed in the Amazon biome states, and the crops with the highest rates are those that have undergone expansion in this region. Some states and crops showing large agricultural areas have low associated emissions, especially in southern and eastern Brazil. Native carbon stocks and the time of agricultural expansion are the most decisive factors in the patterns of emissions. Some implications for LUC estimation methods and standards and for agri-environmental policies are discussed. © 2017 John Wiley & Sons Ltd.
Continuous measurements of flow rate in a shallow gravel-bed river by a new acoustic system
NASA Astrophysics Data System (ADS)
Kawanisi, K.; Razaz, M.; Ishikawa, K.; Yano, J.; Soltaniasl, M.
2012-05-01
The continuous measurement of river discharge over long periods of time is crucial in water resource studies. However, accurate estimation of river discharge is a difficult and labor-intensive procedure; thus, a robust and efficient method of measurement is required. Continuous measurements of flow rate have been carried out in a wide, shallow gravel-bed river (water depth ≈ 0.6 m under low-flow conditions, width ≈ 115 m) using a Fluvial Acoustic Tomography System (FATS) that has 25 kHz broadband transducers with horizontally omnidirectional and vertically hemispherical beam patterns. Reciprocal sound transmissions were performed between two acoustic stations located diagonally on opposite sides of the river. The horizontal distance between the transducers was 301.96 m. FATS enabled the measurement of the depth- and range-averaged sound speed and flow velocity along the ray path. In contrast to traditional point/transect measurements of discharge, FATS covers the entire cross section of the river in a single measurement lasting a fraction of a second. The flow rates measured by FATS were compared to those estimated by moving-boat Acoustic Doppler Current Profiler (ADCP) and rating curve (RC) methods. FATS estimates were in good agreement with ADCP estimates over a range of 20 to 65 m3/s. The RMS residual between the two measurements was 2.41 m3/s. On the other hand, the flow rate from the RC method agreed fairly well with FATS estimates for discharges greater than about 40 m3/s. This inconsistency arises from biased RC estimates at low flows. Thus, the flow rates derived from FATS can be considered reliable.
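A minimal sketch of the reciprocal-transmission principle behind such systems: with sound speed c and along-path velocity component u, the downstream and upstream travel times satisfy t_down = L/(c+u) and t_up = L/(c-u), so u = (L/2)(1/t_down - 1/t_up). The travel times, path angle, and cross-sectional area below are hypothetical.

```python
import numpy as np

def path_velocity(L_m, t_down_s, t_up_s):
    """Range-averaged velocity component along the acoustic path from
    reciprocal travel times: u = (L/2)(1/t_down - 1/t_up)."""
    return 0.5 * L_m * (1.0 / t_down_s - 1.0 / t_up_s)

def discharge(u_path, theta_deg, area_m2):
    """Convert the along-path velocity to the streamwise mean and multiply
    by cross-sectional area; theta is the angle between the acoustic path
    and the flow direction (values here are illustrative)."""
    return area_m2 * u_path / np.cos(np.radians(theta_deg))

u = path_velocity(301.96, 0.20380, 0.20395)   # hypothetical travel times
print(f"u_path = {u:.3f} m/s, Q = {discharge(u, 45.0, 70.0):.1f} m3/s")
```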
Barfod, I H; Barfod, N M
1980-01-01
A method for the evaluation of cell-production rates is described which combines flow cytometry (FCM) and the stathmokinetic method. By means of FCM it is possible to estimate the distribution of cells with G1, S and (G2 + M) DNA content in a population. As this method gives the relative (G2 + M) DNA content of cells within the cell cycle, it may be possible to evaluate cell-production rates by this technique. In the present study it was found that administration of a metaphase-arresting (stathmokinetic) agent, vincristine sulphate (VS), to asynchronous cell populations of three different murine tumour cell lines in vitro increased the peak representing cells with (G2 + M) DNA content as the number of mitotic (M) cells increased during the period of treatment. The accumulation of mitotic cells was determined by cell counts on smears under the microscope and compared with increase in the (G2 + M) DNA peak measured by FCM as a function of time after the administration of VS. Good agreement was obtained between the cell-production rates as estimated by FCM and by mitotic counts in all three cell lines investigated.
FRAGS: estimation of coding sequence substitution rates from fragmentary data
Swart, Estienne C; Hide, Winston A; Seoighe, Cathal
2004-01-01
Background Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper-bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data is available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics our system enables the user to manage and query alignment and substitution data. PMID:15005802
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial-rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial-rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, we propose predicting the shelf life of a model pharmaceutical preparation by utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition. This method was compared to traditional shelf-life prediction approaches in terms of the time required to predict shelf life and the associated error in shelf-life estimation. Results demonstrated that the proposed LC/MS method using initial-rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimate compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
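A minimal sketch of the initial-rates idea, assuming zero-order degradant formation over the early observation window; the time points, degradant levels, and specification limit are illustrative, and the paper's confidence-interval treatment is not reproduced.

```python
import numpy as np

def shelf_life_from_initial_rate(t_days, degradant_pct, spec_limit_pct):
    """Initial-rates shelf-life estimate: fit a zero-order line to early
    degradant formation measured directly (e.g. by LC/MS) at the storage
    condition, then extrapolate to the specification limit."""
    slope, intercept = np.polyfit(t_days, degradant_pct, 1)  # %/day, %
    return (spec_limit_pct - intercept) / slope

t = np.array([0.0, 7.0, 14.0, 28.0])
deg = np.array([0.001, 0.008, 0.016, 0.031])   # % degradant formed
print(f"predicted shelf life: {shelf_life_from_initial_rate(t, deg, 0.5):.0f} days")
```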
Evaluation of earthquake potential in China
NASA Astrophysics Data System (ADS)
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimates, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By taking the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
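As a sketch of the kind of magnitude distribution described (a Gutenberg-Richter law modified by a corner magnitude), the code below evaluates a tapered power law in seismic moment; the parameter values are illustrative and the exact taper used in this work may differ.

```python
import numpy as np

def moment(m_w):
    """Seismic moment (N*m) from moment magnitude, log10 M0 = 1.5*Mw + 9.05."""
    return 10.0 ** (1.5 * m_w + 9.05)

def rate_above(m_w, a_rate, m_min, b_value, m_corner):
    """Earthquake rate above magnitude m_w under a tapered
    Gutenberg-Richter law: a power law in moment with exponent
    beta = (2/3)*b, multiplied by an exponential taper beyond the
    corner magnitude. a_rate is the rate of events >= m_min."""
    beta = (2.0 / 3.0) * b_value
    M, M_min, M_c = moment(m_w), moment(m_min), moment(m_corner)
    return a_rate * (M_min / M) ** beta * np.exp((M_min - M) / M_c)

# Illustrative parameters: 10 events/yr >= M5.4, b = 0.8, corner M8.0.
for m in (5.4, 6.5, 7.5, 8.5):
    print(m, f"{rate_above(m, 10.0, 5.4, 0.8, 8.0):.5f} per year")
```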
NASA Astrophysics Data System (ADS)
Ryabova, A. V.; Stratonnikov, Aleksandr A.; Loshchenov, V. B.
2006-06-01
A fast and highly informative method is presented for estimating the photodynamic activity of photosensitisers. The method makes it possible to determine the rate of photodegradation in erythrocyte-containing biological media under nearly in vivo conditions, to estimate the degree of irreversible binding of oxygen dissolved in the medium during laser irradiation in the presence of photosensitisers, and to determine the nature of the degradation of photosensitisers exposed to light (photobleaching).
Comparing methodologies for the allocation of overhead and capital costs to hospital services.
Tan, Siok Swan; van Ineveld, Bastianus Martinus; Redekop, William Ken; Hakkaart-van Roijen, Leona
2009-06-01
Typically, little consideration is given to the allocation of indirect costs (overheads and capital) to hospital services, compared to the allocation of direct costs. Weighted service allocation is believed to provide the most accurate indirect cost estimation, but the method is time consuming. The aim of this study was to determine whether hourly rate, inpatient day, and marginal mark-up allocation are reliable alternatives to weighted service allocation. The cost approaches were compared independently for appendectomy, hip replacement, cataract, and stroke in representative general hospitals in The Netherlands for 2005. Hourly rate allocation and inpatient day allocation produce estimates that are not significantly different from weighted service allocation. Hourly rate allocation may be a strong alternative to weighted service allocation for hospital services with a relatively short inpatient stay. The use of inpatient day allocation would likely most closely reflect the indirect cost estimates obtained by the weighted service method.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. In particular, the maximum likelihood estimator is consistent as the sample size increases to infinity, and is therefore asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
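A minimal sketch of maximum likelihood fitting of a two-component Gaussian mixture via the EM algorithm, run on synthetic data standing in for the price/exchange-rate series; this illustrates the model class only and is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_two_gaussians(x, n_iter=200):
    """EM maximum likelihood fit of a two-component Gaussian mixture."""
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        d1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = d1 / (d1 + d2)
        # M-step: weighted updates of mixing weight, means and spreads.
        w = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
    return w, mu1, mu2, s1, s2

# Synthetic stand-in for a two-regime series (e.g. returns in two states).
x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(1.5, 0.8, 600)])
print(em_two_gaussians(x))
```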
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.
An expert system for estimating production rates and costs for hardwood group-selection harvests
Chris B. LeDoux; B. Gopalakrishnan; R. S. Pabba
2003-01-01
As forest managers shift their focus from stands to entire ecosystems alternative harvesting methods such as group selection are being used increasingly. Results of several field time and motion studies and simulation runs were incorporated into an expert system for estimating production rates and costs associated with harvests of group-selection units of various size...
Estimation of Glomerular Filtration Rate from Plasma Clearance of 51-Chromium Edetic Acid
Chantler, C.; Barratt, T. M.
1972-01-01
The glomerular filtration rate was estimated by a single-compartment analysis of the rate of fall of the plasma concentration of 51-chromium edetic acid after a single intravenous injection. This slope clearance consistently overestimated the simultaneously determined standard urinary clearance, but could be used to predict the latter with an accuracy of ±9% (95% confidence limits). The coefficient of variation of replicate estimates of the slope clearance in the same individual was 3.9%; thus two estimates of glomerular filtration rate by this technique which differ by 11% have a 95% probability of reflecting a genuine difference. The method requires an intravenous injection and blood samples at 2 and 4 hours; urine samples are not required. It is simple, safe, and precise, and is applicable to children. PMID:4625784
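The two-sample slope clearance can be written in a few lines; the dose and plasma counts below are illustrative, chosen only to give a physiologically plausible clearance.

```python
import numpy as np

def slope_clearance(dose_counts, c1, c2, t1_h=2.0, t2_h=4.0):
    """Single-compartment slope clearance from two plasma samples of
    51Cr-EDTA: the elimination constant from the log-linear fall, the
    apparent volume from the back-extrapolated intercept, and
    clearance = lambda * V. Dose and concentrations share counting units."""
    lam = np.log(c1 / c2) / (t2_h - t1_h)          # 1/h
    c0 = c1 * np.exp(lam * t1_h)                   # extrapolated to t = 0
    v = dose_counts / c0                           # apparent volume (mL)
    return lam * v / 60.0                          # mL/min

# Illustrative values: dose 1e6 counts, 30.0 and 12.7 counts/mL at 2 and 4 h.
print(f"slope clearance ~ {slope_clearance(1e6, 30.0, 12.7):.0f} mL/min")
```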
Influence of ECG sampling rate in fetal heart rate variability analysis.
De Jonckheere, J; Garabedian, C; Charlier, P; Champion, C; Servan-Schreiber, E; Storme, L; Debarge, V; Jeanne, M; Logier, R
2017-07-01
Fetal hypoxia results in fetal blood acidosis (pH < 7.10). In such a situation, the fetus develops several adaptation mechanisms regulated by the autonomic nervous system. Many studies have demonstrated significant changes in heart rate variability in hypoxic fetuses, so fetal heart rate variability analysis could be of valuable help for fetal hypoxia prediction. Commonly used fetal heart rate variability analysis methods have been shown to be sensitive to the ECG signal sampling rate. Indeed, a low sampling rate can induce variability in heart beat detection, which alters the heart rate variability estimation. In this paper, we introduce an original fetal heart rate variability analysis method. We hypothesize that this method will be less sensitive to ECG sampling frequency changes than common heart rate variability analysis methods. We then compared the results of this new heart rate variability analysis method at two different sampling frequencies (250 and 1000 Hz).
Tamura, Koichiro; Tao, Qiqing; Kumar, Sudhir
2018-01-01
RelTime estimates divergence times by relaxing the assumption of a strict molecular clock in a phylogeny. It shows excellent performance in estimating divergence times for both simulated and empirical molecular sequence data sets in which evolutionary rates varied extensively throughout the tree. RelTime is computationally efficient and scales well with increasing size of data sets. Until now, however, RelTime has not had a formal mathematical foundation. Here, we show that the basis of the RelTime approach is a relative rate framework (RRF) that combines comparisons of evolutionary rates in sister lineages with the principle of minimum rate change between evolutionary lineages and their respective descendants. We present analytical solutions for estimating relative lineage rates and divergence times under RRF. We also discuss the relationship of RRF with other approaches, including the Bayesian framework. We conclude that RelTime will be useful for phylogenies with branch lengths derived not only from molecular data, but also from morphological and biochemical traits. PMID:29893954
Non-invasive Fetal ECG Signal Quality Assessment for Multichannel Heart Rate Estimation.
Andreotti, Fernando; Graser, Felix; Malberg, Hagen; Zaunseder, Sebastian
2017-12-01
The noninvasive fetal ECG (NI-FECG) from abdominal recordings offers novel prospects for prenatal monitoring. However, NI-FECG signals are corrupted by various nonstationary noise sources, making the processing of abdominal recordings a challenging task. In this paper, we present an online approach that dynamically assesses the quality of the NI-FECG to improve fetal heart rate (FHR) estimation. Using a naive Bayes classifier, state-of-the-art and novel signal quality indices (SQIs), and an existing adaptive Kalman filter, FHR estimation was improved. For the purpose of training and validating the proposed methods, a large annotated private clinical dataset was used. The suggested classification scheme demonstrated accuracy, assessed with Krippendorff's alpha, in determining the overall quality of NI-FECG signals. The proposed Kalman filter outperformed alternative methods for FHR estimation in accuracy. The proposed algorithm was able to reliably reflect changes in signal quality and can be used to improve FHR estimation. NI-FECG signal quality estimation and multichannel information fusion are largely unexplored topics. Based on previous works, multichannel FHR estimation is a field that could strongly benefit from such methods. The developed SQI algorithms as well as the resulting classifier were made available under a GNU GPL open-source license and contributed to the FECGSYN toolbox.
Garfield, R; Leu, C S
2000-06-01
Many reports on Iraq suggest that a rise in rates of death and disease has occurred since the Gulf War of January/February 1991 and the economic sanctions that followed it. Four preliminary models, based on unadjusted projections, were developed. A logistic regression model was then developed on the basis of six social variables in Iraq and comparable information from countries in the State of the World's Children report. Missing data were estimated for this model by a multiple imputation procedure. The final model depends on three socio-medical indicators: adult literacy, nutritional stunting of children under 5 years, and access to piped water. The model successfully predicted the mortality rate both in 1990, under stable conditions, and in 1991, following the Gulf War. For 1996, after 5 years of sanctions and prior to receipt of humanitarian food via the oil-for-food programme, this model shows mortality among children under 5 to have reached an estimated 87 per 1000, a rate last experienced more than 30 years ago. Accurate and timely estimates of mortality levels in developing countries are costly and require considerable methodological expertise. A rapid estimation technique like the one developed here may be a useful tool for quick and efficient estimation of mortality rates among under-5-year-olds in countries where good mortality data are not routinely available. This is especially true for countries with complex humanitarian emergencies, where information on mortality changes can guide interventions and the social stability needed to use standard demographic methods does not exist.
Elimination Rates of Dioxin Congeners in Former Chlorophenol Workers from Midland, Michigan
Collins, James J.; Bodner, Kenneth M.; Wilken, Michael; Bodnar, Catherine M.
2012-01-01
Background: Exposure reconstructions and risk assessments for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and other dioxins rely on estimates of elimination rates. Limited data are available on elimination rates for congeners other than TCDD. Objectives: We estimated apparent elimination rates using a simple first-order one-compartment model for selected dioxin congeners based on repeated blood sampling in a previously studied population. Methods: Blood samples collected from 56 former chlorophenol workers in 2004–2005 and again in 2010 were analyzed for dioxin congeners. We calculated the apparent elimination half-life in each individual for each dioxin congener and examined factors potentially influencing elimination rates and the impact of estimated ongoing background exposures on rate estimates. Results: Mean concentrations of all dioxin congeners in the sampled participants declined between sampling times. Median apparent half-lives of elimination based on changes in estimated mass in the body were generally consistent with previous estimates and ranged from 6.8 years (1,2,3,7,8,9-hexachlorodibenzo-p-dioxin) to 11.6 years (pentachlorodibenzo-p-dioxin), with a composite half-life of 9.3 years for TCDD toxic equivalents. None of the factors examined, including age, smoking status, body mass index or change in body mass index, initial measured concentration, or chloracne diagnosis, was consistently associated with the estimated elimination rates in this population. Inclusion of plausible estimates of ongoing background exposures decreased apparent half-lives by approximately 10%. Available concentration-dependent toxicokinetic models for TCDD underpredicted observed elimination rates for concentrations < 100 ppt. Conclusions: The estimated elimination rates from this relatively large serial sampling study can inform occupational and environmental exposure and serum evaluations for dioxin compounds. PMID:23063871
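The apparent half-life calculation underlying such estimates is essentially a one-liner under the first-order, one-compartment assumption stated above; the serum levels and sampling interval below are illustrative.

```python
import numpy as np

def apparent_half_life(c1_ppt, c2_ppt, years_between):
    """First-order one-compartment apparent elimination half-life from two
    serum measurements: t_1/2 = ln(2)*dt / ln(C1/C2). Ignores ongoing
    background intake, which the authors found shortens apparent
    half-lives by roughly 10% when accounted for."""
    return np.log(2.0) * years_between / np.log(c1_ppt / c2_ppt)

# Illustrative lipid-adjusted TCDD levels sampled ~5.5 years apart.
print(f"t1/2 ~ {apparent_half_life(120.0, 80.0, 5.5):.1f} years")
```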
Verginelli, Iason; Pecoraro, Roberto; Baciocchi, Renato
2018-04-01
In this work, we introduce a screening method for evaluating natural attenuation rates in the subsurface at sites contaminated by petroleum hydrocarbons. The method is based on combining data obtained from standard source characterization with dynamic flux chamber measurements. The natural attenuation rates are calculated as the difference between the flux of contaminants estimated with a non-reactive diffusive model, starting from the concentrations of contaminants detected in the source (soil and/or groundwater), and the effective emission rate of the contaminants measured using dynamic flux chambers installed at ground level. The reliability of this approach was tested at a contaminated site characterized by the presence of BTEX in soil and groundwater. Namely, the BTEX emission rates from the subsurface were measured in 4 seasonal campaigns using dynamic flux chambers installed at 14 sampling points. The comparison of measured fluxes with those predicted using a non-reactive diffusive model, starting from the source concentrations, showed that, in line with other recent studies, the modelling approach can overestimate the expected outdoor concentration of petroleum hydrocarbons by up to 4 orders of magnitude. On the other hand, by coupling the measured data with the fluxes estimated with the diffusive non-reactive model, it was possible to perform a mass balance to evaluate the natural attenuation loss rates of petroleum hydrocarbons during migration from the source to ground level. Based on this comparison, the estimated BTEX loss rates at the test site were up to almost 0.5 kg/year/m2. These rates are in line with the values reported in the recent literature for natural source zone depletion. In short, the method presented in this work can represent an easy-to-use and cost-effective option that can provide a further line of evidence of natural attenuation rates expected at contaminated sites. Copyright © 2017 Elsevier B.V. All rights reserved.
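The mass balance at the core of the method reduces to a subtraction once both fluxes are expressed in the same units; a trivial sketch with illustrative numbers:

```python
def natural_attenuation_rate(modeled_flux, measured_flux):
    """Mass balance described above: source-zone loss rate = flux predicted
    by a non-reactive diffusive model from source concentrations minus the
    effective emission rate measured with the flux chamber. Units must
    match (e.g. kg/year/m2)."""
    return modeled_flux - measured_flux

# E.g. the model predicts 0.50 kg/yr/m2 of BTEX leaving the source but the
# chamber measures only 0.02 kg/yr/m2 escaping at ground level:
print(f"loss rate = {natural_attenuation_rate(0.50, 0.02):.2f} kg/yr/m2")
```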
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate affects the interval estimates of hazard functions significantly only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
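For concreteness, the two hazard functions named above take a simple form under the Poisson occurrence model; the activity rate and magnitude exceedance probability below are illustrative.

```python
import numpy as np

def exceedance_probability(activity_rate, p_magnitude_ge_m, t_years):
    """Poisson occurrence: P(at least one event >= m within t) =
    1 - exp(-lambda * P(M >= m) * t)."""
    return 1.0 - np.exp(-activity_rate * p_magnitude_ge_m * t_years)

def mean_return_period(activity_rate, p_magnitude_ge_m):
    """Mean return period of events >= m: 1 / (lambda * P(M >= m))."""
    return 1.0 / (activity_rate * p_magnitude_ge_m)

# E.g. lambda = 2 events/yr, P(M >= m) = 0.05 from a fitted
# Gutenberg-Richter (or kernel) magnitude distribution, t = 50 yr:
print(exceedance_probability(2.0, 0.05, 50.0))   # ~0.993
print(mean_return_period(2.0, 0.05))             # 10 years
```

The interval estimation discussed in the paper then propagates the uncertainty of lambda and of P(M >= m) through these two functions.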
Rate of convergence of k-step Newton estimators to efficient likelihood estimators
Steve Verrill
2007-01-01
We make use of Cramér conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
Begy, R; Cosma, C; Timar, A
2009-08-01
This work presents a first estimation of the sedimentation rate for the Red Lake (Romania). The sediment accumulation rates were determined by two well-known methods for recent sediment dating: the ²¹⁰Pb and ¹³⁷Cs methods. Both techniques rely on the gamma emissions of the above-mentioned radionuclides. The ²¹⁰Pb and ¹³⁷Cs concentrations in the sediment were measured using a gamma spectrometer with an HPGe detector of the Gamma-X type. Activities ranging from 41±7 to 135±34 Bq/kg were found for ²¹⁰Pb and from 3±0.5 to 1054±150 Bq/kg for ¹³⁷Cs. The sediment profile indicates an acceleration of the sedimentation rate over the last 18 years. Thus, the sedimentation process for the Red Lake can be divided into two periods: the last 18 years and the period before that. Using the Constant Rate of ²¹⁰Pb Supply method, values between 0.18±0.04 and 1.85±0.5 g/cm² per year (0.32±0.08 and 2.83±0.7 cm/year) were obtained. Considering both periods, an average sedimentation rate of 0.87±0.17 g/cm² per year (1.17 cm/year) was calculated. Considering an average depth of 5.41 m for the lake and the sedimentation rate estimated for the last 18 years, it can be estimated that the lake will disappear in 195 years.
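As a quick plausibility check on the closing figure, the infill time follows from dividing the average depth by the recent linear sedimentation rate (values copied from the abstract; the assumption that the recent rate stays constant is ours):

```python
# Rough check of the infill estimate: average lake depth divided by the
# recent sedimentation rate (values taken from the abstract above).
depth_cm = 5.41 * 100             # average lake depth, cm
rate_cm_per_yr = 2.83             # sedimentation rate of the last 18 years
print(depth_cm / rate_cm_per_yr)  # ~191 years, consistent with the ~195 reported
```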
NASA Astrophysics Data System (ADS)
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys based on a rotating panel design have been conducted over time in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost similar. Both models produced almost similar estimates and performed better than direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this heterogeneity was reduced over time.
Hu, Jia; Moore, David J P; Riveros-Iregui, Diego A; Burns, Sean P; Monson, Russell K
2010-03-01
Understanding controls over plant-atmosphere CO₂ exchange is important for quantifying carbon budgets across a range of spatial and temporal scales. In this study, we used a simple approach to estimate whole-tree CO₂ assimilation rate (A_Tree) in a subalpine forest ecosystem. We analysed the carbon isotope ratio (δ¹³C) of extracted needle sugars and combined it with the daytime leaf-to-air vapor pressure deficit to estimate tree water-use efficiency (WUE). The estimated WUE was then combined with observations of tree transpiration rate (E), obtained using sap flow techniques, to estimate A_Tree. Estimates of A_Tree for the three dominant tree species in the forest were combined with species distribution and tree size to estimate gross primary productivity (GPP) using an ecosystem process model. A sensitivity analysis showed that estimates of A_Tree were more sensitive to dynamics in E than in δ¹³C. At the ecosystem scale, the abundance of lodgepole pine trees influenced seasonal dynamics in GPP considerably more than Engelmann spruce and subalpine fir because of the greater sensitivity of its E to seasonal climate variation. The results provide the framework for a nondestructive method for estimating whole-tree carbon assimilation rate and ecosystem GPP over daily-to-weekly time scales.
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity can be problematic. This study investigates, by Monte Carlo simulation, the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher than in the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results than medium ICC conditions.
Accounting for rate variation among lineages in comparative demographic analyses.
Hope, Andrew G; Ho, Simon Y W; Malaney, Jason L; Cook, Joseph A; Talbot, Sandra L
2014-09-01
Genetic analyses of contemporary populations can be used to estimate the demographic histories of species within an ecological community. Comparison of these demographic histories can shed light on community responses to past climatic events. However, species experience different rates of molecular evolution, and this presents a major obstacle to comparative demographic analyses. We address this problem by using a Bayesian relaxed-clock method to estimate the relative evolutionary rates of 22 small mammal taxa distributed across northwestern North America. We found that estimates of the relative molecular substitution rate for each taxon were consistent across the range of sampling schemes that we compared. Using three different reference rates, we rescaled the relative rates so that they could be used to estimate absolute evolutionary timescales. Accounting for rate variation among taxa led to temporal shifts in our skyline-plot estimates of demographic history, highlighting both uniform and idiosyncratic evolutionary responses to directional climate trends for distinct ecological subsets of the small mammal community. Our approach can be used in evolutionary analyses of populations from multiple species, including comparative demographic studies. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
Refinements of nonuniform estimates of the rate of convergence in the CLT to a stable law
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloznyalis, M.
1994-10-25
In this paper we construct new nonuniform estimates for the rate of convergence to the strictly stable distribution with exponent α ∈ [0, 2] in a finite-dimensional CLT. This paper is a continuation of [1, 7]. The nonuniform estimates obtained here in terms of truncated pseudomoments (see Theorems 1 and 2 below) have in certain cases a better order of decrease than the corresponding estimates of [1, 7], where pseudomoments were used. In the proofs of Theorems 1 and 2 we mainly use the methods of [1, 7, 8].
Improving Video Based Heart Rate Monitoring.
Lin, Jian; Rozado, David; Duenser, Andreas
2015-01-01
Non-contact measurement of the cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method for video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from the mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented and several modifications were explored in order to determine which could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates of the different methods on the extracted videos against a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving the accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring into novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
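To make the pipeline concrete, the following is a minimal sketch of ICA-based HR estimation in the spirit of the approach described, not the authors' implementation. It assumes a hypothetical `rgb` array of per-frame mean face-region channel values and picks the most periodic independent component within a plausible pulse band:

```python
# Minimal sketch of video-based HR estimation via ICA. `rgb` is assumed to be
# an array of shape (n_frames, 3) holding the mean R, G, B values of a face
# region per frame, sampled at `fs` Hz. Names here are illustrative.
import numpy as np
from sklearn.decomposition import FastICA

def estimate_hr(rgb, fs):
    x = (rgb - rgb.mean(axis=0)) / rgb.std(axis=0)   # normalize each channel
    sources = FastICA(n_components=3, random_state=0).fit_transform(x)
    freqs = np.fft.rfftfreq(len(sources), d=1.0 / fs)
    band = (freqs >= 0.75) & (freqs <= 4.0)          # 45-240 bpm plausibility band
    best_bpm, best_power = 0.0, -np.inf
    for s in sources.T:                              # pick the most periodic source
        power = np.abs(np.fft.rfft(s)) ** 2
        peak = np.argmax(power[band])
        if power[band][peak] > best_power:
            best_power = power[band][peak]
            best_bpm = 60.0 * freqs[band][peak]
    return best_bpm
```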
Inferring HIV Escape Rates from Multi-Locus Genotype Data
Kessinger, Taylor A.; Perelson, Alan S.; Neher, Richard A.
2013-09-03
Cytotoxic T-lymphocytes (CTLs) recognize viral protein fragments displayed by major histocompatibility complex molecules on the surface of virally infected cells and generate an anti-viral response that can kill the infected cells. Virus variants whose protein fragments are not efficiently presented on infected cells or whose fragments are presented but not recognized by CTLs therefore have a competitive advantage and spread rapidly through the population. We present a method that allows a more robust estimation of these escape rates from serially sampled sequence data. The proposed method accounts for competition between multiple escapes by explicitly modeling the accumulation of escape mutations and the stochastic effects of rare multiple mutants. Applying our method to serially sampled HIV sequence data, we estimate rates of HIV escape that are substantially larger than those previously reported. The method can be extended to complex escapes that require compensatory mutations. We expect our method to be applicable in other contexts such as cancer evolution where time series data is also available.
Eash, D.A.
1996-01-01
Flood-plain and channel-aggradation rates were estimated at 10 bridge sites on the Iowa River upstream of Coralville Lake and at two bridge sites in the central part of the Skunk River Basin. Four measurement methods were used to quantify aggradation rates: (1) a dendrogeomorphic method that used tree-age data and sediment-deposition depths, (2) a bridge-opening cross-section method that compared historic and recent cross sections of bridge openings, (3) a stage-discharge rating-curve method that compared historic and recent stages for the 5-year flood discharge and the average discharge, and (4) nine sediment pads that were installed on the Iowa River flood plain at three bridge sites in the vicinity of Marshalltown. The sediment pads were installed prior to overbank flooding in 1993. Sediments deposited on the pads as a result of the 1993 flood ranged in depth from 0.004 to 2.95 feet. Measurement periods used to estimate average aggradation rates ranged from 1 to 98 years and varied among methods and sites. The highest aggradation rates calculated for the Iowa River Basin using the dendrogeomorphic and rating-curve measurement methods were for the State Highway 14 crossing at Marshalltown, where these highest rates were 0.045 and 0.124 feet per year, respectively. The highest aggradation rates calculated for the Skunk River Basin were for the U.S. Highway 63 crossing of the South Skunk River near Oskaloosa, where these highest rates were 0.051 and 0.298 feet per year, respectively.
NASA Astrophysics Data System (ADS)
Béranger, Sandra C.; Sleep, Brent E.; Lollar, Barbara Sherwood; Monteagudo, Fernando Perez
2005-01-01
An analytical, one-dimensional, multi-species, reactive transport model for simulating the concentrations and isotopic signatures of tetrachloroethylene (PCE) and its daughter products was developed. The simulation model was coupled to a genetic algorithm (GA) combined with a gradient-based (GB) method to estimate the first order decay coefficients and enrichment factors. In testing with synthetic data, the hybrid GA-GB method reduced the computational requirements for parameter estimation by a factor as great as 300. The isotopic signature profiles were observed to be more sensitive than the concentration profiles to estimates of both the first order decay constants and enrichment factors. Including isotopic data for parameter estimation significantly increased the GA convergence rate and slightly improved the accuracy of estimation of first order decay constants.
Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J
2014-01-01
Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important implications in understanding and predicting the effects of temperature on the dynamics of populations. PMID:25558365
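As an illustration of the kind of simulation described (artificial time series from a stochastic logistic model whose growth rate is Arrhenius-scaled), here is a sketch with invented parameter values; the paper's model and noise structure may differ:

```python
# Sketch of a discrete-time stochastic logistic model whose growth rate
# follows the Arrhenius equation. Parameter names and values are illustrative
# assumptions, not the paper's (the study also scales carrying capacity).
import numpy as np

K_BOLTZ = 8.617e-5                      # Boltzmann constant, eV/K

def arrhenius(rate_ref, E, T, T_ref=293.15):
    """Scale a reference rate to temperature T (K) via activation energy E (eV)."""
    return rate_ref * np.exp(E / K_BOLTZ * (1.0 / T_ref - 1.0 / T))

def simulate(T, r_ref=0.5, E_r=0.65, K=1000.0, n0=10, days=30, rng=None):
    rng = rng or np.random.default_rng(0)
    r = arrhenius(r_ref, E_r, T)        # temperature-dependent growth rate
    n = [float(n0)]
    for _ in range(days):
        growth = r * n[-1] * (1.0 - n[-1] / K)
        n.append(float(rng.poisson(max(n[-1] + growth, 0.0))))  # demographic noise
    return np.array(n)

print(simulate(T=298.15)[-5:])          # tail of one simulated abundance series
```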
Hillslope soil erosion estimated from aerosol concentrations, North Halawa Valley, Oahu, Hawaii
Hill, B.R.; Fuller, C.C.; DeCarlo, E.H.
1997-01-01
Concentrations of aerosolic quartz and ¹³⁷Cs were used to estimate rates of hillslope soil erosion during 1990-91 in the North Halawa Valley on the island of Oahu, Hawaii. Fluvial transport of quartz was estimated to be 6.1 Mg in 1990 and 14.9 Mg in 1991. Fluvial transport of ¹³⁷Cs from North Halawa Valley was estimated to be 1.29 × 10⁹ pCi in 1991. These results were used with the quartz contents, ¹³⁷Cs activities, and bulk densities of hillslope soils to compute basinwide hillslope soil erosion rates ranging from 0.1 to 0.3 mm yr⁻¹. These rates are within the range of previous estimates of denudation computed for drainage basins on Oahu. The aerosol-concentration approach, therefore, is a useful method for assessing basinwide soil erosion.
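The conversion from a fluvial tracer flux to a basinwide lowering rate is a simple mass balance. The sketch below mirrors that arithmetic; the quartz flux is the 1991 figure from the abstract, but the quartz fraction, bulk density, and basin area are invented placeholders, so the output only illustrates the calculation:

```python
# Illustrative back-calculation of a basinwide lowering rate from a tracer
# budget. All values except the quartz flux are made-up placeholders.
quartz_flux_kg_yr = 14.9e3        # fluvial quartz transport, kg/yr (1991 value)
soil_quartz_frac = 0.02           # assumed quartz mass fraction of hillslope soil
bulk_density = 1200.0             # assumed soil bulk density, kg/m^3
basin_area_m2 = 2.0e6             # assumed contributing basin area, m^2

soil_mass_eroded = quartz_flux_kg_yr / soil_quartz_frac          # kg/yr of soil
lowering_m_yr = soil_mass_eroded / (bulk_density * basin_area_m2)
print(f"{lowering_m_yr * 1000:.2f} mm/yr")                       # ~0.31 mm/yr
```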
A smoothed residual based goodness-of-fit statistic for nest-survival models
Rodney X. Sturdivant; Jay J. Rotella; Robin E. Russell
2008-01-01
Estimating nest success and identifying important factors related to nest-survival rates is an essential goal for many wildlife researchers interested in understanding avian population dynamics. Advances in statistical methods have led to a number of estimation methods and approaches to modeling this problem. Recently developed models allow researchers to include a...
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D’Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. 
S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. 
M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O’Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O’Reilly, B.; O’Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. 
S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wesels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2016-12-01
This article provides supplemental information for a Letter reporting the rate of binary black hole (BBH) coalescences inferred from 16 days of coincident Advanced LIGO observations surrounding the transient gravitational-wave (GW) signal GW150914. In that work we reported various rate estimates whose 90% confidence intervals fell in the range 2-600 Gpc⁻³ yr⁻¹. Here we give details on our method and computations, including information about our search pipelines, a derivation of our likelihood function for the analysis, a description of the astrophysical search trigger distribution expected from merging BBHs, details on our computational methods, a description of the effects and our model for calibration uncertainty, and an analytic method for estimating our detector sensitivity, which is calibrated to our measurements.
Bender, R W; Cook, D E; Combs, D K
2016-07-01
Ruminal digestion of neutral detergent fiber (NDF) is affected in part by the proportion of NDF that is indigestible (iNDF) and by the rate at which the potentially digestible NDF (pdNDF) is digested. Indigestible NDF in forages is commonly determined as the NDF residue remaining after long-term in situ or in vitro incubations. The rate of pdNDF digestion can be determined by measuring the degradation of NDF in ruminal in vitro or in situ incubations at multiple time points and fitting the change in residual pdNDF over time with log-transformed linear first-order or nonlinear mathematical treatments. The estimate of indigestible fiber is important because it sets the pool size of potentially digestible fiber, which in turn affects the estimate of the proportion of potentially digestible fiber remaining in the time-series analysis. Our objective was to compare estimates of iNDF based on in vitro (IV) and in situ (IS) measurements at 2 fermentation end points (120 and 288 h). Further objectives were to compare the subsequent rate, lag, and estimated total-tract NDF digestibility (TTNDFD) when iNDF from each method was used with a 7-time-point in vitro incubation of NDF to model fiber digestion. Thirteen corn silage samples were dried and ground through a 1-mm screen in a Wiley mill. A 2×2 factorial trial was conducted to determine the effect of incubation time and method of iNDF analysis on iNDF concentration; the 2 factors were method of iNDF analysis (IS vs. IV) and incubation time (120 vs. 288 h). Four sample replicates were used, and approximately 0.5 g/sample was weighed into each Ankom F 0285 bag (Ankom Technology, Macedon, NY; pore size = 25 µm) for all techniques. The IV-120 technique gave a higher estimate of iNDF (37.8% of NDF) than the IS-120 (32.1% of NDF), IV-288 (31.2% of NDF), or IS-288 technique (25.7% of NDF). Each of the estimates of iNDF was then used to calculate the rate of degradation of pdNDF from a 7-time-point in vitro incubation. When the IV-120 NDF residue was used, the subsequent rates of pdNDF digestion were fastest (2.8% h⁻¹) but the estimate of lag was longest (10.3 h), compared with when iNDF was based on the IS-120 or IV-288 NDF residues (rates of 2.3% h⁻¹ and 2.4% h⁻¹; lag times of 9.7 and 9.8 h, respectively). The rate of pdNDF degradation was slowest (2.1% h⁻¹) when the IS-288 NDF residue was used as the estimate of iNDF. The estimate of lag based on IS-288 (9.4 h) was similar to the lag estimates calculated when IS-120 or IV-288 was used as the estimate of iNDF. The TTNDFD estimates did not differ between treatments (35.5%), however, because differences in the estimated pools of iNDF resulted in subsequent changes in rates and lag times of fiber digestion that tended to cancel out. Estimates of fiber digestion kinetic parameters and TTNDFD were similar when fit with either the linear or the nonlinear fiber degradation model. All techniques also yielded estimates of iNDF that were higher than predicted iNDF based on the commonly used ratio of 2.4 × lignin. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
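For the kinetics described, here is a minimal curve-fitting sketch using a discrete-lag first-order form, one common parameterization and not necessarily the exact treatment used in the study; the incubation data below are synthetic:

```python
# Fit residual NDF(t) = iNDF + pdNDF0 * exp(-kd * (t - lag)) after a discrete
# lag. `times_h` / `ndf_resid` are synthetic values consistent with the model.
import numpy as np
from scipy.optimize import curve_fit

times_h = np.array([0, 6, 12, 24, 48, 96, 120], dtype=float)
ndf_resid = np.array([100.0, 95.8, 84.8, 68.7, 51.1, 40.3, 38.9])  # % of NDF
indf = 37.8                        # iNDF estimate, e.g. the IV-120 residue

def first_order(t, pd0, kd, lag):
    return indf + pd0 * np.exp(-kd * np.clip(t - lag, 0.0, None))

(pd0, kd, lag), _ = curve_fit(first_order, times_h, ndf_resid,
                              p0=(100.0 - indf, 0.03, 5.0))
print(f"kd = {kd*100:.1f} %/h, lag = {lag:.1f} h")
```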
A New Method for Assessing How Sensitivity and Specificity of Linkage Studies Affects Estimation
Moore, Cecilia L.; Amin, Janaki; Gidding, Heather F.; Law, Matthew G.
2014-01-01
Background: While the importance of record linkage is widely recognised, few studies have attempted to quantify how linkage errors may have impacted on their own findings and outcomes. Even where authors of linkage studies have attempted to estimate sensitivity and specificity based on subjects with known status, the effects of false negatives and positives on event rates and estimates of effect are not often described. Methods: We present a quantification of the effect of the sensitivity and specificity of the linkage process on event rates and incidence, as well as the resultant effect on relative risks. Formulae to estimate the true number of events and the estimated relative risk adjusted for a given linkage sensitivity and specificity are then derived and applied to data from a prisoner mortality study. The implications of false positive and false negative matches are also discussed. Discussion: Comparisons of the effect of sensitivity and specificity on incidence and relative risks indicate that it is more important for linkages to be highly specific than sensitive, particularly if true incidence rates are low. We would recommend that, where possible, some quantitative estimates of the sensitivity and specificity of the linkage process be performed, allowing the effect of these quantities on observed results to be assessed. PMID:25068293
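The standard correction implied by this kind of quantification can be written in a few lines. This is the generic misclassification formula under assumed constant sensitivity and specificity, not necessarily the authors' exact derivation:

```python
# Recover the true number of linked events from the observed count, given
# linkage sensitivity and specificity. Illustrative values only.
def true_events(observed, n_records, sensitivity, specificity):
    """Solve observed = Se*E + (1-Sp)*(N-E) for the true event count E."""
    fp_rate = 1.0 - specificity
    return (observed - fp_rate * n_records) / (sensitivity - fp_rate)

# e.g. 120 observed matches among 10,000 records, Se = 0.95, Sp = 0.999
print(true_events(120, 10_000, 0.95, 0.999))   # ~115.9 true events
```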
NASA Astrophysics Data System (ADS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato
2017-12-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses for 49 explosions at Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, this technique could be applied broadly to enable near real-time calculation of eruption mass flow rates and total masses, critical input parameters for volcanic eruption modeling and monitoring that are not currently available.
Modern methods for the quality management of high-rate melt solidification
NASA Astrophysics Data System (ADS)
Vasiliev, V. A.; Odinokov, S. A.; Serov, M. M.
2016-12-01
The quality management of high-rate melt solidification requires a combined solution, obtained by methods and approaches adapted to the situation at hand. A technological audit is recommended to estimate the capabilities of the process. Statistical methods, with an appropriate choice of key parameters, are proposed. Numerical methods, which can be used to perform simulation under multifactor technological conditions and to increase the quality of decisions, are of particular importance.
The 'Own Children' fertility estimation procedure: a reappraisal.
Avery, Christopher; St Clair, Travis; Levin, Michael; Hill, Kenneth
2013-07-01
The Full Birth History has become the dominant source of estimates of fertility levels and trends for countries lacking complete birth registration. An alternative, the 'Own Children' method, derives fertility estimates from household age distributions, but is now rarely used, partly because of concerns about its accuracy. We compared the estimates from these two procedures by applying them to 56 recent Demographic and Health Surveys. On average, 'Own Children' estimates of recent total fertility rates are 3 per cent lower than birth-history estimates. Much of this difference stems from selection bias in the collection of birth histories: women with more children are more likely to be interviewed. We conclude that full birth histories overestimate total fertility, and that the 'Own Children' method gives estimates of total fertility that may better reflect overall national fertility. We recommend the routine application of the 'Own Children' method to census and household survey data to estimate fertility levels and trends.
Lukas, J M; Hawkins, D M; Kinsel, M L; Reneau, J K
2005-11-01
The objective of this study was to examine the relationship between monthly Dairy Herd Improvement (DHI) subclinical mastitis and new infection rate estimates and daily bulk tank somatic cell count (SCC) summarized by statistical process control tools. Dairy Herd Improvement Association test-day subclinical mastitis and new infection rate estimates, along with daily or every-other-day bulk tank SCC data, were collected for 12 mo of 2003 from 275 Upper Midwest dairy herds. Herds were divided into 5 herd production categories. A linear score [LNS = ln(BTSCC/100,000)/0.693147 + 3] was calculated for each individual bulk tank SCC. For both the raw SCC and the transformed data, the mean and sigma were calculated using the statistical quality control individual measurement and moving range chart procedure of the Statistical Analysis System. One hundred eighty-three of the 275 herds in the study data set were then randomly selected, and the raw (method 1) and transformed (method 2) bulk tank SCC mean and sigma were used to develop models for predicting subclinical mastitis and new infection rate estimates. Herd production category was also included in all models as 5 dummy variables. Models were validated by calculating estimates of subclinical mastitis and new infection rates for the remaining 92 herds and plotting them against the observed values of each dependent variable. Only herd production category and bulk tank SCC mean were significant and remained in the final models. High R2 values (0.83 and 0.81 for methods 1 and 2, respectively) indicated a strong correlation between the bulk tank SCC and the herd's subclinical mastitis prevalence. The standard errors of the estimate were 4.02 and 4.28% for methods 1 and 2, respectively, and decreased with increasing herd production. As a case study, Shewhart Individual Measurement Charts were plotted from the bulk tank SCC to identify shifts in mastitis incidence. Four of the 5 charts examined signaled a change in bulk tank SCC before the DHI test day identified the change in subclinical mastitis prevalence. It can be concluded that statistical process control tools applied to daily bulk tank SCC can be used to estimate subclinical mastitis prevalence in a herd and to monitor changes in its subclinical mastitis status. Single DHI test-day estimates of new infection rate were insufficient to accurately describe its dynamics.
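The linear score in brackets is simply a base-2 logarithm of the bulk tank SCC shifted by 3 (the constant 0.693147 is ln 2); for example:

```python
# The linear score used above: LNS = ln(BTSCC/100,000)/ln(2) + 3,
# i.e. a log2 rescaling of the bulk tank SCC (cells/mL).
import math

def linear_score(bulk_tank_scc):
    return math.log(bulk_tank_scc / 100_000) / 0.693147 + 3

print(linear_score(200_000))   # 4.0
print(linear_score(400_000))   # 5.0
```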
Bermingham, Jacqueline F; Chen, Yuen Y; McIntosh, Robert L; Wood, Andrew W
2014-04-01
Fluorescent intensity of the dye Rhodamine-B (Rho-B) decreases with increasing temperature. We show that, in fresh rat brain tissue samples in a custom-made radiofrequency (RF) tissue exposure device, the temperature rise due to RF radiation as measured via the absorbed dye correlates well with the temperature measured nearby by fiber optic probes. Estimates of the rate of initial temperature rise (using both the probe measurements and the dye method) accord well with estimates of the local specific energy absorption rate (SAR). We also modeled the temperature characteristics of the exposure device using combined electromagnetic and finite-difference thermal modeling. Although there are some differences in the rate of cooling following cessation of RF exposure, there is reasonable agreement between the modeling and both the probe measurement and dye estimation of temperature. The dye method also permits measurement of regional temperature rise (due to RF). There is no clear evidence of local differential RF absorption, but further refinement of the method may be needed to fully clarify this issue. © 2014 Wiley Periodicals, Inc.
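The comparison with local SAR rests on the standard calorimetric relation SAR ≈ c · dT/dt at exposure onset, before conduction losses matter. A toy computation with assumed values:

```python
# Standard dosimetry relation: at exposure onset, SAR ~ c * dT/dt, with c the
# specific heat capacity of tissue. Both values below are assumptions.
c_tissue = 3630.0            # J/(kg*K), an assumed value for brain tissue
dT_dt = 0.015                # K/s, initial temperature-rise slope (dye or probe)

sar = c_tissue * dT_dt       # W/kg
print(f"local SAR ~ {sar:.0f} W/kg")
```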
Infant mortality in the Marshall Islands.
Levy, S J; Booth, H
1988-12-01
Levy and Booth present previously unpublished infant mortality rates for the Marshall Islands. They use an indirect method to estimate infant mortality from the 1973 and 1980 censuses, then apply indirect and direct methods of estimation to data from the Marshall Islands Women's Health Survey of 1985. Comparing the results with estimates of infant mortality obtained from vital registration data enables them to estimate the extent of underregistration of infant deaths. The authors conclude that the 1973 census appears to be the most valid information source. Direct estimates from the Women's Health Survey data suggest that infant mortality has increased since 1970-1974, whereas the indirect estimates indicate a decreasing trend in infant mortality rates, converging with the direct estimates in more recent years. In view of increased efforts to improve maternal and child health in the mid-1970s, the decreasing trend is plausible. It is impossible to estimate infant mortality in the Marshall Islands during 1980-1984 accurately from the available data. Estimates based on registration data for 1975-1979 are at least 40% too low. The authors speculate that the estimate of 33 deaths per 1000 live births obtained from registration data for 1984 is 40-50% too low. In round figures, a value of 60 deaths per 1000 may be taken as the final estimate for 1980-1984.
Lodwick, Rebecca K; Nakagawa, Fumiyo; van Sighem, Ard; Sabin, Caroline A; Phillips, Andrew N
2015-01-01
It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms, which lead to presentation for care and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on the numbers of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop is estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with a CD4 count at diagnosis between 150 and 199 cells/mm³, then, since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person-years lived by people with undiagnosed HIV and a CD4 count of 150-199 cells/mm³ is 13/0.216 = 60 (95% confidence interval: 29-100), which is taken as an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place, and it is most suitable for estimating the number of undiagnosed people with a CD4 count <200 cells/mm³, owing to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations, as do all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 counts, and for informing levels of unmet need for ART.
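A sketch of the estimator, with the confidence interval produced by a simple Poisson resimulation; the abstract only says "a simple simulation method", so the resimulation below is a plausible rendering, using the numbers quoted:

```python
# Estimate person-years lived undiagnosed in a CD4 stratum as D / rate, with a
# CI from resimulating the Poisson count of symptomatic diagnoses.
import numpy as np

def undiagnosed_estimate(n_diagnoses, symptom_rate, n_sims=100_000, seed=0):
    point = n_diagnoses / symptom_rate
    rng = np.random.default_rng(seed)
    sims = rng.poisson(n_diagnoses, size=n_sims) / symptom_rate
    lo, hi = np.percentile(sims, [2.5, 97.5])
    return point, (lo, hi)

print(undiagnosed_estimate(13, 0.216))   # ~60 (roughly 29-100, as in the text)
```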
A Simple Visual Estimation of Food Consumption in Carnivores
Potgieter, Katherine R.; Davies-Mostert, Harriet T.
2012-01-01
Belly-size ratings or belly scores are frequently used in carnivore research as a method of rating whether and how much an animal has eaten. This method provides only a rough ordinal measure of fullness and does not quantify the amount of food an animal has consumed. Here we present a method for estimating the amount of meat consumed by individual African wild dogs Lycaon pictus. We fed 0.5 kg pieces of meat to wild dogs being temporarily held in enclosures and measured the corresponding change in belly size using lateral photographs taken perpendicular to the animal. The ratio of belly depth to body length was positively related to the mass of meat consumed and provided a useful estimate of consumption. Similar relationships could be calculated to determine the amounts consumed by other carnivores, thus providing a useful tool in the study of feeding behaviour. PMID:22567086
Molnár, Péter K; Klanjscek, Tin; Derocher, Andrew E; Obbard, Martyn E; Lewis, Mark A
2009-08-01
Many species experience large fluctuations in food availability and depend on energy from fat and protein stores for survival, reproduction and growth. Body condition and, more specifically, energy stores thus constitute key variables in the life history of many species. Several indices exist to quantify body condition but none can provide the amount of stored energy. To estimate energy stores in mammals, we propose a body composition model that differentiates between structure and storage of an animal. We develop and parameterize the model specifically for polar bears (Ursus maritimus Phipps) but all concepts are general and the model could be easily adapted to other mammals. The model provides predictive equations to estimate structural mass, storage mass and storage energy from an appropriately chosen measure of body length and total body mass. The model also provides a means to estimate basal metabolic rates from body length and consecutive measurements of total body mass. Model estimates of body composition, structural mass, storage mass and energy density of 970 polar bears from Hudson Bay were consistent with the life history and physiology of polar bears. Metabolic rate estimates of fasting adult males derived from the body composition model corresponded closely to theoretically expected and experimentally measured metabolic rates. Our method is simple, non-invasive and provides considerably more information on the energetic status of individuals than currently available methods.
A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.
Rodrigo, Marianito R
2016-01-01
The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
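A hedged sketch of the general approach: fit a double-exponential cooling curve to clock-stamped temperature readings and recover the TOD as a fitted time offset. For brevity this sketch fixes the at-death temperature, which the paper's method instead estimates; data and parameter values are synthetic:

```python
# Marshall-Hoare-style double-exponential cooling with T(0) = T_death; the TOD
# enters as an unknown offset on the reading clock. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

T_amb, T_death = 20.0, 37.2          # ambient and assumed at-death temperatures

def cooling(t_since_death, k1, k2):
    a = k2 / (k2 - k1)               # enforces T(0) = T_death and zero initial slope
    return T_amb + (T_death - T_amb) * (a * np.exp(-k1 * t_since_death)
                                        - (a - 1.0) * np.exp(-k2 * t_since_death))

def fit_tod(clock_h, temps, p0=(0.08, 0.4, 5.0)):
    """clock_h: reading times (h) on an arbitrary clock; returns (k1, k2, TOD)."""
    def model(t, k1, k2, tod):
        return cooling(t - tod, k1, k2)
    (k1, k2, tod), _ = curve_fit(model, clock_h, temps, p0=p0)
    return k1, k2, tod

# synthetic readings generated from the model with k1=0.1, k2=0.5, TOD=4.0 h
readings_t = np.array([10.0, 10.5, 11.0, 11.5, 12.0, 12.5])
readings_T = np.array([31.59, 31.06, 30.55, 30.06, 29.58, 29.13])
k1, k2, tod = fit_tod(readings_t, readings_T)
print(f"estimated TOD at clock time {tod:.2f} h")
```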
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurement, and of an unknown parameter, in an Nm model. We suppose that the yearly numbers of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers, which play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data from Niger.
Seepage through a hazardous-waste trench cover
Healy, R.W.
1989-01-01
Water movement through a waste-trench cover under natural conditions at a low-level radioactive waste disposal site in northwestern Illinois was studied from July 1982 to June 1984, using tensiometers, a moisture probe, and meteorological instruments. Four methods were used to estimate seepage: the Darcy, zero-flux-plane, surface-based water-budget, and groundwater-based water-budget methods. Annual seepage estimates ranged from 48 to 216 mm (5-23% of total precipitation), with most seepage occurring in spring. The Darcy method, although limited in accuracy by uncertainty in hydraulic conductivity, was capable of discretizing seepage in space and time and indicated that seepage varied by almost an order of magnitude across the width of the trench. Lowest seepage rates occurred near the center of the cover, where seepage was gradual. Highest rates occurred along the edge of the cover, where seepage was highly episodic, with 84% of the total there being traced to wetting fronts from 28 individual storms. Limitations of the zero-flux-plane method were severe enough for the method to be judged inappropriate for use in this study.
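Of the four methods, the Darcy method is the most direct to write down: vertical flux from the unsaturated hydraulic conductivity and the head gradient between two tensiometer depths. A sketch with placeholder values (in reality K varies strongly with moisture content):

```python
# Darcy-method seepage estimate: q = -K * dH/dz, here written with a gradient
# defined so that q is positive downward. All values are illustrative.
def darcy_flux(k_unsat_mm_day, head_upper_m, head_lower_m, dz_m):
    """Heads include matric and elevation components; dz is probe separation."""
    grad = (head_upper_m - head_lower_m) / dz_m
    return k_unsat_mm_day * grad

q = darcy_flux(k_unsat_mm_day=0.4, head_upper_m=-1.2, head_lower_m=-1.5, dz_m=0.3)
print(f"{q:.2f} mm/day -> {q * 365:.0f} mm/yr")   # 0.40 mm/day -> 146 mm/yr
```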
Dewji, S.; Bellamy, M.; Hertel, N.; ...
2015-03-25
In this study, dose rates that may result from exposure to patients who had been administered iodine-131 (131I) as part of medical therapy were estimated. These effective dose rate estimates were compared with the simplified assumptions of United States Nuclear Regulatory Commission Regulatory Guide 8.39, which considers neither body tissue attenuation nor time-dependent redistribution and excretion of the administered 131I. Methods: Dose rates were estimated for members of the public potentially exposed to external irradiation from patients recently treated with 131I. Tissue attenuation and iodine biokinetics in the patient were considered in a larger comprehensive effort to improve external dose rate estimates. The external dose rate estimates are based on Monte Carlo simulations using the Phantom with Movable Arms and Legs (PIMAL), previously developed by Oak Ridge National Laboratory and the United States Nuclear Regulatory Commission. PIMAL was employed to model the relative positions of the 131I patient and members of the public in three exposure scenarios: (1) traveling on a bus, in a total of six seated or standing permutations; (2) two nursing home cases, where a caregiver is seated 30 cm from the patient's bedside and a nursing home resident is seated 250 cm away from the patient in an adjacent bed; and (3) two hotel cases, where the patient and a guest are in adjacent rooms with beds on opposite sides of the common wall, with the patient and guest both in bed and either seated back-to-back or lying head to head. The biokinetic model predictions of the retention and distribution of 131I in the patient assumed a single voiding of urinary bladder contents occurring during the trip at 2, 4, or 8 h after 131I administration for the public transportation cases, continuous first-order voiding for the nursing home cases, and regular periodic voiding at 4, 8, or 12 h after administration for the hotel room cases. Organ-specific activities of 131I in the thyroid, bladder, and combined remaining tissues were calculated as a function of time after administration. Exposures to members of the public were considered for 131I patients with normal thyroid uptake (peak thyroid uptake of ~27% of administered 131I), differentiated thyroid cancer (DTC, 5% uptake), and hyperthyroidism (80% uptake). Results: The scenario with the patient seated behind the member of the public yielded the highest dose rate estimate of the seated public transportation exposure cases. The dose rate to the adjacent-room guest was highest for the exposure scenario in which the hotel guest and patient are seated, by a factor of ~4 for the normal and differentiated thyroid cancer uptake cases and by a factor of ~3 for the hyperthyroid case. Conclusions: For all modeled cases, the DTC case yielded the lowest external dose rates, whereas the hyperthyroid case yielded the highest dose rates. In estimating external dose to members of the public from patients undergoing 131I therapy, consideration must be given to (patient- and case-specific) administered 131I activities and duration of exposure for a more complete estimate. The method implemented here included a detailed calculation model, which provides a means to determine dose rate estimates for a range of scenarios. Finally, the method was demonstrated for variations of three scenarios, showing how dose rates are expected to vary with uptake, voiding pattern, and patient location.
Alonso, Jordi; Vilagut, Gemma; Chatterji, Somnath; Heeringa, Steven; Schoenbaum, Michael; Üstün, T. Bedirhan; Rojas-Farreras, Sonia; Angermeyer, Matthias; Bromet, Evelyn; Bruffaerts, Ronny; de Girolamo, Giovanni; Gureje, Oye; Haro, Josep Maria; Karam, Aimee N.; Kovess, Viviane; Levinson, Daphna; Liu, Zhaorui; Mora, Maria Elena Medina; Ormel, J.; Posada-Villa, Jose; Uda, Hidenori; Kessler, Ronald C.
2010-01-01
Background: The methodology commonly used to estimate disease burden, featuring ratings of severity of individual conditions, has been criticized for ignoring comorbidity. A methodology that addresses this problem is proposed and illustrated here with data from the WHO World Mental Health Surveys. Although the analysis is based on self-reports about one's own conditions in a community survey, the logic applies equally well to analysis of hypothetical vignettes describing comorbid condition profiles. Methods: Face-to-face interviews in 13 countries (six developing, nine developed; n = 31,067; response rate = 69.6%) assessed 10 classes of chronic physical conditions and 9 classes of mental conditions. A visual analog scale (VAS) was used to assess overall perceived health. Multiple regression analysis with interactions for comorbidity was used to estimate associations of conditions with VAS. Simulation was used to estimate condition-specific effects. Results: The best-fitting model included condition main effects and interactions of types by numbers of conditions. Neurological conditions, insomnia, and major depression were rated most severe. Adjustment for comorbidity reduced condition-specific estimates with substantial between-condition variation (.24-.70 ratios of condition-specific estimates with and without adjustment for comorbidity). The societal-level burden rankings were quite different from the individual-level rankings, with the highest societal-level rankings associated with conditions having high prevalence rather than high individual-level severity. Conclusions: Plausible estimates of disorder-specific effects on VAS can be obtained using methods that adjust for comorbidity. These adjustments substantially influence condition-specific ratings. PMID:20553636
Exchange-Mediated Contrast Agents for Spin-Lock Imaging
Cobb, Jared G.; Xie, Jingping; Li, Ke; Gochberg, Daniel F.; Gore, John C.
2011-01-01
Measurements of relaxation rates in the rotating frame with spin-locking (SL) techniques are sensitive to substances with exchanging protons with appropriate chemical shifts. We develop a novel approach to exchange rate selective imaging based on measured T1ρ dispersion with applied locking field strength, and demonstrate the method on samples containing the X-ray contrast agent Iohexol (IO) with and without cross-linked bovine serum albumin (BSA). T1ρ dispersion of water in the phantoms was measured with a Varian 9.4T magnet by an on-resonance SL pulse with fast spin-echo readout, and the results used to estimate exchange rates. The IO phantom alone gave a fitted exchange rate of ~1 kHz, BSA alone was ~11 kHz, and in combination gave rates in between. By using these estimated rates, we demonstrate how a novel SL imaging method may be used to enhance contrast due to the presence of a contrast agent whose protons have specific exchange rates. PMID:21954094
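A hedged illustration of the dispersion-fitting step: synthetic R1rho values generated from a common on-resonance fast-exchange model, R1rho(w1) = R_inf + phi*k/(k^2 + w1^2), are refit to recover an apparent exchange rate. The model form, parameter values, and data are assumptions for this sketch, not the paper's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def r1rho(w1, r_inf, phi, k):
    # Fast-exchange dispersion: exchange contribution vanishes as w1 grows.
    return r_inf + phi * k / (k**2 + w1**2)

w1 = 2 * np.pi * np.array([50., 100., 250., 500., 1000., 2000., 4000.])  # rad/s
true = (2.0, 1.0e5, 2 * np.pi * 1000.0)     # R_inf (1/s), phi (rad^2/s^2), k (rad/s)
rng = np.random.default_rng(0)
data = r1rho(w1, *true) + rng.normal(0, 0.05, w1.size)

popt, _ = curve_fit(r1rho, w1, data, p0=(1.0, 5e4, 5e3), maxfev=10000)
print(f"fitted exchange rate ~ {popt[2] / (2 * np.pi):.0f} Hz")
```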
Genomic clocks and evolutionary timescales
NASA Technical Reports Server (NTRS)
Blair Hedges, S.; Kumar, Sudhir
2003-01-01
For decades, molecular clocks have helped to illuminate the evolutionary timescale of life, but now genomic data pose a challenge for time estimation methods. It is unclear how to integrate data from many genes, each potentially evolving under a different model of substitution and at a different rate. Current methods can be grouped by the way the data are handled (genes considered separately or combined into a 'supergene') and the way gene-specific rate models are applied (global versus local clock). There are advantages and disadvantages to each of these approaches, and the optimal method has not yet emerged. Fortunately, time estimates inferred using many genes or proteins have greater precision and appear to be robust to different approaches.
NASA Astrophysics Data System (ADS)
Tian, Xin; Li, Hua; Jiang, Xiaoyu; Xie, Jingping; Gore, John C.; Xu, Junzhong
2017-02-01
Two diffusion-based approaches, the CG (constant gradient) and FEXI (filtered exchange imaging) methods, have been previously proposed for measuring the transcytolemmal water exchange rate constant kin, but their accuracy and feasibility have not been comprehensively evaluated and compared. In this work, both computer simulations and cell experiments in vitro were performed to evaluate these two methods. Simulations were done with different cell diameters (5, 10, 20 μm), a broad range of kin values (0.02-30 s-1) and different SNRs, and simulated kin values were directly compared with the ground truth values. Human leukemia K562 cells were cultured and treated with saponin to selectively change cell transmembrane permeability. The agreement between the kin values measured by the two methods was also evaluated. The results suggest that, without noise, the CG method provides reasonably accurate estimation of kin, especially when it is smaller than 10 s-1, which is in the typical physiological range of many biological tissues. Although the FEXI method overestimates kin even with corrections for the effects of extracellular water fraction, it provides reasonable estimates with practical SNRs and, more importantly, the fitted apparent exchange rate AXR shows an approximately linear dependence on the ground truth kin. In conclusion, either the CG or the FEXI method provides a sensitive means to characterize variations in the transcytolemmal water exchange rate constant kin, although accuracy and specificity are usually compromised. The non-imaging CG method provides more accurate estimation of kin but is limited to a large volume of interest. Although the accuracy of FEXI is compromised by the extracellular volume fraction, it is capable of spatially mapping kin in practice.
Modelling and assessment of accidental oil release from damaged subsea pipelines.
Li, Xinhong; Chen, Guoming; Zhu, Hongwei
2017-10-15
This paper develops a 3D, transient, mathematical model to estimate the oil release rate and simulate the oil dispersion behavior. The Euler-Euler method is used to estimate the subsea oil release rate, while the Eulerian-Lagrangian method is employed to track the migration trajectory of oil droplets. The model accounts for the quantitative effects of backpressure and hole size on the oil release rate, and the influence of oil release rate, oil density, current speed, water depth and leakage position on oil migration is also investigated. The results, e.g. the transient oil release rate, the rise time of oil and the dispersion distance, are determined by the above-mentioned model, and the oil release and dispersion behavior under different scenarios is revealed. The assessment results could provide useful guidance for the detection of the leakage position and the placement of oil containment booms. Copyright © 2017 Elsevier Ltd. All rights reserved.
Nunes, Rita G; Hajnal, Joseph V
2018-06-01
Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2-chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimates of the A-Cc parameters [including the maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimates of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
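A sketch of the recommended strategy in miniature: a coarse grid search seeds a simultaneous nonlinear fit, here restricted to Vcmax and Rd on the Rubisco-limited portion of a synthetic A-Cc curve. The kinetic constants and data are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import least_squares

Kc, Ko, O, gamma_star = 404.9, 278.4, 210.0, 42.75  # assumed kinetic constants

def a_rubisco(cc, vcmax, rd):
    # Rubisco-limited net assimilation (FvCB form)
    return vcmax * (cc - gamma_star) / (cc + Kc * (1 + O / Ko)) - rd

cc = np.array([60., 100., 150., 200., 250., 300.])   # chloroplastic CO2, umol/mol
a_obs = a_rubisco(cc, 80.0, 1.5) + np.random.default_rng(1).normal(0, 0.3, cc.size)

# Grid search for a starting point, then nonlinear refinement.
grid = [(v, r) for v in np.arange(20, 201, 20) for r in np.arange(0.5, 3.1, 0.5)]
v0, r0 = min(grid, key=lambda p: np.sum((a_obs - a_rubisco(cc, *p))**2))
fit = least_squares(lambda p: a_obs - a_rubisco(cc, *p), x0=[v0, r0])
print(f"Vcmax ~ {fit.x[0]:.1f}, Rd ~ {fit.x[1]:.2f}")
```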
QUESPOWR MRI: QUantification of Exchange as a function of Saturation Power On the Water Resonance
Randtke, Edward A.; Pagel, Mark D.; Cárdenas-Rodríguez, Julio
2018-01-01
QUantification of Exchange as a function of Saturation Power On the Water Resonance (QUESPOWR) MRI is a new method that can estimate chemical exchange rates. This method performs a series of OPARACHEE MRI acquisitions with a range of RF powers for the WALTZ16* pulse train, which is applied on the water resonance. A QUESPOWR plot can be generated from the power dependence of the % water signal, which is similar to a QUESP plot generated from CEST MRI acquisition methods with RF saturation applied off-resonance from water. A QUESPOWR plot can be quantitatively analyzed using linear fitting methods to provide estimates of average chemical exchange rates. Analyses of the shapes of QUESPOWR plots can also be used to estimate relative differences in average chemical exchange rates and concentrations of biomolecules. The performance of QUESPOWR MRI was assessed via simulations, an in vitro study with iopamidol, and an in vivo study with a mouse model of mammary carcinoma. The results showed that QUESPOWR MRI is especially sensitive to chemical exchange between water and biomolecules that have intermediate to fast chemical exchange rates and chemical shifts close to water, which are notoriously difficult to assess with other CEST MRI methods. In addition, in vivo QUESPOWR MRI detected acidic tumor tissues relative to pH-neutral normal tissues, and therefore may be a new paradigm for tumor detection with MRI. PMID:27404128
Statistics of rain-rate estimates for a single attenuating radar
NASA Technical Reports Server (NTRS)
Meneghini, R.
1976-01-01
The effects of fluctuations in return power and in the rain-rate/reflectivity relationship are included in the estimates, as well as errors introduced in the attempt to recover the unattenuated return power. In addition to the Hitschfeld-Bordan correction, two alternative techniques are considered. The performance of the radar is shown to be dependent on the method by which the attenuation correction is made.
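A hedged sketch of the Hitschfeld-Bordan correction referenced above: with a power law k = a*Z^b (k in dB/km), the corrected reflectivity is Z(r) = Zm(r)*[1 - 0.2*ln(10)*a*b*I(r)]^(-1/b), where I(r) is the range integral of Zm^b. The coefficients and profile are made up; the sketch also shows the well-known instability as the bracketed term approaches zero.

```python
import numpy as np

a, b = 3.5e-4, 0.78                       # assumed k-Z coefficients (illustrative)
dr = 0.25                                 # range-gate spacing, km
zm_dbz = np.linspace(40, 30, 60)          # synthetic measured reflectivity, dBZ
zm = 10.0 ** (zm_dbz / 10.0)              # linear units, mm^6 m^-3

integral = np.cumsum(zm**b) * dr          # running integral of Zm^b along the beam
denom = 1.0 - 0.2 * np.log(10) * a * b * integral
valid = denom > 0.05                      # HB blows up as the denominator -> 0
z_corr = np.full_like(zm, np.nan)
z_corr[valid] = zm[valid] * denom[valid] ** (-1.0 / b)

rain = 0.036 * z_corr**0.625              # assumed Z-R inversion of Z = 200*R^1.6
print(f"corrected rain rate at last correctable gate: {rain[valid][-1]:.1f} mm/h")
```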
Uemura, Osamu; Iwata, Naoyuki; Nagai, Takuhito; Yamakawa, Satoshi; Hibino, Satoshi; Yamamoto, Masaki; Nakano, Masaru; Tanaka, Kazuki
2018-05-01
To determine the optimal method of evaluating kidney function in patients with thyroid dysfunction, this study compared the estimated glomerular filtration rate derived from serum creatinine, cystatin C, or β2-microglobulin with inulin or creatinine clearance in two pediatric patients, one with hypothyroidism and the other with hyperthyroidism. Kidney function was decreased in the hypothyroid child and enhanced in the hyperthyroid child, and it normalized in both after drug treatment restored normal thyroid function. Kidney function cannot be accurately evaluated using cystatin C-based or β2-microglobulin-based estimated glomerular filtration rate in patients with thyroid dysfunction, as these tests overestimated glomerular filtration rate in the patient with hypothyroidism and underestimated it in the patient with hyperthyroidism, perhaps through a metabolic rate-mediated mechanism. In both our patients, 24-h urinary creatinine excretion was identical before and after treatment, suggesting that creatinine production is not altered in patients with thyroid dysfunction. Therefore, kidney function in patients with thyroid dysfunction should be evaluated using creatinine-based estimated glomerular filtration rate.
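For concreteness, the bedside Schwartz equation is one standard creatinine-based pediatric eGFR of the kind the authors favor; it is not necessarily the exact equation used in this study.

```python
def egfr_schwartz(height_cm: float, scr_mg_dl: float) -> float:
    """Bedside Schwartz pediatric eGFR in mL/min/1.73 m^2."""
    return 0.413 * height_cm / scr_mg_dl

print(f"{egfr_schwartz(120.0, 0.45):.0f} mL/min/1.73m2")
```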
Local extinction and turnover rates at the edge and interior of species' ranges
Doherty, P.F.; Boulinier, T.; Nichols, J.D.
2003-01-01
One hypothesis for the maintenance of the edge of a species' range suggests that more central (and abundant) populations are relatively stable and edge populations are less stable, with increased local extinction and turnover rates. To date, estimates of such metrics are equivocal due to design and analysis flaws. Apparent increased estimates of extinction and turnover rates at the edge of the range, versus the interior, could be a function of decreased detection probabilities alone, and not of a biological process. We estimated extinction and turnover rates for species at the interiors and edges of their ranges using an approach which incorporates potential heterogeneity in species detection probabilities. Extinction rates were higher at the edges (0.17 ± 0.03 [SE]) than in the interiors (0.04 ± 0.01), as was turnover. Without taking the probability of detection into account, these differences would be artificially magnified. Knowledge of extinction and turnover rates is essential in furthering our understanding of range dynamics, and in directing conservation efforts. This study further illustrates the practical application of methods proposed recently for estimating extinction rates and other community dynamic parameters.
Impact of transverse and longitudinal dispersion on first-order degradation rate constant estimation
NASA Astrophysics Data System (ADS)
Stenback, Greg A.; Ong, Say Kee; Rogers, Shane W.; Kjartanson, Bruce H.
2004-09-01
A two-dimensional analytical model is employed for estimating the first-order degradation rate constant of hydrophobic organic compounds (HOCs) in contaminated groundwater under steady-state conditions. The model may utilize all aqueous concentration data collected downgradient of a source area, but does not require that any data be collected along the plume centerline. Using a least squares fit of the model to aqueous concentrations measured in monitoring wells, degradation rate constants were estimated at a former manufactured gas plant (FMGP) site in the Midwest U.S. The estimated degradation rate constants are 0.0014, 0.0034, 0.0031, 0.0019, and 0.0053 day -1 for acenaphthene, naphthalene, benzene, ethylbenzene, and toluene, respectively. These estimated rate constants were as low as one-half those estimated with the one-dimensional (centerline) approach of Buscheck and Alcantar [Buscheck, T.E., Alcantar, C.M., 1995. Regression techniques and analytical solutions to demonstrate intrinsic bioremediation. In: Hinchee, R.E., Wilson, J.T., Downey, D.C. (Eds.), Intrinsic Bioremediation, Battelle Press, Columbus, OH, pp. 109-116] which does not account for transverse dispersivity. Varying the transverse and longitudinal dispersivity values over one order of magnitude for toluene data obtained from the FMGP site resulted in nearly a threefold variation in the estimated degradation rate constant—highlighting the importance of reliable estimates of the dispersion coefficients for obtaining reasonable estimates of the degradation rate constants. These results have significant implications for decision making and site management where overestimation of a degradation rate may result in remediation times and bioconversion factors that exceed expectations. For a complex source area or non-steady-state plume, a superposition of analytical models that incorporate longitudinal and transverse dispersion and time may be used at sites where the centerline method would not be applicable.
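A sketch of the 1D centerline (Buscheck and Alcantar) estimate that the paper's 2D model is compared against: regress ln(C) on downgradient distance and convert the slope to a first-order rate constant. Concentrations, seepage velocity, and dispersivity below are illustrative, not site values.

```python
import numpy as np
from scipy.stats import linregress

x = np.array([10., 30., 60., 100., 150.])      # m downgradient of the source
c = np.array([9.0, 5.1, 2.3, 0.9, 0.3])        # mg/L along the plume centerline

slope = -linregress(x, np.log(c)).slope        # attenuation slope k/vc, 1/m
v, alpha_x = 0.05, 10.0                        # seepage velocity (m/day), long. dispersivity (m)
# Buscheck-Alcantar conversion of the centerline slope to a decay constant
lam = (v / (4 * alpha_x)) * ((1 + 2 * alpha_x * slope)**2 - 1)
print(f"first-order rate ~ {lam:.4f} 1/day")
```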
Information's role in the estimation of chaotic signals
NASA Astrophysics Data System (ADS)
Drake, Daniel Fred
1998-11-01
Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: Is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise. Finally, a slight modification of the linear description leads to a method of synthesizing, on limited-precision platforms, "pseudo-chaotic" sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.
Estimating malaria transmission from humans to mosquitoes in a noisy landscape.
Reiner, Robert C; Guerra, Carlos; Donnelly, Martin J; Bousema, Teun; Drakeley, Chris; Smith, David L
2015-10-06
A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the 'colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. © 2015 The Authors.
NASA Technical Reports Server (NTRS)
DeMore, W.B.
1996-01-01
Relative rate experiments are used to measure rate constants and temperature dependencies of the reactions of OH with CH3F (41), CH2FCl (31), CH2BrCl (30B1), CH2Br2 (30B2), CHBr3 (20B3), CF2BrCHFCl (123aB1(alpha)), and CF2ClCHCl2 (122). Rate constants for additional compounds of these types are estimated using an empirical rate constant estimation method which is based on measured rate constants for a wide range of halocarbons. The experimental data are combined with the estimated and previously reported rate constants to illustrate the effects of F, Cl, and Br substitution on OH rate constants for a series of 19 halomethanes and 25 haloethanes. Application of the estimation technique is further illustrated for some higher hydrofluorocarbons (HFCs), including CHF2CF2CF2CF2H (338pcc), CF3CHFCHFCF2CF3 (43-10mee), CF3CH2CH2CF3 (356ffa), CF3CH2CF2CH2CF3 (458mfcf), CF3CH2CHF2 (245fa), and CF3CH2CF2CH3 (365mfc). The predictions are compared with literature data for these compounds.
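A minimal sketch of the relative rate analysis: if the sample and reference compounds are removed only by OH, then ln([S]0/[S]t) plotted against ln([R]0/[R]t) is linear with slope kS/kR. The depletion data and reference rate constant below are assumed values for illustration.

```python
import numpy as np
from scipy.stats import linregress

ref_depletion = np.linspace(0.0, 1.2, 7)           # ln([R]0/[R]t) over an experiment
true_ratio = 0.45                                  # kS/kR used to synthesize data
sample_depletion = true_ratio * ref_depletion + \
    np.random.default_rng(2).normal(0, 0.01, 7)

ratio = linregress(ref_depletion, sample_depletion).slope
k_ref = 6.4e-15                                    # assumed k(OH + reference), cm^3/s
print(f"k_sample ~ {ratio * k_ref:.2e} cm^3 molecule^-1 s^-1")
```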
The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.
Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng
2014-07-01
Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
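The problematic 1/L behavior is easy to reproduce numerically. The sketch below draws locus rates either i.i.d. from a gamma prior or via a Dirichlet construction (mean rate drawn once, then partitioned across loci, in the spirit of the paper's prior) and compares the prior variance of the average rate. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
mu_shape, mu_rate = 2.0, 2.0          # gamma prior on the mean rate (mean = 1.0)
n_rep = 20000

for L in (2, 10, 100):
    # i.i.d. prior: each locus rate drawn independently
    iid_mean = rng.gamma(mu_shape, 1 / mu_rate, size=(n_rep, L)).mean(axis=1)
    # Dirichlet prior: draw the mean once, then partition it across loci
    mu = rng.gamma(mu_shape, 1 / mu_rate, size=n_rep)
    dir_mean = (mu[:, None] * L * rng.dirichlet(np.ones(L), size=n_rep)).mean(axis=1)
    print(f"L={L:3d}  var(mean rate): iid={iid_mean.var():.3f}  "
          f"dirichlet={dir_mean.var():.3f}")
```

The i.i.d. variance shrinks roughly as 1/L, so with many loci the prior pins down the average rate; the Dirichlet construction keeps the prior on the mean rate fixed regardless of L.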
NASA Technical Reports Server (NTRS)
VanArsdall, John C.
2005-01-01
The Hubble Space Telescope (HST) normally requires three gyroscopes for three-axis rate control. The loss of the Space Shuttle Columbia on STS-107 resulted in the cancellation of a shuttle-based HST Servicing Mission 4. Therefore, HST must operate using the on-board hardware until an alternate means of servicing can be accomplished. The probability of gyro failure indicates that fewer than three gyros will be operable before any servicing mission can be performed. To mitigate this, and to extend the HST life expectancy, a rate estimation and control algorithm was developed that requires two gyros to measure rate about two axes, with the remaining axis rate estimated using one of three alternate sensors. Three-axis magnetometers (MSS) are used for coarse rate estimation during large maneuvers and during occultations of other sensors. Fixed-Head Star Trackers (FHSTs) are used for rate estimation during safe mode recovery and during transition to science operations. Fine rate estimation during science operations is performed using the Fine Guidance Sensors (FGSs). The FHST mode (T2G) relies on star vectors as measured by the FHSTs to estimate vehicle rate about the axis not measured by the gyros. Since the FHSTs were not designed to estimate body rate, this method involves a unique set of problems that had to be overcome in the final design, such as the effect of FHST break tracks and moving targets on rate estimation. The solutions to these problems, as well as a detailed description of the design and implementation of the rate estimation, are presented. Also included are the time-domain and frequency-domain analyses of the T2G control law. A high-fidelity HST simulator (HSTSIM) was used to verify T2G performance prior to on-orbit use. Results of these simulations are also presented. Finally, analysis of actual T2G on-orbit test results is presented for design validation.
Monitoring inter-group encounters in wilderness
Alan E. Watson; Rich Cronn; Neal A. Christensen
1998-01-01
Many managers face the challenge of monitoring rates of visitor encounters in wilderness. This study (1) provides estimates of encounter rates through use of several monitoring methods, (2) determines the relationship between the various measures of encounter rates, and (3) determines the relationship between various indirect predictors of encounter rates and actual...
Norton, Andrew S.; Diefenbach, Duane R.; Wallingford, Bret D.; Rosenberry, Christopher S.
2012-01-01
The performance of 2 popular methods that use age-at-harvest data to estimate abundance of white-tailed deer is contingent on assumptions about variation in estimates of subadult (1.5 yr old) and adult (≥2.5 yr old) male harvest rates. Auxiliary data (e.g., estimates of survival or harvest rates from radiocollared animals) can be used to relax some assumptions, but unless these population parameters exhibit limited temporal or spatial variation, these auxiliary data may not improve accuracy. Unfortunately maintaining sufficient sample sizes of radiocollared deer for parameter estimation in every wildlife management unit (WMU) is not feasible for most state agencies. We monitored the fates of 397 subadult and 225 adult male white-tailed deer across 4 WMUs from 2002 to 2008 using radio telemetry. We investigated spatial and temporal variation in harvest rates and investigated covariates related to the patterns observed. We found that most variation in harvest rates was explained spatially and that adult harvest rates (0.36–0.69) were more variable among study areas than subadult harvest rates (0.26–0.42). We found that hunter effort during the archery and firearms season best explained variation in harvest rates of adult males among WMUs, whereas hunter effort during only the firearms season best explained harvest rates for subadult males. From a population estimation perspective, it is advantageous that most variation was spatial and explained by a readily obtained covariate (hunter effort). However, harvest rates may vary if hunting regulations or hunter behavior change, requiring additional field studies to obtain accurate estimates of harvest rates.
Age-specific survival estimates of King Eiders derived from satellite telemetry
Oppel, Steffen; Powell, Abby N.
2010-01-01
Age- and sex-specific survival and dispersal are important components in the dynamics and genetic structure of bird populations. For many avian taxa, survival rates at the adult and juvenile life stages differ, but in long-lived species juvenile survival is logistically challenging to study. We present the first estimates of hatch-year annual survival rates for a sea duck, the King Eider (Somateria spectabilis), estimated from satellite telemetry. From 2006 to 2008 we equipped pre-fledging King Eiders with satellite transmitters on breeding grounds in Alaska and estimated annual survival rates during their first 2 years of life with known-fate models. We compared those estimates to survival rates of adults marked in the same area from 2002 to 2008. Hatch-year survival varied by season during the first year of life, and the model-averaged annual survival rate was 0.67 (95% CI: 0.48-0.80). We did not record any mortality during the second year and were therefore unable to estimate a second-year survival rate. Adult survival was constant through the year (0.94, 95% CI: 0.86-0.97). No birds appeared to breed during their second summer. While 88% of females with an active transmitter (n = 9) returned to their natal area at the age of 2 years, none of the 2-year-old males (n = 3) did. This pattern indicates that female natal philopatry is high and suggests that males' higher rates of dispersal may account for sex-specific differences in apparent survival rates of juvenile sea ducks when estimated with mark-recapture methods.
The method for homography estimation between two planes based on lines and points
NASA Astrophysics Data System (ADS)
Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry
2018-04-01
The paper considers the problem of estimating a transform connecting two images of one plane object. A method based on RANSAC is proposed for calculating the parameters of a projective transform using point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The results show that the algorithm's convergence rate is significantly higher when actual lines are used instead of the points of line intersections. When both lines and feature points are used, the convergence rate is shown not to depend on the ratio between lines and feature points in the input dataset.
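A hedged, points-only sketch of the RANSAC skeleton (the paper's use of line correspondences is omitted here): random 4-point samples feed a DLT homography, and the hypothesis with the largest consensus set wins.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform from >= 4 point pairs (Nx2 arrays)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)          # null-space vector as 3x3 H

def ransac_homography(src, dst, n_iter=500, thresh=2.0, seed=4):
    rng = np.random.default_rng(seed)
    best_h, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        h = dlt_homography(src[idx], dst[idx])
        pts = np.c_[src, np.ones(len(src))] @ h.T
        proj = pts[:, :2] / pts[:, 2:3]
        inliers = np.sum(np.linalg.norm(proj - dst, axis=1) < thresh)
        if inliers > best_inliers:
            best_h, best_inliers = h / h[2, 2], inliers
    return best_h, best_inliers

# Synthetic check: a known H plus a few gross outliers.
true_h = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-5, 1.0]])
src = np.random.default_rng(5).uniform(0, 500, (60, 2))
hom = np.c_[src, np.ones(60)] @ true_h.T
dst = hom[:, :2] / hom[:, 2:3]
dst[:5] += 80.0                          # corrupt five correspondences
h_est, n_in = ransac_homography(src, dst)
print(f"inliers: {n_in}/60")
```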
Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.
Chatterjee, Abhijit; Voter, Arthur F
2010-05-21
We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
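A toy illustration of the AS-KMC idea (without the paper's derived error bounds): run rejection-free KMC, and whenever a process has fired a set number of times, lower its rate so repeated intrabasin events stop consuming all the simulation steps. The rates and thresholds below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
rates = np.array([1e6, 1e6, 1.0])      # two fast in-basin hops, one rare escape
counts = np.zeros(3, dtype=int)
n_max, scale = 100, 0.1                # review threshold and rate-lowering factor
t = 0.0

for step in range(20000):
    total = rates.sum()
    t += rng.exponential(1.0 / total)              # KMC time advance
    i = rng.choice(3, p=rates / total)             # pick a process by its rate
    counts[i] += 1
    if i == 2:
        print(f"rare escape after {step + 1} steps, t = {t:.3e}")
        break
    if counts[i] >= n_max:                         # observed "many times":
        rates[i] *= scale                          # lower its rate, reset count
        counts[i] = 0
```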
Estimation of mortality for stage-structured zooplankton populations: What is to be done?
NASA Astrophysics Data System (ADS)
Ohman, Mark D.
2012-05-01
Estimation of zooplankton mortality rates in field populations is a challenging task that some contend is inherently intractable. This paper examines several of the objections that are commonly raised to efforts to estimate mortality. We find that there are circumstances in the field where it is possible to sequentially sample the same population and to resolve biologically caused mortality, albeit with error. Precision can be improved with sampling directed by knowledge of the physical structure of the water column, combined with adequate sample replication. Intercalibration of sampling methods can make it possible to sample across the life history in a quantitative manner. Rates of development can be constrained by laboratory-based estimates of stage durations from temperature- and food-dependent functions, mesocosm studies of molting rates, or approximation of development rates from growth rates, combined with the vertical distributions of organisms in relation to food and temperature gradients. Careful design of field studies guided by the assumptions of specific estimation models can lead to satisfactory mortality estimates, but model uncertainty also needs to be quantified. We highlight additional issues requiring attention to further advance the field, including the need for linked cooperative studies of the rates and causes of mortality of co-occurring holozooplankton and ichthyoplankton.
An extensible framework for capturing solvent effects in computer generated kinetic models.
Jalan, Amrit; West, Richard H; Green, William H
2013-03-14
Detailed kinetic models provide useful mechanistic insight into a chemical system. Manual construction of such models is laborious and error-prone, which has led to the development of automated methods for exploring chemical pathways. These methods rely on fast, high-throughput estimation of species thermochemistry and kinetic parameters. In this paper, we present a methodology for extending automatic mechanism generation to solution phase systems which requires estimation of solvent effects on reaction rates and equilibria. The linear solvation energy relationship (LSER) method of Abraham and co-workers is combined with Mintz correlations to estimate ΔG(solv)°(T) in over 30 solvents using solute descriptors estimated from group additivity. Simple corrections are found to be adequate for the treatment of radical sites, as suggested by comparison with known experimental data. The performance of scaled particle theory expressions for enthalpic-entropic decomposition of ΔG(solv)°(T) is also presented along with the associated computational issues. Similar high-throughput methods for solvent effects on free-radical kinetics are only available for a handful of reactions due to lack of reliable experimental data, and continuum dielectric calculations offer an alternative method for their estimation. For illustration, we model liquid phase oxidation of tetralin in different solvents computing the solvent dependence for ROO• + ROO• and ROO• + solvent reactions using polarizable continuum quantum chemistry methods. The resulting kinetic models show an increase in oxidation rate with solvent polarity, consistent with experiment. Further work needed to make this approach more generally useful is outlined.
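A minimal sketch of an Abraham-type LSER evaluation: a solute property is predicted as a linear combination of solute descriptors (E, S, A, B, L) with solvent-specific coefficients. All numbers below are placeholders, not fitted values from the paper or the Abraham compilations.

```python
# Hypothetical solvent coefficients (c, e, s, a, b, l) and solute descriptors.
solvent_coeffs = {"c": 0.29, "e": 0.21, "s": -0.96, "a": -1.34, "b": -4.58, "l": 0.92}
solute = {"E": 0.61, "S": 0.51, "A": 0.00, "B": 0.48, "L": 2.79}

def lser(coef, desc):
    """Abraham-style linear solvation energy relationship."""
    return (coef["c"] + coef["e"] * desc["E"] + coef["s"] * desc["S"]
            + coef["a"] * desc["A"] + coef["b"] * desc["B"] + coef["l"] * desc["L"])

print(f"predicted log-partition: {lser(solvent_coeffs, solute):.2f}")
```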
State estimation for networked control systems using fixed data rates
NASA Astrophysics Data System (ADS)
Liu, Qing-Quan; Jin, Fang
2017-07-01
This paper investigates state estimation for linear time-invariant systems where sensors and controllers are geographically separated and connected via a bandwidth-limited and errorless communication channel with the fixed data rate. All plant states are quantised, coded and converted together into a codeword in our quantisation and coding scheme. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. It is shown in our results that there exists a quantisation and coding scheme to ensure observability of the system if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we also examine the role that the disturbances have on the state estimation problem in the case with data-rate limitations. Illustrative examples are given to demonstrate the effectiveness of the proposed method.
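For context, the classical data-rate theorem that this paper refines gives a lower bound determined by the unstable eigenvalues of the system matrix: roughly R > sum over i of max(0, log2|lambda_i(A)|) bits per sample. A sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 0.5, 0.3],
              [0.0, 0.0, 1.2]])

eig = np.linalg.eigvals(A)
# Only eigenvalues outside the unit circle contribute to the bound.
r_min = sum(max(0.0, np.log2(abs(l))) for l in eig)
print(f"eigenvalue magnitudes: {np.round(abs(eig), 3)}")
print(f"minimum data rate ~ {r_min:.3f} bits/sample")
```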
Estimating Children's Soil/Dust Ingestion Rates through ...
Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/du
Estimating the rate of biological introductions: Lessepsian fishes in the Mediterranean.
Belmaker, Jonathan; Brokovich, Eran; China, Victor; Golani, Daniel; Kiflawi, Moshe
2009-04-01
Sampling issues preclude the direct use of the discovery rate of exotic species as a robust estimate of their rate of introduction. Recently, a method was advanced that allows maximum-likelihood estimation of both the observational probability and the introduction rate from the discovery record. Here, we propose an alternative approach that utilizes the discovery record of native species to control for sampling effort. Implemented in a Bayesian framework using Markov chain Monte Carlo simulations, the approach provides estimates of the rate of introduction of the exotic species, and of additional parameters such as the size of the species pool from which they are drawn. We illustrate the approach using Red Sea fishes recorded in the eastern Mediterranean, after crossing the Suez Canal, and show that the two approaches may lead to different conclusions. The analytical framework is highly flexible and could provide a basis for easy modification to other systems for which first-sighting data on native and introduced species are available.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian, three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Covariance Matrix Estimation for Massive MIMO
NASA Astrophysics Data System (ADS)
Upadhya, Karthik; Vorobyov, Sergiy A.
2018-04-01
We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
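A numerical sketch of the core estimator: because the two pilot-based channel estimates carry independent noise, their sample cross-correlation converges to the channel covariance without a noise-bias term. Dimensions, covariance structure, and SNR below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
M, N = 64, 2000                               # BS antennas, coherence blocks
# A simple low-rank true covariance for one user
u = rng.normal(size=(M, 4)) + 1j * rng.normal(size=(M, 4))
R_true = u @ u.conj().T / 4

Lc = np.linalg.cholesky(R_true + 1e-6 * np.eye(M))
h = Lc @ (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
sigma = 0.5
n1 = sigma * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
n2 = sigma * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
h1, h2 = h + n1, h + n2                       # estimates from the two pilots

R_cross = (h1 @ h2.conj().T) / N              # sample cross-correlation
R_cross = (R_cross + R_cross.conj().T) / 2    # enforce Hermitian symmetry
err = np.linalg.norm(R_cross - R_true) / np.linalg.norm(R_true)
print(f"relative covariance estimation error: {err:.3f}")
```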
System and method for motor speed estimation of an electric motor
Lu, Bin [Kenosha, WI; Yan, Ting [Brookfield, WI; Luebke, Charles John [Sussex, WI; Sharma, Santosh Kumar [Viman Nagar, IN
2012-06-19
A system and method for a motor management system includes a computer readable storage medium and a processing unit. The processing unit is configured to determine a voltage value of a voltage input to an alternating current (AC) motor, determine a frequency value of at least one of a voltage input and a current input to the AC motor, determine a load value from the AC motor, and access a set of motor nameplate data, where the set of motor nameplate data includes a rated power, a rated speed, a rated frequency, and a rated voltage of the AC motor. The processing unit is also configured to estimate a motor speed based on the voltage value, the frequency value, the load value, and the set of nameplate data, and to store the motor speed on the computer readable storage medium.
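A hedged sketch of one common nameplate-based speed estimate consistent with the inputs listed in the claim (rated slip scaled by load fraction, with a voltage correction); the patent's exact estimator is not reproduced here, and all numbers are illustrative.

```python
def estimate_speed(v, f, load_frac, rated_v=460.0, rated_f=60.0,
                   rated_speed=1760.0, poles=4):
    """Slip-based induction motor speed estimate from nameplate data (rpm)."""
    sync_rated = 120.0 * rated_f / poles            # rated synchronous speed, rpm
    rated_slip = (sync_rated - rated_speed) / sync_rated
    sync = 120.0 * f / poles                        # synchronous speed at measured f
    # Slip scales ~linearly with load and ~inversely with voltage squared
    slip = rated_slip * load_frac * (rated_v / v) ** 2
    return sync * (1.0 - slip)

print(f"{estimate_speed(v=440.0, f=59.8, load_frac=0.8):.0f} rpm")
```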
ERIC Educational Resources Information Center
Klinger, Don A.; Rogers, W. Todd
2003-01-01
The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…
Comparison of methods for estimating the spread of a non-indigenous species
Patrick C. Tobin; Andrew M. Liebhold; E. Anderson Roberts
2007-01-01
Aim: To compare different quantitative approaches for estimating rates of spread in the exotic species gypsy moth, Lymantria dispar L., using county-level presence/absence data and spatially extensive trapping grids. Location: USA. Methods: We used county-level presence/absence records of the gypsy moth's distribution in the USA, which are available beginning in 1900,...
ERIC Educational Resources Information Center
Koskey, Kristin L. K.; Stewart, Victoria C.
2014-01-01
This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, N.B.; Walker, J.F.
1990-01-01
The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map, which provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Freeman, Vincent L; Boylan, Emma E; Pugach, Oksana; Mclafferty, Sara L; Tossas-Milligan, Katherine Y; Watson, Karriem S; Winn, Robert A
2017-10-01
To address locally relevant cancer-related health issues, health departments frequently need data beyond that contained in standard census area-based statistics. We describe a geographic information system-based method for calculating age-standardized cancer incidence rates in non-census defined geographical areas using publicly available data. Aggregated records of cancer cases diagnosed from 2009 through 2013 in each of Chicago's 77 census-defined community areas were obtained from the Illinois State Cancer Registry. Areal interpolation through dasymetric mapping of census blocks was used to redistribute populations and case counts from community areas to Chicago's 50 politically defined aldermanic wards, and ward-level age-standardized 5-year cumulative incidence rates were calculated. Potential errors in redistributing populations between geographies were limited to <1.5% of the total population, and agreement between our ward population estimates and those from a frequently cited reference set of estimates was high (Pearson correlation r = 0.99, mean difference = -4 persons). A map overlay of safety-net primary care clinic locations and ward-level incidence rates for advanced-staged cancers revealed potential pathways for prevention. Areal interpolation through dasymetric mapping can estimate cancer rates in non-census defined geographies. This can address gaps in local cancer-related health data, inform health resource advocacy, and guide community-centered cancer prevention and control.
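A toy version of the areal-interpolation step: census-block populations apportion each community area's case count to wards, after which ward-level rates follow. The block geography and counts are invented for illustration, and the rate computed here is crude rather than age-standardized.

```python
import numpy as np

# Each block belongs to one community area (source zone) and one ward (target zone).
block_pop = np.array([1200, 800, 3000, 500], dtype=float)
block_ca = np.array([0, 0, 1, 1])          # community area of each block
block_ward = np.array([10, 11, 11, 10])    # ward of each block
ca_cases = np.array([40.0, 90.0])          # registry case counts by community area

# Share of each community area's cases assigned to a block, by population weight
ca_pop = np.bincount(block_ca, weights=block_pop)
block_cases = ca_cases[block_ca] * block_pop / ca_pop[block_ca]

for w in np.unique(block_ward):
    sel = block_ward == w
    pop, cases = block_pop[sel].sum(), block_cases[sel].sum()
    print(f"ward {w}: {cases:.1f} cases, crude rate {1e5 * cases / pop:.0f} per 100k")
```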
Estimating aboveground biomass of mariola (Parthenium incanum) from plant dimensions
Carlos Villalobos
2007-01-01
The distribution and abundance of plant biomass in space and time are important properties of rangeland ecosystem. Land managers and researchers require reliable shrub weight estimates to evaluate site productivity, food abundance, treatment effects, and stocking rates. Rapid, nondestructive methods are needed to estimate shrub biomass in semi-arid ecosystems. Shrub...
1D diffusion models may be used to estimate rates of production and consumption of dissolved metabolites in marine sediments, but are applied less often to the solid phase. Here we used a numerical inverse method to estimate solid phase Fe(III) and Fe(II) consumption and product...
Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring
ERIC Educational Resources Information Center
Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.
2015-01-01
Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…
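A quick sketch of the contrast at issue: one aberrant probe in a short progress-monitoring series pulls the OLS slope, while the Theil-Sen estimator (one robust alternative of the kind the paper examines) is less affected. The scores below are synthetic.

```python
import numpy as np
from scipy.stats import linregress, theilslopes

weeks = np.arange(10)
wcpm = 40 + 1.5 * weeks + np.random.default_rng(8).normal(0, 2, 10)
wcpm[7] = 15.0                               # a bad day / invalid probe

ols = linregress(weeks, wcpm).slope
ts = theilslopes(wcpm, weeks)[0]             # median of pairwise slopes
print(f"OLS slope: {ols:.2f} wcpm/week, Theil-Sen slope: {ts:.2f} wcpm/week")
```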
45 CFR 284.11 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
... METHODOLOGY FOR DETERMINING WHETHER AN INCREASE IN A STATE OR TERRITORY'S CHILD POVERTY RATE IS THE RESULT OF... estimating the number and percentage of children in poverty in each State. These methods may include national estimates based on the Current Population Survey; the Small Area Income and Poverty Estimates; the annual...
Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System
NASA Technical Reports Server (NTRS)
Karlgaard, Chris; Schoenenberger, Mark
2017-01-01
This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
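The central inversion can be sketched in a few lines: given the navigation state and an assumed aerodynamic model, sensed axial deceleration maps to density through the drag equation rho = 2*m*a / (V^2 * C_D * S). The vehicle numbers below are rough MSL-like placeholders, not mission values.

```python
m, cd, s = 3200.0, 1.45, 15.9         # entry mass (kg), drag coeff, ref. area (m^2)
v = 5500.0                            # planet-relative speed, m/s
a_drag = 88.0                         # sensed axial deceleration, m/s^2

rho = 2.0 * m * a_drag / (v**2 * cd * s)
print(f"estimated density: {rho:.2e} kg/m^3")
```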
GONe: Software for estimating effective population size in species with generational overlap
Coombs, J.A.; Letcher, B.H.; Nislow, K.H.
2012-01-01
GONe is a user-friendly, Windows-based program for estimating effective size (Ne) in populations with overlapping generations. It uses the Jorde-Ryman modification to the temporal method to account for age structure in populations. This method requires estimates of age-specific survival and birth rate and allele frequencies measured in two or more consecutive cohorts. Allele frequencies are acquired by reading in genotypic data from files formatted for either GENEPOP or TEMPOFS. For each interval between consecutive cohorts, Ne is estimated at each locus and over all loci. Furthermore, Ne estimates are output for three different genetic drift estimators (Fs, Fc and Fk). Confidence intervals are derived from a chi-square distribution with degrees of freedom equal to the number of independent alleles. GONe has been validated over a wide range of Ne values, and for scenarios where survival and birth rates differ between sexes, sex ratios are unequal and reproductive variances differ. GONe is freely available for download. © 2011 Blackwell Publishing Ltd.
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector
2018-01-01
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644
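As a hedged illustration of the kind of inversion such an attenuation model performs (the paper's actual model and coefficients are not given in the abstract), a count rate falling exponentially with burial depth can be inverted for depth:

```python
# Invert a Beer-Lambert-style attenuation law C = C0 * exp(-mu * d) for
# burial depth d. The attenuation coefficient and count rates below are
# illustrative, not the paper's values.
import math

def depth_from_count_rate(c0, c, mu):
    """Depth d (cm) from unattenuated rate c0, measured rate c, mu in 1/cm."""
    return math.log(c0 / c) / mu

print(depth_from_count_rate(c0=100.0, c=14.0, mu=0.11))  # ~17.9 cm
```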
Modeling and estimating the jump risk of exchange rates: Applications to RMB
NASA Astrophysics Data System (ADS)
Wang, Yiming; Tong, Hanfei
2008-11-01
In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of RMB and other foreign currencies. In the model, we assume that the change of the exchange rate can be decomposed into two components. One is the normally distributed, small-scale innovation driven by the diffusion motion; the other is a large drop or rise generated by the Poisson counting process. Furthermore, we develop an MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.
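A minimal simulation of the stated decomposition, small diffusion innovations plus Poisson-driven jumps; the paper's SVDJ model additionally makes volatility stochastic and is fitted by MCMC, so the constant-volatility sketch below (with invented parameters) only illustrates the two components:

```python
# Simulate a log exchange-rate path as diffusion plus Poisson jumps.
# Constant volatility is a simplification of the SVDJ model; all
# parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1000, 1.0 / 250          # daily steps
mu, sigma = 0.0, 0.005           # diffusion drift and volatility
lam, jump_sd = 5.0, 0.02         # ~5 jumps/year, jump size sd

diffusion = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
jumps     = rng.poisson(lam * dt, n) * rng.normal(0.0, jump_sd, n)
log_rate  = np.cumsum(diffusion + jumps)   # log exchange-rate path
print(f"jump days: {(jumps != 0).sum()}")
```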
[Estimation of infant and child mortality in the eastern provinces of Cuba].
Gonzalez, G; Herrera, L
1986-01-01
An estimate of infant and child mortality in the eastern provinces of Cuba is presented using the Brass method as adapted by Trussell. "Estimations by urban and rural zones are also performed within the provinces studied, and results are compared with those possible to obtain by continuous statistics. Results obtained show that in the eastern [part] of the country Holguin and Guantanamo are the provinces with highest infantile mortality rates, and the lowest rates correspond to Granma, followed by Santiago de Cuba." (SUMMARY IN ENG AND FRE) excerpt
Association Between Connecticut’s Permit-to-Purchase Handgun Law and Homicides
Rudolph, Kara E.; Stuart, Elizabeth A.; Vernick, Jon S.
2015-01-01
Objectives. We sought to estimate the effect of Connecticut’s implementation of a handgun permit-to-purchase law in October 1995 on subsequent homicides. Methods. Using the synthetic control method, we compared Connecticut’s homicide rates after the law’s implementation to rates we would have expected had the law not been implemented. To estimate the counterfactual, we used longitudinal data from a weighted combination of comparison states identified based on the ability of their prelaw homicide trends and covariates to predict prelaw homicide trends in Connecticut. Results. We estimated that the law was associated with a 40% reduction in Connecticut’s firearm homicide rates during the first 10 years that the law was in place. By contrast, there was no evidence for a reduction in nonfirearm homicides. Conclusions. Consistent with prior research, this study demonstrated that Connecticut’s handgun permit-to-purchase law was associated with a subsequent reduction in homicide rates. As would be expected if the law drove the reduction, the policy’s effects were only evident for homicides committed with firearms. PMID:26066959
Parameter estimation in tree graph metabolic networks.
Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J
2016-01-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time-consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to the usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen
2014-01-01
Background: In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods: We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings: Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions: The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954
Estimating the attack rate of pregnancy-associated listeriosis during a large outbreak.
Imanishi, Maho; Routh, Janell A; Klaber, Marigny; Gu, Weidong; Vanselow, Michelle S; Jackson, Kelly A; Sullivan-Chang, Loretta; Heinrichs, Gretchen; Jain, Neena; Albanese, Bernadette; Callaghan, William M; Mahon, Barbara E; Silk, Benjamin J
2015-01-01
In 2011, a multistate outbreak of listeriosis linked to contaminated cantaloupes raised concerns that many pregnant women might have been exposed to Listeria monocytogenes. Listeriosis during pregnancy can cause fetal death, premature delivery, and neonatal sepsis and meningitis. Little information is available to guide healthcare providers who care for asymptomatic pregnant women with suspected L. monocytogenes exposure. We tracked pregnancy-associated listeriosis cases using reportable diseases surveillance and enhanced surveillance for fetal death using vital records and inpatient fetal deaths data in Colorado. We surveyed 1,060 pregnant women about symptoms and exposures. We developed three methods to estimate how many pregnant women in Colorado ate the implicated cantaloupes, and we calculated attack rates. One laboratory-confirmed case of listeriosis was associated with pregnancy. The fetal death rate did not increase significantly compared to preoutbreak periods. Approximately 6,500-12,000 pregnant women in Colorado might have eaten the contaminated cantaloupes, an attack rate of ~1 per 10,000 exposed pregnant women. Despite many exposures, the risk of pregnancy-associated listeriosis was low. Our methods for estimating attack rates may help during future outbreaks and product recalls. Our findings offer relevant considerations for management of asymptomatic pregnant women with possible L. monocytogenes exposure.
Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz
2015-01-01
Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. This study examines the correlation of the errors of the hand and the third molar methods and demonstrates how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to +0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97 years, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
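Because the errors are uncorrelated, the combined estimate can be formed with inverse-variance weights; the sketch below reproduces the abstract's combined standard deviation from the two method-specific ones (the two age values are illustrative):

```python
# Inverse-variance weighted combination of independent estimates.
# With sd(hand) = 0.97 and sd(teeth) = 1.35 years, the combined sd is
# sqrt(1 / (1/0.97^2 + 1/1.35^2)) ~ 0.79 years, as in the abstract.
def combine(estimates_and_sds):
    ws = [1.0 / sd**2 for _, sd in estimates_and_sds]
    est = sum(w * x for w, (x, _) in zip(ws, estimates_and_sds)) / sum(ws)
    sd = (1.0 / sum(ws)) ** 0.5
    return est, sd

est, sd = combine([(16.2, 0.97), (15.8, 1.35)])   # ages are illustrative
print(f"combined age ~ {est:.2f} +/- {sd:.2f} years")  # sd ~ 0.79
```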
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.
2017-10-01
A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method contains three steps: (1) a priori estimation of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel by applying a second round of a CS algorithm to the enhanced received pilot data. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data via the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results show up to an 8-dB improvement in bit error rate and a 50% improvement in hyperspectral image classification accuracy.
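A bare-bones version of the OMP step (step 1), shown on a generic sparse-recovery problem; this is textbook OMP, not the full DS-LMMSE-OMP estimator:

```python
# Orthogonal matching pursuit: greedily pick the dictionary column most
# correlated with the residual, then re-solve least squares on the
# selected support. Generic sketch, not the paper's estimator.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y ~ A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=A.dtype)
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 57, 90]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(x_hat))  # should recover indices 3, 57, 90
```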
NASA Astrophysics Data System (ADS)
Nord, Mark; Cafiero, Carlo; Viviani, Sara
2016-11-01
Statistical methods based on item response theory are applied to experiential food insecurity survey data from 147 countries, areas, and territories to assess data quality and develop methods to estimate national prevalence rates of moderate and severe food insecurity at equal levels of severity across countries. Data were collected from nationally representative samples of 1,000 adults in each country. A Rasch-model-based scale was estimated for each country, and data were assessed for consistency with model assumptions. A global reference scale was calculated based on item parameters from all countries. Each country's scale was adjusted to the global standard, allowing for up to 3 of the 8 scale items to be considered unique in that country if their deviance from the global standard exceeded a set tolerance. With very few exceptions, data from all countries were sufficiently consistent with model assumptions to constitute reasonably reliable measures of food insecurity and were adjustable to the global standard with fair confidence. National prevalence rates of moderate-or-severe food insecurity assessed over a 12-month recall period ranged from 3 percent to 92 percent. The correlations of national prevalence rates with national income, health, and well-being indicators provide external validation of the food security measure.
Contraceptive failure rates: new estimates from the 1995 National Survey of Family Growth.
Fu, H; Darroch, J E; Haas, T; Ranjit, N
1999-01-01
Unintended pregnancy remains a major public health concern in the United States. Information on pregnancy rates among contraceptive users is needed to guide medical professionals' recommendations and individuals' choices of contraceptive methods. Data were taken from the 1995 National Survey of Family Growth (NSFG) and the 1994-1995 Abortion Patient Survey (APS). Hazards models were used to estimate method-specific contraceptive failure rates during the first six months and during the first year of contraceptive use for all U.S. women. In addition, rates were corrected to take into account the underreporting of induced abortion in the NSFG. Corrected 12-month failure rates were also estimated for subgroups of women by age, union status, poverty level, race or ethnicity, and religion. When contraceptive methods are ranked by effectiveness over the first 12 months of use (corrected for abortion underreporting), the implant and injectables have the lowest failure rates (2-3%), followed by the pill (8%), the diaphragm and the cervical cap (12%), the male condom (14%), periodic abstinence (21%), withdrawal (24%) and spermicides (26%). In general, failure rates are highest among cohabiting and other unmarried women, among those with an annual family income below 200% of the federal poverty level, among black and Hispanic women, among adolescents and among women in their 20s. For example, adolescent women who are not married but are cohabiting experience a failure rate of about 31% in the first year of contraceptive use, while the 12-month failure rate among married women aged 30 and older is only 7%. Black women have a contraceptive failure rate of about 19%, and this rate does not vary by family income; in contrast, overall 12-month rates are lower among Hispanic women (15%) and white women (10%), but vary by income, with poorer women having substantially greater failure rates than more affluent women. Levels of contraceptive failure vary widely by method, as well as by personal and background characteristics. Income's strong influence on contraceptive failure suggests that access barriers and the general disadvantage associated with poverty seriously impede effective contraceptive practice in the United States.
Comparison of four statistical and machine learning methods for crash severity prediction.
Iranitalab, Amirfarrokh; Khattak, Aemal
2017-11-01
Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash-costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM ranked next, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost exactly the opposite ranking compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
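The abstract does not give the exact form of the crash-costs-based measure; one plausible reading, sketched below with invented cost ratios, weights each misclassification by the cost of the true severity level so that errors on severe crashes dominate:

```python
# A cost-weighted accuracy: 1 minus (incurred misclassification cost /
# worst-case cost). This is one plausible form, not necessarily the
# paper's exact measure; cost ratios are invented.
def cost_weighted_accuracy(y_true, y_pred, cost):
    incurred = sum(cost[t] for t, p in zip(y_true, y_pred) if t != p)
    worst = sum(cost[t] for t in y_true)
    return 1.0 - incurred / worst

cost = {"PDO": 1.0, "injury": 10.0, "fatal": 100.0}   # hypothetical ratios
y_true = ["PDO", "injury", "fatal", "PDO"]
y_pred = ["PDO", "PDO", "fatal", "injury"]
print(cost_weighted_accuracy(y_true, y_pred, cost))   # ~0.90
```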
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, B. P.; Abbott, R.; Abernathy, M. R.
This article provides supplemental information for a Letter reporting the rate of binary black hole (BBH) coalescences inferred from 16 days of coincident Advanced LIGO observations surrounding the transient gravitational-wave (GW) signal GW150914. In that work we reported various rate estimates whose 90% confidence intervals fell in the range 2–600 Gpc⁻³ yr⁻¹. Here we give details on our method and computations, including information about our search pipelines, a derivation of our likelihood function for the analysis, a description of the astrophysical search trigger distribution expected from merging BBHs, details on our computational methods, a description of the effects and our model for calibration uncertainty, and an analytic method for estimating our detector sensitivity, which is calibrated to our measurements.
Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.
2013-01-01
A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data
NASA Technical Reports Server (NTRS)
Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.
2004-01-01
A recently proposed analytical differential thermal analysis (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.
NASA Astrophysics Data System (ADS)
Xu, L.; Chanton, J.; McDermitt, D. K.; Li, J.; Green, R. B.
2015-12-01
Methane plays a critical role in the radiation balance and chemistry of the atmosphere. Globally, landfill methane emission contributes about 10-19% of the anthropogenic methane burden into the atmosphere. In the United States, 18% of annual anthropogenic methane emissions come from landfills, which represent the third largest source of anthropogenic methane emissions, behind enteric fermentation and natural gas and oil production. One uncertainty in estimating landfill methane emissions is the fraction of methane oxidized when methane produced under anaerobic conditions passes through the cover soil. We developed a simple stoichiometric model to estimate methane oxidation fraction when the anaerobic CO2 / CH4 production ratio is known, or can be estimated. The model predicts a linear relationship between CO2 emission rates and CH4 emission rates, where the slope depends on anaerobic CO2 / CH4 production ratio and the fraction of methane oxidized, and the intercept depends on non-methane-dependent oxidation processes. The model was tested using carbon dioxide emission rates (fluxes) and methane emission rates (fluxes) measured using the eddy covariance method over a one year period at the Turkey Run landfill in Georgia, USA. The CO2 / CH4 production ratio was estimated by measuring CO2 and CH4 concentrations in air sampled under anaerobic conditions deep inside the landfill. We also used a mass balance approach to independently estimate fractional oxidation based on stable isotope measurements (δ13C of methane) of gas samples taken from deep inside the landfill and just above the landfill surface. Results from the two independent methods agree well. The model will be described and methane oxidation will be discussed in relation to wind direction, location at the landfill, and age of the deposited refuse.
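One stoichiometric bookkeeping consistent with this description (an assumption, not necessarily the authors' exact formulation) makes the predicted linearity and the recoverable oxidation fraction explicit:

```latex
% Let P be anaerobic CH4 production, r the anaerobic CO2/CH4 production
% ratio, and f the fraction of CH4 oxidized to CO2 in the cover soil
% (assumed 1:1 molar conversion). Then the emitted fluxes satisfy
\[
F_{\mathrm{CH_4}} = (1-f)\,P, \qquad
F_{\mathrm{CO_2}} = (r+f)\,P
\;\Longrightarrow\;
F_{\mathrm{CO_2}} = \frac{r+f}{1-f}\,F_{\mathrm{CH_4}},
\qquad
f = \frac{s-r}{s+1},
\]
% where s is the observed slope of CO2 versus CH4 emission, recovering a
% linear relationship whose slope depends on r and f, as described.
```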
NASA Astrophysics Data System (ADS)
Miyatake, Hirokazu; Yoshizawa, Nobuaki; Hirakawa, Sachiko; Murakami, Kana; Takizawa, Mari; Kawai, Masaki; Sato, Osamu; Takagi, Shunji; Suzuki, Gen
2017-09-01
The Fukushima Daiichi Nuclear Power Plant accident caused a release of radionuclides. Radionuclides were deposited on the ground not only in Fukushima prefecture but also in nearby prefectures. Since the accident, measurements of environmental radiation, such as the air dose rate and the deposition density of radionuclides, have been performed by many organizations and universities. In particular, the Japan Atomic Energy Agency (JAEA) has been observing the air dose rate continuously and over wide areas using a car-borne survey system. In our study, using the data measured by JAEA, we estimated the effective dose from external exposure in the six prefectures adjacent to Fukushima prefecture. Since the car-borne survey started a few months after the accident, the air dose rate measured by this method derives mainly from 137Cs and 134Cs, whose half-lives are relatively long. Therefore, based on the air dose rate from 137Cs and 134Cs and the ratio of the deposition density of short-half-life nuclides to that of 137Cs and 134Cs, we also estimated the effective dose contributed not only by 137Cs and 134Cs but also by other short-half-life nuclides. We compared the effective dose estimated by this method with that of UNSCEAR and with data measured using personal dosimeters in some areas.
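A sketch of the decay bookkeeping underlying such reconstructions: the caesium contribution to the air dose rate declines as each isotope decays with its own half-life. The half-lives are physical constants; the initial contributions below are invented:

```python
# Air dose rate from the two caesium isotopes as a function of time.
# Half-lives are physical constants; the initial dose-rate split is an
# illustrative assumption, not measured data.
T_CS137, T_CS134 = 30.17, 2.065          # half-lives in years

def air_dose_rate(t_years, d137_0, d134_0):
    """Dose rate at time t from initial 137Cs and 134Cs contributions."""
    return (d137_0 * 0.5 ** (t_years / T_CS137) +
            d134_0 * 0.5 ** (t_years / T_CS134))

print(air_dose_rate(5.0, d137_0=0.10, d134_0=0.27))  # uSv/h, illustrative
```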
Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent
2005-01-01
Background: The commonly used five-year survival rates are not adequate to represent statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, using a lognormal distribution of the survival time of those who died of their cancer. We introduced the term threshold year: the follow-up time by which the survival data of patients dying from the specific cancer are mostly covered, leaving less than 2.25% uncovered. This is close enough to cure from that specific cancer. Methods: Data from the Surveillance, Epidemiology and End Results (SEER) database were tested to determine whether the survival times of cancer patients who died of their disease followed a lognormal distribution, using a minimum chi-square method. Patients diagnosed from 1973–1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of those lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results: The cancer-specific survival times of patients who died of their disease from 42 of the 49 cancer sites were verified to follow different lognormal distributions. The threshold years validated for statistical cure varied for different cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites were found to match the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained because the SEER data do not provide sufficiently long follow-up. Conclusion: The present study suggests a certain threshold year is required before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, highlighting the need for long-term follow-up of these patients. PMID:15904508
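For a lognormal survival-time distribution, leaving less than 2.25% of survival times uncovered corresponds to roughly two standard deviations above the mean on the log scale (P(Z > 2) ≈ 2.28%), so the threshold year can be sketched as follows (parameters invented):

```python
# Threshold year as the ~97.7th percentile of a lognormal survival-time
# distribution: t* = exp(mu + 2*sigma). Parameters are illustrative,
# not fitted SEER values.
import math

def threshold_year(mu, sigma):
    """Approximate follow-up time covering ~97.7% of survival times."""
    return math.exp(mu + 2.0 * sigma)

print(threshold_year(mu=0.2, sigma=0.6))  # ~4.1 years, illustrative
```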
Receiver IQ mismatch estimation in PDM CO-OFDM system using training symbol
NASA Astrophysics Data System (ADS)
Peng, Dandan; Ma, Xiurong; Yao, Xin; Zhang, Haoyuan
2017-07-01
Receiver in-phase/quadrature (IQ) mismatch is hard to mitigate at the receiver using conventional methods in polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. In this paper, a novel training symbol structure is proposed to estimate IQ mismatch and channel distortion. Combining this structure with the Gram-Schmidt orthogonalization procedure (GSOP) algorithm, we can obtain a lower bit error rate (BER). Meanwhile, based on this structure, an estimation method is derived in the frequency domain that can estimate IQ mismatch and channel distortion independently and improve system performance considerably. Numerical simulation shows that the two proposed methods perform better than the compared method at 100 Gb/s after 480 km of fiber transmission. The calculation complexity is also analyzed.
National suicide rates a century after Durkheim: do we know enough to estimate error?
Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W
2010-06-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.
An empirical method for estimating travel times for wet volcanic mass flows
Pierson, Thomas C.
1998-01-01
Travel times for wet volcanic mass flows (debris avalanches and lahars) can be forecast as a function of distance from source when the approximate flow rate (peak discharge near the source) can be estimated beforehand. The near-source flow rate is primarily a function of initial flow volume, which should be possible to estimate to an order of magnitude on the basis of geologic, geomorphic, and hydrologic factors at a particular volcano. Least-squares best fits to plots of flow-front travel time as a function of distance from source provide predictive second-degree polynomial equations with high coefficients of determination for four broad size classes of flow based on near-source flow rate: extremely large flows (>1 000 000 m3/s), very large flows (10 000–1 000 000 m3/s), large flows (1000–10 000 m3/s), and moderate flows (100–1000 m3/s). A strong nonlinear correlation that exists between initial total flow volume and flow rate for "instantaneously" generated debris flows can be used to estimate near-source flow rates in advance. Differences in geomorphic controlling factors among different flows in the data sets have relatively little effect on the strong nonlinear correlations between travel time and distance from source. Differences in flow type may be important, especially for extremely large flows, but this could not be evaluated here. At a given distance away from a volcano, travel times can vary by approximately an order of magnitude depending on flow rate. The method can provide emergency-management officials a means for estimating time windows for evacuation of communities located in hazard zones downstream from potentially hazardous volcanoes.
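A sketch of the predictive relationship described, a least-squares second-degree polynomial of travel time against distance for a single flow-rate class; the data points are invented:

```python
# Fit t = c2*d^2 + c1*d + c0 for one flow-rate size class and use it to
# forecast travel time downstream. Data points are invented for
# illustration, not from the published data sets.
import numpy as np

dist_km  = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
time_min = np.array([6.0, 14.0, 33.0, 80.0, 140.0])

c2, c1, c0 = np.polyfit(dist_km, time_min, 2)
predict = np.poly1d([c2, c1, c0])
print(f"travel time to 30 km: ~{predict(30.0):.0f} min")
```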
Comparative study of soil erodibility and critical shear stress between loess and purple soils
NASA Astrophysics Data System (ADS)
Xing, Hang; Huang, Yu-han; Chen, Xiao-yan; Luo, Bang-lin; Mi, Hong-xing
2018-03-01
Loess and purple soils are two important cultivated soils in China, the former in the loess region and the latter in the southern subtropical region; both face high erosion risk and differ considerably in soil structure owing to differences in mineral and nutrient composition. Knowledge of the soil erodibility (Kr) and critical shear stress (τc) of these two soils helps predict soil erosion with models such as WEPP. In this study, rill erosion experimental data sets for the two soils are used to estimate their Kr and τc, which are then compared to understand differences in rill erosion behavior. The maximum detachment rates of the loess and purple soils are calculated under different hydrodynamic conditions (flow rates: 2, 4, 8 L/min; slope gradients: 5°, 10°, 15°, 20°, 25°) through analytical and numerical methods, respectively. The analytical method used the derivative of the function relating sediment concentration to rill length to estimate potential detachment rates at the beginning of the rill. The numerical method estimated potential detachment rates from the experimental data at the rill beginning and at the 0.5-m location. The Kr and τc of the two soils are determined from the linear equation fitted to the experimental data. Results show that both methods estimate the Kr and τc of the two soils well, as the estimates remain essentially unchanged under different hydrodynamic conditions. The Kr of the loess soil is about twice that of the purple soil, whereas its τc is about half. The numerical results correlate well with the analytical values. These results can be useful in modeling the rill erosion processes of loess and purple soils.
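The WEPP-style rill detachment relation is linear in shear stress, Dc = Kr(τ - τc), so fitting a straight line to detachment rate against shear stress gives Kr as the slope and τc as the x-intercept (data invented):

```python
# Estimate Kr and tau_c from the linear detachment relation
# Dc = Kr * (tau - tau_c): Kr is the slope, tau_c the x-intercept.
# Data points are invented for illustration.
import numpy as np

tau = np.array([2.0, 4.0, 6.0, 8.0, 10.0])     # flow shear stress (Pa)
dc  = np.array([0.5, 1.6, 2.4, 3.6, 4.4])      # detachment rate (kg m-2 s-1)

kr, intercept = np.polyfit(tau, dc, 1)
tau_c = -intercept / kr
print(f"Kr = {kr:.3f} s/m, tau_c = {tau_c:.2f} Pa")
```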
Methanogenic activity tests by Infrared Tunable Diode Laser Absorption Spectroscopy.
Martinez-Cruz, Karla; Sepulveda-Jauregui, Armando; Escobar-Orozco, Nayeli; Thalasso, Frederic
2012-10-01
Methanogenic activity (MA) tests are commonly carried out to estimate the capability of anaerobic biomass to treat effluents, to evaluate anaerobic activity in bioreactors or natural ecosystems, or to quantify inhibitory effects on methanogenic activity. These activity tests are usually based on measuring the volume of biogas produced by volumetric, pressure-increase or gas chromatography (GC) methods. In this study, we present an alternative method for non-invasive measurement of methane produced during activity tests in closed vials, based on Infrared Tunable Diode Laser Absorption Spectroscopy (MA-TDLAS). This new method was tested during model acetoclastic and hydrogenotrophic methanogenic activity tests and was compared to a more traditional method based on gas chromatography. From the results obtained, the CH4 detection limit of the method was estimated at 60 ppm and the minimum measurable methane production rate at 1.09 × 10⁻³ mg l⁻¹ h⁻¹, which is below the CH4 production rates usually reported in both anaerobic reactors and natural ecosystems. In addition to its sensitivity, the method has several potential advantages over more traditional methods, including short measurement times (allowing a large number of MA test vials to be measured), non-invasive measurement that avoids leakage and external interference, and a cost similar to that of GC-based methods. It is concluded that MA-TDLAS is a promising method that could be of interest not only in the field of anaerobic digestion but also in the field of environmental ecology, where CH4 production rates are usually very low. Copyright © 2012 Elsevier B.V. All rights reserved.
Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.
Leung, Denis H Y; Wang, You-Gan; Zhu, Min
2009-07-01
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
Avasthi, Ajit; Basu, Debasish; Subodh, B. N.; Gupta, Pramod K.; Malhotra, Nidhi; Rani, Poonam; Sharma, Sunil
2017-01-01
Background: Substance misuse is a matter of major public health concern in India. House-to-house survey, though an appealing method to generate population-level estimates, has limitations for estimating prevalence rates of use of illicit and rare substances. Materials and Methods: In this rapid assessment survey (RAS), respondent-driven sampling was used to recruit substance-using individuals from the field. Size of the substance-using population was estimated using the “benchmark-multiplier” method. This figure was then projected to the entire population of the Union Territory (U.T) of Chandigarh. Focused group discussions were used to study the perceptions and views of the substance users regarding various aspects of substance use. Results: Prevalence of any substance dependence in the U.T of Chandigarh was estimated to be 4.65%. Dependence rates on opioids, cannabinoids, and sedative hypnotics were found to be 1.53%, 0.52%, and 0.015%, respectively. Prevalence of injectable opioids was calculated to be 0.91%. Injectable buprenorphine was the most commonly used opioid, followed by bhukhi/doda/opium and heroin. A huge gap was found between the prevalence rates of substance-using population and those seeking treatment. Conclusion: RAS can be a useful method to determine the prevalence of illicit and rare substances. Our survey shows that the use of substance including that of opioids is highly prevalent in the U.T of Chandigarh. The findings of this survey can have implications for policymaking. PMID:29085086
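The benchmark-multiplier calculation itself is simple; a sketch with invented numbers (B users counted in a benchmark source, a fraction m of sampled users found in that source):

```python
# Benchmark-multiplier population size estimate: if a benchmark source
# (e.g., a treatment registry) counts B users, and a field sample shows a
# fraction m of all users appear in that source, then N ~ B / m.
# Numbers are invented, not the survey's.
def benchmark_multiplier(benchmark_count, fraction_in_benchmark):
    return benchmark_count / fraction_in_benchmark

print(benchmark_multiplier(benchmark_count=1200, fraction_in_benchmark=0.08))
# ~15,000 substance-dependent individuals
```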
Advanced techniques for modeling avian nest survival
Dinsmore, S.J.; White, Gary C.; Knopf, F.L.
2002-01-01
Estimation of avian nest survival has traditionally involved simple measures of apparent nest survival or Mayfield constant-nest-survival models. However, these methods do not allow researchers to build models that rigorously assess the importance of a wide range of biological factors that affect nest survival. Models that incorporate greater detail, such as temporal variation in nest survival and covariates representative of individual nests represent a substantial improvement over traditional estimation methods. In an attempt to improve nest survival estimation procedures, we introduce the nest survival model now available in the program MARK and demonstrate its use on a nesting study of Mountain Plovers (Charadrius montanus Townsend) in Montana, USA. We modeled the daily survival of Mountain Plover nests as a function of the sex of the incubating adult, nest age, year, linear and quadratic time trends, and two weather covariates (maximum daily temperature and daily precipitation) during a six-year study (1995–2000). We found no evidence for yearly differences or an effect of maximum daily temperature on the daily nest survival of Mountain Plovers. Survival rates of nests tended by female and male plovers differed (female rate = 0.33; male rate = 0.49). The estimate of the additive effect for males on nest survival rate was 0.37 (95% confidence limits were 0.03, 0.71) on a logit scale. Daily survival rates of nests increased with nest age; the estimate of daily nest-age change in survival in the best model was 0.06 (95% confidence limits were 0.04, 0.09) on a logit scale. Daily precipitation decreased the probability that the nest would survive to the next day; the estimate of the additive effect of daily precipitation on the nest survival rate was −1.08 (95% confidence limits were −2.12, −0.13) on a logit scale. Our approach to modeling daily nest-survival rates allowed several biological factors of interest to be easily included in nest survival models and allowed us to generate more biologically meaningful estimates of nest survival.
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, Norwood B.; Walker, J.F.
1992-01-01
Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Gérardin, Patrick; Guernier, Vanina; Perrau, Joëlle; Fianu, Adrian; Le Roux, Karin; Grivard, Philippe; Michault, Alain; de Lamballerie, Xavier; Flahault, Antoine; Favier, François
2008-01-01
Background: Chikungunya virus (CHIKV) caused a major two-wave seventeen-month-long outbreak in La Réunion Island in 2005–2006. The aim of this study was to refine clinical estimates provided by a regional surveillance-system using a two-stage serological assessment as gold standard. Methods: Two serosurveys were implemented: first, a rapid survey using stored sera of pregnant women, in order to assess the attack rate at the epidemic upsurge (s1, February 2006; n = 888); second, a population-based survey among a random sample of the community, to assess the herd immunity in the post-epidemic era (s2, October 2006; n = 2442). Sera were screened for anti-CHIKV specific antibodies (IgM and IgG in s1, IgG only in s2) using enzyme-linked immunosorbent assays. Seroprevalence rates were compared to clinical estimates of attack rates. Results: In s1, 18.2% of the pregnant women tested positive for CHIKV specific antibodies (13.8% for both IgM and IgG, 4.3% for IgM, 0.1% for IgG only), which was congruent with the 16.5% attack rate calculated from the surveillance-system. In s2, the seroprevalence in the community was estimated at 38.2% (95% CI, 35.9 to 40.6%). Extrapolations of seroprevalence rates led to estimates of 143,000 and 300,000 (95% CI, 283,000 to 320,000) people infected in s1 and s2, respectively. In comparison, the surveillance-system estimated 130,000 and 266,000 people infected for the same periods. Conclusion: A rapid serosurvey in pregnant women can be helpful to assess the attack rate when large seroprevalence studies cannot be done. On the other hand, a population-based serosurvey is useful to refine the estimate when clinical diagnosis underestimates it. Our findings give valuable insights to assess the herd immunity along the course of epidemics. PMID:18662384
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
Thomas, Carole L.; Stewart, Amy E.; Constantz, Jim E.
2000-01-01
Two methods, one a surface-water method and the second a ground-water method, were used to determine infiltration and percolation rates along a 2.5-kilometer reach of the Santa Fe River near La Bajada, New Mexico. The surface-water method uses streamflow measurements and their differences along a stream reach, streamflow-loss rates, stream surface area, and evaporation rates to determine infiltration rates. The ground-water method uses heat as a tracer to monitor percolation through shallow streambed sediments. Data collection began in October 1996 and continued through December 1997. During that period the stream reach was instrumented with three streamflow gages, and temperature profiles were monitored from the stream-sediment interface to about 3 meters below the streambed at four sites along the reach. Infiltration is the downward flow of water through the stream-sediment interface. Infiltration rates ranged from 92 to 267 millimeters per day for an intense measurement period during June 26-28, 1997, and from 69 to 256 millimeters per day during September 27-October 6, 1997. Investigators calculated infiltration rates from streamflow loss, stream surface-area measurements, and evaporation-rate estimates. Infiltration rates may be affected by unmeasured irrigation-return flow in the study reach. Although the amount of irrigation-return flow was none to very small, it may result in underestimation of infiltration rates. The infiltration portion of streamflow loss was much greater than the evaporation portion. Infiltration accounted for about 92 to 98 percent of streamflow loss. Evaporation-rate estimates ranged from 3.4 to 7.6 millimeters per day based on pan-evaporation data collected at Cochiti Dam, New Mexico, and accounted for about 2 to 8 percent of streamflow loss. Percolation is the movement of water through saturated or unsaturated sediments below the stream-sediment interface. Percolation rates ranged from 40 to 109 millimeters per day during June 26-28, 1997. Percolation rates were not calculated for the September 27-October 6, 1997, period because a late summer flood removed the temperature sensors from the streambed. Investigators used a heat-and-water flow model, VS2DH (variably saturated, two-dimensional heat), to calculate near-surface streambed infiltration and percolation rates from temperatures measured in the stream and streambed. Near the stream-sediment interface, infiltration and percolation rates are comparable. Comparison of infiltration and percolation rates showed that infiltration rates were greater than percolation rates. The method used to calculate infiltration rates accounted for net loss or gain over the entire stream reach, whereas the method used to calculate percolation was dependent on point measurements and, as applied in this study, neglected the nonvertical component of heat and water fluxes. In general, using the ground-water method was less labor intensive than making a series of streamflow measurements and relied on temperature, an easily measured property. The ground-water method also eliminated the difficulty of measuring or estimating evaporation from the water surface and was therefore more direct. Both methods are difficult to use during periods of flood flow. The ground-water method has problems with the thermocouple-wire temperature sensors washing out during flood events. The surface-water method often cannot be used because of safety concerns for personnel making wading streamflow measurements.
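A water-balance sketch of the surface-water method (values invented): the reach loss minus open-water evaporation, divided by stream surface area, gives the infiltration rate:

```python
# Surface-water method as a water balance: infiltration = (streamflow loss
# - evaporation) / stream surface area. All values are invented for
# illustration, not the Santa Fe River measurements.
def infiltration_rate(q_in, q_out, evap_mm_day, area_m2):
    """Return infiltration in mm/day from discharges in m3/day."""
    loss_m3 = q_in - q_out
    evap_m3 = (evap_mm_day / 1000.0) * area_m2
    return (loss_m3 - evap_m3) / area_m2 * 1000.0

print(infiltration_rate(q_in=9000.0, q_out=6500.0, evap_mm_day=5.0, area_m2=12000.0))
# ~203 mm/day infiltration
```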
Bending spring rate investigation of nanopipette for cell injection.
Shen, Yajing; Zhang, Zhenhai; Fukuda, Toshio
2015-04-17
Bending of nanopipette tips during cell penetration is a major cause of cell injection failure. However, little is known about the flexural rigidity of nanopipettes because of their irregular structure. In this paper, we report a quantitative method for estimating the flexural rigidity of a nanopipette by investigating its bending spring rate. First, nanopipettes with a tip size of 300 nm are fabricated from various glass tubes by laser pulling followed by focused ion beam (FIB) milling. Then the bending spring rate of the nanopipettes is measured inside a scanning electron microscope (SEM). Finally, a yeast cell penetration test is performed with nanopipettes that have different bending spring rates. The results show that nanopipettes with a higher bending spring rate have better cell penetration capability, which confirms that the bending spring rate reflects the flexural rigidity of a nanopipette well. This method provides a quantitative parameter for characterizing the mechanical properties of a nanopipette that could be adopted as a standard specification in the future. The general approach can also be used to characterize other one-dimensional structures used for cell injection, which will greatly benefit basic cell biology research and clinical applications.
Estimating rock and slag wool fiber dissolution rate from composition.
Eastes, W; Potter, R M; Hadley, J G
2000-12-01
A method was tested for calculating the dissolution rate constant in the lung for a wide variety of synthetic vitreous silicate fibers from the oxide composition in weight percent. It is based upon expressing the logarithm of the dissolution rate as a linear function of the composition and using a different set of coefficients for different types of fibers. The method was applied to 29 fiber compositions including rock and slag fibers as well as refractory ceramic and special-purpose, thin E-glass fibers and borosilicate glass fibers for which in vivo measurements have been carried out. These fibers had dissolution rates that ranged over a factor of about 400, and the calculated dissolution rates agreed with the in vivo values typically within a factor of 4. The method presented here is similar to one developed previously for borosilicate glass fibers that was accurate to a factor of 1.25. The present coefficients work over a much broader range of composition than the borosilicate ones but with less accuracy. The dissolution rate constant of a fiber may be used to estimate whether disease would occur in animal inhalation or intraperitoneal injection studies of that fiber.
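A sketch of the functional form described: log k linear in oxide weight percent, with one coefficient set per fiber type. The coefficients and intercept below are placeholders, not the fitted values from the paper:

```python
import numpy as np

# log(k) modeled as a linear function of oxide weight percents.
coeffs = {"SiO2": -0.05, "Al2O3": -0.08, "CaO": 0.04, "MgO": 0.03, "FeO": 0.02}
intercept = 4.0

def dissolution_rate(composition_wt_pct):
    """Return dissolution rate constant k (ng/cm^2/h) from oxide wt%."""
    log_k = intercept + sum(coeffs[ox] * w for ox, w in composition_wt_pct.items())
    return np.exp(log_k)

rock_wool = {"SiO2": 45.0, "Al2O3": 13.0, "CaO": 17.0, "MgO": 10.0, "FeO": 7.0}
print(f"k ~ {dissolution_rate(rock_wool):.1f} ng/cm^2/h")
```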
NASA Astrophysics Data System (ADS)
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data; consequently, they may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns equipped with trays. The method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, it does not need an estimate of the tray interfacial area, which is perhaps its most important advantage over the available methods. The method can be used to predict the efficiency of any tray in a distillation column. For the efficiency calculation, it employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is distinct from estimating that of a column being designed.
Comparison of Time-to-First Event and Recurrent Event Methods in Randomized Clinical Trials.
Claggett, Brian; Pocock, Stuart; Wei, L J; Pfeffer, Marc A; McMurray, John J V; Solomon, Scott D
2018-03-27
Background: Most Phase-3 trials feature time-to-first event endpoints for their primary and/or secondary analyses. In chronic diseases where a clinical event can occur more than once, recurrent-event methods have been proposed to more fully capture disease burden and have been assumed to improve statistical precision and power compared to conventional "time-to-first" methods. Methods: To better characterize factors that influence statistical properties of recurrent-events and time-to-first methods in the evaluation of randomized therapy, we repeatedly simulated trials with 1:1 randomization of 4000 patients to active vs control therapy, with true patient-level risk reduction of 20% (i.e. RR=0.80). For patients who discontinued active therapy after a first event, we assumed their risk reverted subsequently to their original placebo-level risk. Through simulation, we varied a) the degree of between-patient heterogeneity of risk and b) the extent of treatment discontinuation. Findings were compared with those from actual randomized clinical trials. Results: As the degree of between-patient heterogeneity of risk was increased, both time-to-first and recurrent-events methods lost statistical power to detect a true risk reduction and confidence intervals widened. The recurrent-events analyses continued to estimate the true RR=0.80 as heterogeneity increased, while the Cox model produced estimates that were attenuated. The power of recurrent-events methods declined as the rate of study drug discontinuation post-event increased. Recurrent-events methods provided greater power than time-to-first methods in scenarios where drug discontinuation was ≤30% following a first event, lesser power with drug discontinuation rates of ≥60%, and comparable power otherwise. We confirmed in several actual trials in chronic heart failure that treatment effect estimates were attenuated when estimated via the Cox model and that increased statistical power from recurrent-events methods was most pronounced in trials with lower treatment discontinuation rates. Conclusions: We find that the statistical power of both recurrent-events and time-to-first methods is reduced by increasing heterogeneity of patient risk, a parameter not included in conventional power and sample size formulas. Data from real clinical trials are consistent with simulation studies, confirming that the greatest statistical gains from use of recurrent-events methods occur in the presence of high patient heterogeneity and low rates of study drug discontinuation.
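A simplified simulation in the spirit of the design above: between-patient heterogeneity enters as a gamma frailty on Poisson event counts, and power is compared between a count-based (recurrent-event) test and an any-event (time-to-first proxy) test. This is a sketch under stated assumptions, not the authors' simulation code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_trial(n=4000, rr=0.80, frailty_var=0.5, followup=3.0, base=0.3):
    """One simulated trial: gamma-frailty Poisson event counts with 1:1
    randomization. Returns p-values for a count-based (recurrent-event)
    test and an any-event (time-to-first proxy) test."""
    arm = rng.integers(0, 2, n)                            # 1 = active therapy
    frailty = rng.gamma(1 / frailty_var, frailty_var, n)   # mean 1, var = frailty_var
    rate = base * frailty * np.where(arm == 1, rr, 1.0)
    counts = rng.poisson(rate * followup)
    m1, m0 = counts[arm == 1], counts[arm == 0]
    # recurrent events: z-test on mean event counts per patient
    z = (m1.mean() - m0.mean()) / np.hypot(m1.std() / len(m1) ** 0.5,
                                           m0.std() / len(m0) ** 0.5)
    # time-to-first proxy: z-test on the proportion with any event
    a1, a0 = (m1 > 0).mean(), (m0 > 0).mean()
    se = np.sqrt(a1 * (1 - a1) / len(m1) + a0 * (1 - a0) / len(m0))
    return 2 * stats.norm.sf(abs(z)), 2 * stats.norm.sf(abs(a1 - a0) / se)

for var in (1e-9, 0.5, 2.0):   # increasing between-patient heterogeneity
    ps = np.array([one_trial(frailty_var=var) for _ in range(200)])
    print(f"frailty var {var:g}: power recurrent={np.mean(ps[:, 0] < 0.05):.2f}, "
          f"first-event={np.mean(ps[:, 1] < 0.05):.2f}")
```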
A Method for Measuring Collection Expansion Rates and Shelf Space Capacities.
ERIC Educational Resources Information Center
Sapp, Gregg; Suttle, George
1994-01-01
Describes an effort to quantify annual collection expansion and shelf space capacities with a computer spreadsheet program. Methods used to quantify the space taken at the beginning of the project; to estimate annual rate of collection growth; and to plot stack space and usage, volume equivalents and usage, and growth capacity are covered.…
2011-01-01
Background: As many respiratory viruses are responsible for influenza-like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Methods: Key absenteeism variables were extracted from Statistics Canada's monthly labour force survey (LFS). Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Results: Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off; for the pandemic strain, the average absence was 25 hours. Conclusions: This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza, but employees took more time off due to the pandemic strain than was typical for seasonal influenza. PMID:21486453
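The modelling step described, absenteeism as a function of trend, seasonality and an influenza-activity proxy, amounts to a linear regression. A sketch on simulated monthly data; all values are placeholders, not LFS data:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(144)                       # 12 years of monthly observations
# crude influenza-activity proxy: bursts in the winter half of the year
flu_proxy = rng.gamma(2.0, 1.0, months.size) * (np.sin(2 * np.pi * months / 12) > 0.7)
y = (0.5 + 0.001 * months + 0.05 * np.sin(2 * np.pi * months / 12)
     + 0.02 * flu_proxy + rng.normal(0, 0.02, months.size))   # % hours lost

X = np.column_stack([np.ones_like(months, dtype=float), months,
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12), flu_proxy])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated influenza effect on hours lost: {beta[-1]:.3f} (true 0.02)")
```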
O'Connell, Allan F.; Talancy, Neil W.; Bailey, Larissa L.; Sauer, John R.; Cook, Robert; Gilbert, Andrew T.
2006-01-01
Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.
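The analysis above rests on the single-season occupancy likelihood, in which a site with no detections may be either unoccupied or occupied but missed. A minimal sketch with a constant detection probability and simulated camera data (all values hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def occupancy_nll(params, detections, n_occasions):
    """Single-season occupancy likelihood: each site contributes
    psi * Binom(d; K, p) + (1 - psi) * I(d == 0)."""
    psi = 1 / (1 + np.exp(-params[0]))    # logit-scale parameters
    p = 1 / (1 + np.exp(-params[1]))
    lik = psi * binom.pmf(detections, n_occasions, p) + (1 - psi) * (detections == 0)
    return -np.sum(np.log(lik))

# hypothetical detection counts at 50 sites over 13 monthly occasions
rng = np.random.default_rng(2)
occupied = rng.random(50) < 0.6
dets = np.where(occupied, rng.binomial(13, 0.25, 50), 0)

fit = minimize(occupancy_nll, x0=[0.0, 0.0], args=(dets, 13))
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"occupancy ~ {psi_hat:.2f}, detection probability ~ {p_hat:.2f}")
```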
Robust estimation of fetal heart rate from US Doppler signals
NASA Astrophysics Data System (ADS)
Voicu, Iulian; Girault, Jean-Marc; Roussel, Catherine; Decock, Aliette; Kouame, Denis
2010-01-01
Introduction: Monitoring of fetal wellbeing or suffering in utero is an open challenge today because of the large number of clinical parameters to be considered. Automatic monitoring of fetal activity, dedicated to quantifying fetal wellbeing, therefore becomes necessary. For this purpose, and with a view to providing an alternative to the Manning test, we used an ultrasound multitransducer, multigate Doppler system. One important issue (and the first step in our investigation) is the accurate estimation of the fetal heart rate (FHR). An estimate of the FHR is obtained by evaluating the autocorrelation function of the Doppler signals for sick and healthy fetuses. However, this estimator is not sufficiently robust, since about 20% of FHR values go undetected in comparison with a reference system. These non-detections arise principally because the Doppler signal generated by fetal movement is strongly disturbed by several other Doppler sources (maternal movement, pseudo-breathing, etc.). By modifying the existing autocorrelation method and by proposing new time- and frequency-domain estimators drawn from the audio field, we reduce the probability of non-detection of the fetal heart rate to 5%. These results are encouraging and allow us to plan the use of automatic classification techniques to discriminate between healthy and suffering fetuses.
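The autocorrelation step can be sketched in a few lines: find the autocorrelation peak within physiological lags and convert the lag to a rate. The synthetic envelope below is a stand-in for a real Doppler envelope:

```python
import numpy as np

def fetal_heart_rate(envelope, fs, hr_range=(1.5, 3.5)):
    """Estimate heart rate (Hz) from the autocorrelation peak of a Doppler
    envelope, searching only physiologically plausible lags (~90-210 bpm)."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    lo, hi = int(fs / hr_range[1]), int(fs / hr_range[0])
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

# synthetic envelope: 2.2 Hz fundamental (132 bpm) plus harmonic and noise
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
env = (np.cos(2 * np.pi * 2.2 * t) + 0.5 * np.cos(2 * np.pi * 4.4 * t)
       + 0.3 * rng.normal(size=t.size))
print(f"estimated FHR ~ {60 * fetal_heart_rate(env, fs):.0f} bpm")
```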
NASA Astrophysics Data System (ADS)
Omar, Mahmoud A.; Badr El-Din, Khalid M.; Salem, Hesham; Abdelmageed, Osama H.
2018-03-01
A simple, selective and sensitive kinetic spectrophotometric method was described for the estimation of four phenolic sympathomimetic drugs, namely terbutaline sulfate, fenoterol hydrobromide, isoxsuprine hydrochloride and etilefrine hydrochloride. The method depends on the oxidation of the phenolic drugs with Folin-Ciocalteu reagent in the presence of sodium carbonate. The rate of color development at 747-760 nm was measured spectrophotometrically. The experimental parameters controlling the color development were fully studied and optimized, and a reaction mechanism for the color development was proposed. Calibration graphs for both the initial-rate and fixed-time methods were constructed; linear correlations were found in the general concentration ranges of 3.65 × 10^-6 to 2.19 × 10^-5 mol L^-1 and 2 to 24.0 μg mL^-1, with correlation coefficients of 0.9992-0.9999 and 0.9991-0.9998, respectively. The limits of detection were 0.109-0.273 and 0.210-0.483 μg mL^-1, and the limits of quantitation 0.363-0.910 and 0.700-1.611 μg mL^-1, for the initial-rate and fixed-time methods, respectively. The developed method was validated according to ICH and USP 30-NF 25 guidelines. The suggested method was successfully applied to the estimation of these drugs in their commercial pharmaceutical formulations; the recovery percentages ranged from 97.63% ± 1.37 to 100.17% ± 0.95 for the initial-rate method and from 97.29% ± 0.74 to 100.14% ± 0.81 for the fixed-time method. The data obtained from the analysis of dosage forms were compared with those obtained by reported methods; statistical analysis indicated no significant difference in accuracy or precision between the proposed and reported methods.
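A sketch of the two calibration strategies, initial rate (early slope of the absorbance-time curve) versus fixed time (absorbance at a set time), on simulated pseudo-first-order kinetics; the rate constants and absorbances are placeholders, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(3)
conc = np.array([4e-6, 8e-6, 1.2e-5, 1.6e-5, 2.0e-5])   # mol/L
t = np.linspace(0, 600, 61)                              # seconds

def absorbance(c):
    """Simulated color development A(t) = A_inf * (1 - exp(-k t))."""
    k = 500.0 * c                        # pseudo-first-order rate, 1/s (assumed)
    return 1.2 * (1 - np.exp(-k * t)) + rng.normal(0, 0.002, t.shape)

curves = [absorbance(c) for c in conc]
initial_rate = [np.polyfit(t[:6], a[:6], 1)[0] for a in curves]   # early slope
fixed_time = [a[np.searchsorted(t, 300)] for a in curves]          # A at 300 s

for label, y in (("initial rate", initial_rate), ("fixed time", fixed_time)):
    r = np.corrcoef(conc, y)[0, 1]
    print(f"{label} calibration: r = {r:.4f}")
```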
Evaluation of trapping-web designs
Lukacs, P.M.; Anderson, D.R.; Burnham, K.P.
2005-01-01
The trapping web is a method for estimating the density and abundance of animal populations. A Monte Carlo simulation study was performed to explore the performance of the trapping web for estimating animal density under a variety of web designs and animal behaviours. The trapping web performs well when animals have home ranges, even if the home ranges are large relative to trap spacing. Webs should contain at least 90 traps, and trapping should continue for 5-7 occasions. Movement rates have little impact on density estimates when animals are confined to home ranges. Estimation is poor when animals do not have home ranges and movement rates are rapid. The trapping web is useful for estimating the density of animals that are hard to detect and occur at potentially low densities. © CSIRO 2005.
[Chromosome as a chronicler: Genetic dating, historical events, and DNA-genealogic temptation].
Balanovsky, O P; Zaporozhchenko, V V
2016-07-01
Nonrecombining portions of the genome, the Y chromosome and mitochondrial DNA, are widely used for research on human population gene pools and reconstruction of their history. These systems allow genetic dating of clusters of emerging haplotypes. The main method for age estimation is the ρ statistic, the average number of mutations from the founder haplotype to all modern-day haplotypes; a researcher can estimate the age of a cluster by multiplying this number by the mutation rate. The second estimation method, ASD, is used for STR haplotypes of the Y chromosome and is based on the squared difference in the number of repeats. In addition to these calculation methods, Bayesian modeling methods are gaining new significance: they have greater computational cost and complexity, but they yield an a posteriori distribution of the value of interest that is most consistent with the experimental data. The mutation rate must be known for both the calculation methods and the modeling methods. It can be determined either by analysis of lineages or by calibration points based on populations with a known formation time. These two approaches produced rate estimates for Y-chromosomal STR haplotypes that differ threefold. This contradiction was resolved only recently through the use of sequence data for the complete Y chromosome; the "whole-genomic" rates of single-nucleotide mutations obtained by both methods are mutually consistent and delimit the areas of application of the different STR-marker rates. An issue even more crucial than the rates is the correspondence between the reconstructed history of a haplogroup (a cluster of haplotypes) and the history of the population. Although the need to distinguish "lineage history" from "population history" arose in the earliest days of phylogeographic research, reconstructing population history from genetic dating requires a number of methods and conditions: population-history events leave distinct traces in the history of haplogroups only under certain demographic conditions. Directly identifying the history of a people with the history of the haplogroups occurring in it is inappropriate and is avoided in population genetic studies, although its simplicity and attractiveness make it a constant temptation for researchers. DNA genealogy, an amateur field that goes beyond even citizen science and consistently equates haplogroup with lineage and population, leading to absurd results (e.g., Eurasia as the origin of humankind), can serve as a warning against such a simplified interpretation of genetic dating results.
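A toy illustration of ρ-based dating with Y-STR haplotypes, approximating the number of mutations by the absolute repeat difference from the founder; the haplotypes, mutation rate and generation length are placeholders:

```python
import numpy as np

# founder haplotype: toy 17-locus Y-STR repeat counts
founder = np.array([13, 24, 14, 11, 11, 14, 12, 12, 11, 13, 13, 29, 16, 9, 10, 11, 25])
# 40 descendant haplotypes drifted by a few repeat steps (simulated)
sample = founder + np.random.default_rng(4).integers(-2, 3, size=(40, founder.size))

rho = np.abs(sample - founder).sum(axis=1).mean()   # mean mutations per haplotype
mutation_rate = 0.002     # mutations per locus per generation (assumed)
generations = rho / (mutation_rate * founder.size)
print(f"rho = {rho:.2f}; cluster age ~ {25 * generations:.0f} years (25 y/generation)")
```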
Managing Pacific salmon escapements: The gaps between theory and reality
Knudsen, E. Eric; Steward, Cleveland R.; MacDonald, Donald D.; Williams, Jack E.; Reiser, Dudley W.
1999-01-01
There are myriad challenges to estimating intrinsic production capacity for Pacific salmon populations that are heavily exploited and/or suffering from habitat alteration. Likewise, it is difficult to determine whether perceived decreases in production are due to harvest, habitat, or hatchery influences, natural variation, or some combination of all four. There are dramatic gaps between the true nature of the salmon spawner/recruit relationship and the theoretical basis for describing and understanding that relationship. Importantly, there are also extensive practical difficulties associated with gathering and interpreting accurate escapement and run-size information and applying it to population management. Paradoxically, certain aspects of salmon management may well be contributing to losses in abundance and biodiversity, including harvesting salmon in mixed-population fisheries, grouping populations into management units subject to a common harvest rate, and fully exploiting all available hatchery fish at the expense of wild fish escapements. Information on U.S. Pacific salmon escapement goal-setting methods, escapement data collection methods and estimation types, and the degree to which stocks are subjected to mixed-stock fisheries was summarized and categorized for 1,025 known management units consisting of 9,430 known populations. Using criteria developed in this study, only 1% of U.S. escapement goals are set by methods rated as excellent. Escapement goals for 16% of management units were rated as good. Over 60% of escapement goals have been set by methods rated as either fair or poor, and 22% of management units have no escapement goals at all. Of the 9,430 populations for which any information was available, 6,614 (70%) had sufficient information to categorize the method by which escapement data are collected. Of those, data collection methods were rated as excellent for 1%, good for 1%, fair for 2%, and poor for 52%; escapement estimates are not made for 44% of populations. Escapement estimation type (quality of the data resulting from survey methods) was rated as excellent for <1%, good for 30%, fair for 3%, poor for 22%, and nonexistent for 45%. Numerous recommendations for improvements in escapement management are made in this chapter. In general, improvements are needed in theoretical escapement management techniques, escapement goal-setting methods, and escapement and run-size data quality. There is also a need to change managers' and harvesters' expectations to coincide with the natural variation and uncertainty in the abundance of salmon populations. All the recommendations are aimed at optimizing the number of spawners: healthy escapements ensure salmon sustainability by providing eggs for future production, nutrients to the system, and genetic diversity.
ERIC Educational Resources Information Center
Christ, Theodore J.; Monaghen, Barbara D.; Zopluoglu, Cengiz; Van Norman, Ethan R.
2013-01-01
Curriculum-based measurement of oral reading (CBM-R) is used to index the level and rate of student growth across the academic year. The method is frequently used to set student goals and monitor student progress. This study examined the diagnostic accuracy and quality of growth estimates derived from pre-post measurement using CBM-R data. A…
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies: four on extraction error frequency, one comparing different reviewer extraction methods and two comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%), and errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insight into the influence of different extraction methods.
Comparison of local- to regional-scale estimates of ground-water recharge in Minnesota, USA
Delin, G.N.; Healy, R.W.; Lorenz, D.L.; Nimmo, J.R.
2007-01-01
Regional ground-water recharge estimates for Minnesota were compared to estimates made on the basis of four local- and basin-scale methods. Three local-scale methods (unsaturated-zone water balance (UZWB), water-table fluctuations (WTF) using three approaches, and age dating of ground water) yielded point estimates of recharge that represent spatial scales from about 1 to about 1000 m2. A fourth method (RORA, a basin-scale analysis of streamflow records using a recession-curve-displacement technique) yielded recharge estimates at scales of tens to thousands of km2. The RORA basin-scale recharge estimates were regionalized to estimate recharge for the entire State of Minnesota on the basis of a regional regression recharge (RRR) model that also incorporated soil and climate data. Recharge rates estimated by the RRR model compared favorably to the local- and basin-scale recharge estimates: RRR estimates at study locations were about 41% less on average than the UZWB estimates, ranged from 44% greater to 12% less than estimates based on the three WTF approaches, were about 4% less than the ground-water age-dating estimates, and were about 5% greater than the RORA estimates. Of the methods used in this study, the WTF method is the simplest and easiest to apply. Recharge estimates made on the basis of the UZWB method were inconsistent with the results from the other methods. Recharge estimates from the RRR model could be a good source of input for regional ground-water flow models; RRR model results are currently being applied for this purpose in USGS studies elsewhere.
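Of the local methods compared, the water-table fluctuation (WTF) method is the simplest: recharge = specific yield × the water-table rise attributable to the event, with the rise measured against the extrapolated antecedent recession. A sketch with hypothetical values, not data from the Minnesota study:

```python
import numpy as np

specific_yield = 0.20        # aquifer specific yield (assumed)
elev = np.array([101.10, 101.08, 101.06, 101.30, 101.55, 101.50])  # daily water-table elevation, m
recession = -0.02            # antecedent decline, m/day

peak_day = int(np.argmax(elev))
extrapolated = elev[2] + recession * (peak_day - 2)   # where the level was heading
recharge = specific_yield * (elev[peak_day] - extrapolated)
print(f"event recharge ~ {recharge * 1000:.0f} mm")
```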
Thelin, Gail P.; Stone, Wesley W.
2013-01-01
A method was developed to calculate annual county-level pesticide use for selected herbicides, insecticides, and fungicides applied to agricultural crops grown in the conterminous United States from 1992 through 2009. Pesticide-use data compiled by proprietary surveys of farm operations located within Crop Reporting Districts were used in conjunction with annual harvested-crop acreage reported by the U.S. Department of Agriculture National Agricultural Statistics Service (NASS) to calculate use rates per harvested crop acre, or an 'estimated pesticide use' (EPest) rate, for each crop by year. Pesticide-use data were not available for all Crop Reporting Districts and years. When data were unavailable for a Crop Reporting District in a particular year, EPest extrapolated rates were calculated from adjoining or nearby Crop Reporting Districts to ensure that pesticide use was estimated for all counties that reported harvested-crop acreage. EPest rates were applied to county harvested-crop acreage in two different ways to obtain EPest-low and EPest-high estimates of pesticide use for counties and states, with the exception of use estimates for California, which were taken from annual Department of Pesticide Regulation Pesticide Use Reports. Annual EPest-low and EPest-high use totals were compared with other published pesticide-use reports for selected pesticides, crops, and years. EPest-low and EPest-high national totals for five of seven herbicides were in close agreement with U.S. Environmental Protection Agency and National Pesticide Use Data estimates, but greater than most NASS national totals. A second set of analyses compared EPest and NASS annual state totals and state-by-crop totals for selected crops. Overall, EPest and NASS use totals were not significantly different for the majority of crop-state-year combinations evaluated. Furthermore, comparisons of EPest and NASS use estimates for most pesticides had rank correlation coefficients greater than 0.75 and median relative errors of less than 15 percent. Of the 48 pesticide-by-crop combinations with 10 or more state-year combinations, 12 of the EPest-low and 17 of the EPest-high totals showed significant differences (p < 0.05) from NASS use estimates. The differences between EPest and NASS estimates did not follow consistent patterns related to particular crops, years, or states, and most correlation coefficients were greater than 0.75. EPest values from this study are suitable for making national, regional, and watershed assessments of annual pesticide use from 1992 to 2009. Although estimates are provided by county to facilitate estimation of watershed pesticide use for a wide variety of watersheds, there is a greater degree of uncertainty in individual county-level estimates than in Crop Reporting District or state-level estimates because (1) EPest crop-use rates were developed on the basis of pesticide use on harvested acres in multi-county areas (Crop Reporting Districts) and then allocated to county harvested cropland; (2) pesticide-by-crop use rates were not available for all Crop Reporting Districts in the conterminous United States, and extrapolation methods were used to estimate pesticide use for some counties; and (3) it is possible that surveyed pesticide-by-crop use rates do not reflect all agricultural use on all crops grown. The methods developed in this study are also applicable to other agricultural pesticides and years.
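The core EPest step is a simple rate-times-acreage allocation: a district-level use rate applied to county harvested acreage. A sketch with hypothetical numbers:

```python
# EPest-style sketch: a Crop Reporting District use rate (kg per harvested
# acre) allocated to county harvested acreage. All numbers are hypothetical.
crd_pesticide_kg = 12_000.0        # surveyed use of one pesticide on one crop, kg
crd_harvested_acres = 400_000.0    # harvested acres of that crop in the district
rate = crd_pesticide_kg / crd_harvested_acres   # kg/acre ("EPest" rate)

county_acres = {"County A": 35_000, "County B": 8_200, "County C": 0}
for county, acres in county_acres.items():
    print(f"{county}: {rate * acres:.0f} kg")   # zero where no harvested acreage
```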
Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M
2018-06-01
Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess the error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than the source attribution method for identifying transmission risk factors, but neither method provides robust estimates of transmission risk ratios. Source attribution can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
One-day rate measurements for estimating net nitrification potential in humid forest soils
Ross, D.S.; Fredriksen, G.; Jamison, A.E.; Wemple, B.C.; Bailey, S.W.; Shanley, J.B.; Lawrence, G.B.
2006-01-01
Measurements of net nitrification rates in forest soils have usually been performed by extended sample incubation (2-8 weeks), either in the field or in the lab. Because of disturbance effects, these measurements are only estimates of nitrification potential, and shorter incubations may suffice. In three separate studies of northeastern USA forest soil surface horizons, we found that laboratory nitrification rates measured over 1 day related well to those measured over 4 weeks. Soil samples of Oa or A horizons were mixed by hand, and the initial extraction of subsamples, using 2 mol L-1 KCl, occurred in the field as soon as feasible after sampling. Soils were kept near field temperature and subsampled again the following day in the laboratory. Rates measured by this method were about three times higher than the 4-week rates. Variability in measured rates was similar over either incubation period. Because NO3- concentrations were usually quite low in the field, average rates for 10 research watersheds could be estimated with only a single, 1-day extraction. Methodological studies showed that the concentration of NH4+ increased slowly during contact time with the KCl extractant; thus, this contact time should be kept consistent during the procedure. This method allows a large number of samples to be rapidly assessed. © 2006 Elsevier B.V. All rights reserved.
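The 1-day rate itself is just the change in extractable NO3(-)-N over the incubation interval. A sketch with illustrative values:

```python
# One-day net nitrification sketch: the rate is the change in KCl-extractable
# NO3(-)-N between the field extraction and a subsample taken ~1 day later.
no3_initial = 0.8   # mg NO3-N per kg dry soil, extracted in the field
no3_day1 = 3.2      # mg NO3-N per kg dry soil, after ~24 h near field temperature
hours = 24.0

rate = (no3_day1 - no3_initial) / (hours / 24.0)   # mg N / kg soil / day
print(f"net nitrification ~ {rate:.1f} mg N kg^-1 day^-1")
```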
Phytoplankton production and taxon-specific growth rates in the Costa Rica Dome
Selph, Karen E.; Landry, Michael R.; Taylor, Andrew G.; Gutiérrez-Rodríguez, Andrés; Stukel, Michael R.; Wokuluk, John; Pasulka, Alexis
2016-01-01
During summer 2010, we investigated phytoplankton production and growth rates at 19 stations in the eastern tropical Pacific, where winds and strong opposing currents generate the Costa Rica Dome (CRD), an open-ocean upwelling feature. Primary production (14C-incorporation) and group-specific growth and net growth rates (two-treatment seawater dilution method) were estimated from samples incubated in situ at eight depths. Our cruise coincided with a mild El Niño event, and only weak upwelling was observed in the CRD. Nevertheless, the highest phytoplankton abundances were found near the dome center. However, mixed-layer growth rates were lowest in the dome center (∼0.5–0.9 day−1), but higher on the edge of the dome (∼0.9–1.0 day−1) and in adjacent coastal waters (0.9–1.3 day−1). We found good agreement between independent methods to estimate growth rates. Mixed-layer growth rates of Prochlorococcus and Synechococcus were largely balanced by mortality, whereas eukaryotic phytoplankton showed positive net growth (∼0.5–0.6 day−1), that is, growth available to support larger (mesozooplankton) consumer biomass. These are the first group-specific phytoplankton rate estimates in this region, and they demonstrate that integrated primary production is high, exceeding 1 g C m−2 day−1 on average, even during a period of reduced upwelling. PMID:27275025
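The two-treatment dilution calculation above solves two linear equations: apparent growth k(x) = mu - g*x at whole-seawater fractions x = D and x = 1. A minimal sketch with hypothetical incubation results:

```python
def dilution_rates(k_diluted, k_whole, dilution):
    """Two-treatment dilution method: apparent growth k(x) = mu - g*x, where
    x is the fraction of whole (undiluted) seawater. Solving the two
    equations gives grazing mortality g and instantaneous growth mu."""
    g = (k_diluted - k_whole) / (1.0 - dilution)
    mu = k_whole + g
    return mu, g

# hypothetical apparent growth rates (1/day) from paired 24-h incubations
mu, g = dilution_rates(k_diluted=0.85, k_whole=0.45, dilution=0.2)
print(f"growth mu ~ {mu:.2f}/day, grazing g ~ {g:.2f}/day, net ~ {mu - g:.2f}/day")
```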
A TRMM Rainfall Estimation Method Applicable to Land Areas
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R.; Weinman, J.; Dalu, G.
1999-01-01
Methods developed to estimate rain rate on a footprint scale over land with the satellite-borne multispectral dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer have met with limited success. Variability of surface emissivity on land and beam filling are commonly cited as the weaknesses of these methods. On the contrary, we contend that a more significant reason for this lack of success is that the information content of the spectral and polarization measurements of the SSM/I is limited because of significant redundancy. As a result, the complex nature and vertical distribution of frozen and melting ice particles of different densities, sizes, and shapes cannot be resolved satisfactorily. Extinction in the microwave region due to these complex particles can mask the extinction due to rain drops. For these reasons, theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. To illustrate the weakness of these models, consider the brightness temperature measured by the radiometer in the 85 GHz channel (T85). Models indicate that T85 should be inversely related to the rain rate because of scattering. However, rain rates derived from 15-minute rain gauges on land indicate that this is not true in a majority of footprints. This is also supported by ship-borne radar observations of rain over the ocean in the Tropical Oceans and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE) region. We therefore do not pursue rain retrieval on a footprint scale. Instead, we depend on the limited ability of the microwave radiometer to detect the presence of rain. This capability is useful for determining the rain area in a mesoscale region, and we find in a given rain event that this rain area is closely related to the mesoscale-average rain rate. Based on this observation, we have developed a method to estimate the mesoscale-average rain rate over land utilizing microwave radiometer data. Because of the high degree of geographic and seasonal variability in the nature and intensity of rain, the method requires some tuning with 15-minute rain gauge data on land. After tuning, the method can be applied to an independent set of rain events that are close in time and space. We find that the mesoscale rain rates retrieved over the period of a month on land with this method show a correlation of about 0.85 with respect to the surface rain-gauge observations. This mesoscale-average rain rate estimation method can be useful for extending the spatial and temporal coverage of the rainfall data provided by the Precipitation Radar on board the Tropical Rainfall Measuring Mission (TRMM) satellite.
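A sketch of the area-based idea: the mesoscale-average rain rate is taken proportional to the fraction of footprints flagged as raining, with the coefficient tuned against gauge data. All numbers are hypothetical:

```python
import numpy as np

def mesoscale_rain_rate(rain_flags, coeff):
    """rain_flags: boolean array of footprints in the mesoscale box."""
    return coeff * np.mean(rain_flags)

# tuning step (hypothetical): gauge-derived averages vs rain-area fractions
fractions = np.array([0.05, 0.12, 0.20, 0.33, 0.41])
gauge_avg = np.array([0.6, 1.4, 2.3, 3.9, 4.8])               # mm/h
coeff = np.sum(fractions * gauge_avg) / np.sum(fractions**2)  # least squares through origin
print(f"tuned coefficient ~ {coeff:.1f} mm/h per unit rain fraction")

flags = np.r_[np.ones(25), np.zeros(75)].astype(bool)   # 25% of footprints raining
print(f"scene estimate ~ {mesoscale_rain_rate(flags, coeff):.1f} mm/h")
```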
Estimating evaporative vapor generation from automobiles based on parking activities.
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S
2015-07-01
A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variation of vapor generation, and that the weighted vapor generation rate is 5-8% less than a calculation that does not consider parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
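A sketch of the weighting step only: hourly incremental generation rates combined over a parking-activity distribution. The rates and the distribution below are placeholders, and the actual study derives the rates from hourly temperatures via Wade-Reddy's equation rather than using a fixed table:

```python
import numpy as np

# hourly incremental vapor generation rate, g/h, by hour of day (placeholder)
hourly_rate = np.array([0.0] * 7 + [0.1, 0.3, 0.5, 0.6, 0.6, 0.5, 0.4,
                                    0.2, 0.1, 0.0, 0.0] + [0.0] * 6)

def event_generation(start_hour, duration_h):
    """Vapor generated by one diurnal parking event."""
    idx = (start_hour + np.arange(duration_h)) % 24
    return hourly_rate[idx].sum()

# parking-activity distribution: (start hour, duration h, fraction of events)
events = [(8, 9, 0.30), (7, 4, 0.25), (12, 3, 0.25), (18, 12, 0.20)]
weighted = sum(frac * event_generation(s, d) for s, d, frac in events)
print(f"weighted average generation ~ {weighted:.2f} g per vehicle-day")
```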
Transport parameter estimation from lymph measurements and the Patlak equation.
Watson, P D; Wolf, M B
1992-01-01
Two methods of estimating protein transport parameters from plasma-to-lymph transport data are presented. Both use IBM-compatible computers to obtain least-squares estimates of the solvent drag reflection coefficient and the permeability-surface area product using the Patlak equation. A matrix search approach is described, and its speed and convenience are compared with a commercially available gradient method. The results from both of these methods differed from those of a method reported by Reed, Townsley, and Taylor [Am. J. Physiol. 257 (Heart Circ. Physiol. 26): H1037-H1041, 1989]; it is shown that the Reed et al. method contains a systematic error. It is also shown that diffusion always plays an important role in transmembrane transport at the exit end of a membrane channel under all conditions of lymph flow rate, and that the statement that diffusion becomes zero at high lymph flow rate depends on a mathematical definition of diffusion.
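For reference, the Patlak equation as commonly written gives the steady-state lymph-to-plasma concentration ratio C_L/C_P = (1 - sigma)/(1 - sigma*exp(-Pe)) with Peclet number Pe = J_v(1 - sigma)/PS. A generic least-squares sketch on synthetic data, not either of the paper's two search procedures:

```python
import numpy as np
from scipy.optimize import least_squares

def patlak_ratio(jv, sigma, ps):
    """Steady-state lymph/plasma concentration ratio (Patlak equation)."""
    pe = jv * (1 - sigma) / ps          # Peclet number
    return (1 - sigma) / (1 - sigma * np.exp(-pe))

# hypothetical data: the ratio falls toward (1 - sigma) at high lymph flow
jv = np.array([0.5, 1, 2, 4, 8, 16, 32])   # lymph flow, arbitrary units
ratio = patlak_ratio(jv, 0.85, 4.0) * (
    1 + 0.02 * np.random.default_rng(5).normal(size=jv.size))

fit = least_squares(lambda p: patlak_ratio(jv, p[0], p[1]) - ratio,
                    x0=[0.5, 1.0], bounds=([0.0, 1e-6], [0.999, np.inf]))
print(f"sigma ~ {fit.x[0]:.2f}, PS ~ {fit.x[1]:.2f} (true 0.85, 4.0)")
```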
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with the system under evaluation provided estimates of failure rates for each point in this simplified fault tree, and the means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of individual steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can increase the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
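A minimal sketch of how step-level failure estimates from a simplified fault tree can be combined, assuming independent steps that must all succeed; the step names and rates are placeholders, not the study's data (and the study's workarounds would further reduce the overall figure):

```python
import numpy as np

# per-step failure probabilities elicited from experienced users (placeholders)
step_failure = {"order entry": 0.02, "verification": 0.05,
                "dispensing": 0.01, "administration record": 0.08}

# series system: the task fails if any required step fails
p_process_failure = 1 - np.prod([1 - p for p in step_failure.values()])
print(f"per-task process failure ~ {100 * p_process_failure:.1f}%")
```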
2013-01-01
Introduction: Estimation of kidney function in critically ill patients with acute kidney injury (AKI) is important for appropriate dosing of drugs and adjustment of therapeutic strategies, but challenging due to fluctuations in kidney function, creatinine metabolism and fluid balance. Data on the agreement between estimating and gold standard methods to assess glomerular filtration rate (GFR) in early AKI are lacking. We evaluated the agreement of urinary creatinine clearance (CrCl) and three commonly used estimating equations, the Cockcroft Gault (CG), the Modification of Diet in Renal Disease (MDRD) and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations, in comparison to GFR measured by the infusion clearance of chromium-ethylenediaminetetraacetic acid (51Cr-EDTA), in critically ill patients with early AKI after complicated cardiac surgery. Methods: Thirty patients with early AKI were studied in the intensive care unit, 2 to 12 days after complicated cardiac surgery. The infusion clearance for 51Cr-EDTA obtained as a measure of GFR (GFR51Cr-EDTA) was calculated from the formula: GFR (mL/min/1.73m2) = (51Cr-EDTA infusion rate × 1.73)/(arterial 51Cr-EDTA × body surface area) and compared with the urinary CrCl and the estimated GFR (eGFR) from the three estimating equations. Urine was collected in two 30-minute periods to measure urine flow and urine creatinine. Urinary CrCl was calculated from the formula: CrCl (mL/min/1.73m2) = (urine volume × urine creatinine × 1.73)/(serum creatinine × 30 min × body surface area). Results: The within-group error was lower for GFR51Cr-EDTA than for the urinary CrCl method, 7.2% versus 55.0%. The between-method bias was 2.6, 11.6, 11.1 and 7.39 ml/min for eGFRCrCl, eGFRMDRD, eGFRCKD-EPI and eGFRCG, respectively, when compared to GFR51Cr-EDTA. The error was 103%, 68.7%, 67.7% and 68.0% for eGFRCrCl, eGFRMDRD, eGFRCKD-EPI and eGFRCG, respectively, when compared to GFR51Cr-EDTA. Conclusions: The study demonstrated poor precision of the commonly utilized urinary CrCl method for assessment of GFR in critically ill patients with early AKI, suggesting that this should not be used as a reference method when validating new methods for assessing kidney function in this patient population. The commonly used estimating equations perform poorly when estimating GFR, with high biases and unacceptably high errors. PMID:23767877
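The urinary CrCl formula quoted above is direct to compute. A sketch with hypothetical inputs (note that urine and serum creatinine must share units so they cancel):

```python
# Urinary creatinine clearance over a 30-minute collection, normalized to
# 1.73 m^2 body surface area. All inputs are illustrative.
urine_volume_ml = 45.0       # urine collected over 30 min, mL
urine_creatinine = 4.2       # mmol/L
serum_creatinine = 0.150     # mmol/L
bsa = 1.9                    # body surface area, m^2

crcl = (urine_volume_ml * urine_creatinine * 1.73) / (serum_creatinine * 30 * bsa)
print(f"urinary CrCl ~ {crcl:.0f} mL/min/1.73 m^2")
```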
Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data
Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.
2013-01-01
Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
Age estimation by amino acid racemization in human teeth.
Ohtani, Susumu; Yamamoto, Toshiharu
2010-11-01
When an unidentified body is found, it is essential to establish the personal identity of the body in addition to investigating the cause of death. Identification is one of the most important functions of forensic dentistry. Fingerprint, dental, and DNA analysis can be used to accurately identify a body, but if no such information is available, age estimation can contribute to the resolution of a case. The authors have been using the aspartic acid racemization rate in dentin (D-aspartic acid/L-aspartic acid: D/L Asp) as an index for age estimation and have obtained satisfactory results. We report five cases of age estimation using the racemization method. In all five cases, the estimated ages were accurate to within ±3 years. We conclude that the racemization method is a reliable and practical method for estimating age. © 2010 American Academy of Forensic Sciences.
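A sketch of the standard racemization age calculation: the index ln[(1 + D/L)/(1 - D/L)] increases approximately linearly with age, so age follows from a calibration line. The slope and intercept below are placeholders, not the authors' dentin calibration:

```python
import numpy as np

def racemization_index(d_over_l):
    """First-order racemization index, linear in age."""
    return np.log((1 + d_over_l) / (1 - d_over_l))

slope, intercept = 1.56e-3, 4.0e-2   # per year; from a calibration set (assumed)
d_over_l = 0.055                      # measured D/L Asp ratio in dentin
age = (racemization_index(d_over_l) - intercept) / slope
print(f"estimated age ~ {age:.0f} years")
```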
Perceived noisiness under anechoic, semi-reverberant and earphone listening conditions
NASA Technical Reports Server (NTRS)
Clarke, F. R.; Kryter, K. D.
1972-01-01
Magnitude estimates by each of 31 listeners were obtained for a variety of noise sources under three methods of stimulus presentation: loudspeaker presentation in an anechoic chamber, loudspeaker presentation in a normal semi-reverberant room, and earphone presentation. The comparability of ratings obtained in these environments was evaluated with respect to the predictability of ratings from physical measures, the reliability of ratings, and the scale values assigned to various noise stimuli. The acoustic environment was found to have little effect on physical predictive measures, and ratings of perceived noisiness were little affected by the acoustic environment in which they were obtained. The finding that the subjects, though instructed otherwise, apparently judged the maximum rather than the effective magnitude of steady-state noises indicates the need for further study of possible differing interactions between the judged noisiness of steady-state sound and the methods of magnitude estimation and paired comparisons.
Inferring time derivatives including cell growth rates using Gaussian processes
NASA Astrophysics Data System (ADS)
Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta
2016-12-01
Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
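For a GP with an RBF kernel, differentiation is linear, so the posterior mean of f' follows from the analytic derivative of the kernel. A minimal numpy sketch on a simulated growth curve, with fixed hyperparameters rather than the optimized ones the method would use; this is not the authors' implementation:

```python
import numpy as np

def rbf(a, b, ell=1.0, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def drbf_da(a, b, ell=1.0, var=1.0):
    """Derivative of the RBF kernel with respect to its first argument."""
    d = a[:, None] - b[None, :]
    return -(d / ell ** 2) * rbf(a, b, ell, var)

rng = np.random.default_rng(6)
t = np.linspace(0, 6, 40)
y = np.exp(0.4 * t) + rng.normal(0, 0.05, t.size)   # noisy "growth curve"

K = rbf(t, t) + 0.05 ** 2 * np.eye(t.size)          # kernel + noise variance
alpha = np.linalg.solve(K, y)
t_star = np.array([3.0])
f = rbf(t_star, t) @ alpha                           # posterior mean of f
df = drbf_da(t_star, t) @ alpha                      # posterior mean of f'
print(f"specific growth rate at t=3 ~ {(df / f).item():.2f} (true 0.40)")
```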
A Group Contribution Method for Estimating Cetane and Octane Numbers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubic, William Louis
Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies, and the physical properties needed to assess the viability of a potential biofuel are often not available; the only reliable information may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years, most commonly for the estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in the types of functional groups covered and in their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
Wakabayashi, Genichiro; Nohtomi, Akihiro; Yahiro, Eriko; Fujibuchi, Toshioh; Fukunaga, Junichi; Umezu, Yoshiyuki; Nakamura, Yasuhiko; Nakamura, Katsumasa; Hosono, Makoto; Itoh, Tetsuo
2015-01-01
The applicability of the activation of an NaI scintillator for neutron monitoring at a clinical linac was investigated experimentally. Thermal neutron fluence rates are derived by measuring the I-128 activity generated in an NaI scintillator irradiated by neutrons; the β-rays from I-128 are detected efficiently by the NaI scintillator itself. To verify the validity of this method for neutron measurement, we irradiated an NaI scintillator at a research reactor and estimated the neutron fluence rate. The method was then applied to neutron measurement at a 10-MV linac (Varian Clinac 21EX), and the neutron fluence rate was estimated at the isocenter and at 30 cm from the isocenter. When the scintillator was irradiated directly by high-energy X-rays, the production of I-126 was observed due to photo-nuclear reactions, in addition to the generation of I-128 and Na-24. From these measurements, it was found that neutron measurement by activation of an NaI scintillator has a great advantage for estimating low neutron fluence rates with a quick measurement following a short irradiation. The future application of this method to quasi-real-time monitoring of neutrons during patient treatments at a radiotherapy facility is also discussed, as well as the evaluation of the neutron dose.
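The underlying calculation follows the standard activation build-up equation, A = N·σ·φ·(1 - e^(-λ·t_irr)), inverted for the fluence rate φ. A sketch using the known I-127 thermal cross section and I-128 half-life, but otherwise hypothetical numbers; detector efficiency and self-absorption are ignored:

```python
import numpy as np

sigma = 6.2e-24          # I-127 (n,gamma) thermal cross section, cm^2 (~6.2 b)
half_life = 25.0 * 60    # I-128 half-life, s
lam = np.log(2) / half_life
n_iodine = 2.0e24        # I-127 atoms in the crystal (assumed, ~500 g NaI)
t_irr = 600.0            # irradiation time, s
activity = 1.5e3         # I-128 activity at end of irradiation, Bq (assumed)

phi = activity / (n_iodine * sigma * (1 - np.exp(-lam * t_irr)))
print(f"thermal neutron fluence rate ~ {phi:.2e} n/cm^2/s")
```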
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method shows negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find the optimum operating points, and we experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
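A toy illustration of the coarse-estimation idea: sweep a single back-propagation parameter over a grid and keep the value that minimizes EVM. The nonlinearity below is a simple intensity-dependent phase rotation standing in for the SOA model; it is not the paper's actual filter:

```python
import numpy as np

def evm(rx, ref):
    """Error vector magnitude (%) of received symbols vs the ideal constellation."""
    return 100 * np.sqrt(np.mean(np.abs(rx - ref) ** 2) / np.mean(np.abs(ref) ** 2))

def backprop(rx, alpha):
    """Toy back-propagation: invert an intensity-dependent phase rotation."""
    return rx * np.exp(-1j * alpha * np.abs(rx) ** 2)

rng = np.random.default_rng(7)
qam = (rng.integers(0, 4, 2000) * 2 - 3) + 1j * (rng.integers(0, 4, 2000) * 2 - 3)  # 16-QAM
rx = qam * np.exp(1j * 0.02 * np.abs(qam) ** 2)      # toy nonlinear channel

alphas = np.linspace(0, 0.05, 26)                    # coarse grid
best = min(alphas, key=lambda a: evm(backprop(rx, a), qam))
print(f"coarse DFBP parameter ~ {best:.3f} (true 0.020)")
```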
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Sidorov, V. G.; Zelenkov, P. V.; Khoroshko, A. Y.; Lelekov, A. T.
2015-10-01
To optimize the parameters of a beta-electrical (betavoltaic) converter of nickel-63 radiation, a model of the distribution of the EHP (electron-hole pair) generation rate in the semiconductor must be derived. Using Monte Carlo methods in the GEANT4 system with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with a Gauss function. The maximal efficient isotope-layer thickness and the maximal energy efficiency of EHP generation were estimated.