Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that, regardless of the approach utilized, species richness estimates depend on the size of the analyzed clone library. Here we propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum likelihood (ML)-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates increased with library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
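The extrapolation idea can be illustrated with a simplified sketch: compute richness estimates at several library sizes, fit a saturating curve, and read off its asymptote as the sample size-unbiased richness. The Michaelis-Menten functional form and the grid-search fit below are illustrative assumptions, not the paper's exact procedure.

```python
def fit_asymptotic_richness(sizes, richness):
    """Fit S(n) = S_max * n / (K + n) by grid search; return (S_max, K).

    sizes: clone-library sizes; richness: richness estimates at those sizes.
    The saturating (Michaelis-Menten) form is an illustrative assumption.
    """
    best_sse, best_smax, best_k = float("inf"), None, None
    smax_hi = max(richness) * 5          # search asymptotes up to 5x observed
    for i in range(1, 201):
        smax = smax_hi * i / 200
        for j in range(1, 101):
            k = max(sizes) * j / 100
            sse = sum((s - smax * n / (k + n)) ** 2
                      for n, s in zip(sizes, richness))
            if sse < best_sse:
                best_sse, best_smax, best_k = sse, smax, k
    return best_smax, best_k
```

The fitted `S_max` plays the role of the "true" richness that an infinitely large library would encounter.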
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
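As a sketch of the asymptotic unconditional McNemar calculation the authors favor, the function below computes the number of pairs from the hypothesized discordant-cell proportions of the paired 2 × 2 table. This is a standard normal-approximation formula; treat it as illustrative rather than the paper's exact implementation.

```python
from math import ceil, sqrt
from statistics import NormalDist

def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for an asymptotic (unconditional) McNemar test.

    p10, p01: hypothesized discordant-cell proportions of the 2x2 table,
    i.e. P(yes, no) and P(no, yes); their difference is the effect of interest.
    """
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    p_disc = p10 + p01               # total discordant proportion
    diff = p10 - p01                 # difference in marginal proportions
    n = (z_a * sqrt(p_disc) + z_b * sqrt(p_disc - diff ** 2)) ** 2 / diff ** 2
    return ceil(n)
```

For example, with hypothesized discordant proportions 0.2 and 0.1, about 234 pairs give 80% power at a two-sided 5% level.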
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as those from the Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and from that information the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in R and can be installed from Bioconductor. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research, and an appropriate estimate based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, including the estimation formulas and their realization through the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, worked examples are presented to guide researchers in implementing the repetition principle during the research design phase.
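For the quantitative-data case of a one-factor, two-level design, the classic normal-approximation formula can be sketched directly (a simplified stand-in for the SAS POWER procedure the article describes; the t-distribution refinement used by SAS would add one or two subjects per group).

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (sigma * (z(1 - alpha / 2) + z(power)) / delta) ** 2)
```

For a standardized difference of 0.5 (delta = 0.5, sigma = 1), this gives the familiar 63 per group at 80% power and 85 per group at 90% power.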
Variation in polyp size estimation among endoscopists and impact on surveillance intervals.
Chaptini, Louis; Chaaya, Adib; Depalma, Fedele; Hunter, Krystal; Peikin, Steven; Laine, Loren
2014-10-01
Accurate estimation of polyp size is important because it is used to determine the surveillance interval after polypectomy. To evaluate the variation and accuracy in polyp size estimation among endoscopists and the impact on surveillance intervals after polypectomy. Web-based survey. A total of 873 members of the American Society for Gastrointestinal Endoscopy. Participants watched video recordings of 4 polypectomies and were asked to estimate the polyp sizes. Proportion of participants with polyp size estimates within 20% of the correct measurement and the frequency of incorrect surveillance intervals based on inaccurate size estimates. Polyp size estimates were within 20% of the correct value for 1362 (48%) of 2812 estimates (range 39%-59% for the 4 polyps). Polyp size was overestimated by >20% in 889 estimates (32%, range 15%-49%) and underestimated by >20% in 561 (20%, range 4%-46%) estimates. Incorrect surveillance intervals because of overestimation or underestimation occurred in 272 (10%) of the 2812 estimates (range 5%-14%). Participants in a private practice setting overestimated the size of 3 or all 4 polyps by >20% more often than participants in an academic setting (difference = 7%; 95% confidence interval, 1%-11%). Survey design with the use of video clips. Substantial overestimation and underestimation of polyp size occurs with visual estimation, leading to incorrect surveillance intervals in 10% of cases. Our findings support routine use of measurement tools to improve polyp size estimates. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
The pack size effect: Influence on consumer perceptions of portion sizes.
Hieke, Sophie; Palascha, Aikaterini; Jola, Corinne; Wills, Josephine; Raats, Monique M
2016-01-01
Larger portions as well as larger packs can lead to larger prospective consumption estimates, larger servings and increased consumption, described as 'portion-size effects' and 'pack size effects'. Although related, the effects of pack sizes on portion estimates have received less attention. While it is not possible to generalize consumer behaviour across cultures, external cues taken from pack size may affect us all. We thus examined whether pack sizes influence portion size estimates across cultures, leading to a general 'pack size effect'. We compared portion size estimates based on digital presentations of different product pack sizes of solid and liquid products. The study with 13,177 participants across six European countries consisted of three parts. Parts 1 and 2 asked participants to indicate the number of portions present in a combined photographic and text-based description of different pack sizes. The estimated portion size was calculated as the quotient of the content weight or volume of the food presented and the number of stated portions. In Part 3, participants stated the number of food items that make up a portion when presented with packs of food containing either a small or a large number of items. The estimated portion size was calculated as the item weight times the item number. For all three parts and across all countries, we found that participants' portion estimates were based on larger portions for larger packs compared to smaller packs (Parts 1 and 2) as well as more items to make up a portion (Part 3); hence, portions were stated to be larger in all cases. Considering that the larger estimated portions are likely to be consumed, there are implications for energy intake and weight status. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nelson, M; Atkinson, M; Darbyshire, S
1996-07-01
The aim of the present study was to determine the errors in the conceptualization of portion size using photographs. Male and female volunteers aged 18-90 years (n = 136) from a wide variety of social and occupational backgrounds completed 602 assessments of portion size in relation to food photographs. Subjects served themselves between four and six foods at one meal (breakfast, lunch or dinner). Portion sizes were weighed by the investigators at the time of serving, and any waste was weighed at the end of the meal. Within 5 min of the end of the meal, subjects were shown photographs depicting each of the foods just consumed. For each food there were eight photographs showing portion sizes in equal increments from the 5th to the 95th centile of the distribution of portion weights observed in The Dietary and Nutritional Survey of British Adults (Gregory et al. 1990). Subjects were asked to indicate on a visual analogue scale the size of the portion consumed in relation to the eight photographs. The nutrient contents of meals were estimated from food composition tables. There were large variations in the estimation of portion sizes from photographs. Butter and margarine portion sizes tended to be substantially overestimated. In general, small portion sizes tended to be overestimated, and large portion sizes underestimated. Older subjects overestimated portion size more often than younger subjects. Excluding butter and margarine, the nutrient content of meals based on estimated portion sizes was on average within ±7% of the nutrient content based on the amounts consumed, except for vitamin C (21% overestimate), and for subjects over 65 years (15-20% overestimate for energy and fat).
In subjects whose BMI was less than 25 kg/m2, the energy and fat contents of meals calculated from food composition tables and based on estimated portion size (excluding butter and margarine) were 5-10% greater than the nutrient content calculated using actual portion size, but for those with BMI 30 kg/m2 or over, the calculated energy and fat contents were underestimated by 2-5%. The correlation of the nutrient content of meals based on actual or estimated portion sizes ranged from 0.84 to 0.96. For energy and eight nutrients, between 69 and 89% of subjects were correctly classified into thirds of the distribution of intake using estimated portion size compared with intakes based on actual portion sizes. When 'average' portion sizes (the average weight of each of the foods which the subjects had served themselves) were used in place of the estimates based on photographs, the number of subjects correctly classified fell to between 60 and 79%. We report for the first time the error associated with conceptualization and the nutrient content of meals when using photographs to estimate food portion size. We conclude that photographs depicting a range of portion sizes are a useful aid to the estimation of portion size. Misclassification of subjects according to their nutrient intake from one meal is reduced when photographs are used to estimate portion size, compared with the use of average portions. Age, sex, BMI and portion size are all potentially important confounders when estimating food consumption or nutrient intake using photographs.
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
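The scale-selection idea behind multiscale LoG filtering can be sketched in one dimension: for a Gaussian-shaped blob of width s, the scale-normalized LoG response at the blob center peaks at sigma = s·sqrt(2), so the best-responding scale yields a size estimate. This is a toy 1-D sketch of the principle, not the authors' 3-D implementation.

```python
from math import exp, pi, sqrt

def log_kernel(x, sigma):
    """Scale-normalized 1-D Laplacian of Gaussian: sigma^2 * d^2/dx^2 G_sigma(x)."""
    g = exp(-x * x / (2 * sigma * sigma)) / (sigma * sqrt(2 * pi))
    return sigma ** 2 * g * (x * x / sigma ** 4 - 1 / sigma ** 2)

def blob_width(signal, center, sigmas, half=20):
    """Estimate blob width from the scale of strongest |LoG| response at center."""
    def response(sigma):
        return sum(signal[center + dx] * log_kernel(dx, sigma)
                   for dx in range(-half, half + 1))
    best = max(sigmas, key=lambda s: abs(response(s)))
    return best / sqrt(2)   # peak response occurs at sigma = width * sqrt(2)
```

In the real method this search runs over 3-D image volumes and the candidate with the strongest (pruned) response is kept.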
Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik
2015-12-01
Airborne laser scanning (ALS) has recently emerged as a promising tool to acquire auxiliary information for improving aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implication when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R² increased from 0.35 to 0.74, while the relative root mean square error decreased from 63.6% to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB, relative to corresponding variance estimates of pure field-based AGB, decreased with increasing plot size in the range from 200 to 3000 m². The variance ratio of field-based estimates relative to model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation when increasing field-plot size was greater for an ALS-assisted inventory compared to that of a pure field-based inventory.
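The model-assisted logic can be sketched generically: predict the response (e.g. AGB from an ALS metric) for every population unit with a model fit on the field plots, then correct the mean prediction by the mean field-plot residual. The simple linear working model below is an illustrative assumption, not the study's actual model.

```python
def model_assisted_mean(pop_x, sample_idx, sample_y):
    """Model-assisted (difference) estimator of the population mean of y.

    pop_x: auxiliary value for every population unit (e.g. an ALS metric);
    sample_idx / sample_y: field-sample unit indices and observed responses.
    """
    xs = [pop_x[i] for i in sample_idx]
    n = len(xs)
    mx, my = sum(xs) / n, sum(sample_y) / n
    # least-squares fit of the working model y = a + b*x on the sample
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, sample_y))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    mean_pred = sum(a + b * x for x in pop_x) / len(pop_x)
    mean_resid = sum(y - (a + b * x) for x, y in zip(xs, sample_y)) / n
    return mean_pred + mean_resid
```

The residual correction keeps the estimator approximately design-unbiased even when the working model is poor; a better model (here, larger plots) mainly shrinks the variance.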
Deutsch, Madeline B
2016-06-01
An accurate estimate of the number of transgender and gender nonconforming people is essential to inform policy and funding priorities and decisions. Historical reports of population sizes of 1 in 4000 to 1 in 50,000 have been based on clinical populations and likely underestimate the size of the transgender population. More recent population-based studies have found a 10- to 100-fold increase in population size. Studies that estimate population size should be population based, employ the two-step method to allow for collection of both gender identity and sex assigned at birth, and include measures to capture the range of transgender people with nonbinary gender identities.
Estimation of the size of the female sex worker population in Rwanda using three different methods.
Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin
2015-10-01
HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture-recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture-recapture method was 3205 (95% confidence interval: 2998-3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916-2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture-recapture, enumeration, and multiplier methods. The capture-recapture and enumeration methods provided similar estimates of female sex workers in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. © The Author(s) 2015.
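Two-round capture-recapture estimates of this kind are commonly computed with the Lincoln-Petersen estimator; the sketch below uses Chapman's bias-corrected variant with a normal-approximation confidence interval. The formula choice and the example numbers are illustrative assumptions, not the study's exact procedure.

```python
def chapman_estimate(n1, n2, m, z=1.96):
    """Chapman's bias-corrected Lincoln-Petersen population-size estimate.

    n1: individuals tagged/listed in the first round; n2: individuals observed
    in the second round; m: individuals seen in both rounds (recaptures).
    Returns the point estimate and a normal-approximation confidence interval.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    se = var ** 0.5
    return n_hat, (n_hat - z * se, n_hat + z * se)
```

The estimator assumes a closed population and equal catchability across rounds, which is why studies like this one triangulate it against enumeration and multiplier methods.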
Anzehaee, Mohammad Mousavi; Haeri, Mohammad
2011-07-01
New estimators are designed based on the modified force balance model to estimate the detaching droplet size, detached droplet size, and mean value of droplet detachment frequency in a gas metal arc welding process. The proper droplet size for the process to be in the projected spray transfer mode is determined based on the modified force balance model and the designed estimators. Finally, the droplet size and the melting rate are controlled using two proportional-integral (PI) controllers to achieve high weld quality by retaining the transfer mode and generating appropriate signals as inputs of the weld geometry control loop. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, Chen; Rosengard, Sarah Z.; Burt, William; Peña, M. Angelica; Nemcek, Nina; Zeng, Tao; Arrigo, Kevin R.; Tortell, Philippe D.
2018-06-01
We evaluate several algorithms for the estimation of phytoplankton size class (PSC) and functional type (PFT) biomass from ship-based optical measurements in the Subarctic Northeast Pacific Ocean. Using underway measurements of particulate absorption and backscatter in surface waters, we derived estimates of PSC/PFT based on chlorophyll-a concentrations (Chl-a), particulate absorption spectra and the wavelength dependence of particulate backscatter. Optically-derived [Chl-a] and phytoplankton absorption measurements were validated against discrete calibration samples, while the derived PSC/PFT estimates were validated using size-fractionated Chl-a measurements and HPLC analysis of diagnostic photosynthetic pigments (DPA). Our results show that PSC/PFT algorithms based on [Chl-a] and particulate absorption spectra performed significantly better than the backscatter slope approach. These two more successful algorithms yielded estimates of phytoplankton size classes that agreed well with HPLC-derived DPA estimates (RMSE = 12.9% and 16.6%, respectively) across a range of hydrographic and productivity regimes. Moreover, the [Chl-a] algorithm produced PSC estimates that agreed well with size-fractionated [Chl-a] measurements, and estimates of the biomass of specific phytoplankton groups that were consistent with values derived from HPLC. Based on these results, we suggest that simple [Chl-a] measurements should be more fully exploited to improve the classification of phytoplankton assemblages in the Northeast Pacific Ocean.
Estimating population size with correlated sampling unit estimates
David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey
2003-01-01
Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark-recapture or distance sampling methods occur...
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
Bootstrapping is distribution-independent, providing an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, bootstrap sample size estimation for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculations by mathematical formulas (under the normal distribution assumption) for the identical data are also carried out. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined using the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the outset, and to employ the same statistical method as planned for the subsequent analysis on each bootstrap sample, provided historical data are available that are well representative of the population to which the proposed trial is intended to extrapolate.
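A minimal sketch of the bootstrap power calculation described above, assuming pilot data are resampled with replacement and each bootstrap pair of arms is compared with a simple normal-approximation test of the mean difference (the paper substitutes a Wilcoxon test for non-normal data):

```python
import random
from statistics import NormalDist, mean, stdev

def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=1000, alpha=0.05, seed=42):
    """Estimate power at a candidate per-arm n by resampling pilot data.

    Each bootstrap replicate draws n_per_arm values per arm (with replacement)
    and applies a two-sided normal-approximation test of the mean difference.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_boot):
        a = [rng.choice(pilot_a) for _ in range(n_per_arm)]
        b = [rng.choice(pilot_b) for _ in range(n_per_arm)]
        se = (stdev(a) ** 2 / n_per_arm + stdev(b) ** 2 / n_per_arm) ** 0.5
        if abs(mean(a) - mean(b)) / se > z_crit:
            hits += 1
    return hits / n_boot
```

In practice one scans `n_per_arm` upward until the estimated power crosses the target (e.g. 80%).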
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Population estimates of extended family structure and size.
Garceau, Anne; Wideroff, Louise; McNeel, Timothy; Dunn, Marsha; Graubard, Barry I
2008-01-01
Population-based estimates of biological family size can be useful for planning genetic studies, assessing how distributions of relatives affect disease associations with family history and estimating prevalence of potential family support. Mean family size per person is estimated from a population-based telephone survey (n = 1,019). After multivariate adjustment for demographic variables, older and non-White respondents reported greater mean numbers of total, first- and second-degree relatives. Females reported more total and first-degree relatives, while less educated respondents reported more second-degree relatives. Demographic differences in family size have implications for genetic research. Therefore, periodic collection of family structure data in representative populations would be useful. Copyright 2008 S. Karger AG, Basel.
Estimating number and size of forest patches from FIA plot data
Mark D. Nelson; Andrew J. Lister; Mark H. Hansen
2009-01-01
Forest inventory and analysis (FIA) annual plot data provide for estimates of forest area, type, volume, growth, and other attributes. Estimates of forest landscape metrics, such as those describing abundance, size, and shape of forest patches, however, typically are not derived from FIA plot data but from satellite image-based land cover maps. Associating image-based...
Generalizations and Extensions of the Probability of Superiority Effect Size Estimator
ERIC Educational Resources Information Center
Ruscio, John; Gera, Benjamin Lee
2013-01-01
Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…
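A minimal sketch of the probability-based estimator A for a 2-group comparison: the probability that a randomly drawn score from one group exceeds a randomly drawn score from the other, with ties counted half (equivalently, a rescaled Mann-Whitney U statistic).

```python
def prob_superiority(x, y):
    """Effect size A = P(X > Y) + 0.5 * P(X == Y), over all cross-group pairs."""
    wins = sum(1 for xi in x for yi in y if xi > yi)
    ties = sum(1 for xi in x for yi in y if xi == yi)
    return (wins + 0.5 * ties) / (len(x) * len(y))
```

A = 0.5 indicates no group difference; values near 0 or 1 indicate strong separation, and the measure is unchanged by any monotone transformation of the scores, which is why it is robust to outliers and non-normality.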
Gordon Luikart; Nils Ryman; David A. Tallmon; Michael K. Schwartz; Fred W. Allendorf
2010-01-01
Population census size (NC) and effective population sizes (Ne) are two crucial parameters that influence population viability, wildlife management decisions, and conservation planning. Genetic estimators of both NC and Ne are increasingly widely used because molecular markers are increasingly available, statistical methods are improving rapidly, and genetic estimators...
2014-01-01
Background Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
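The k-mer side of the analysis rests on a simple identity: with roughly uniform coverage, total k-mers ≈ genome k-mers × coverage depth, so genome size ≈ total k-mers divided by the depth at the peak of the k-mer depth histogram. A naive sketch of that idea (ignoring sequencing errors, heterozygosity and reverse complements, which real pipelines must handle):

```python
from collections import Counter

def kmer_genome_size(reads, k):
    """Naive k-mer genome-size estimate: total k-mers / modal k-mer depth."""
    depth = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            depth[read[i:i + k]] += 1
    total = sum(depth.values())
    hist = Counter(depth.values())
    # modal depth, skipping depth-1 k-mers (typically sequencing errors)
    peak = max((d for d in hist if d > 1), key=lambda d: hist[d], default=1)
    return total // peak
```

Real tools estimate the coverage peak from a smoothed histogram of billions of k-mers; the principle is the same.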
Unfolding sphere size distributions with a density estimator based on Tikhonov regularization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weese, J.; Korat, E.; Maier, D.
1997-12-01
This report proposes a method for unfolding sphere size distributions, given a sample of radii, that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed in this report to achieve this method: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; the results based on simulations; and the comparison with experimental data.
Mclean, Elizabeth L; Forrester, Graham E
2018-04-01
We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species), scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions.
Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more negative majority view. Although fishers' and scientific estimates of size at maturity and maximum size parameters sometimes differed, the fact that fishers make routine quantitative assessments of maturity and body size suggests potential for future collaborative monitoring efforts to generate estimates usable by scientists and meaningful to fishers. © 2017 by the Ecological Society of America.
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results evaluated on over 1300 natural images show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
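The pipeline in the abstract above (boundary detection, cleanup, then block size estimation) can be illustrated with a much simplified sketch: the 2×2 cross-differential filter is approximated by plain column differences, the morphological cleanup is omitted, and the MLE step is replaced by a simple periodic-energy score, so this shows the idea rather than the authors' exact method.

```python
import numpy as np

def estimate_block_size(image, candidates=range(2, 33)):
    """Estimate the block size of a blockwise-processed grayscale image.

    Simplified sketch: measure vertical artifact boundary strength with
    column-wise absolute differences, then score each candidate period b
    by how much of that energy concentrates on multiples of b.
    """
    img = np.asarray(image, dtype=float)
    # Mean absolute difference between adjacent columns (boundary strength).
    col_diff = np.abs(np.diff(img, axis=1)).mean(axis=0)
    best, best_score = None, -np.inf
    for b in candidates:
        grid = np.arange(b - 1, col_diff.size, b)
        on_grid = col_diff[grid].mean()            # energy at multiples of b
        off_grid = np.delete(col_diff, grid).mean()  # energy elsewhere
        score = on_grid - off_grid                 # periodic excess energy
        if score > best_score:
            best, best_score = b, score
    return best
```

On a synthetic image tiled with constant 8×8 blocks, the score peaks at the true period; real images need the boundary-cleanup step the paper describes.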
Thompson, J K; Dolce, J J
1989-05-01
Thirty-two asymptomatic college females were assessed on multiple aspects of body image. Subjects' estimation of the size of three body sites (waist, hips, thighs) was affected by instructional protocol. Emotional ratings, based on how subjects "felt" about their bodies, were larger than actual and ideal size measures. Size ratings based on rational instructions were no different from actual sizes, but were larger than ideal ratings. There were no differences between actual and ideal sizes. The results are discussed with regard to methodological issues involved in body image research. In addition, a working hypothesis that differentiates affective/emotional from cognitive/rational aspects of body size estimation is offered to complement current theories of body image. Implications of the findings for the understanding of body image and its relationship to eating disorders are discussed.
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students aged 20 to 24 years (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
NASA Astrophysics Data System (ADS)
Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.
2011-10-01
Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die-casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentrations using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
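Once an inversion routine like SAINV1/SAINV2 has produced lognormal parameters (number concentration, count median diameter, geometric standard deviation), surface area follows from the standard lognormal moment identity E[d²] = CMD²·exp(2·ln²GSD), a Hatch-Choate relation. The sketch below assumes spherical particles and is not the paper's exact implementation:

```python
import math

def surface_area_conc(number_conc, cmd_um, gsd):
    """Surface area concentration of a lognormal aerosol (spheres assumed).

    number_conc : particle number concentration (particles/cm^3)
    cmd_um      : count median diameter (um)
    gsd         : geometric standard deviation (dimensionless, >= 1)

    Uses the lognormal moment identity E[d^2] = CMD^2 * exp(2 * ln(GSD)^2),
    so total surface area = N * pi * E[d^2], in um^2/cm^3.
    """
    mean_d2 = cmd_um ** 2 * math.exp(2.0 * math.log(gsd) ** 2)
    return number_conc * math.pi * mean_d2
```

For a monodisperse aerosol (GSD = 1) this reduces to N·π·d², and the surface area grows rapidly with GSD because the d² moment weights the coarse tail.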
Sulaberidze, Lela; Mirzazadeh, Ali; Chikovani, Ivdity; Shengelia, Natia; Tsereteli, Nino; Gotsadze, George
2016-01-01
An accurate estimation of the population size of men who have sex with men (MSM) is critical to the success of HIV program planning and to monitoring of the response to the epidemic as a whole, but is quite often missing. In this study, our aim was to estimate the population size of MSM in Tbilisi, Georgia and compare it with other estimates in the region. In the absence of a gold standard for estimating the population size of MSM, this study reports a range of methods, including network scale-up, mobile/web apps multiplier, service and unique object multiplier, network-based capture-recapture, Handcock RDS-based and Wisdom of Crowds methods. To apply all these methods, two surveys were conducted: first, a household survey among 1,015 adults from the general population, and second, a respondent-driven sample of 210 MSM. We also conducted a literature review of MSM size estimation in Eastern European and Central Asian countries. The median population size of MSM generated from all previously mentioned methods was estimated to be 5,100 (95% Confidence Interval (CI): 3,243~9,088). This corresponds to 1.42% (95%CI: 0.9%~2.53%) of the adult male population in Tbilisi. Our size estimates of the MSM population fall within the ranges reported in other Eastern European and Central Asian countries. These estimates can provide valuable information for country-level HIV prevention program planning and evaluation. Furthermore, we believe that our results will narrow the gap in data availability on the estimates of the population size of MSM in the region.
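The service/unique-object multiplier mentioned above rests on simple arithmetic: if M distinct population members received a marker (a service or unique object) and a fraction m/n of an independent survey sample reports receiving it, then N̂ = M·n/m. A minimal sketch with hypothetical numbers, not the study's data:

```python
def multiplier_estimate(num_distributed, sample_reporting, sample_size):
    """Unique-object multiplier population size estimate.

    num_distributed  : markers given to distinct population members (M)
    sample_reporting : survey respondents who report receiving one (m)
    sample_size      : total survey respondents (n)

    N-hat = M / (m / n). Assumes the survey approximates a random draw
    from the target population and markers reached only its members.
    """
    if sample_reporting == 0:
        raise ValueError("no overlap between multiplier source and survey")
    return num_distributed * sample_size / sample_reporting
```

For example, 300 objects distributed and 12 of 200 respondents reporting one yields an estimate of 5,000; violations of the random-mixing assumption bias the result.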
Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An
2005-11-15
Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is 3-fold. Firstly, we compare the performance of several word-based and alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity beta between any pair of DNA sequences. Our study shows (1) for whole-sequence similarity/dissimilarity identification the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice, (2) for each measure the optimal word size increases with window size, (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis, (4) the estimate of beta based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database searches for similar sequences and (5) beta is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The SK-LD algorithm, the beta estimator, and the simulation software are implemented in MATLAB code and are available at http://www.stat.ncku.edu.tw/tjwu
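The SK-LD measure compares the word (k-mer) frequency profiles of two sequences with a symmetrized Kullback-Leibler divergence. A minimal sketch follows; the pseudocount smoothing here is an assumption for illustration, and the paper's handling of unseen words may differ:

```python
from collections import Counter
from math import log

def kmer_freqs(seq, k):
    """Relative frequencies of overlapping words of size k."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sk_ld(seq1, seq2, k, pseudo=1e-6):
    """Symmetric Kullback-Leibler discrepancy on word-size-k profiles.

    A small pseudocount keeps the discrepancy finite when a word occurs
    in only one sequence.
    """
    p, q = kmer_freqs(seq1, k), kmer_freqs(seq2, k)
    total = 0.0
    for w in set(p) | set(q):
        pw = p.get(w, 0.0) + pseudo
        qw = q.get(w, 0.0) + pseudo
        # (p - q) * log(p / q) sums to KL(P||Q) + KL(Q||P)
        total += (pw - qw) * log(pw / qw)
    return total
```

Identical sequences score 0, and the score grows as the word-usage profiles diverge, which is what makes it usable as a fast pre-alignment filter.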
A Simple Effect Size Estimator for Single Case Designs Using WinBUGS
ERIC Educational Resources Information Center
Rindskopf, David; Shadish, William; Hedges, Larry
2012-01-01
Data from single case designs (SCDs) have traditionally been analyzed by visual inspection rather than statistical models. As a consequence, effect sizes have been of little interest. Lately, some effect-size estimators have been proposed, but most are either (i) nonparametric, and/or (ii) based on an analogy incompatible with effect sizes from…
ERIC Educational Resources Information Center
Ruscio, John; Mullen, Tara
2012-01-01
It is good scientific practice to the report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
Improved population estimates through the use of auxiliary information
Johnson, D.H.; Ralph, C.J.; Scott, J.M.
1981-01-01
When estimating the size of a population of birds, the investigator may have, in addition to an estimator based on a statistical sample, information on one of several auxiliary variables, such as: (1) estimates of the population made on previous occasions, (2) measures of habitat variables associated with the size of the population, and (3) estimates of the population sizes of other species that correlate with the species of interest. Although many studies have described the relationships between each of these kinds of data and the population size to be estimated, very little work has been done to improve the estimator by incorporating such auxiliary information. A statistical methodology termed 'empirical Bayes' seems to be appropriate to these situations. The potential that empirical Bayes methodology has for improved estimation of the population size of the Mallard (Anas platyrhynchos) is explored. In the example considered, three empirical Bayes estimators were found to reduce the error by one-fourth to one-half of that of the usual estimator.
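The empirical Bayes idea of combining a direct survey estimate with auxiliary information can be illustrated in its simplest normal-normal shrinkage form, where the combined estimator is a precision-weighted average. The paper's estimators are more elaborate, so this is only a sketch of the principle:

```python
def shrinkage_estimate(survey_est, survey_var, prior_mean, prior_var):
    """Precision-weighted combination of a current survey estimate with a
    prediction from auxiliary data (e.g., past counts or habitat models).

    Normal-normal shrinkage: weight on the survey is proportional to the
    prior variance, and the combined variance is smaller than either input
    variance, which is the source of the error reduction.
    """
    w = prior_var / (prior_var + survey_var)   # weight on the survey estimate
    est = w * survey_est + (1.0 - w) * prior_mean
    var = (survey_var * prior_var) / (survey_var + prior_var)
    return est, var
```

With equally reliable inputs the combined estimate sits midway between them, and its variance is half that of either input, echoing the error reduction reported above.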
Estimation of portion size in children's dietary assessment: lessons learnt.
Foster, E; Adamson, A J; Anderson, A S; Barton, K L; Wrieden, W L
2009-02-01
Assessing the dietary intake of young children is challenging. In any 1 day, children may have several carers responsible for providing them with their dietary requirements, and once children reach school age, traditional methods such as weighing all items consumed become impractical. As an alternative to weighed records, food portion size assessment tools are available to assist subjects in estimating the amounts of foods consumed. Existing food photographs designed for use with adults and based on adult portion sizes have been found to be inappropriate for use with children. This article presents a review and summary of a body of work carried out to improve the estimation of portion sizes consumed by children. Feasibility work was undertaken to determine the accuracy and precision of three portion size assessment tools; food photographs, food models and a computer-based Interactive Portion Size Assessment System (IPSAS). These tools were based on portion sizes served to children during the National Diet and Nutrition Survey. As children often do not consume all of the food served to them, smaller portions were included in each tool for estimation of leftovers. The tools covered 22 foods, which children commonly consume. Children were served known amounts of each food and leftovers were recorded. They were then asked to estimate both the amount of food that they were served and the amount of any food leftover. Children were found to estimate food portion size with an accuracy approaching that of adults using both the food photographs and IPSAS. Further development is underway to increase the number of food photographs and to develop IPSAS to cover a much wider range of foods and to validate the use of these tools in a 'real life' setting.
A computer program for sample size computations for banding studies
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
1989-01-01
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
Estimation of sample size and testing power (part 5).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-02-01
Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design, or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, showed how the calculations can be realized both from the formulas and with the POWER procedure of SAS software, and elaborated on them with examples, which should help researchers implement the repetition principle.
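For the quantitative single-group/paired case, the normal-approximation formula n = ((z_{1-α/2} + z_{1-β})·σ_d/δ)² underlies such calculations. A minimal sketch using a generic textbook formula, not the article's exact SAS-based procedure:

```python
import math
from statistics import NormalDist

def n_paired_mean(delta, sd_diff, alpha=0.05, power=0.80):
    """Sample size (number of pairs) for a two-sided paired difference test.

    Normal approximation: n = ((z_{1-a/2} + z_{1-b}) * sd_diff / delta)^2,
    where delta is the detectable mean difference and sd_diff is the
    standard deviation of the within-pair differences. Rounded up; a
    t-based calculation (e.g., SAS PROC POWER) gives slightly larger n.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1.0 - alpha / 2.0)   # two-sided significance quantile
    z_beta = z(power)                # power quantile
    n = ((z_alpha + z_beta) * sd_diff / delta) ** 2
    return math.ceil(n)
```

For a standardized difference of 0.5 at 80% power this gives 32 pairs, and the requirement grows as the target power rises.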
Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk
2018-01-01
Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent-driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
Zimmerman, Guthrie S.; Sauer, John; Boomer, G. Scott; Devers, Patrick K.; Garrettson, Pamela R.
2017-01-01
The U.S. Fish and Wildlife Service (USFWS) uses data from the North American Breeding Bird Survey (BBS) to assist in monitoring and management of some migratory birds. However, BBS analyses provide indices of population change rather than estimates of population size, precluding their use in developing abundance-based objectives and limiting applicability to harvest management. Wood Ducks (Aix sponsa) are important harvested birds in the Atlantic Flyway (AF) that are difficult to detect during aerial surveys because they prefer forested habitat. We integrated Wood Duck count data from a ground-plot survey in the northeastern U.S. with AF-wide BBS, banding, parts collection, and harvest data to derive estimates of population size for the AF. Overlapping results between the smaller-scale intensive ground-plot survey and the BBS in the northeastern U.S. provided a means for scaling BBS indices to the breeding population size estimates. We applied these scaling factors to BBS results for portions of the AF lacking intensive surveys. Banding data provided estimates of annual survival and harvest rates; the latter, when combined with parts-collection data, provided estimates of recruitment. We used the harvest data to estimate fall population size. Our estimates of breeding population size and variability from the integrated population model (N̄ = 0.99 million, SD = 0.04) were similar to estimates of breeding population size based solely on data from the AF ground-plot surveys and the BBS (N̄ = 1.01 million, SD = 0.04) from 1998 to 2015. Integrating BBS data with other data provided reliable population size estimates for Wood Ducks at a scale useful for harvest and habitat management in the AF, and allowed us to derive estimates of important demographic parameters (e.g., seasonal survival rates, sex ratio) that were not directly informed by data.
Modeling and Optimization for Morphing Wing Concept Generation
NASA Technical Reports Server (NTRS)
Skillen, Michael D.; Crossley, William A.
2007-01-01
This report consists of two major parts: 1) the approach to develop morphing wing weight equations, and 2) the approach to size morphing aircraft. Combined, these techniques allow the morphing aircraft to be sized with estimates of the morphing wing weight that are more credible than estimates currently available; aircraft sizing results prior to this study incorporated morphing wing weight estimates based on general heuristics for fixed-wing flaps (a comparable "morphing" component) but, in general, these results were unsubstantiated. This report will show that the method of morphing wing weight prediction does, in fact, drive the aircraft sizing code to different results and that accurate morphing wing weight estimates are essential to credible aircraft sizing results.
Age-related reproduction in striped skunks (Mephitis mephitis) in the upper Midwest
Greenwood, Raymond J.; Sargeant, Alan B.
1994-01-01
Reproductive data from the upper Midwest are meager for the striped skunk (Mephitis mephitis), a common North American carnivore. We provide data on some age-related reproductive attributes of 178 female striped skunks collected at 19 sites in east-central North Dakota and west-central Minnesota in 1979–1981 and 1987–1991. Seventy-four percent of the females were 1 year old; 95% were pregnant or parous when collected. Thirteen of 873 (1.5%) embryos in 123 pregnant females were being resorbed. The overall mean (±1 SE) litter size estimated from live embryos was 7.2 ± 0.4. Means of litter-size estimates were similar for females ≥1 year old, but annual estimates of litter size differed among years for all females combined. For females from the interval 1979–1981 and 1990, the mean implantation date based on embryo size was 4 March (±1.6 days). Potential litters were composed of a mean of 55 ± 3% females. Estimates of litter size based on counts of corpora lutea averaged 0.9 young per female less than estimates for the same females based on counts of live embryos, indicating that some skunks may have produced polyovular follicles or identical twins.
Variation in clutch size in relation to nest size in birds
Møller, Anders P; Adriaensen, Frank; Artemyev, Alexandr; Bańbura, Jerzy; Barba, Emilio; Biard, Clotilde; Blondel, Jacques; Bouslama, Zihad; Bouvier, Jean-Charles; Camprodon, Jordi; Cecere, Francesco; Charmantier, Anne; Charter, Motti; Cichoń, Mariusz; Cusimano, Camillo; Czeszczewik, Dorota; Demeyrier, Virginie; Doligez, Blandine; Doutrelant, Claire; Dubiec, Anna; Eens, Marcel; Eeva, Tapio; Faivre, Bruno; Ferns, Peter N; Forsman, Jukka T; García-Del-Rey, Eduardo; Goldshtein, Aya; Goodenough, Anne E; Gosler, Andrew G; Góźdź, Iga; Grégoire, Arnaud; Gustafsson, Lars; Hartley, Ian R; Heeb, Philipp; Hinsley, Shelley A; Isenmann, Paul; Jacob, Staffan; Järvinen, Antero; Juškaitis, Rimvydas; Korpimäki, Erkki; Krams, Indrikis; Laaksonen, Toni; Leclercq, Bernard; Lehikoinen, Esa; Loukola, Olli; Lundberg, Arne; Mainwaring, Mark C; Mänd, Raivo; Massa, Bruno; Mazgajski, Tomasz D; Merino, Santiago; Mitrus, Cezary; Mönkkönen, Mikko; Morales-Fernaz, Judith; Morin, Xavier; Nager, Ruedi G; Nilsson, Jan-Åke; Nilsson, Sven G; Norte, Ana C; Orell, Markku; Perret, Philippe; Pimentel, Carla S; Pinxten, Rianne; Priedniece, Ilze; Quidoz, Marie-Claude; Remeš, Vladimir; Richner, Heinz; Robles, Hugo; Rytkönen, Seppo; Senar, Juan Carlos; Seppänen, Janne T; da Silva, Luís P; Slagsvold, Tore; Solonen, Tapio; Sorace, Alberto; Stenning, Martyn J; Török, János; Tryjanowski, Piotr; van Noordwijk, Arie J; von Numers, Mikael; Walankiewicz, Wiesław; Lambrechts, Marcel M
2014-01-01
Nests are structures built to support and protect eggs and/or offspring from predators, parasites, and adverse weather conditions. Nests are mainly constructed prior to egg laying, meaning that parent birds must make decisions about nest site choice and nest building behavior before the start of egg-laying. Parent birds should be selected to choose nest sites and to build optimally sized nests, yet our current understanding of clutch size-nest size relationships is limited to small-scale studies performed over short time periods. Here, we quantified the relationship between clutch size and nest size, using an exhaustive database of 116 slope estimates based on 17,472 nests of 21 species of hole and non-hole-nesting birds. There was a significant, positive relationship between clutch size and the base area of the nest box or the nest, and this relationship did not differ significantly between open nesting and hole-nesting species. The slope of the relationship showed significant intraspecific and interspecific heterogeneity among four species of secondary hole-nesting species, but also among all 116 slope estimates. The estimated relationship between clutch size and nest box base area in study sites with more than a single size of nest box was not significantly different from the relationship using studies with only a single size of nest box. The slope of the relationship between clutch size and nest base area in different species of birds was significantly negatively related to minimum base area, and less so to maximum base area in a given study. These findings are consistent with the hypothesis that bird species have a general reaction norm reflecting the relationship between nest size and clutch size. Further, they suggest that scientists may influence the clutch size decisions of hole-nesting birds through the provisioning of nest boxes of varying sizes. PMID:25478150
McCrea, C; Neil, W J; Flanigan, J W; Summerfield, A B
1988-08-01
In this study, a new modified video system designed for measuring body image was evaluated alongside the principal size-estimation measure, the visual size-estimation apparatus. The advantages of a video system that allows independent adjustment of size and of height/width proportions were highlighted, and its validity and reliability were examined based on estimates made by obese, normal-weight, and pregnant groups.
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.
Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J
2018-07-01
This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation, using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest, holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly overestimate as well as underestimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
Radiance Assimilation Shows Promise for Snowpack Characterization: A 1-D Case Study
NASA Technical Reports Server (NTRS)
Durand, Michael; Kim, Edward; Margulis, Steve
2008-01-01
We demonstrate an ensemble-based radiometric data assimilation (DA) methodology for estimating snow depth and snow grain size using ground-based passive microwave (PM) observations at 18.7 and 36.5 GHz collected during the NASA CLPX-1, March 2003, Colorado, USA. A land surface model was used to develop a prior estimate of the snowpack states, and a radiative transfer model was used to relate the modeled states to the observations. Snow depth bias was -53.3 cm prior to the assimilation, and -7.3 cm after the assimilation. Snow depth estimated by a non-DA-based retrieval algorithm using the same PM data had a bias of -18.3 cm. The sensitivity of the assimilation scheme to the grain size uncertainty was evaluated; over the range of grain size uncertainty tested, the posterior snow depth estimate bias ranges from -2.99 cm to -9.85 cm, which is uniformly better than both the prior and retrieval estimates. This study demonstrates the potential applicability of radiometric DA at larger scales.
ERIC Educational Resources Information Center
Kan, Man Yee
2008-01-01
This article compares stylised (questionnaire-based) estimates and diary-based estimates of housework time collected from the same respondents. Data come from the Home On-line Study (1999-2001), a British national household survey that contains both types of estimates (sample size = 632 men and 666 women). It shows that the gap between the two…
Gregory, T Ryan; Nathwani, Paula; Bonnett, Tiffany R; Huber, Dezene P W
2013-09-01
A study was undertaken to evaluate both a pre-existing method and a newly proposed approach for the estimation of nuclear genome sizes in arthropods. First, concerns regarding the reliability of the well-established method of flow cytometry, relating to impacts of rearing conditions on genome size estimates, were examined. Contrary to previous reports, a more carefully controlled test found negligible environmental effects on genome size estimates in the fly Drosophila melanogaster. Second, a more recently touted method based on quantitative real-time PCR (qPCR) was examined in terms of ease of use, efficiency, and (most importantly) accuracy using four test species: the flies Drosophila melanogaster and Musca domestica and the beetles Tribolium castaneum and Dendroctonus ponderosae. The results of this analysis demonstrated that qPCR tends to produce substantially different genome size estimates from other established techniques while also being far less efficient than existing methods.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or the effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true.
We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected, to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in the calculations must also be reported.
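The prospective (a priori) power calculation advocated above can be sketched as follows. This is a generic normal-approximation illustration, not the authors' own code, and the effect size, standard deviation, and group size below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def prospective_power(effect, sd, n_per_group, alpha=0.05):
    """A priori power of a two-sample comparison of means for a
    minimum biologically significant effect, via the normal
    approximation to the two-sample t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    se = sd * sqrt(2 / n_per_group)  # standard error of the mean difference
    return z.cdf(effect / se - z_alpha)

# Power to detect a difference of 5 units (sd = 10) with 64 samples per group:
print(round(prospective_power(5, 10, 64), 2))  # about 0.81
```

Running the same function over a grid of candidate sample sizes before data collection is exactly the design-phase use the abstract recommends.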
The effects of sample size on population genomic analyses--implications for the tests of neutrality.
Subramanian, Sankar
2016-02-20
One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population size. It is well known that the estimation of θ is biased when these assumptions are violated. However, the effect of sample size in modulating this bias has not been well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much higher for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes. However, this difference was 2.5 times for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to the sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes. In contrast, this difference was only 2 times for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs. 10%).
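Watterson's estimator divides the number of segregating sites S by a harmonic-number correction that grows with sample size, which is why constrained regions (where rare variants dominate) show the strong sample-size dependence described above. A minimal sketch; the counts below are illustrative, not the study's data:

```python
def watterson_theta(seg_sites, n):
    """Watterson's estimator: S divided by a_n, the (n-1)th
    harmonic number, which corrects for sample size n."""
    a_n = sum(1.0 / i for i in range(1, n))
    return seg_sites / a_n

# Under neutrality S grows roughly like a_n, so theta is stable across n;
# under constraint, excess rare variants make S outpace a_n as n grows.
print(round(watterson_theta(100, 16), 1))  # about 30.1
```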
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. 
Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
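The method-of-moments (Matheron) estimator referred to above averages squared differences of paired observations within distance bins; a minimal sketch with hypothetical coordinates and lag bins:

```python
import itertools
import math

def empirical_variogram(points, values, bin_edges):
    """Method-of-moments semivariogram: for each lag bin,
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over point pairs
    whose separation distance falls in the bin."""
    sums = [0.0] * (len(bin_edges) - 1)
    counts = [0] * (len(bin_edges) - 1)
    for (p, zp), (q, zq) in itertools.combinations(zip(points, values), 2):
        d = math.dist(p, q)
        for k in range(len(bin_edges) - 1):
            if bin_edges[k] <= d < bin_edges[k + 1]:
                sums[k] += 0.5 * (zp - zq) ** 2
                counts[k] += 1
                break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# Three collinear throughfall sampling points, two lag bins:
print(empirical_variogram([(0, 0), (1, 0), (2, 0)], [0.0, 1.0, 3.0],
                          [0.5, 1.5, 2.5]))  # [1.25, 4.5]
```

Because this estimator squares differences, a few heavy outliers dominate the bin averages, which is why the study finds robust or likelihood-based alternatives preferable for non-Gaussian throughfall data.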
DARK MATTER MASS FRACTION IN LENS GALAXIES: NEW ESTIMATES FROM MICROLENSING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiménez-Vicente, J.; Mediavilla, E.; Kochanek, C. S.
2015-02-01
We present a joint estimate of the stellar/dark matter mass fraction in lens galaxies and the average size of the accretion disk of lensed quasars based on microlensing measurements of 27 quasar image pairs seen through 19 lens galaxies. The Bayesian estimate for the fraction of the surface mass density in the form of stars is α = 0.21 ± 0.14 near the Einstein radius of the lenses (∼1-2 effective radii). The estimate for the average accretion disk size is R_{1/2} = 7.9^{+3.8}_{-2.6} √(M/0.3 M_⊙) light-days. The fraction of mass in stars at these radii is significantly larger than previous estimates from microlensing studies assuming quasars were point-like. The corresponding local dark matter fraction of 79% is in good agreement with other estimates based on strong lensing or kinematics. The size of the accretion disk inferred in the present study is slightly larger than previous estimates.
Nichols, James D.; Pollock, Kenneth H.; Hines, James E.
1984-01-01
The robust design of Pollock (1982) was used to estimate parameters of a Maryland M. pennsylvanicus population. Closed model tests provided strong evidence of heterogeneity of capture probability, and model Mh (Otis et al., 1978) was selected as the most appropriate model for estimating population size. The Jolly-Seber model goodness-of-fit test indicated rejection of the model for this data set, and the Mh estimates of population size were all higher than the Jolly-Seber estimates. Both of these results are consistent with the evidence of heterogeneous capture probabilities. The authors thus used Mh estimates of population size, Jolly-Seber estimates of survival rate, and estimates of birth-immigration based on a combination of the population size and survival rate estimates. Advantages of the robust design estimates for certain inference procedures are discussed, and the design is recommended for future small mammal capture-recapture studies directed at estimation.
Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation
NASA Astrophysics Data System (ADS)
Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads
2016-03-01
Brain atrophy measured from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited by a large degree of variance and consequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy, which we then combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.
Junno, Juho-Antti; Niskanen, Markku; Maijanen, Heli; Holt, Brigitte; Sladek, Vladimir; Niinimäki, Sirpa; Berner, Margit
2018-02-01
The stature/bi-iliac breadth method provides reasonably precise skeletal frame size (SFS)-based body mass (BM) estimates across adults as a whole. In this study, we examine the potential effects of age-related changes in anthropometric dimensions on the accuracy of SFS-based body mass estimation. We use anthropometric data from the literature and our own skeletal data from two osteological collections to study the effects of age on stature, bi-iliac breadth, body mass, and body composition, as they are major components behind body size and body size estimation. We focus on males, as the relevant longitudinal data are based on male study samples. As a general rule, lean body mass (LBM) increases through adolescence and early adulthood until people are aged in their 30s or 40s, and starts to decline in the late 40s or early 50s. Fat mass (FM) tends to increase until the mid-50s and declines thereafter, but in more mobile traditional societies it may decline throughout adult life. Because BM is the sum of LBM and FM, it exhibits a curvilinear age-related pattern in all societies. Skeletal frame size is based on stature and bi-iliac breadth, and both of these dimensions are affected by age. SFS-based body mass estimates tend to increase throughout adult life in both skeletal and anthropometric samples, because an age-related increase in bi-iliac breadth more than compensates for the age-related stature decline commencing in the 30s or 40s. Combined with the above-mentioned curvilinear BM change, this results in a curvilinear estimation bias. However, for simulations involving low to moderate percent body fat, the stature/bi-iliac method works well in predicting body mass in younger and middle-aged adults. Such conditions are likely to have applied to most human paleontological and archaeological samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
Paired experimental designs are widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected: some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations only. Statistical inference for paired experimental designs with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such designs are sparse. We derive a closed-form sample size formula based on the generalized estimating equation (GEE) approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed GEE sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method leads to a more accurate sample size estimate than the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the GEE sample size formula. A real application example is presented for illustration.
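For the complete-pairs special case the abstract mentions (where the GEE estimate coincides with the paired t-test), the sample size calculation can be sketched as below via the normal approximation; the effect size, standard deviation, and within-pair correlation are hypothetical, and this is not the authors' closed-form formula for the missing-data case.

```python
from math import ceil
from statistics import NormalDist

def n_pairs(delta, sd, rho, alpha=0.05, power=0.8):
    """Number of complete pairs for a paired comparison of means
    (normal approximation to the paired t-test). The within-pair
    correlation rho shrinks the variance of the difference:
    var_d = 2 * sd**2 * (1 - rho)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    var_d = 2 * sd ** 2 * (1 - rho)
    return ceil((z_alpha + z_beta) ** 2 * var_d / delta ** 2)

# Stronger pre/post correlation means fewer pairs are needed:
print(n_pairs(5, 10, 0.5))  # 32
print(n_pairs(5, 10, 0.8))  # 13
```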
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. 
Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
Memory versus perception of body size in patients with anorexia nervosa and healthy controls.
Øverås, Maria; Kapstad, Hilde; Brunborg, Cathrine; Landrø, Nils Inge; Lask, Bryan
2014-03-01
The objective of this study was to compare body size estimation based on memory versus perception, in patients with anorexia nervosa (AN) and healthy controls, adjusting for possible confounders. Seventy-one women (AN: 37, controls: 35), aged 14-29 years, were assessed with a computerized body size estimation morphing program. Information was gathered on depression, anxiety, time since last meal, weight and height. Results showed that patients overestimated their body size significantly more than controls, both in the memory and perception condition. Further, patients overestimated their body size significantly more when estimation was based on perception than memory. When controlling for anxiety, the difference between patients and controls no longer reached significance. None of the other confounders contributed significantly to the model. The results suggest that anxiety plays a role in overestimation of body size in AN. This finding might inform treatment, suggesting that more focus should be aimed at the underlying anxiety. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.
NASA Astrophysics Data System (ADS)
Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo
2016-07-01
The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.
Foster, E; Matthews, J N S; Lloyd, J; Marshall, L; Mathers, J C; Nelson, M; Barton, K L; Wrieden, W L; Cornelissen, P; Harris, J; Adamson, A J
2008-01-01
A number of methods have been developed to assist subjects in providing an estimate of portion size, but their application in improving portion size estimation by children has not been investigated systematically. The aim was to develop portion size assessment tools for use with children and to assess the accuracy of children's estimates of portion size using the tools. The tools were food photographs, food models and an interactive portion size assessment system (IPSAS). Children (n = 201), aged 4-16 years, were supplied with known quantities of food to eat in school. Food leftovers were weighed. Children estimated the amount of each food using each tool, 24 h after consuming the food. The age-specific portion sizes represented were based on portion sizes consumed by children in a national survey. Significant differences were found between the accuracy of estimates using the three tools. Children of all ages performed well using the IPSAS and the food photographs. The accuracy and precision of estimates made using the food models were poor. For all tools, estimates of the amount of food served were more accurate than estimates of the amount consumed. Issues relating to the reporting of leftover food, which affect estimates of the amounts of food actually consumed, require further study. The IPSAS has shown potential for the assessment of dietary intake with children. Before practical application in assessing children's dietary intake, the tool would need to be expanded to cover a wider range of foods and to be validated in a 'real-life' situation.
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have made time-series gene expression data widely available, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. Focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods, and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements increase substantially as the search space grows. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach, augmenting computationally fast methods with local search as a subsequent refinement procedure, can substantially increase the quality of their parameter estimates to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. This suggests that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vast. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
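The hybrid strategy described, a fast global search refined by local search, can be sketched on a toy one-parameter gene-expression decay model; the model, bounds, and synthetic data here are illustrative and not from the study.

```python
import random
from math import exp

def model(k, t):
    """Toy mRNA decay circuit: m(t) = exp(-k * t), with m(0) = 1."""
    return exp(-k * t)

def sse(k, times, data):
    """Sum of squared errors between model and time-series data."""
    return sum((model(k, t) - y) ** 2 for t, y in zip(times, data))

def hybrid_fit(times, data, bounds=(0.0, 5.0), n_random=200, seed=0):
    """Cheap global random search over the parameter space,
    refined by a local step-halving search."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = min((rng.uniform(lo, hi) for _ in range(n_random)),
               key=lambda k: sse(k, times, data))
    step = (hi - lo) / 20
    while step > 1e-6:
        moved = False
        for cand in (best - step, best + step):
            if lo <= cand <= hi and sse(cand, times, data) < sse(best, times, data):
                best, moved = cand, True
        if not moved:
            step /= 2
    return best

times = [0.0, 0.5, 1.0, 2.0]
data = [model(1.5, t) for t in times]      # synthetic data, true k = 1.5
print(round(hybrid_fit(times, data), 3))   # about 1.5
```

The local refinement turns a coarse global answer into a precise one cheaply, which is the design point the abstract makes about hybrid methods.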
Baranowski, Tom; Baranowski, Janice C; Watson, Kathleen B; Martin, Shelby; Beltran, Alicia; Islam, Noemi; Dadabhoy, Hafza; Adame, Su-heyla; Cullen, Karen; Thompson, Debbe; Buday, Richard; Subar, Amy
2011-03-01
The objective was to test the effect of image size and the presence of size cues on the accuracy of portion size estimation by children. Children were randomly assigned to seeing images with or without food size cues (utensils and a checked tablecloth) and were presented with sixteen food models (foods commonly eaten by children) in varying portion sizes, one at a time. They estimated each food model's portion size by selecting a digital food image. The same food images were presented in two ways: (i) as small, graduated portion size images all on one screen or (ii) by scrolling across large, graduated portion size images, one per sequential screen. The setting was laboratory-based, with a computer and food models. Participants were a volunteer multi-ethnic sample of 120 children, equally distributed by gender and age (8 to 13 years), recruited in 2008-2009. The average percentage of correctly classified foods was 60.3%. There were no differences in accuracy by any design factor or demographic characteristic. Multiple small pictures on the screen at once took half the time to estimate portion size compared with scrolling through large pictures, and the larger pictures produced more overestimation of size. Multiple images of successively larger portion sizes of a food on one computer screen facilitated quicker portion size responses with no decrease in accuracy. This is the method of choice for portion size estimation on a computer.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
Ezoe, Satoshi; Morooka, Takeo; Noda, Tatsuya; Sabin, Miriam Lewis; Koike, Soichi
2012-01-01
Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
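The network scale-up calculation has two steps: estimate the average personal network size c from acquaintance counts in groups of known size, then scale the mean number of hidden-population acquaintances by c. A sketch with made-up numbers, not the study's data:

```python
def network_size(known_counts, known_pop_sizes, total_pop):
    """Network scale-up step 1: estimate personal network size,
    c = total_pop * sum(m_j) / sum(N_j), from average acquaintance
    counts m_j in groups of known size N_j."""
    return total_pop * sum(known_counts) / sum(known_pop_sizes)

def hidden_population(mean_hidden_acquaintances, c, total_pop):
    """Step 2: scale the mean number of hidden-population
    acquaintances by the network size."""
    return total_pop * mean_hidden_acquaintances / c

# Hypothetical survey: respondents know on average 3 and 2 members of two
# groups of known sizes 150,000 and 100,000 in a population of 1,000,000.
c = network_size([3, 2], [150_000, 100_000], 1_000_000)
print(c)                                      # 20.0
print(hidden_population(0.01, c, 1_000_000))  # 500.0
```

As the abstract notes, a further adjustment for transmission error (acquaintances whose membership is not known to the respondent) is needed before the raw scale-up estimate is usable.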
Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild
Broell, Franziska; Taggart, Christopher T.
2015-01-01
This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming ‘efficiently’, is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777
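The species-specific relationship TBF = c / L can be fitted and inverted in a few lines. This is a generic no-intercept least-squares sketch of the scaling model described above; the coefficient values and data are hypothetical, not the saithe or sturgeon fits from the paper:

```python
def fit_tbf_coefficient(lengths_m, tbf_hz):
    """Least-squares fit (no intercept) of the model TBF = c / L, i.e.
    tail beat frequency proportional to length^-1.  Regresses TBF on
    1/L: c = sum(f_i / L_i) / sum(1 / L_i**2)."""
    num = sum(f / L for L, f in zip(lengths_m, tbf_hz))
    den = sum(1.0 / L ** 2 for L in lengths_m)
    return num / den

def estimate_length(tbf_hz, c):
    """Invert the fitted model to get size-at-time from an accelerometer
    TBF measurement."""
    return c / tbf_hz
```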
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
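The "simple estimator based on the standardized mean difference" can be sketched as the plug-in estimate π̂ = Φ(d), with d the pooled-SD standardized mean difference. This is the naive version whose small-sample bias the paper studies, not the authors' minimum variance unbiased estimator:

```python
from statistics import NormalDist

def overlap_pi(treatment, control):
    """Plug-in estimate of pi, the proportion of treatment observations
    exceeding the control mean: pi_hat = Phi(d), where
    d = (mean_T - mean_C) / s_pooled."""
    nt, nc = len(treatment), len(control)
    mt = sum(treatment) / nt
    mc = sum(control) / nc
    sst = sum((x - mt) ** 2 for x in treatment)
    ssc = sum((x - mc) ** 2 for x in control)
    s_pooled = ((sst + ssc) / (nt + nc - 2)) ** 0.5
    d = (mt - mc) / s_pooled
    return NormalDist().cdf(d)
```

With identical groups the estimate is 0.5 (complete overlap); it approaches 1 as the treatment distribution shifts above the control mean.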
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to early requirements-based effort estimation, based on a Non-Functional Requirements ontology. It combines a standard functional size measurement model with a linear regression technique. We report on a case study that illustrates the application of our approach in context and evaluates our experience in using it.
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters such as the window size often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon fluxes. The method also needs to be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, in which the center-of-gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear response coefficient, and reduces the influence of background and noise. The stream-based centroid estimator is also shown to work well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
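The conventional fixed-window center-of-gravity calculation that the streaming estimator improves on is straightforward. This sketch omits background subtraction and thresholding, the very steps whose tuning the proposed estimator is designed to make less critical:

```python
def center_of_gravity(window):
    """Centroid of a Shack-Hartmann spot by center of gravity over a
    fixed pixel window (list of rows of intensities).  Returns (x, y)
    in pixel coordinates relative to the window origin."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(window):
        for x, intensity in enumerate(row):
            total += intensity
            x_sum += x * intensity
            y_sum += y * intensity
    return x_sum / total, y_sum / total
```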
Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.
2007-01-01
Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size with bias decreasing as effort increased. Based on the simulation results we recommend the Chao estimator for model M_h be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
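The Chao estimator for model M_h (heterogeneous sighting probabilities) works from the frequency-of-sighting counts. Shown here is the common bias-corrected form, a sketch rather than the exact variant the simulation study evaluated:

```python
from collections import Counter

def chao_mh(sightings_per_animal):
    """Chao's estimator for model M_h in the bias-corrected form
    N_hat = S + f1 * (f1 - 1) / (2 * (f2 + 1)),
    where S is the number of distinct animals observed and f_k is the
    number of animals observed exactly k times."""
    freq = Counter(sightings_per_animal)
    s = len(sightings_per_animal)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    return s + f1 * (f1 - 1) / (2 * (f2 + 1))
```

Intuitively, many animals seen only once (large f1) implies many more never seen at all, so the estimate rises above the observed count S.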
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more of the believed features of the viral demographic histories than the GMRF approach does. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
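The point-process idea above is easiest to see in the constant-size special case: while k lineages remain, coalescences occur at rate k(k-1)/(2Ne), giving the closed-form MLE below. The paper replaces this single constant with a nonparametric GP trajectory; this sketch shows only the underlying likelihood structure:

```python
def constant_ne_mle(intercoal_times):
    """MLE of a *constant* effective population size from coalescent
    waiting times: Ne_hat = sum_k C(k,2) * t_k / (n - 1).
    intercoal_times[i] is the waiting time while n - i lineages remain,
    for a sample of n individuals."""
    n = len(intercoal_times) + 1  # a genealogy of n tips has n-1 coalescences
    total = 0.0
    for i, t in enumerate(intercoal_times):
        k = n - i
        total += k * (k - 1) / 2 * t  # C(k,2) * t_k
    return total / (n - 1)
```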
Lexical Frequency Profiles and Zipf's Law
ERIC Educational Resources Information Center
Edwards, Roderick; Collins, Laura
2011-01-01
Laufer and Nation (1995) proposed that the Lexical Frequency Profile (LFP) can estimate the size of a second-language writer's productive vocabulary. Meara (2005) questioned the sensitivity and the reliability of LFPs for estimating vocabulary sizes, based on the results obtained from probabilistic simulations of LFPs. However, the underlying…
Chaudry, Beenish Moalla; Connelly, Kay; Siek, Katie A; Welch, Janet L
2013-12-01
Chronically ill people, especially those with low literacy skills, often have difficulty estimating portion sizes of liquids to help them stay within their recommended fluid limits. There is a plethora of mobile applications that can help people monitor their nutritional intake, but unfortunately these applications require the user to have high literacy and numeracy skills for portion size recording. In this paper, we present two studies in which the low- and high-fidelity versions of a portion size estimation interface, designed using the cognitive strategies adults employ for portion size estimation during diet recall studies, were evaluated by a chronically ill population with varying literacy skills. The low-fidelity interface was evaluated by ten patients, all of whom were able to accurately estimate portion sizes of various liquids with the interface. Eighteen participants did an in situ evaluation of the high-fidelity version, incorporated in a diet and fluid monitoring mobile application, for 6 weeks. Although the accuracy of the estimates could not be confirmed in the second study, the participants who actively interacted with the interface showed better health outcomes by the end of the study. Based on these findings, we provide recommendations for designing the next iteration of an accurate and low literacy-accessible liquid portion size estimation mobile interface.
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
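The temporal method family the abstract refers to standardizes the observed allele-frequency change and corrects it for sampling noise at both time points. The sketch below uses one common standardized-variance form; Pollak's Fk statistic in the paper is a close relative designed to have smaller variance, so treat this as illustrative rather than the paper's estimator:

```python
def temporal_ne(p0, pt, t_gen, s0, st):
    """Temporal-method estimate of effective population size from allele
    frequencies at two times t_gen generations apart, with sample sizes
    s0 and st individuals.  F = mean of (p0-pt)^2 / (p_bar*(1-p_bar))
    over loci/alleles, corrected for sampling, then Ne = t / (2*F_adj)."""
    f_vals = []
    for a, b in zip(p0, pt):
        p_bar = (a + b) / 2
        f_vals.append((a - b) ** 2 / (p_bar * (1 - p_bar)))
    f = sum(f_vals) / len(f_vals)
    f_adj = f - 1 / (2 * s0) - 1 / (2 * st)  # remove sampling contribution
    return t_gen / (2 * f_adj)
```

Large frequency changes relative to drift expectations give small Ne, which is how the fly-population analyses detect effective sizes far below census size.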
Ansmann, Ina C.; Lanyon, Janet M.; Seddon, Jennifer M.; Parra, Guido J.
2013-01-01
Moreton Bay, Queensland, Australia is an area of high biodiversity and conservation value and home to two sympatric sub-populations of Indo-Pacific bottlenose dolphins (Tursiops aduncus). These dolphins live in close proximity to major urban developments. Successful management requires information regarding their abundance. Here, we estimate total and effective population sizes of bottlenose dolphins in Moreton Bay using photo-identification and genetic data collected during boat-based surveys in 2008–2010. Abundance (N) was estimated using open population mark-recapture models based on sighting histories of distinctive individuals. Effective population size (Ne) was estimated using the linkage disequilibrium method based on nuclear genetic data at 20 microsatellite markers in skin samples, and corrected for bias caused by overlapping generations (Nec). A total of 174 sightings of dolphin groups were recorded and 365 different individuals identified. Over the whole of Moreton Bay, a population size N of 554±22.2 (SE) (95% CI: 510–598) was estimated. The southern bay sub-population was small, at an estimated N = 193±6.4 (SE) (95% CI: 181–207), while the northern sub-population was more numerous, with 446±56 (SE) (95% CI: 336–556) individuals. The small estimated effective population size of the southern sub-population (Nec = 56, 95% CI: 33–128) raises conservation concerns. A power analysis suggested that reliably detecting small (5%) declines in the size of this population would require substantial survey effort (>4 years of annual mark-recapture surveys) at the precision levels achieved here. To ensure that ecological as well as genetic diversity within this population of bottlenose dolphins is preserved, we consider that the northern and southern sub-populations should be treated as separate management units. Systematic surveys over smaller areas holding locally-adapted sub-populations are suggested as an alternative method for increasing the ability to detect abundance trends.
PMID:23755197
Jeffery, Nicholas W; Gregory, T Ryan
2014-10-01
Crustaceans are enormously diverse both phylogenetically and ecologically, but they remain substantially underrepresented in the existing genome size database. An expansion of this dataset could be facilitated if it were possible to obtain genome size estimates from ethanol-preserved specimens. In this study, two tests were performed in order to assess the reliability of genome size data generated using preserved material. First, the results of estimates based on flash-frozen versus ethanol-preserved material were compared across 37 species of crustaceans that differ widely in genome size. Second, a comparison was made of specimens from a single species that had been stored in ethanol for 1-14 years. In both cases, the use of gill tissue in Feulgen image analysis densitometry proved to be a very viable approach. This finding is of direct relevance to both new studies of field-collected crustaceans as well as potential studies based on existing collections. © 2014 International Society for Advancement of Cytometry.
A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment
NASA Astrophysics Data System (ADS)
Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.
2017-12-01
Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have addressed this subject, they use complex mathematical formulations that are computationally expensive and often difficult to implement. To provide a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of a sediment mixture to estimate the PSDs of the entrained sediment and the post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, a critical particle size of incipient motion modified to account for mixed-size effects; and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of size-dependent sediment mobility.
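The core probabilistic step can be sketched with a lognormal PSD and a threshold diameter: grains finer than the (mixed-size corrected) threshold are treated as entrained, so the entrained fraction is simply the CDF at the threshold. The mixture-based reconstruction of the post-entrainment PSD, the paper's second novelty, is omitted here:

```python
from math import erf, log, sqrt

def lognorm_cdf(x, mu, sigma):
    """CDF of a lognormal particle-size distribution; mu and sigma are
    the mean and SD of ln(diameter)."""
    return 0.5 * (1 + erf((log(x) - mu) / (sigma * sqrt(2))))

def entrained_fraction(d_threshold, mu, sigma):
    """Heuristic mobility sketch: the fraction of the pre-entrainment
    PSD finer than the threshold diameter of incipient motion."""
    return lognorm_cdf(d_threshold, mu, sigma)
```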
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
Economic Effects of Increased Control Zone Sizes in Conflict Resolution
NASA Technical Reports Server (NTRS)
Datta, Koushik
1998-01-01
A methodology for estimating the economic effects of different control zone sizes used in conflict resolutions between aircraft is presented in this paper. The methodology is based on estimating the difference in flight times of aircraft with and without the control zone, and converting the difference into a direct operating cost. Using this methodology the effects of increased lateral and vertical control zone sizes are evaluated.
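The methodology reduces to a simple conversion: the extra flight time a resolution manoeuvre imposes, multiplied by a direct operating cost rate. The cost rate below is an illustrative assumption, not a figure from the paper:

```python
def resolution_cost(time_with_zone_h, time_without_zone_h, doc_per_hour):
    """Economic effect of a control zone size: the difference in flight
    time with and without the control zone, converted to a direct
    operating cost (doc_per_hour is an assumed $/flight-hour rate)."""
    extra_hours = time_with_zone_h - time_without_zone_h
    return extra_hours * doc_per_hour
```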
A Size-Distance Scaling Demonstration Based on the Holway-Boring Experiment
ERIC Educational Resources Information Center
Gallagher, Shawn P.; Hoefling, Crystal L.
2013-01-01
We explored size-distance scaling with a demonstration based on the classic Holway-Boring experiment. Undergraduate psychology majors estimated the sizes of two glowing paper circles under two conditions. In the first condition, the environment was dark and, with no depth cues available, participants ranked the circles according to their angular…
Flaw depth sizing using guided waves
NASA Astrophysics Data System (ADS)
Cobb, Adam C.; Fisher, Jay L.
2016-02-01
Guided wave inspection technology is most often applied as a survey tool for pipeline inspection, where relatively low-frequency ultrasonic waves, compared to those used in conventional ultrasonic nondestructive evaluation (NDE) methods, propagate along the structure; discontinuities cause a reflection of the sound back to the sensor for flaw detection. Although the technology can be used to accurately locate a flaw over long distances, its flaw sizing performance, especially for flaw depth estimation, is much poorer than that of other, local NDE approaches. Estimating flaw depth, as opposed to other parameters, is of particular interest for failure analysis of many structures. At present, most guided wave technologies estimate the size of a flaw by comparing the amplitude of its reflected signal with that of a known geometric reflector, such as a circumferential weld in a pipeline. This process, however, requires many assumptions, such as the weld geometry and flaw shape. Furthermore, it is highly dependent on the amplitude of the flaw reflection, which can vary with many factors, such as attenuation and sensor installation. To improve sizing performance, especially depth estimation, in a way that is not strictly amplitude dependent, this paper describes an approach to estimating flaw depth based on a multimodal analysis. This approach eliminates the need for geometric reflections for calibration and can be used for both pipeline and plate inspection applications. To verify the approach, a test set was manufactured on plate specimens with flaws of different widths and depths ranging from 5% to 100% of total wall thickness; 90% of these flaws were sized to within 15% of their true value. The initial multimodal sizing strategy and results are described.
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
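The Monte Carlo procedure described above can be sketched as repeated subsampling without replacement: draw a candidate sample size of trees, compare the subsample mean to the full-plot mean, and look for the sample size beyond which the error stops shrinking. The data and trial count here are placeholders, not the plot measurements from the study:

```python
import random

def subsample_error(values, sample_size, n_trials=2000, seed=1):
    """Monte Carlo estimate of how sample size affects a stand-scale
    mean (e.g., mean sap flux J_s): repeatedly draw `sample_size` trees
    without replacement and return the mean absolute relative error of
    the subsample mean against the full-plot mean."""
    rng = random.Random(seed)
    true_mean = sum(values) / len(values)
    errs = []
    for _ in range(n_trials):
        sub = rng.sample(values, sample_size)
        errs.append(abs(sum(sub) / sample_size - true_mean) / true_mean)
    return sum(errs) / n_trials
```

An "optimal" sample size in the paper's sense is one where adding more trees no longer appreciably reduces this error.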
NASA Astrophysics Data System (ADS)
Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.
2017-05-01
Artificial neural network (ANN)-based models are efficient for source localisation. However, very large training sets are needed to precisely estimate two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training-set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Learning Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training-set sizes.
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
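The clustering adjustment the abstract calls for is the standard design effect: with cluster size m = 2 for twins, an independent-observations sample size is inflated by 1 + (m - 1) × ICC. This is a sketch of that inflation factor only; the authors' calculator also handles trials mixing singletons and twins:

```python
from math import ceil

def twin_adjusted_sample_size(n_independent, icc, cluster_size=2):
    """Adjust a sample size computed for independent infants for
    clustering due to twins, using the design effect
    DE = 1 + (m - 1) * ICC with m = 2."""
    design_effect = 1 + (cluster_size - 1) * icc
    return ceil(n_independent * design_effect)
```

Note that a negative ICC (reported down to -0.12 in the paper) deflates rather than inflates the required sample size.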
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
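Extrapolation from a base grid to an "infinite-size" grid is typically done by Richardson extrapolation over a systematically refined grid sequence. The sketch below is the generic procedure under an assumed constant refinement ratio; the values are not from the Ares I analysis:

```python
from math import log

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence from three solutions on grids
    refined by a constant ratio r."""
    return log((f_coarse - f_medium) / (f_medium - f_fine)) / log(r)

def infinite_grid_estimate(f_medium, f_fine, r, p):
    """Richardson extrapolation of a coefficient to the infinite-size
    grid: f_inf ~= f_fine + (f_fine - f_medium) / (r**p - 1)."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1)
```

The gap between the base-grid value and the extrapolated value serves as the discretization error estimate quoted in the abstract.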
Volume estimation using food specific shape templates in mobile image-based dietary assessment
NASA Astrophysics Data System (ADS)
Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.
2011-03-01
As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
Comparing different stimulus configurations for population receptive field mapping in human fMRI
Alvarez, Ivan; de Haas, Benjamin; Clark, Chris A.; Rees, Geraint; Schwarzkopf, D. Samuel
2015-01-01
Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time. PMID:25750620
Evaluation of hydraulic conductivities calculated from multi-port permeameter measurements
Wolf, Steven H.; Celia, Michael A.; Hess, Kathryn M.
1991-01-01
A multiport permeameter was developed for use in estimating hydraulic conductivity over intact sections of aquifer core using the core liner as the permeameter body. Six cores obtained from one borehole through the upper 9 m of a stratified glacial-outwash aquifer were used to evaluate the reliability of the permeameter. Radiographs of the cores were used to assess core integrity and to locate 5- to 10-cm sections of similar grain size for estimation of hydraulic conductivity. After extensive testing of the permeameter, hydraulic conductivities were determined for 83 sections of the six cores. Other measurement techniques included permeameter measurements on repacked sections of core, estimates based on grain-size analyses, and estimates based on borehole flowmeter measurements. Permeameter measurements of 33 sections of core that had been extruded, homogenized, and repacked did not differ significantly from the original measurements. Hydraulic conductivities estimated from grain-size distributions were slightly higher than those calculated from permeameter measurements; the significance of the difference depended on the estimating equation used. Hydraulic conductivities calculated from field measurements, using a borehole flowmeter in the borehole from which the cores were extracted, were significantly higher than those calculated from laboratory measurements and more closely agreed with independent estimates of hydraulic conductivity based on tracer movement near the borehole. This indicates that hydraulic conductivities based on laboratory measurements of core samples may underestimate actual field hydraulic conductivities in this type of stratified glacial-outwash aquifer.
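One widely used grain-size estimating equation of the kind the study compares is Hazen's, K [cm/s] ≈ C·(d10 [mm])². This is a generic empirical sketch, not necessarily the equation used in the study; the coefficient C ≈ 1.0 for clean sands is an assumption, and published variants span roughly 0.4 to 1.2:

```python
def hazen_k(d10_mm, c=1.0):
    """Hazen's empirical estimate of hydraulic conductivity from the
    effective grain size d10 (the diameter that 10% of the sample by
    mass is finer than): K [cm/s] ~= C * d10**2 with d10 in mm."""
    return c * d10_mm ** 2
```

The quadratic dependence on d10 explains why the significance of the laboratory-versus-grain-size discrepancy noted above "depended on the estimating equation used": different equations weight the fine tail of the distribution very differently.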
Estimating the ratio of pond size to irrigated soybeans land in Mississippi: A case study
USDA-ARS?s Scientific Manuscript database
Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of pond size to irrigated crop land based on pond matric and its hydrological conditions. Knowledge of this ra...
The use of SNP data for the monitoring of genetic diversity in cattle breeds
USDA-ARS?s Scientific Manuscript database
LD between SNPs contains information about effective population size. In this study, we investigate the use of genome-wide SNP data for marker based estimation of effective population size for two taurine cattle breeds of Africa and two local cattle breeds of Switzerland. Estimated recombination rat...
Population Size Estimation of Men Who Have Sex with Men through the Network Scale-Up Method in Japan
Ezoe, Satoshi; Morooka, Takeo; Noda, Tatsuya; Sabin, Miriam Lewis; Koike, Soichi
2012-01-01
Background Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. Methods and Findings An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. Conclusions The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods. PMID:22563366
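The network scale-up computation described above can be sketched in Killworth's classic aggregate form: personal network size is calibrated from groups of known size, then the hidden population is scaled up from reported acquaintances. This is a simplified aggregate version; the study's adjustment for transmission error (respondents not knowing that an acquaintance is MSM) is omitted:

```python
def network_scale_up(m_known_total, known_sizes_total, m_hidden_total,
                     n_respondents, total_population):
    """Killworth-style network scale-up sketch.
    c      = average personal network size, calibrated from summed
             reports of acquaintances in known-size groups:
             c = T * sum(m_known) / (n * sum(known group sizes))
    hidden = T * sum(m_hidden) / (n * c)
    Returns (c, hidden population estimate)."""
    c = total_population * m_known_total / (n_respondents * known_sizes_total)
    hidden = total_population * m_hidden_total / (n_respondents * c)
    return c, hidden
```

In the study this calibration used firepersons, police officers, and military personnel as the known-size groups.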
Missing portion sizes in FFQ--alternatives to use of standard portions.
Køster-Rasmussen, Rasmus; Siersma, Volkert; Halldorsson, Thorhallur I; de Fine Olivarius, Niels; Henriksen, Jan E; Heitmann, Berit L
2015-08-01
Standard portions or substitution of missing portion sizes with medians may generate bias when quantifying the dietary intake from FFQ. The present study compared four different methods to include portion sizes in FFQ. We evaluated three stochastic methods for imputation of portion sizes based on information about anthropometry, sex, physical activity and age. Energy intakes computed with standard portion sizes, defined as sex-specific medians (median), or with portion sizes estimated with multinomial logistic regression (MLR), 'comparable categories' (Coca) or k-nearest neighbours (KNN) were compared with a reference based on self-reported portion sizes (quantified by a photographic food atlas embedded in the FFQ). The Danish Health Examination Survey 2007-2008. The study included 3728 adults with complete portion size data. Compared with the reference, the root-mean-square errors of the mean daily total energy intake (in kJ) computed with portion sizes estimated by the four methods were (men; women): median (1118; 1061), MLR (1060; 1051), Coca (1230; 1146), KNN (1281; 1181). The equivalent biases (mean error) were (in kJ): median (579; 469), MLR (248; 178), Coca (234; 188), KNN (-340; 218). The methods MLR and Coca provided the best agreement with the reference. The stochastic methods allowed for estimation of meaningful portion sizes by conditioning on information about physiology, and they were suitable for multiple imputation. We propose to use MLR or Coca to substitute missing portion size values or when portion sizes need to be included in FFQ without portion size data.
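The two accuracy measures used to compare the methods (root-mean-square error and bias, i.e. mean error) can be sketched as follows; the daily intake values (kJ) below are invented for illustration and are not the survey's data.

```python
import math

# Sketch of the accuracy measures used to compare portion-size methods;
# the intake values (kJ/day) are invented for illustration.

def rmse(estimates, reference):
    """Root-mean-square error of estimated vs. reference energy intakes."""
    n = len(reference)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference)) / n)

def bias(estimates, reference):
    """Mean error (signed): positive means the method overestimates intake."""
    return sum(e - r for e, r in zip(estimates, reference)) / len(reference)

ref = [9000.0, 10500.0, 8200.0, 11000.0]   # reference (photographic atlas)
est = [9600.0, 10200.0, 8900.0, 11500.0]   # e.g. intakes computed with medians
```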
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared with one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by spatial interpolation to fill in missing values in non-sampled areas, accuracy improves markedly. This is especially true for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations hold at a lower spatial scale should be further investigated.
Borque, Paloma; Luke, Edward; Kollias, Pavlos
2016-05-27
Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. The agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficients of 0.86 and 0.78 for the two cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced, and that the eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity, but their performance deteriorates as precipitation-size particles are introduced into the radar volume and broaden the DRW values. Based on this finding, a methodology to estimate the Doppler spectral broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations of the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR; in this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required to detect more subtle reductions in malaria transmission, but these invariably increase precision while reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
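The reverse catalytic model with a change point, which underlies the proposed data simulation, can be sketched as a piecewise solution of the usual catalytic ODE. This is an illustrative sketch under stated assumptions, not the authors' implementation; parameter names and values are placeholders.

```python
import math

# Reverse catalytic model with a single change point in the seroconversion
# rate (SCR). Piecewise solution of p' = scr*(1 - p) - srr*p; an illustrative
# sketch, not the authors' calculator.

def seroprev(age, scr_old, scr_new, srr, tau):
    """Expected seroprevalence at a given age when the SCR dropped from
    scr_old to scr_new tau years before sampling (srr = seroreversion rate)."""
    def equil(scr):                      # equilibrium prevalence for a rate
        return scr / (scr + srr)
    if age <= tau:                       # whole life under the new, lower SCR
        r = scr_new + srr
        return equil(scr_new) * (1.0 - math.exp(-r * age))
    # prevalence reached under the old SCR, then relaxation toward the new
    # equilibrium over the tau years since the change point
    r_old, r_new = scr_old + srr, scr_new + srr
    p_switch = equil(scr_old) * (1.0 - math.exp(-r_old * (age - tau)))
    return equil(scr_new) + (p_switch - equil(scr_new)) * math.exp(-r_new * tau)
```

A simulated survey would then draw binomial serostatus at each sampled age from these probabilities, and power would be estimated as the fraction of simulated datasets in which the stable-SCR hypothesis is rejected.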
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
Estimation and applications of size-based distributions in forestry
Jeffrey H. Gove
2003-01-01
Size-based distributions arise in several contexts in forestry and ecology. Simple power relationships (e.g., basal area and diameter at breast height) between variables are one such area of interest arising from a modeling perspective. Another, probability proportional to size sampling (PPS), is found in the most widely used methods for sampling standing or dead and...
Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai
2014-11-10
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on score statistic performs well generally and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least square estimate and logarithmic transformation with Mantel-Haenszel estimate are recommended as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
A simple method for estimating the size of nuclei on fractal surfaces
NASA Astrophysics Data System (ADS)
Zeng, Qiang
2017-10-01
Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach rests on two assumptions: contact-area proportionality for determining nucleation density, and scaling congruence between nuclei and surfaces for identifying contact regimes. It yields three regimes governing the equations for estimating nucleation site density. Nuclei large enough eliminate the effect of the fractal structure, whereas nuclei small enough render the nucleation site density independent of the fractal parameters. Only when the nuclei match the fractal scales is the nucleation site density coupled to both the fractal parameters and the nucleus size. The method was validated against experimental data reported in the literature. It may provide an effective way to estimate the size of nuclei on fractal surfaces, with a number of promising applications in related fields.
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in classifiers, and these parameters cannot be reliably estimated with only a small number of training samples. It is therefore of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
Rosenberger, Amanda E.; Dunham, Jason B.
2005-01-01
Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
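Both estimators evaluated above have simple closed forms in their most basic settings, which makes the bias mechanism easy to see. A sketch with hypothetical catch counts (two-pass removal, and Chapman's bias-corrected variant of Lincoln-Petersen):

```python
# Closed-form sketches of the two abundance estimators in their simplest
# settings; catch counts are hypothetical.

def two_pass_removal(c1, c2):
    """Two-pass removal (Zippin/Seber) estimate; assumes equal capture
    probability on both passes and requires c1 > c2."""
    if c1 <= c2:
        raise ValueError("removal estimator undefined when c1 <= c2")
    p_hat = (c1 - c2) / c1           # estimated per-pass capture probability
    n_hat = c1 * c1 / (c1 - c2)      # estimated population size
    return n_hat, p_hat

def lincoln_petersen(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen mark-recapture estimator."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

n_hat, p_hat = two_pass_removal(60, 20)   # passes catch 60 then 20 fish
n_lp = lincoln_petersen(50, 40, 10)       # 50 marked, 40 caught, 10 recaptured
```

The removal formula assumes equal capture probability on every pass; if efficiency falls between passes, as observed in the study, the second-pass catch is larger than the equal-probability model expects and the population size is underestimated.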
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
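The pseudo-F statistic and permutation test at the core of PERMANOVA can be sketched directly from a distance matrix. This is a bare-bones illustration of the test itself, not the authors' R package, and their distance-matrix simulator is not reproduced here.

```python
import random

# Bare-bones PERMANOVA: pseudo-F from a symmetric distance matrix plus a
# label-permutation p-value.

def permanova_f(d, groups):
    """Anderson's pseudo-F from distance matrix d (list of lists) and group
    labels: SS_total = sum_{i<j} d_ij^2 / n, SS_within accumulated per group,
    SS_between = SS_total - SS_within."""
    n = len(groups)
    ss_total = sum(d[i][j] ** 2 for i in range(n) for j in range(i + 1, n)) / n
    ss_within = 0.0
    labels = set(groups)
    for g in labels:
        idx = [i for i in range(n) if groups[i] == g]
        ss_within += sum(d[i][j] ** 2 for i in idx for j in idx if i < j) / len(idx)
    a = len(labels)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_pvalue(d, groups, n_perm=999, seed=0):
    """Permutation test: shuffle labels, count pseudo-F values >= observed."""
    rng = random.Random(seed)
    f_obs = permanova_f(d, groups)
    hits, perm = 1, list(groups)
    for _ in range(n_perm):
        rng.shuffle(perm)
        if permanova_f(d, perm) >= f_obs:
            hits += 1
    return f_obs, hits / (n_perm + 1)
```

Power estimation then wraps this test in simulation: generate many distance matrices with a chosen group-level effect size and report the fraction of permutation p-values below the significance threshold.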
Matuszewski, Szymon; Frątczak-Łagiewska, Katarzyna
2018-02-05
Insects colonizing human or animal cadavers may be used to estimate the post-mortem interval (PMI), usually by aging larvae or pupae sampled at a crime scene. The accuracy of insect age estimates in a forensic context is reduced by large intraspecific variation in insect development time. Here we test the concept that insect size at emergence may be used to predict insect physiological age and accordingly to improve the accuracy of age estimates in forensic entomology. Using results of a laboratory study on the development of the forensically useful beetle Creophilus maxillosus (Linnaeus, 1758) (Staphylinidae), we demonstrate that its physiological age at emergence [i.e. the thermal summation value (K) needed for emergence] falls as beetle size increases. In a validation study, K estimated from adult insect size was significantly closer to the true K than K from the general thermal summation model. Using beetle length at emergence as a predictor variable, sex-specific models regressing K against beetle length gave the most accurate predictions of age. These results demonstrate that the size of C. maxillosus at emergence improves the accuracy of age estimates in a forensic context.
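The idea can be sketched as follows: accumulate degree-days (thermal summation) as usual, but predict the thermal summation value K needed for emergence from adult body length rather than using a single general K. The slope and intercept below are placeholders, not the study's fitted values; the study only establishes that K falls as beetle size increases.

```python
# Thermal-summation sketch with a size-specific K. Regression coefficients
# are hypothetical placeholders, not the study's fitted values.

def thermal_summation(mean_temps, base_temp):
    """Accumulated degree-days over daily mean temperatures above the base."""
    return sum(max(t - base_temp, 0.0) for t in mean_temps)

def k_from_length(length_mm, slope=-5.0, intercept=500.0):
    """Hypothetical linear model regressing K on adult length (negative
    slope: larger beetles need fewer degree-days to emerge)."""
    return intercept + slope * length_mm

accumulated = thermal_summation([20.0, 22.0, 18.0], 10.0)  # degree-days so far
k_needed = k_from_length(20.0)                             # size-specific K
```

Emergence is then predicted when the accumulated degree-days reach the size-specific K, rather than the general-model K.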
A review of the population estimation approach of the North American landbird conservation plan
Thogmartin, Wayne E.; Howe, Frank P.; James, Frances C.; Johnson, Douglas H.; Reed, Eric T.; Sauer, John R.; Thompson, Frank R.
2006-01-01
As part of their development of a continental plan for monitoring landbirds (Rich et al. 2004), Partners in Flight (PIF) applied a new method to make preliminary estimates of population size for all 448 species of landbirds present in the continental United States and Canada (Table 1). Estimation of the global population size of North American landbirds was intended to (1) identify the degree of vulnerability of each species, (2) provide estimates of the current population size for each species, and (3) provide a starting point for estimating population sizes in states, provinces, territories, and Bird Conservation Regions (Rich et al. 2004). A method proposed by Rosenberg and Blancher (2005) was used to derive population estimates from available survey data. To enhance the credibility of these estimates, PIF organized a review of the methodology used to estimate North American landbird population sizes. A planning committee selected members from the ornithological and biometrical communities (hereafter "the panel"), with the aim of selecting individuals from academia, state natural-resource agencies, and the U.S. and Canadian federal governments, including the Canadian Wildlife Service, the U.S. Geological Survey, and the U.S. Department of Agriculture Forest Service. The panel addressed three questions: (1) Were the methods of population estimation proposed by PIF reasonable? (2) What actions could be taken to improve the data or analyses on which the PIF population estimates were based? and (3) How should the PIF population estimates be interpreted?
The determination of total burn surface area: How much difference?
Giretzlehner, M; Dirnberger, J; Owen, R; Haller, H L; Lumenta, D B; Kamolz, L-P
2013-09-01
Burn depth and burn size are crucial determinants for assessing patients suffering from burns. Therefore, a correct evaluation of these factors is optimal for adapting the appropriate treatment in modern burn care. Burn surface assessment is subject to considerable differences among clinicians. This work investigated the accuracy among experts based on conventional surface estimation methods (e.g. "Rule of Palm", "Rule of Nines" or "Lund-Browder Chart"). The estimation results were compared to a computer-based evaluation method. Survey data was collected during one national and one international burn conference. The poll confirmed deviations of burn depth/size estimates of up to 62% in relation to the mean value of all participants. In comparison to the computer-based method, overestimation of up to 161% was found. We suggest introducing improved methods for burn depth/size assessment in clinical routine in order to efficiently allocate and distribute the available resources for practicing burn care. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
Zimmerman, Guthrie; Sauer, John; Fleming, Kathy; Link, William; Garrettson, Pamela R.
2015-01-01
We combined data from the Atlantic Flyway Breeding Waterfowl Survey (AFBWS) and the North American Breeding Bird Survey (BBS) to estimate the number of wood ducks (Aix sponsa) in the United States portion of the Atlantic Flyway from 1993 to 2013. The AFBWS is a plot-based survey that covers most of the northern and central portions of the Flyway; when analyzed with adjustments for survey time of day effects, these data can be used to estimate population size. The BBS provides an index of wood duck abundance along roadside routes. Although factors influencing change in BBS counts over time can be controlled in BBS analysis, BBS indices alone cannot be used to derive population size estimates. We used AFBWS data to scale BBS indices for Bird Conservation Regions (BCR), basing the scaling factors on the ratio of estimated AFBWS population sizes to regional BBS indices for portions of BCRs that were common to both surveys. We summed scaled BBS results for portions of the Flyway not covered by the AFBWS with AFBWS population estimates to estimate a mean yearly total of 1,295,875 (mean 95% CI: 1,013,940–1,727,922) wood ducks. Scaling factors varied among BCRs from 16.7 to 148.0; the mean scaling factor was 68.9 (mean 95% CI: 53.5–90.9). Flyway-wide, population estimates from the combined analysis were consistent with alternative estimates derived from harvest data, and also provide population estimates within states and BCRs. We recommend their use in harvest and habitat management within the Atlantic Flyway.
Undersampling power-law size distributions: effect on the assessment of extreme natural hazards
Geist, Eric L.; Parsons, Thomas E.
2014-01-01
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits on source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
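For the pure Pareto case, the maximum-likelihood (Hill) estimate of the scaling exponent has a closed form and is easy to check on synthetic catalogs. This sketch covers only that simple case; the joint two-parameter estimation for the tapered Pareto used in the study is more involved and not attempted here.

```python
import math
import random

# Hill/MLE estimate of the exponent of a pure Pareto tail above a
# measurement threshold, checked on a synthetic catalog.

def pareto_mle_exponent(sizes, threshold):
    """beta_hat = n / sum(log(x / threshold)) over events >= threshold."""
    xs = [x for x in sizes if x >= threshold]
    return len(xs) / sum(math.log(x / threshold) for x in xs)

def pareto_sample(beta, threshold, n, seed=0):
    """Inverse-CDF draws from a pure Pareto: x = threshold * u**(-1/beta),
    with u drawn from (0, 1]."""
    rng = random.Random(seed)
    return [threshold * (1.0 - rng.random()) ** (-1.0 / beta) for _ in range(n)]

catalog = pareto_sample(beta=1.5, threshold=1.0, n=20000, seed=42)
beta_hat = pareto_mle_exponent(catalog, 1.0)   # recovers roughly 1.5
```

Truncating such a synthetic catalog to a short "historical" window reproduces the undersampling artifacts described above: the tail can look either over-populated or depleted purely by chance.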
From damselflies to pterosaurs: how burst and sustainable flight performance scale with size.
Marden, J H
1994-04-01
Recent empirical data for short-burst lift and power production of flying animals indicate that mass-specific lift and power output scale independently (lift) or slightly positively (power) with increasing size. These results contradict previous theory, as well as simple observation, which argues for degradation of flight performance with increasing size. Here, empirical measures of lift and power during short-burst exertion are combined with empirically based estimates of maximum muscle power output in order to predict how burst and sustainable performance scale with body size. The resulting model is used to estimate performance of the largest extant flying birds and insects, along with the largest flying animals known from fossils. These estimates indicate that burst flight performance capacities of even the largest extinct fliers (estimated mass 250 kg) would allow takeoff from the ground; however, limitations on sustainable power output should constrain capacity for continuous flight at body sizes exceeding 0.003-1.0 kg, depending on relative wing length and flight muscle mass.
Population size and trend of Yellow-billed Loons in northern Alaska
Earnst, Susan L.; Stehn, R.A.; Platte, Robert; Larned, W.W.; Mallek, E.J.
2005-01-01
The Yellow-billed Loon (Gavia adamsii) is of conservation concern due to its restricted range, small population size, specific habitat requirements, and perceived threats to its breeding and wintering habitat. Within the U.S., this species breeds almost entirely within the National Petroleum Reserve-Alaska, nearly all of which is open, or proposed to be opened, for oil development. Rigorous estimates of Yellow-billed Loon population size and trend are lacking but essential for informed conservation. We used two annual aerial waterfowl surveys, conducted 1986–2003 and 1992–2003, to estimate population size and trend on northern Alaskan breeding grounds. In estimating population trend, we used mixed-effects regression models to reduce bias and sampling error associated with improvement in observer skill and annual effects of spring phenology. The estimated population trend on Alaskan breeding grounds since 1986 was near 0 with an estimated annual change of −0.9% (95% CI of −3.6% to +1.8%). The estimated population size, averaged over the past 12 years and adjusted by a correction factor based on an intensive, lake-circling, aerial survey method, was 2221 individuals (95% CI of 1206–3235) in early June and 3369 individuals (95% CI of 1910–4828) in late June. Based on estimates from other studies of the proportion of loons nesting in a given year, it is likely that <1000 nesting pairs inhabit northern Alaska in most years. The highest concentration of Yellow-billed Loons occurred between the Meade and Ikpikpuk Rivers; and across all of northern Alaska, 53% of recorded sightings occurred within 12% of the area.
Evaluation of solar thermal power plants using economic and performance simulations
NASA Technical Reports Server (NTRS)
El-Gabawali, N.
1980-01-01
An energy cost analysis is presented for central receiver power plants with thermal storage and point focusing power plants with electrical storage. The present approach is based on optimizing the size of the plant to give the minimum energy cost (in mills/kWe hr) of annual plant energy production. The optimization is done by considering the trade-off between the collector field size and the storage capacity for a given engine size. The energy cost is determined by the plant cost and performance. The performance is estimated by simulating the behavior of the plant under typical weather conditions. Plant capital and operational costs are estimated based on the size and performance of different components. This methodology is translated into computer programs for automatic and consistent evaluation.
Doyle, Jacqueline M; McCormick, Cory R; DeWoody, J Andrew
2011-01-01
Many animals, such as crustaceans, insects, and salamanders, package their sperm into spermatophores, and the number of spermatozoa contained in a spermatophore is relevant to studies of sexual selection and sperm competition. We used two molecular methods, real-time quantitative polymerase chain reaction (RT-qPCR) and spectrophotometry, to estimate sperm numbers from spermatophores. First, we designed gene-specific primers that produced a single amplicon in four species of ambystomatid salamanders. A standard curve generated from cloned amplicons revealed a strong positive relationship between template DNA quantity and cycle threshold, suggesting that RT-qPCR could be used to quantify sperm in a given sample. We then extracted DNA from multiple Ambystoma maculatum spermatophores, performed RT-qPCR on each sample, and estimated template copy numbers (i.e. sperm number) using the standard curve. Second, we used spectrophotometry to determine the number of sperm per spermatophore by measuring DNA concentration relative to the genome size. We documented a significant positive relationship between the estimates of sperm number based on RT-qPCR and those based on spectrophotometry. When these molecular estimates were compared to spermatophore cap size, which in principle could predict the number of sperm contained in the spermatophore, we also found a significant positive relationship between sperm number and spermatophore cap size. This linear model allows estimates of sperm number strictly from cap size, an approach which could greatly simplify the estimation of sperm number in future studies. These methods may help explain variation in fertilization success where sperm competition is mediated by sperm quantity. © 2010 Blackwell Publishing Ltd.
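The standard-curve step described above amounts to a linear fit of cycle threshold (Ct) against the log of template quantity, which is then inverted to quantify an unknown sample. A minimal sketch of that idea; the dilution-series numbers below are invented for illustration, not the authors' data:

```python
import numpy as np

# Invented dilution-series data: known template copy numbers and their
# measured cycle-threshold (Ct) values (roughly -3.3 Ct per 10x dilution).
copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])

# Standard curve: Ct is linear in log10(template quantity).
slope, intercept = np.polyfit(np.log10(copies), ct, 1)

def copies_from_ct(ct_value):
    """Invert the standard curve to estimate template (sperm) copy number."""
    return 10 ** ((ct_value - intercept) / slope)

# A hypothetical spermatophore extract with Ct = 22.0 should fall between
# the 1e5 and 1e6 standards.
est = copies_from_ct(22.0)
```

The same inversion underlies any qPCR-based copy-number estimate: the slope is negative because more template crosses the detection threshold in fewer cycles.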
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Batista, G. T.
1984-01-01
A procedure to estimate wheat (Triticum aestivum L.) area using a sampling technique based on aerial photographs and digital LANDSAT MSS data is developed. Aerial photographs covering 720 square km are visually analyzed. To estimate wheat area, a regression approach is applied using different sample sizes and various sampling units. As the size of the sampling unit decreased, the percentage of sampled area required to obtain similar estimation performance also decreased. The lowest percentage of the area sampled for wheat estimation with relatively high precision and accuracy through regression estimation is 13.90%, using 10 square km as the sampling unit. Wheat area estimates based on aerial photographs alone are less precise and accurate than those obtained by regression estimation.
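The regression estimator described above works by fitting the photo-interpreted ("ground truth") area against the satellite-classified area in the sampled units, then applying the fitted relationship to every unit in the scene. A minimal sketch under invented numbers (unit count, sampling fraction, and the linear relationship are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# LANDSAT-classified wheat area x is cheap and available for every sampling
# unit, while photo-interpreted area y (treated as truth) is available only
# for a sampled subset of units. Values are hectares per unit (invented).
x_all = rng.uniform(50.0, 400.0, size=72)             # all units in the scene
sample_idx = rng.choice(72, size=15, replace=False)   # ~20% of units sampled
x_s = x_all[sample_idx]
y_s = 0.9 * x_s + rng.normal(0.0, 5.0, size=15)       # truth tracks x linearly

# Fit y on x in the sample, then predict wheat area over every unit and sum.
b, a = np.polyfit(x_s, y_s, 1)
total_estimate = float(np.sum(a + b * x_all))
```

Because the classified area x is observed everywhere, the regression borrows strength from the full scene while the expensive photo interpretation is confined to the sample.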
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
National Stormwater Calculator: Low Impact Development ...
The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application, organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less in size. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing Calculator desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls based on comparison of project cost estimates and predicted LID control performance. Cost estimation is accomplished based on user-identified size (or auto-sizing based on achieving volume control or treatment of a defined design storm), configuration of the LID control infrastructure, and other key project and site-specific variables, including whether the project is being applied as part of new development or redevelopment.
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Tsai, Du-Ming; Chuang, Wei-Che
2017-04-01
Solar power has become an attractive alternative source of energy. The multi-crystalline solar cell has been widely accepted in the market because it has a relatively low manufacturing cost. Multi-crystalline solar wafers with larger grain sizes and fewer grain boundaries are of higher quality and convert energy more efficiently. In this article, a new image processing method is proposed for assessing wafer quality. An adaptive segmentation algorithm based on region growing is developed to separate the closed regions of individual grains. Using the proposed method, the shape and size of each grain in the wafer image can be precisely evaluated. Two measures of average grain size are taken from the literature and modified to estimate the average grain size. The resulting average grain size estimate dictates the quality of the crystalline solar wafers and can be considered a viable quantitative indicator of conversion efficiency.
Sizing gaseous emboli using Doppler embolic signal intensity.
Banahan, Caroline; Hague, James P; Evans, David H; Patel, Rizwan; Ramnarine, Kumar V; Chung, Emma M L
2012-05-01
Extension of transcranial Doppler embolus detection to estimation of bubble size has historically been hindered by difficulties in applying scattering theory to the interpretation of clinical data. This article presents a simplified approach to the sizing of air emboli based on analysis of Doppler embolic signal intensity, by using an approximation to the full scattering theory that can be solved to estimate embolus size. Tests using simulated emboli show that our algorithm is theoretically capable of sizing 90% of "emboli" to within 10% of their true radius. In vitro tests show that 69% of emboli can be sized to within 20% of their true value under ideal conditions, which reduces to 30% of emboli if the beam and vessel are severely misaligned. Our results demonstrate that estimation of bubble size during clinical monitoring could be used to distinguish benign microbubbles from potentially harmful macrobubbles during intraoperative clinical monitoring. Copyright © 2012 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jaiswal, Neeru; Ha, Doan Thi Thu; Kishtawal, C. M.
2018-03-01
Tropical cyclone (TC) is one of the most intense weather hazards, especially for coastal regions, as it causes huge devastation through gale winds and torrential floods during landfall. Thus, accurate prediction of TCs is of great importance to reduce the loss of life and damage to property. Most cyclone track prediction models require the size of the TC as an important parameter in order to simulate the vortex. TC size is also required in the impact assessment of TC-affected regions. In the present work, the size of TCs formed in the North Indian Ocean (NIO) has been estimated using high resolution surface wind observations from the Oceansat-2 scatterometer (OSCAT). The estimated sizes of cyclones were compared to the radius of outermost closed isobar (ROCI) values provided by the Joint Typhoon Warning Center (JTWC) by plotting their histograms and computing the correlation and mean absolute error (MAE). The correlation and MAE between the OSCAT wind based TC size estimates and the JTWC ROCI values were found to be 0.69 and 33 km, respectively. The results show that the sizes of cyclones estimated from OSCAT winds are in close agreement with the JTWC ROCI. The ROCI values of JTWC were analyzed to study the variations in the size of tropical cyclones in the NIO at different times of the diurnal cycle and intensity stages.
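The two comparison metrics used above, Pearson correlation and mean absolute error between paired size estimates, are straightforward to compute. A minimal sketch; the paired values below are invented, not the study's data:

```python
import numpy as np

# Invented paired size estimates (km): scatterometer-based vs. JTWC ROCI.
oscat = np.array([180.0, 220.0, 150.0, 300.0, 260.0, 200.0])
roci = np.array([200.0, 210.0, 170.0, 340.0, 230.0, 190.0])

corr = float(np.corrcoef(oscat, roci)[0, 1])   # Pearson correlation
mae = float(np.mean(np.abs(oscat - roci)))     # mean absolute error, km
```

A high correlation with a modest MAE, as reported in the abstract (0.69 and 33 km), indicates the two size estimates track each other with a bounded typical discrepancy.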
Non-parametric estimation of population size changes from the site frequency spectrum.
Waltoft, Berit Lindum; Hobolth, Asger
2018-06-11
The history of changes in population size is a useful quantity for understanding the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 where entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof of the expression for the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied to unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.
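The SFS definition above translates directly into code. A minimal sketch using an invented 0/1 genotype matrix (rows are the n sampled sequences, columns are segregating sites, 1 marks the mutant base):

```python
import numpy as np

# Toy data: 4 sequences, 5 segregating sites; 1 = mutant base, 0 = ancestral.
geno = np.array([
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0],
])
n = geno.shape[0]

# Entry i of the (unfolded) SFS counts sites where the mutant base appears
# exactly i times out of n, for i = 1, ..., n - 1.
counts = geno.sum(axis=0)                              # mutant copies per site
sfs = np.array([(counts == i).sum() for i in range(1, n)])
```

Here three sites are singletons, one site is a doubleton, and one site carries the mutant in three of four sequences, so the SFS is [3, 1, 1]; its entries always sum to the number of segregating sites.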
Roughness Measurement of Dental Materials
NASA Astrophysics Data System (ADS)
Shulev, Assen; Roussev, Ilia; Karpuzov, Simeon; Stoilov, Georgi; Ignatova, Detelina; See, Constantin von; Mitov, Gergo
2016-06-01
This paper presents a roughness measurement of zirconia ceramics, widely used for dental applications. Surface roughness variations caused by the dental instruments most commonly used for intraoral grinding and polishing are estimated. The applied technique is simple and utilizes the speckle properties of scattered laser light. It could easily be implemented even in a dental clinic environment. The main criterion for roughness estimation is the average speckle size, which varies with the roughness of the zirconia. The algorithm used for speckle size estimation is based on the normalized autocorrelation approach.
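A minimal sketch of the autocorrelation idea, using a synthetic 1-D intensity profile rather than real speckle images; the half-width of the normalized autocovariance serves as a proxy for mean speckle size (the paper's exact estimator on 2-D images may differ, and the smoothing width below is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic speckle-like profile: white noise smoothed with a boxcar whose
# width plays the role of the (assumed) mean speckle size.
speckle_size = 8
profile = np.convolve(rng.normal(size=4096), np.ones(speckle_size), mode="same")

# Normalized autocovariance of the intensity fluctuations.
f = profile - profile.mean()
acf = np.correlate(f, f, mode="full")[f.size - 1:]
acf /= acf[0]

# Average speckle size proxy: first lag where the ACF drops below one half.
est_size = int(np.argmax(acf < 0.5))
```

For a boxcar of width w the autocovariance is a triangle of base w, so the half-height lag lands near w/2; rougher surfaces change the speckle size and thus shift this width.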
Gupta, Manan; Joshi, Amitabh; Vidya, T N C
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. 
Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species.
Genome survey sequencing of red swamp crayfish Procambarus clarkii.
Shi, Linlin; Yi, Shaokui; Li, Yanhe
2018-06-21
Red swamp crayfish, Procambarus clarkii, is presently an important commercial aquatic species in China. The crayfish is an active area of research, and its genetic improvement is urgently needed for crayfish aquaculture in China. However, knowledge of its genomic landscape is limited. In this study, a survey of the P. clarkii genome was conducted based on Illumina's Solexa sequencing platform. Meanwhile, its genome size was estimated using flow cytometry. Interestingly, the estimated genome size is about 8.50 Gb by flow cytometry but 1.86 Gb by genome survey sequencing. Based on the assembled genome sequences, a total of 136,962 genes and 152,268 exons were predicted, and the predicted genes ranged from 150 to 12,807 bp in length. The survey sequences could help accelerate the progress of gene discovery involved in genetic diversity and evolutionary analysis, even though they could not be successfully applied to estimate the P. clarkii genome size.
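Survey-sequencing genome size estimates are typically obtained from k-mer depth (the abstract does not spell out the method used, so this is an assumption): genome size ≈ total k-mers divided by the modal k-mer depth. A toy sketch with invented reads and an unrealistically small k:

```python
from collections import Counter

# Invented toy reads over a 4 bp repeat; k = 4 for illustration only.
reads = ["ACGTACGTAC", "CGTACGTACG", "GTACGTACGT"]
k = 4

# Count every k-mer across all reads.
kmers = Counter(
    r[i:i + k] for r in reads for i in range(len(r) - k + 1)
)
total = sum(kmers.values())

# Modal depth = the most common multiplicity among distinct k-mers.
depth = Counter(kmers.values()).most_common(1)[0][0]
genome_size_est = total / depth
```

Here the 21 k-mers collapse onto 4 distinct ones with modal depth 5, giving an estimate of ~4.2 bp against a true repeat unit of 4 bp; repeats and sequencing errors are exactly what make such estimates diverge from flow cytometry, as the abstract reports.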
Flaw characterization through nonlinear ultrasonics and wavelet cross-correlation algorithms
NASA Astrophysics Data System (ADS)
Bunget, Gheorghe; Yee, Andrew; Stewart, Dylan; Rogers, James; Henley, Stanley; Bugg, Chris; Cline, John; Webster, Matthew; Farinholt, Kevin; Friedersdorf, Fritz
2018-04-01
Ultrasonic measurements have become increasingly important non-destructive techniques to characterize flaws found within various in-service industrial components. The prediction of remaining useful life based on fracture analysis depends on the accurate estimation of flaw size and orientation. However, amplitude-based ultrasonic measurements are not able to estimate the plastic zones that exist ahead of crack tips. Estimating the size of the plastic zone is an advantage since some flaws may propagate faster than others. This paper presents a wavelet cross-correlation (WCC) algorithm that was applied to nonlinear analysis of ultrasonically guided waves (GW). By using this algorithm, harmonics present in the waveforms were extracted and nonlinearity parameters were used to indicate both the tip of the cracks and size of the plastic zone. B-scans performed with the quadratic nonlinearities were sensitive to micro-damage specific to plastic zones.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn
EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly estimate the shape and size of complex shaped regional anomalies. An efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms used for this purpose, such as the Newton-Raphson method, is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based, heuristic algorithm with the desired features to solve global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
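DE's mutation, crossover, and greedy selection loop is easy to sketch. Below is a minimal toy, not the authors' EIT implementation: a boundary radius expressed as a truncated Fourier series of the polar angle is recovered by DE from a synthetic target, and the least-squares cost stands in for the EIT voltage-misfit functional. Population size, F, and CR are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target boundary: radius as a truncated Fourier series of the polar angle.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
true_c = np.array([1.0, 0.2, 0.1])     # [mean radius, cos term, sin term]
target = true_c[0] + true_c[1] * np.cos(theta) + true_c[2] * np.sin(theta)

def cost(c):
    """Misfit between a candidate boundary and the 'measured' one (a stand-in
    for the EIT voltage-misfit cost functional)."""
    r = c[0] + c[1] * np.cos(theta) + c[2] * np.sin(theta)
    return float(np.sum((r - target) ** 2))

# Minimal differential evolution: mutate, cross over, keep the better vector.
NP, D, F, CR = 20, 3, 0.7, 0.9
pop = rng.uniform(-2, 2, size=(NP, D))
fit = np.array([cost(p) for p in pop])
for _ in range(200):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.where(rng.random(D) < CR, a + F * (b - c), pop[i])
        f_trial = cost(trial)
        if f_trial < fit[i]:
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmin(fit)]
```

Because DE needs only cost evaluations, no gradient and no careful initial guess, it avoids the sensitivity of Newton-Raphson-type methods noted above.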
Sauzet, Odile; Peacock, Janet L
2017-07-20
The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins) and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept models and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters but a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide similar estimates to logistic regression. The method which seems to provide the best balance between estimation of the standard error and the parameter for any percentage of twins is the generalised estimating equations. This study has shown that the number of covariates or the level two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
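The τ cue described above has a simple geometric basis: for an object of physical size s approaching at constant speed v from distance d, the optical angle is θ ≈ s/d and its rate of expansion is dθ/dt ≈ s·v/d², so θ divided by its rate of change equals d/v, the time to contact. A minimal sketch (numbers are invented):

```python
def tau_ttc(theta, d_theta_dt):
    """Time-to-contact from the tau cue: instantaneous optical angle divided
    by its instantaneous rate of expansion (constant-speed approach)."""
    return theta / d_theta_dt

# A vehicle subtending 0.05 rad whose image expands at 0.01 rad/s is
# about 5 s from contact.
ttc = tau_ttc(0.05, 0.01)
```

The same ratio applies to the auditory analogue with sound intensity in place of optical size, which is why the abstract treats visual τ and auditory τ as parallel cues.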
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining effort required (e.g. number of sites versus occasions).
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
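The abstract does not give the log-linear estimator itself, but the basic line-transect density estimator it extends is simple: with n detections along total transect length L and an effective strip half-width ESW fitted from the detection function, density is D = n / (2·L·ESW). A toy sketch with invented numbers:

```python
def density(n_detections, transect_length, esw):
    """Basic line-transect density: detections per unit of effectively
    surveyed area, which is a strip of width 2 * ESW along the transect."""
    return n_detections / (2.0 * transect_length * esw)

# 48 animals detected along 12 km of transect with a 50 m (0.05 km)
# effective strip half-width gives 40 animals per square km.
d = density(48, transect_length=12.0, esw=0.05)
```

The modeling effort in approaches like the paper's goes into estimating ESW from the distribution of perpendicular detection distances; the density formula itself stays this simple.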
NASA Astrophysics Data System (ADS)
Fujita, Kazuhiko; Otomaru, Maki; Lopati, Paeniu; Hosono, Takashi; Kayanne, Hajime
2016-03-01
Carbonate production by large benthic foraminifers is sometimes comparable to that of corals and coralline algae, and contributes to sedimentation on reef islands and beaches in the tropical Pacific. Population dynamic data, such as population density and size structure (size-frequency distribution), are vital for an accurate estimation of shell production of foraminifers. However, previous production estimates in tropical environments were based on a limited sampling period with no consideration of seasonality. In addition, no comparisons were made of various estimation methods to determine more accurate estimates. Here we present the annual gross shell production rate of Baculogypsina sphaerulata, estimated based on population dynamics studied over a 2-yr period on an ocean reef flat of Funafuti Atoll (Tuvalu, tropical South Pacific). The population density of B. sphaerulata increased from January to March, when northwest winds predominated and the study site was on the leeward side of reef islands, compared to other seasons when southeast trade winds predominated and the study site was on the windward side. This result suggested that wind-driven flows controlled the population density at the study site. The B. sphaerulata population had a relatively stationary size-frequency distribution throughout the study period, indicating no definite intensive reproductive period in the tropical population. Four methods were applied to estimate the annual gross shell production rates of B. sphaerulata. The production rates estimated by three of the four methods (using monthly biomass, life tables and growth increment rates) were in the order of hundreds of g CaCO₃ m⁻² yr⁻¹ or cm³ m⁻² yr⁻¹, and the simple method using turnover rates overestimated the values. This study suggests that seasonal surveys should be undertaken of population density and size structure as these can produce more accurate estimates of shell productivity of large benthic foraminifers.
Lee, Christina D; Chae, Junghoon; Schap, TusaRebecca E; Kerr, Deborah A; Delp, Edward J; Ebert, David S; Boushey, Carol J
2012-03-01
Diet is a critical element of diabetes self-management. An emerging area of research is the use of images for dietary records using mobile telephones with embedded cameras. These tools are being designed to reduce user burden and to improve accuracy of portion-size estimation through automation. The objectives of this study were to (1) assess the error of automatically determined portion weights compared to known portion weights of foods and (2) to compare the error between automation and human. Adolescents (n = 15) captured images of their eating occasions over a 24 h period. All foods and beverages served were weighed. Adolescents self-reported portion sizes for one meal. Image analysis was used to estimate portion weights. Data analysis compared known weights, automated weights, and self-reported portions. For the 19 foods, the mean ratio of automated weight estimate to known weight ranged from 0.89 to 4.61, and 9 foods were within 0.80 to 1.20. The largest error was for lettuce and the most accurate was strawberry jam. The children were fairly accurate with portion estimates for two foods (sausage links, toast) using one type of estimation aid and two foods (sausage links, scrambled eggs) using another aid. The automated method was fairly accurate for two foods (sausage links, jam); however, the 95% confidence intervals for the automated estimates were consistently narrower than human estimates. The ability of humans to estimate portion sizes of foods remains a problem and a perceived burden. Errors in automated portion-size estimation can be systematically addressed while minimizing the burden on people. Future applications that take over the burden of these processes may translate to better diabetes self-management. © 2012 Diabetes Technology Society.
Genome size and chromosome number in velvet worms (Onychophora).
Jeffery, Nicholas W; Oliveira, Ivo S; Gregory, T Ryan; Rowell, David M; Mayer, Georg
2012-12-01
The Onychophora (velvet worms) represents a small group of invertebrates (~180 valid species), which is commonly united with Tardigrada and Arthropoda in a clade called Panarthropoda. As with the majority of invertebrate taxa, genome size data are very limited for the Onychophora, with only one previously published estimate. Here we use both flow cytometry and Feulgen image analysis densitometry to provide genome size estimates for seven species of velvet worms from both major subgroups, Peripatidae and Peripatopsidae, along with karyotype data for each species. Genome sizes in these species range from roughly 5-19 pg, with densitometric estimates being slightly larger than those obtained by flow cytometry for all species. Chromosome numbers range from 2n = 8 to 2n = 54. No relationship is evident between genome size, chromosome number, or reproductive mode. Various avenues for future genomic research are presented based on these results.
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to the choice of prior on p and the substantially different estimates of abundance that result.
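The marginalization at the heart of the abstract, integrating site-specific abundances out of a binomial likelihood under a Poisson prior, can be sketched in a few lines of Python. This is a minimal illustration of the N-mixture idea, not Royle's actual implementation; the truncation bound K and the toy counts are assumptions:

```python
import math

def nmixture_loglik(counts, lam, p, K=100):
    """Log marginal likelihood of a simple N-mixture model:
    site abundance N_i ~ Poisson(lam), replicate counts y_it ~ Binomial(N_i, p),
    with N_i summed out up to a truncation bound K."""
    ll = 0.0
    for site in counts:                      # counts: list of per-site visit counts
        site_lik = 0.0
        for n in range(max(site), K + 1):    # N_i cannot be below the max count
            prior = math.exp(-lam) * lam ** n / math.factorial(n)
            detect = 1.0
            for y in site:
                detect *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
            site_lik += prior * detect
        ll += math.log(site_lik)
    return ll
```

Maximizing this over (lam, p), e.g. by a simple grid search, gives abundance estimates that formally account for imperfect detection, which is what distinguishes the approach from raw count summaries.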
Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.
Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A
2018-02-01
We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.
Two- and three-dimensional CT measurements of urinary calculi length and width: a comparative study.
Lidén, Mats; Thunberg, Per; Broxvall, Mathias; Geijer, Håkan
2015-04-01
The standard imaging procedure for a patient presenting with renal colic is unenhanced computed tomography (CT). The CT-measured size has a close correlation to the estimated prognosis for spontaneous passage of a ureteral calculus. Size estimations of urinary calculi in CT images are still based on two-dimensional (2D) reformats. To develop and validate a calculus-oriented three-dimensional (3D) method for measuring the length and width of urinary calculi and to compare the calculus-oriented measurements of the length and width with corresponding 2D measurements obtained in axial and coronal reformats. Fifty unenhanced CT examinations demonstrating urinary calculi were included. A 3D symmetric segmentation algorithm was validated against reader size estimations. The calculus-oriented size from the segmentation was then compared to the estimated size in axial and coronal 2D reformats. The validation showed agreement of 0.1 ± 0.7 mm with the reference measurement. There was a 0.4 mm median bias for 3D-estimated calculus length compared to 2D (P < 0.001), but no significant bias for 3D width compared to 2D. The length of a calculus in axial and coronal reformats becomes underestimated compared to 3D if its orientation is not aligned to the image planes. Future studies aiming to correlate calculus size with patient outcome should use a calculus-oriented size estimation. © The Foundation Acta Radiologica 2014.
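The gap between in-plane 2D measurements and a calculus-oriented 3D measurement can be illustrated with a principal-axis computation over segmented voxel coordinates. This is a simplified stand-in for the paper's symmetric segmentation algorithm, and the voxel data below are hypothetical:

```python
import numpy as np

def oriented_extents(voxels):
    """Calculus-oriented size: extents of the voxel cloud along its
    principal axes, sorted so that length comes first."""
    pts = np.asarray(voxels, dtype=float)
    pts = pts - pts.mean(axis=0)
    _, axes = np.linalg.eigh(np.cov(pts.T))   # columns are principal axes
    proj = pts @ axes                          # coordinates in the calculus frame
    spans = proj.max(axis=0) - proj.min(axis=0)
    return np.sort(spans)[::-1]                # length, width, depth

# A straight "calculus" running along (1, 1, 1), i.e. tilted out of every
# scanner plane; its span along any single scanner axis is only 10 mm.
t = np.linspace(0.0, 10.0, 50)
rod = np.c_[t, t, t]
length_3d = oriented_extents(rod)[0]           # true oriented length
length_axis = rod[:, 2].max() - rod[:, 2].min()  # single-axis span: 10 mm
```

The single-axis span (10 mm) underestimates the oriented length (10·√3 ≈ 17.3 mm), which is exactly the alignment-dependent bias the study quantifies.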
Thoracic and respirable particle definitions for human health risk assessment.
Brown, James S; Gordon, Terry; Price, Owen; Asgharian, Bahman
2013-04-10
Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. 
We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects.
A comparison study of size-specific dose estimate calculation methods.
Parikh, Roshni A; Wien, Michael A; Novak, Ronald D; Jordan, David W; Klahr, Paul; Soriano, Stephanie; Ciancibello, Leslie; Berlin, Sheila C
2018-01-01
The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. To compare the accuracy of thickness vs. weight measurement of body size to allow for the calculation of the size-specific dose estimate (SSDE) in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically most thin, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically most thick; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. 
Manual methods based on patient thickness provide acceptable dose estimates for pediatric patients <30 cm in body width. Body weight provides a quick and practical method to identify conversion factors that can be used to estimate SSDE with reasonable accuracy in pediatric patients with body width ≥20 cm.
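The SSDE arithmetic itself is compact. A hedged sketch follows, using the exponential conversion-factor fit published in AAPM Report 204 for the 32 cm reference phantom; the coefficient values are reproduced from memory and should be treated as assumptions to verify against the report before any real use:

```python
import math

# AAPM Report 204 exponential fit, 32 cm body phantom (assumed values)
A_32, B_32 = 3.704369, 0.03671937

def effective_diameter(ap_cm, lat_cm):
    """Effective diameter from anteroposterior and lateral widths,
    as in the manual axial-image method (method D) of the study."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol_mgy, ap_cm, lat_cm):
    """Size-specific dose estimate: size-dependent conversion factor x CTDIvol."""
    f = A_32 * math.exp(-B_32 * effective_diameter(ap_cm, lat_cm))
    return f * ctdi_vol_mgy
```

For a small patient (e.g. 20 × 25 cm) the conversion factor exceeds 1, so SSDE is well above the scanner-reported CTDIvol; this size dependence is why the study finds poor concordance between any SSDE method and CTDIvol itself.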
Estimating the ratio of pond size to irrigated soybean land in Mississippi: a case study
Ying Ouyang; G. Feng; J. Read; T. D. Leininger; J. N. Jenkins
2016-01-01
Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of on-farm water storage pond size to irrigated crop land based on pond metric and its hydrogeological conditions. In this study, two simulation scenarios were chosen to...
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
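The Jensen's-inequality effect described here is easy to reproduce: estimate survival from n individuals, plug the estimate into a stage matrix, and compare the mean of the resulting lambdas with lambda at the true rate. The two-stage matrix and vital rates below are illustrative choices, not those of Fiske et al.:

```python
import math
import random

def lam(m):
    """Dominant eigenvalue of a 2x2 matrix, in closed form."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return (tr + math.sqrt(tr * tr - 4 * det)) / 2

def lambda_bias(n, s_true=0.5, reps=5000, seed=7):
    """Mean bias in lambda when juvenile survival is estimated from n
    binomial trials; fecundity and adult survival are held fixed."""
    random.seed(seed)
    truth = lam([[0.0, 1.2], [s_true, 0.4]])
    total = 0.0
    for _ in range(reps):
        s_hat = sum(random.random() < s_true for _ in range(n)) / n
        total += lam([[0.0, 1.2], [s_hat, 0.4]])
    return total / reps - truth
```

With n = 5 the bias is clearly negative (lambda is concave in survival for this matrix), and it shrinks toward zero by n = 100, mirroring the paper's finding that the bias fades as sample size grows.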
Pneumothorax size measurements on digital chest radiographs: Intra- and inter- rater reliability.
Thelle, Andreas; Gjerdevik, Miriam; Grydeland, Thomas; Skorge, Trude D; Wentzel-Larsen, Tore; Bakke, Per S
2015-10-01
Detailed and reliable methods may be important for discussions on the importance of pneumothorax size in clinical decision-making. Rhea's method is widely used to estimate pneumothorax size in percent based on chest X-rays (CXRs), using three measurement points. Choi's addendum is used for anteroposterior projections. The aim of this study was to examine the intrarater and interrater reliability of the Rhea and Choi method using digital CXRs on ward-based PACS monitors. Three physicians examined a retrospective series of 80 digital CXRs showing pneumothorax using Rhea and Choi's method, and repeated the readings in random order two weeks later. We used the analysis of variance technique by Eliasziw et al. to assess the intrarater and interrater reliability in altogether 480 estimations of pneumothorax size. Estimated pneumothorax sizes ranged between 5% and 100%. The intrarater reliability coefficient was 0.98 (95% one-sided lower-limit confidence interval 0.96), and the interrater reliability coefficient was 0.95 (95% one-sided lower-limit confidence interval 0.93). This study has shown that the Rhea and Choi method for calculating pneumothorax size has high intrarater and interrater reliability. These results are valid across gender, side of pneumothorax, and whether the patient is diagnosed with primary or secondary pneumothorax. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Genome size of 14 species of fireflies (Insecta, Coleoptera, Lampyridae)
Liu, Gui-Chun; Dong, Zhi-Wei; He, Jin-Wu; Zhao, Ruo-Ping; Wang, Wen; Li, Xue-Yan
2017-01-01
Eukaryotic genome size data are important both as the basis for comparative research into genome evolution and as estimators of the cost and difficulty of genome sequencing programs for non-model organisms. In this study, the genome size of 14 species of fireflies (Lampyridae) (two genera in Lampyrinae, three genera in Luciolinae, and one genus in subfamily incertae sedis) were estimated by propidium iodide (PI)-based flow cytometry. The haploid genome sizes of Lampyridae ranged from 0.42 to 1.31 pg, a 3.1-fold span. Genome sizes of the fireflies varied within the tested subfamilies and genera. Lamprigera and Pyrocoelia species had large and small genome sizes, respectively. No correlation was found between genome size and morphological traits such as body length, body width, eye width, and antennal length. Our data provide additional information on genome size estimation of the firefly family Lampyridae. Furthermore, this study will help clarify the cost and difficulty of genome sequencing programs for non-model organisms and will help promote studies on firefly genome evolution.
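The arithmetic behind PI flow-cytometry genome sizing is a simple ratio against a co-stained internal standard of known genome size. The peak positions and standard value below are hypothetical; the 1 pg ≈ 978 Mbp conversion is the standard one:

```python
PG_TO_MBP = 978  # widely used conversion: 1 pg of DNA ≈ 978 Mbp

def genome_size_pg(sample_peak, standard_peak, standard_pg):
    """Genome size from mean PI fluorescence peaks: the sample/standard
    fluorescence ratio scales the standard's known 1C value."""
    return standard_pg * sample_peak / standard_peak

size_pg = genome_size_pg(52.0, 40.0, 0.52)  # hypothetical peaks, 0.52 pg standard
size_mbp = size_pg * PG_TO_MBP
```

Because the estimate is a ratio, it depends on the standard chosen and on dye accessibility, which is one reason densitometric and flow-cytometric values for the same species (as in the velvet-worm study above) can differ slightly.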
Inoue, K.; Kataoka, H.; Nagai, Y.; Hasegawa, M.; Kobayashi, Y.
2013-10-01
Positron annihilation spectroscopy is employed to estimate the size of subnanometer-scale open spaces in insulating materials. In most cases, the size is estimated from the lifetime of long-lived ortho-positronium (o-Ps) by pickoff annihilation using a simplified model. However, reactions of Ps with surrounding electrons other than the pickoff reaction, such as spin conversion or chemical reaction, could give a substantially underestimated size using the simplified model. In the present paper, we report that the size of the open spaces can be evaluated correctly by the angular correlation of positron annihilation radiation (ACAR) with a magnetic field using the spin-polarization effect on Ps formation, even if such reactions of Ps occur in the material. This method is applied to the subnanometer-scale structural open spaces of silica-based glass doped with Fe. We demonstrate the influence of the Ps reaction on size estimation of the open spaces from the o-Ps lifetime. Furthermore, the type of reaction, whether spin conversion or chemical, is distinguished from the magnetic field dependence of the Ps self-annihilation component intensity in the ACAR spectra. The Ps reaction in silica-based glass doped with Fe is a chemical reaction with Fe ions (most likely oxidation) rather than spin conversion. The chemical quenching rate with Fe ions is determined from the dependence of the o-Ps lifetime on the Fe content.
On the role of modeling choices in estimation of cerebral aneurysm wall tension.
Ramachandran, Manasi; Laakso, Aki; Harbaugh, Robert E; Raghavan, Madhavan L
2012-11-15
To assess various approaches to estimating pressure-induced wall tension in intracranial aneurysms (IA) and their effect on the stratification of subjects in a study population. Three-dimensional models of 26 IAs (9 ruptured and 17 unruptured) were segmented from Computed Tomography Angiography (CTA) images. Wall tension distributions in these patient-specific geometric models were estimated using approaches that differed in the morphological detail utilized and the modeling choices made. For all subjects in the study population, the peak wall tension was estimated using all investigated approaches and compared to a reference approach: nonlinear finite element (FE) analysis using the Fung anisotropic model with regionally varying material fiber directions. Comparisons between approaches were focused toward assessing the similarity in stratification of IAs within the population based on peak wall tension. The stratification of IAs by tension deviated to some extent from the reference approach as less geometric detail was incorporated. Interestingly, the size of the cerebral aneurysm as captured by a single size measure was the predominant determinant of peak wall tension-based stratification. Within FE approaches, simplifications to isotropy, material linearity and geometric linearity caused a gradual deviation from the reference estimates, but it was minimal and resulted in little to no impact on stratifications of IAs. Differences in modeling choices made without patient-specificity in parameters of such models had little impact on tension-based IA stratification in this population. Increasing morphological detail did impact the estimated peak wall tension, but size was the predominant determinant. Copyright © 2012 Elsevier Ltd. All rights reserved.
Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I
2011-11-15
One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
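The basic scale-up arithmetic evaluated in this study can be stated directly: estimate each respondent's personal network size from ties to populations of known size, then scale ties to the hidden population by total network size. The numbers below are toy values; real applications adjust for transmission and barrier effects:

```python
def degree_estimate(known_ties, known_sizes, n_total):
    """Known-population estimate of a respondent's personal network size:
    d_i = N * (ties to known groups) / (total size of those groups)."""
    return n_total * sum(known_ties) / sum(known_sizes)

def scale_up(hidden_ties, degrees, n_total):
    """Basic network scale-up estimator of a hidden population's size."""
    return n_total * sum(hidden_ties) / sum(degrees)

# Three respondents in a hypothetical city of 1,000,000, with two known
# reference groups of sizes 20,000 and 5,000:
degrees = [degree_estimate(t, [20000, 5000], 1_000_000)
           for t in ([3, 1], [5, 0], [2, 2])]
n_hidden = scale_up([2, 0, 1], degrees, 1_000_000)
```

Because both steps are ratio estimators, errors in reported ties or in the known-population totals propagate directly into the final estimate, which is one route by which scale-up and standard multiplier methods can diverge as sharply as the abstract reports.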
Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas
2017-12-01
Nationwide health surveys can be used to estimate regional differences in health. Using traditional estimation techniques, the spatial depth of these estimates is limited by the constrained sample size. So far - without special refreshment samples - results have only been available for the larger populated federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale data, but are also subject to greater statistical uncertainties because of the model assumptions. In the present article, exemplary regionalized estimates of respondents' self-rated health status, based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies) 2009, 2010 and 2012, are compared. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of the techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable when using different samples. Important determinants of the variation of the estimates are the achieved sample size on the district level and the type of the district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainties compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.
Shanmuga Doss, Sreeja; Bhatt, Nirav Pravinbhai; Jayaraman, Guhan
2017-08-15
There is an unreasonably high variation in literature reports of the molecular weight of hyaluronic acid (HA) estimated using conventional size exclusion chromatography (SEC). This variation is most likely due to errors in estimation. Using commercially available HA molecular weight standards, this work examines the extent of error in molecular weight estimation due to two factors: the use of non-HA-based calibration and the concentration of sample injected into the SEC column. We develop a multivariate regression correlation to correct for the concentration effect. Our analysis showed that SEC calibration based on non-HA standards like polyethylene oxide and pullulan led to approximately 2- and 10-fold overestimation, respectively, when compared to HA-based calibration. Further, we found that injected sample concentration has an effect on molecular weight estimation. Even at 1 g/L injected sample concentration, HA molecular weight standards of 0.7 and 1.64 MDa showed appreciable underestimation of 11-24%. The multivariate correlation developed was found to reduce the error in estimations at 1 g/L to <4%. The correlation was also successfully applied to accurately estimate the molecular weight of HA produced by a recombinant Lactococcus lactis fermentation. Copyright © 2017 Elsevier B.V. All rights reserved.
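A concentration-corrected calibration of the kind described can be sketched as a small multivariate least-squares fit. The model form (log of true MW regressed on log of apparent MW and injected concentration) and all numbers below are assumptions for illustration, not the authors' fitted correlation:

```python
def ols(X, y):
    """Ordinary least squares via normal equations and Gauss-Jordan
    elimination; adequate for tiny, well-conditioned designs."""
    k = len(X[0])
    a = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        piv = a[i][i]
        a[i] = [v / piv for v in a[i]]
        b[i] /= piv
        for r in range(k):
            if r != i:
                f = a[r][i]
                a[r] = [u - f * v for u, v in zip(a[r], a[i])]
                b[r] -= f * b[i]
    return b

# Synthetic calibration rows: [1, log10(apparent MW), injected conc (g/L)]
rows = [[1.0, 5.0, 0.5], [1.0, 5.5, 1.0], [1.0, 6.0, 2.0],
        [1.0, 6.3, 1.5], [1.0, 5.8, 0.7]]
true_beta = [0.10, 0.95, 0.05]   # intercept, slope, concentration correction
log_mw_true = [0.10 + 0.95 * r[1] + 0.05 * r[2] for r in rows]
beta_hat = ols(rows, log_mw_true)
```

Once fitted on standards of known molecular weight, the same regression is applied in reverse to correct apparent values read off the column at a given injection concentration.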
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu
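The pseudo-F statistic at the heart of PERMANOVA can be computed directly from a distance matrix using the identity that a sum of squares equals the sum of squared pairwise distances divided by the number of points. This is a bare-bones sketch of the test itself, not the micropower package's power-simulation machinery:

```python
import random

def pseudo_f(dist, groups):
    """PERMANOVA pseudo-F from a symmetric distance matrix and group labels,
    using SS = (sum of squared pairwise distances) / (number of points)."""
    n = len(groups)
    ss_total = sum(dist[i][j] ** 2 for i in range(n) for j in range(i + 1, n)) / n
    ss_within = 0.0
    for g in set(groups):
        members = [i for i in range(n) if groups[i] == g]
        ss_within += sum(dist[i][j] ** 2
                         for i in members for j in members if i < j) / len(members)
    a = len(set(groups))
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(dist, groups, n_perm=999, seed=0):
    """Permutation p-value for the observed pseudo-F."""
    rng = random.Random(seed)
    f_obs = pseudo_f(dist, groups)
    hits = 1
    for _ in range(n_perm):
        perm = list(groups)
        rng.shuffle(perm)
        if pseudo_f(dist, perm) >= f_obs:
            hits += 1
    return f_obs, hits / (n_perm + 1)
```

Power estimation in the framework described above amounts to wrapping this test in an outer loop: simulate many distance matrices at a chosen effect size, and record the fraction of p-values below the significance threshold.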
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
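A single stochastic EnKF analysis step, the estimation half that the SEOD design wraps around, can be sketched for a scalar parameter. This is an illustrative toy with an assumed linear observation operator, not the authors' unsaturated-flow setup:

```python
import random

def enkf_update(ensemble, obs, h, obs_var, rng):
    """Stochastic EnKF analysis step for a scalar parameter: Kalman gain
    from ensemble covariances, observation perturbed per member."""
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm, hm = sum(ensemble) / n, sum(hx) / n
    cov_xh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - hm) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_var)
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - y)
            for x, y in zip(ensemble, hx)]

rng = random.Random(42)
prior = [rng.gauss(0.0, 1.0) for _ in range(200)]   # prior belief: x ~ N(0, 1)
posterior = enkf_update(prior, 4.0, lambda x: 2.0 * x, 0.01, rng)
```

A precise observation y = 4 of h(x) = 2x pulls the ensemble mean from near 0 toward the value x = 2 implied by the data; choosing *which* such measurement to collect next, by maximizing an information metric like SD, DFS or RE over candidate designs, is what the proposed SEOD method adds on top of this update.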
Safarnejad, Ali; Nga, Nguyen Thien; Son, Vo Hai
2017-06-01
This study aims to estimate the number of men who have sex with men (MSM) in Ho Chi Minh City (HCMC) and Nghe An province, Viet Nam, using a novel method of population size estimation, and to assess the feasibility of the method in implementation. An innovative approach to population size estimation grounded on the principles of the multiplier method, and using social app technology and internet-based surveys was undertaken among MSM in two regions of Viet Nam in 2015. Enumeration of active users of popular social apps for MSM in Viet Nam was conducted over 4 weeks. Subsequently, an independent online survey was done using respondent driven sampling. We also conducted interviews with key informants in Nghe An and HCMC on their experience and perceptions of this method and other methods of size estimation. The population of MSM in Nghe An province was estimated to be 1765 [90% CI 1251-3150]. The population of MSM in HCMC was estimated to be 37,238 [90% CI 24,146-81,422]. These estimates correspond to 0.17% of the adult male population in Nghe An province [90% CI 0.12-0.30], and 1.35% of the adult male population in HCMC [90% CI 0.87-2.95]. Our size estimates of the MSM population (1.35% [90% CI 0.87%-2.95%] of the adult male population in HCMC) fall within current standard practice of estimating 1-3% of adult male population in big cities. Our size estimates of the MSM population (0.17% [90% CI 0.12-0.30] of the adult male population in Nghe An province) are lower than the current standard practice of estimating 0.5-1.5% of adult male population in rural provinces. These estimates can provide valuable information for sub-national level HIV prevention program planning and evaluation. Furthermore, we believe that our results help to improve application of this population size estimation method in other regions of Viet Nam.
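The multiplier arithmetic underlying the study's estimates is compact enough to state directly; the numbers below are toy values, not the study's data:

```python
def multiplier_estimate(benchmark_count, survey_overlap, survey_size):
    """Multiplier method: population size = benchmark count divided by the
    proportion of the independent survey reporting membership in the
    benchmark (here, active users of the enumerated social apps)."""
    return benchmark_count / (survey_overlap / survey_size)

# e.g. 1,200 enumerated active app users; 240 of 400 surveyed MSM report
# being active on those apps (hypothetical figures)
n_msm = multiplier_estimate(1200, 240, 400)   # about 2,000
```

The estimate is only as good as the independence of its two data sources: if survey respondents are recruited through the same apps used for enumeration, the overlap proportion is inflated and the population size is underestimated, which is why the study pairs app enumeration with a separate respondent-driven-sampling survey.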
White, Angelicque E.; Letelier, Ricardo M.; Whitmire, Amanda L.; Barone, Benedetto; Bidigare, Robert R.; Church, Matthew J.; Karl, David M.
2015-01-01
The particle size distribution (PSD) is a critical aspect of the oceanic ecosystem. Local variability in the PSD can be indicative of shifts in microbial community structure and reveal patterns in cell growth and loss. The PSD also plays a central role in particle export by influencing settling speed. Satellite-based models of primary productivity (PP) often rely on aspects of photophysiology that are directly related to community size structure. In an effort to better understand how variability in particle size relates to PP in an oligotrophic ecosystem, we collected laser diffraction-based depth profiles of the PSD and pigment-based classifications of phytoplankton functional types (PFTs) on an approximately monthly basis at the Hawaii Ocean Time-series Station ALOHA, in the North Pacific subtropical gyre. We found a relatively stable PSD in the upper water column. However, clear seasonality is apparent in the vertical distribution of distinct particle size classes. Neither laser diffraction-based estimations of relative particle size nor pigment-based PFTs were found to be significantly related to the rate of 14C-based PP in the light-saturated upper euphotic zone. This finding indicates that satellite retrievals of particle size based on particle scattering or ocean color would not improve parameterizations of present-day bio-optical PP models for this region. However, at depths of 100-125 m, where irradiance exerts strong control on PP, we do observe a significant linear relationship between PP and the estimated carbon content of 2-20 μm particles.
White, Angelicque E; Letelier, Ricardo M; Whitmire, Amanda L; Barone, Benedetto; Bidigare, Robert R; Church, Matthew J; Karl, David M
2015-11-01
The particle size distribution (PSD) is a critical aspect of the oceanic ecosystem. Local variability in the PSD can be indicative of shifts in microbial community structure and reveal patterns in cell growth and loss. The PSD also plays a central role in particle export by influencing settling speed. Satellite-based models of primary productivity (PP) often rely on aspects of photophysiology that are directly related to community size structure. In an effort to better understand how variability in particle size relates to PP in an oligotrophic ecosystem, we collected laser diffraction-based depth profiles of the PSD and pigment-based classifications of phytoplankton functional types (PFTs) on an approximately monthly basis at the Hawaii Ocean Time-series Station ALOHA, in the North Pacific subtropical gyre. We found a relatively stable PSD in the upper water column. However, clear seasonality is apparent in the vertical distribution of distinct particle size classes. Neither laser diffraction-based estimations of relative particle size nor pigment-based PFTs was found to be significantly related to the rate of 14 C-based PP in the light-saturated upper euphotic zone. This finding indicates that satellite retrievals of particle size, based on particle scattering or ocean color would not improve parameterizations of present-day bio-optical PP models for this region. However, at depths of 100-125 m where irradiance exerts strong control on PP, we do observe a significant linear relationship between PP and the estimated carbon content of 2-20 μm particles.
Ali, Sajid; Soubeyrand, Samuel; Gladieux, Pierre; Giraud, Tatiana; Leconte, Marc; Gautier, Angélique; Mboup, Mamadou; Chen, Wanquan; de Vallavieille-Pope, Claude; Enjalbert, Jérôme
2016-07-01
Inferring reproductive and demographic parameters of populations is crucial to our understanding of species ecology and evolutionary potential but can be challenging, especially in partially clonal organisms. Here, we describe a new and accurate method, cloncase, for estimating both the rate of sexual vs. asexual reproduction and the effective population size, based on the frequency of clonemate resampling across generations. Simulations showed that our method provides reliable estimates of sex frequency and effective population size for a wide range of parameters. The cloncase method was applied to Puccinia striiformis f.sp. tritici, a fungal pathogen causing stripe/yellow rust, an important wheat disease. This fungus is highly clonal in Europe but has been suggested to recombine in Asia. Using two temporally spaced samples of P. striiformis f.sp. tritici in China, the estimated sex frequency was 75% (i.e. three-quarters of individuals being sexually derived during the yearly sexual cycle), indicating a strong contribution of sexual reproduction to the life cycle of the pathogen in this area. The inferred effective population size of this partially clonal organism (Nc = 998) was in good agreement with estimates obtained using methods based on temporal variations in allelic frequencies. The cloncase estimator presented herein is the first method allowing accurate inference of both sex frequency and effective population size from population data without knowledge of recombination or mutation rates. cloncase can be applied to population genetic data from any organism with cyclical parthenogenesis and should in particular be very useful for improving our understanding of pest and microbial population biology. © 2016 John Wiley & Sons Ltd.
Dickson, David; Caivano, Domenico; Matos, Jose Novo; Summerfield, Nuala; Rishniw, Mark
2017-12-01
To provide reference intervals for 2-dimensional linear and area-based estimates of left atrial (LA) function in healthy dogs and to evaluate the ability of estimates of LA function to differentiate dogs with subclinical myxomatous mitral valve disease (MMVD) and similarly affected dogs with congestive heart failure (CHF). Fifty-two healthy adult dogs and 88 dogs with MMVD of varying severity were studied. Linear and area measurements from 2-dimensional echocardiographs in both right parasternal long and short axis views optimized for the left atrium were used to derive estimates of LA active emptying fraction, passive emptying fraction, expansion index, and total fractional emptying. Differences for each estimate were compared between healthy and MMVD dogs (based on ACVIM classification), and between MMVD dogs with subclinical disease and CHF that had similar LA dimensions. Diagnostic utility at identifying CHF was examined for dogs with subclinical MMVD and CHF. Relationships with bodyweight were assessed. All estimates of LA function decreased with increasing ACVIM stage of mitral valve disease (p<0.05) and showed negative relationships with increasing LA size (all r2 values < 0.2), except for LA passive emptying fraction, which did not differ or correlate with LA size (p=0.4). However, no index of LA function identified CHF better than measurements of LA size. Total LA fractional emptying and expansion index showed modest negative correlations with bodyweight. Estimates of LA function worsen with worsening MMVD but fail to discriminate dogs with CHF from those with subclinical MMVD any better than simple estimates of LA size. Copyright © 2017 Elsevier B.V. All rights reserved.
Intra-class correlation estimates for assessment of vitamin A intake in children.
Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D
2005-03-01
In many community-based surveys, multi-level sampling is inherent in the design. In the design of these studies, especially to calculate the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and components of variances were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at village and block levels. Between-cluster variation was evident at each level of clustering. In these estimates, ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
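The reason ICC estimates matter for sample size is the design effect, DEFF = 1 + (m - 1) * ICC for clusters of size m. A minimal sketch, using the survey's 16-household cluster size but an illustrative ICC value:

```python
import math

# Design effect and inflated sample size from an ICC estimate.
# The ICC value 0.05 is illustrative, not one of the paper's estimates.
def design_effect(icc, cluster_size):
    """DEFF = 1 + (m - 1) * ICC for equal clusters of size m."""
    return 1.0 + (cluster_size - 1) * icc

def inflated_sample_size(n_srs, icc, cluster_size):
    """Sample size needed under clustering to match a simple random
    sample of size n_srs."""
    return math.ceil(n_srs * design_effect(icc, cluster_size))

print(design_effect(0.05, 16))            # 1.75
print(inflated_sample_size(400, 0.05, 16))  # 700
```

Even a modest ICC of 0.05 inflates the required sample by 75% at this cluster size, which is why the paper's block- and village-level ICC estimates are directly useful for survey design.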
NASA Astrophysics Data System (ADS)
Pandithurai, G.; Takamura, T.; Yamaguchi, J.; Miyagi, K.; Takano, T.; Ishizaka, Y.; Dipu, S.; Shimizu, A.
2009-07-01
The effect of increased aerosol concentrations on low-level, non-precipitating, ice-free stratus clouds is examined using a suite of surface-based remote sensing systems. Cloud droplet effective radius and liquid water path are retrieved using cloud radar and a microwave radiometer. Collocated measurements of aerosol scattering coefficient, size distribution, and cloud condensation nuclei (CCN) concentrations were used to examine the response of cloud droplet size and optical thickness to increased CCN proxies. During episodic increases in aerosol accumulation-mode volume distribution, a decrease in droplet size and an increase in cloud optical thickness are observed. Indirect effect (IE) estimates are made for both droplet effective radius and cloud optical thickness over different liquid water path ranges, ranging from 0.02 to 0.18 and from 0.005 to 0.154, respectively. Data are also categorized into thin and thick clouds based on cloud geometric thickness (Δz), and the estimates show that IE values are relatively higher for thicker clouds.
Grimm, Annegret; Gruber, Bernd; Henle, Klaus
2014-01-01
Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. 
Our approach identified suitable methods and extended options to evaluate the performance of mark-recapture population size estimators under field conditions, which is essential for selecting an appropriate method and obtaining reliable results in ecology and conservation biology, and thus for sound management.
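As a concrete member of the estimator family evaluated above, Chapman's bias-corrected form of the Lincoln-Petersen estimator can be sketched in a few lines. The capture counts here are made up for illustration, not drawn from the gecko data.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator for a two-sample
# closed-population study. Counts below are hypothetical.
def chapman_estimate(marked, captured, recaptured):
    """N = (M + 1)(C + 1)/(R + 1) - 1, which is less biased than the raw
    Lincoln-Petersen estimate M*C/R when recaptures are few."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# 50 animals marked in sample 1; 60 caught in sample 2, of which 20 were marked.
n_hat = chapman_estimate(marked=50, captured=60, recaptured=20)
print(round(n_hat, 1))  # 147.1
```

The multiple Lincoln-Petersen approach mentioned in the abstract extends this two-sample logic across repeated sampling occasions.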
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
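The SISR idea can be illustrated with a minimal bootstrap particle filter for a scalar population model. This is a sketch under assumed growth-rate, noise, and prior parameters, with a Gaussian count observation; it is not the authors' kernel-smoothed implementation.

```python
# Minimal sequential importance sampling/resampling (bootstrap) filter for a
# scalar state-space population model. All parameters are assumptions.
import math
import random

def simulate(T=25, n0=500.0, r=0.02, sd=0.05, seed=1):
    """Simulate T noisy population counts from a stochastic growth model."""
    rng = random.Random(seed)
    n, counts = n0, []
    for _ in range(T):
        n *= math.exp(rng.gauss(r, sd))      # log-normal growth step
        counts.append(rng.gauss(n, 0.1 * n))  # noisy count observation (10% CV)
    return counts

def particle_filter(counts, n_particles=2000, r=0.02, sd=0.05, seed=2):
    rng = random.Random(seed)
    parts = [rng.uniform(300, 800) for _ in range(n_particles)]  # vague prior
    estimates = []
    for y in counts:
        # propagate particles through the growth model
        parts = [p * math.exp(rng.gauss(r, sd)) for p in parts]
        # weight by the Gaussian observation likelihood, then normalize
        w = [math.exp(-0.5 * ((y - p) / (0.1 * p)) ** 2) / p for p in parts]
        tot = sum(w) or 1.0
        w = [x / tot for x in w]
        estimates.append(sum(p * x for p, x in zip(parts, w)))
        parts = rng.choices(parts, weights=w, k=n_particles)  # resample
    return estimates

obs = simulate()
est = particle_filter(obs)
```

Resampling combats weight degeneracy but depletes particle diversity, which is the problem the kernel smoothing in the paper addresses when parameters are estimated jointly with states.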
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated using an approach based on the concept of stratum variance as a function of sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. The proposed technique is shown to be viable and to perform satisfactorily when a conservative value for the field size and crop statistics at the small-political-subdivision level are used, as judged by comparing the estimated stratum variances with those obtained using LANDSAT data.
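Once stratum variances are in hand, the optimum (Neyman) allocation assigns sample units proportionally to N_h * S_h. A sketch with illustrative stratum sizes and standard deviations:

```python
# Neyman (optimum) allocation of a fixed total sample across strata,
# proportional to stratum size times stratum standard deviation.
# Stratum sizes and SDs below are illustrative, not survey values.
def neyman_allocation(n_total, sizes, sds):
    weights = [n * s for n, s in zip(sizes, sds)]
    tot = sum(weights)
    # rounding means the allocation may differ from n_total by a unit or two
    return [round(n_total * w / tot) for w in weights]

alloc = neyman_allocation(100, sizes=[200, 300, 500], sds=[4.0, 2.0, 1.0])
print(alloc)  # [42, 32, 26]: large, variable strata receive more samples
```

This is why good initial stratum-variance estimates matter: the allocation is only as optimal as the S_h fed into it.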
Rosenberg, Karen R; Zuné, Lü; Ruff, Christopher B
2006-03-07
The unusual discovery of associated cranial and postcranial elements from a single Middle Pleistocene fossil human allows us to calculate body proportions and relative cranial capacity (encephalization quotient) for that individual rather than rely on estimates based on sample means from unassociated specimens. The individual analyzed here (Jinniushan) from northeastern China at 260,000 years ago is the largest female specimen yet known in the human fossil record and has body proportions (body height relative to body breadth and relative limb length) typical of cold-adapted populations elsewhere in the world. Her encephalization quotient of 4.15 is similar to estimates for late Middle Pleistocene humans that are based on mean body size and mean brain size from unassociated specimens.
Schoenecker, Kathryn A.; Lubow, Bruce C.
2016-01-01
Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado State, USA in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. 
Our comparison of these alternative methods shows how the various components of our method contribute to improving the final estimate and why each is necessary.
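The double-observer component rests on the fact that two independent observers jointly detect a group with probability 1 - (1 - p1)(1 - p2). A sketch with hypothetical detection probabilities, not the elk survey's fitted values:

```python
# Combined detection probability for two independent observers, and the
# resulting correction of a raw count. Probabilities are illustrative.
def combined_detection(p1, p2):
    """Probability that at least one of two independent observers detects
    a group: the complement of both missing it."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def corrected_count(raw_count, p_detect):
    """Correct a raw count for undetected groups: N = n / p."""
    return raw_count / p_detect

p = combined_detection(0.80, 0.70)     # 0.94
print(round(corrected_count(470, p)))  # 500
```

This is the sense in which uncorrected raw counts are negatively biased: even two good observers together miss a predictable fraction of groups.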
Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric
2012-12-01
As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' lists of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures, the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8235-23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but likely require a larger sample than the 132 persons interviewed here to attain acceptable confidence levels for the estimate.
Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.
2009-01-01
A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1-2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0.98), but poor estimates of the short axes (r2 = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied, resulting in a total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
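The underlying intuition, that coarser sediment decorrelates over longer pixel lags, can be illustrated with a synthetic one-dimensional profile. This toy sketch is not the calibrated two-dimensional image method; the block-profile model and the 0.5 threshold are assumptions.

```python
# Toy illustration of the autocorrelation grain-size idea: the lag at which
# autocorrelation drops below a threshold scales with "grain" size.
import random

def profile(grain_px, n=4000, seed=0):
    """Synthetic intensity profile: each grain is a run of grain_px pixels."""
    rng = random.Random(seed)
    vals = []
    while len(vals) < n:
        vals.extend([rng.random()] * grain_px)
    return vals[:n]

def decorrelation_lag(x, threshold=0.5, max_lag=200):
    """First lag at which the normalized autocorrelation falls below threshold."""
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x)
    for lag in range(1, max_lag):
        c = sum((x[i] - m) * (x[i + lag] - m) for i in range(len(x) - lag)) / var
        if c < threshold:
            return lag
    return max_lag

fine_lag = decorrelation_lag(profile(5))
coarse_lag = decorrelation_lag(profile(20, seed=1))
print(fine_lag, coarse_lag)  # coarser grains decorrelate over longer lags
```

The published method works on calibrated 2-D photographs, but the same monotone lag-to-size relationship is what the linear site calibrations exploit.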
Altschuler, Justin; Margolius, David; Bodenheimer, Thomas; Grumbach, Kevin
2012-01-01
PURPOSE Primary care faces the dilemma of excessive patient panel sizes in an environment of a primary care physician shortage. We aimed to estimate primary care panel sizes under different models of task delegation to nonphysician members of the primary care team. METHODS We used published estimates of the time it takes for a primary care physician to provide preventive, chronic, and acute care for a panel of 2,500 patients, and modeled how panel sizes would change if portions of preventive and chronic care services were delegated to nonphysician team members. RESULTS Using 3 assumptions about the degree of task delegation that could be achieved (77%, 60%, and 50% of preventive care, and 47%, 30%, and 25% of chronic care), we estimated that a primary care team could reasonably care for a panel of 1,947, 1,523, or 1,387 patients. CONCLUSIONS If portions of preventive and chronic care services are delegated to nonphysician team members, primary care practices can provide recommended preventive and chronic care with panel sizes that are achievable with the available primary care workforce.
Caetano, Raul; Mills, Britain A; Harris, T Robert
2012-01-01
This study was conducted to examine discrepancies in alcohol consumption estimates between a self-reported standard quantity-frequency measure and an adjusted version based on respondents' typically used container size. Using a multistage cluster sample design, 5,224 Hispanic individuals 18 years of age and older were selected from the household population in five metropolitan areas of the United States: Miami, New York, Philadelphia, Houston, and Los Angeles. The survey-weighted response rate was 76%. Personal interviews lasting an average of 1 hour were conducted in respondents' homes in either English or Spanish. The overall effect of container adjustment was to increase estimates of ethanol consumption by 68% for women (range across Hispanic groups: 17%-99%) and 30% for men (range: 14%-42%). With the exception of female Cuban American, Mexican American, and South/Central American beer drinkers and male Cuban American wine drinkers, all percentage differences between unadjusted and container-adjusted estimates were positive. Container adjustments produced the largest change for volume of distilled spirits, followed by wine and beer. Container size adjustments generally produced larger percentage increases in consumption estimates for the higher volume drinkers, especially the upper tertile of female drinkers. Self-reported alcohol consumption based on standard drinks underreports consumption when compared with reports based on the amount of alcohol poured into commonly used containers.
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention as a way to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of a treatment's effectiveness. The sample mean and standard deviation are the two statistics most commonly used in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles instead. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the well-known method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
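One form such a smoothly weighted estimator can take, for the min/median/max scenario, combines the midrange and the median with a weight that shrinks as n grows. The specific weight below is our reading of one proposed estimator and should be treated as an assumption rather than the paper's definitive formula.

```python
# Sample-mean estimate from {min a, median m, max b, sample size n} with a
# smooth sample-size-dependent weight. The weight w = 4 / (4 + n**0.75) is
# an assumed form of the smoothly changing weight described in the abstract.
def mean_from_min_med_max(a, m, b, n):
    w = 4.0 / (4.0 + n ** 0.75)
    return w * (a + b) / 2.0 + (1.0 - w) * m

# As n grows, the estimate leans increasingly on the median, because the
# extremes of a large sample say little about the center.
print(round(mean_from_min_med_max(10, 50, 110, 16), 2))
print(round(mean_from_min_med_max(10, 50, 110, 400), 2))
```

Contrast this with a fixed rule such as (a + 2m + b)/4, which weights the extremes the same way regardless of sample size.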
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
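The four accuracy summaries can be written as simple plug-in statistics when every unit is sampled with equal probability. This sketch omits the design-based weighting of the paper's general two-stage estimators, and the proportions are made up.

```python
# Equal-weight plug-in versions of MD, MAD, RMSE, and CORR for map vs.
# reference land-cover proportions. Illustrative only; the paper derives
# design-based versions for two-stage cluster sampling.
import math

def accuracy_summaries(map_p, ref_p):
    d = [m - r for m, r in zip(map_p, ref_p)]
    n = len(d)
    md = sum(d) / n                                   # signed bias
    mad = sum(abs(x) for x in d) / n                  # typical absolute error
    rmse = math.sqrt(sum(x * x for x in d) / n)       # penalizes large errors
    mm, mr = sum(map_p) / n, sum(ref_p) / n
    cov = sum((m - mm) * (r - mr) for m, r in zip(map_p, ref_p))
    corr = cov / math.sqrt(sum((m - mm) ** 2 for m in map_p) *
                           sum((r - mr) ** 2 for r in ref_p))
    return md, mad, rmse, corr

md, mad, rmse, corr = accuracy_summaries([0.1, 0.4, 0.5], [0.2, 0.4, 0.4])
```

MD can be zero while MAD and RMSE are not, which is why all four summaries are reported: they answer different questions about composition error.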
Liu, Jingxia; Colditz, Graham A
2018-05-01
There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss a correlation structure commonly used in CRTs, the exchangeable structure, and derive simpler formulas for RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. Additionally, we also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
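For a continuous outcome under an exchangeable working correlation, a cluster of size m contributes information proportional to m / (1 + (m - 1)ρ), which gives a simple sketch of the RE computation. The cluster sizes and ρ below are illustrative, and this is the continuous-outcome special case only.

```python
# Relative efficiency of equal vs. unequal cluster sizes under an exchangeable
# working correlation, continuous-outcome sketch. Sizes and rho are assumed.
def cluster_information(sizes, rho):
    """Effective information: each cluster of size m contributes
    m / (1 + (m - 1) * rho)."""
    return sum(m / (1.0 + (m - 1.0) * rho) for m in sizes)

def relative_efficiency(unequal_sizes, rho):
    """Var(equal) / Var(unequal) holding the total number of subjects fixed;
    values below 1 indicate efficiency loss from unequal sizes."""
    n, k = sum(unequal_sizes), len(unequal_sizes)
    equal = [n / k] * k
    return cluster_information(unequal_sizes, rho) / cluster_information(equal, rho)

print(round(relative_efficiency([5, 10, 15, 30], 0.1), 3))  # 0.911
```

The adjusted sample size the paper proposes amounts to dividing the equal-cluster sample size by an RE of this kind.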
Improved gap size estimation for scaffolding algorithms.
Sahlin, Kristoffer; Street, Nathaniel; Lundeberg, Joakim; Arvestad, Lars
2012-09-01
One of the important steps of genome assembly is scaffolding, in which contigs are linked using information from read-pairs. Scaffolding provides estimates of the order, relative orientation, and distance between contigs. We have found that contig distance estimates are generally strongly biased and based on false assumptions. Since erroneous distance estimates can mislead subsequent analyses, it is important to provide unbiased estimation of contig distance. In this article, we show that state-of-the-art programs for scaffolding are using an incorrect model of gap size estimation. We discuss why current maximum likelihood estimators are biased and describe the different cases of bias we are facing. Furthermore, we provide a model for the distribution of reads that span a gap and derive the maximum likelihood equation for the gap length. We motivate why this estimate is sound and show empirically that it outperforms gap estimators in popular scaffolding programs. Our results have consequences for scaffolding software, structural variation detection, and library insert-size estimation as commonly performed by read aligners. A reference implementation is provided at https://github.com/SciLifeLab/gapest. Supplementary data are available at Bioinformatics online.
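The bias the authors describe can be demonstrated with a toy simulation: pairs observed to span a gap are a length-biased sample of the insert distribution, so subtracting the mean observed span from the library mean misses the true gap. All parameters below (library mean 500, SD 50, read length 100, gap 200) and the simplified placement geometry are assumptions, not the paper's model.

```python
# Toy demonstration of naive gap-size bias. Longer inserts have more valid
# placements that span the gap, so spanning pairs over-represent long inserts.
import random

rng = random.Random(0)
mu, sd, gap, rlen = 500.0, 50.0, 200.0, 100.0
spans = []
for _ in range(100000):
    ins = rng.gauss(mu, sd)
    d1 = rng.uniform(0.0, 600.0)   # left read's start, measured back from the gap
    # spanning requires both reads to land fully on their contigs
    if d1 >= rlen and ins >= d1 + gap + rlen:
        spans.append(ins - gap)    # the observable (on-contig) part of the insert
naive_gap = mu - sum(spans) / len(spans)
print(round(naive_gap, 1), "vs true gap", gap)  # naive estimate is biased low
```

The maximum likelihood estimator in the paper corrects exactly this conditioning: it models the distribution of inserts given that they span the gap, rather than using the unconditional library distribution.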
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
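Turning an average annual base flow into a recharge rate in inches per year over the basin is a unit conversion over the drainage area. A sketch with made-up station values, not figures from the Wisconsin study:

```python
# Recharge approximation: average annual base flow divided by drainage area,
# converted to inches per year. Station values below are hypothetical.
def recharge_inches_per_year(baseflow_cfs, drainage_area_mi2):
    seconds_per_year = 365.25 * 86400.0
    cubic_feet = baseflow_cfs * seconds_per_year       # annual base-flow volume
    area_ft2 = drainage_area_mi2 * 5280.0 ** 2         # basin area in ft^2
    return cubic_feet / area_ft2 * 12.0                # depth in ft -> inches

print(round(recharge_inches_per_year(35.0, 50.0), 1))  # about 9.5 in/yr
```

Small partial-record basins computed this way are what reveal the fine-scale spatial variability that larger gaged basins smooth out.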
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
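The step-size idea can be illustrated with a relaxed EM-type iteration for a two-component normal mixture, where ω = 1 recovers the standard successive-approximations update. This is an analogous over/under-relaxation sketch with fixed, known component variance, not the exact deflected-gradient procedure of the paper; data and starting values are illustrative.

```python
# Relaxed EM-type iteration for a two-component normal mixture with known,
# common sigma. omega = 1 is the standard EM step; the paper studies steps
# in (0, 2). All data and starting values are illustrative.
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_step(data, pi, mu1, mu2, sigma, omega=1.0):
    # E-step: posterior responsibility of component 1 for each point
    resp = []
    for x in data:
        a = pi * normal_pdf(x, mu1, sigma)
        b = (1.0 - pi) * normal_pdf(x, mu2, sigma)
        resp.append(a / (a + b))
    # M-step targets (the omega = 1 update), then a step of length omega
    s = sum(resp)
    pi_t = s / len(data)
    mu1_t = sum(r * x for r, x in zip(resp, data)) / s
    mu2_t = sum((1.0 - r) * x for r, x in zip(resp, data)) / (len(data) - s)
    step = lambda old, new: old + omega * (new - old)
    return step(pi, pi_t), step(mu1, mu1_t), step(mu2, mu2_t)

rng = random.Random(3)
data = ([rng.gauss(-2, 1) for _ in range(300)] +
        [rng.gauss(2, 1) for _ in range(300)])
pi, mu1, mu2 = 0.5, -1.0, 1.0
for _ in range(50):
    pi, mu1, mu2 = em_step(data, pi, mu1, mu2, sigma=1.0, omega=1.0)
```

With well-separated components like these, ω near the upper end of (0, 2) accelerates convergence, matching the paper's observation that the optimal step size depends on component separation.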
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
[Calculating the optimum size of a hemodialysis unit based on infrastructure potential].
Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis
2010-01-01
To estimate the optimum size for hemodialysis units to maximize production given capital constraints. A national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimal-size unit should have 16 dialyzers (15 active and one backup) and a water purification system able to supply all of them. It also requires one nephrologist and five nurses per shift, with four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production because they do not fully utilize equipment and personnel, particularly their water purifier capacity, which happens to be the most expensive asset for these units.
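The staffing and equipment figures above imply a straightforward capacity calculation. A sketch, assuming (these assumptions go beyond the abstract) one session per dialyzer per shift and three sessions per patient per week:

```python
# Throughput arithmetic for the optimal unit described above: 15 active
# dialyzers plus 1 backup, 4 shifts per day. The one-session-per-dialyzer-
# per-shift and 3-sessions-per-patient-per-week figures are illustrative
# assumptions, not values from the study.
ACTIVE_DIALYZERS = 15
SHIFTS_PER_DAY = 4
SESSIONS_PER_PATIENT_PER_WEEK = 3

sessions_per_week = ACTIVE_DIALYZERS * SHIFTS_PER_DAY * 7             # 420
patients_served = sessions_per_week // SESSIONS_PER_PATIENT_PER_WEEK  # 140
print(sessions_per_week, patients_served)  # prints 420 140
```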
Minimum area requirements for an at-risk butterfly based on movement and demography.
Brown, Leone M; Crone, Elizabeth E
2016-02-01
Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
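The diffusion-based CMP argument is in the spirit of the classic KISS/Skellam result, in which individuals in small patches reach the edge and leave before reproducing; for a one-dimensional patch with absorbing edges the critical length is L = pi * sqrt(D / r). A sketch with purely hypothetical D and r values (the authors' actual model and parameter estimates may differ):

```python
import math

def critical_patch_length(D: float, r: float) -> float:
    """Classic KISS result for a 1-D habitat with absorbing edges: a
    population with diffusion coefficient D and intrinsic growth rate r
    persists only if patch length L > pi * sqrt(D / r)."""
    return math.pi * math.sqrt(D / r)

# Hypothetical values, with length in units of 100 m (so L^2 is in hectares
# for a square patch); not estimates from the checkerspot study.
L = critical_patch_length(D=0.05, r=0.5)
print(round(L, 2), round(L * L, 2))  # critical side length and square-patch area
```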
The international food unit: a new measurement aid that can improve portion size estimation.
Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M
2017-09-12
Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by inconsistencies in the measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4 × 4 × 4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments, and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) in which participants estimated the volumes of 17 foods using one of four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup, or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < .001). The volume estimations were most accurate in the group using the IFU™ cube (Mdn = 18.9%, IQR = 50.2) and least accurate using the measuring cup (Mdn = 87.7%, IQR = 56.1). The modelling clay cube led to a median error of 44.8% (IQR = 41.9). Compared with the measuring cup, the estimation errors using the IFU™ were significantly smaller for 12 food portions and similar for 5 food portions. Weight estimation was associated with a median error of 23.5% (IQR = 79.8). The IFU™ improves volume estimation accuracy compared to other methods.
The cubic shape was perceived as favourable, with subdivision and multiplication facilitating volume estimation. Further studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.
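The accuracy comparison above is based on median percentage errors of volume estimates. A sketch of that summary statistic with made-up volumes (none of these numbers come from the study):

```python
from statistics import median

def median_pct_error(estimates, true_volumes):
    """Median absolute percentage error, the accuracy summary used above."""
    errors = [abs(e - t) / t * 100 for e, t in zip(estimates, true_volumes)]
    return median(errors)

# Hypothetical: five foods with true volumes (cm^3) and one subject's estimates
true_vols = [64, 128, 200, 96, 350]
ifu_estimates = [64, 112, 240, 80, 300]
print(round(median_pct_error(ifu_estimates, true_vols), 1))  # prints 14.3
```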
Conducting a 3D Converted Shear Wave Project to Reduce Exploration Risk at Wister, CA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matlick, Skip; Walsh, Patrick; Rhodes, Greg
2015-06-30
Ormat sited 2 full-size exploration wells based on 3D seismic interpretation of fractures, prior drilling results, and a temperature anomaly. The wells indicated commercial temperatures (>300 °F) but almost no permeability, despite one of the wells being drilled within 820 ft of an older exploration well with reported indications of permeability. Following completion of the second well in 2012, Ormat undertook a lengthy program to 1) evaluate the lack of observed permeability, 2) estimate the likelihood of finding permeability with additional drilling, and 3) estimate resource size based on an anticipated extent of permeability.
75 FR 8955 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-26
... changes are a result of findings from the Pretest that showed high numbers of Spanish speakers at... annualized burden. The Pretest has already been conducted and the estimates of burden for the interview in the Main Study are based on results from the Pretest. Based on our new sample size estimates adjusted...
Estimation of Reineke and Volume-Based Maximum Size-Density Lines For Shortleaf Pine
Thomas B. Lynch; Robert F. Wittwer; Douglas J. Stevenson
2004-01-01
Maximum size-density relationships for Reineke's stand density index as well as for a relationship based on average tree volume were fitted to data from more than a decade of annual remeasurements of plots in unthinned, naturally occurring shortleaf pine in southeastern Oklahoma. Reineke's stand density index is based on a maximum line of the form log(N) = a...
NASA Astrophysics Data System (ADS)
Tsutsumi, Morito; Seya, Hajime
2009-12-01
This study discusses the theoretical foundation of the application of spatial hedonic approaches—the hedonic approach employing spatial econometrics or/and spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses by applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are conducted based on both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between isotropic and anisotropic SPM and between isotropic SPM and SAEM; the estimated benefits are similar for SAEM and anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.
Radar volume reflectivity estimation using an array of ground-based rainfall drop size detectors
NASA Astrophysics Data System (ADS)
Lane, John; Merceret, Francis; Kasparis, Takis; Roy, D.; Muller, Brad; Jones, W. Linwood
2000-08-01
Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained by these single-point measurements has historically been important in the study of applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that only gravity and terminal velocity due to atmospheric drag within the sampling volume influence hydrometeor dynamics. The sampling volume is characterized by wind velocities, which are input parameters to the 3D-DSD model, composed of vertical and horizontal components. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
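Once a drop size distribution has been estimated for the sampling volume, the radar reflectivity factor follows from the standard sixth-moment sum Z = sum(N(D_i) * D_i^6). A sketch with an illustrative binned DSD (the bin values are invented, not from the Melbourne data):

```python
import math

def reflectivity_dbz(counts_per_m3, diameters_mm):
    """Radar reflectivity factor Z = sum(N_i * D_i^6) in mm^6 m^-3 from a
    binned drop size distribution, converted to dBZ (relative to 1 mm^6 m^-3)."""
    z = sum(n * d ** 6 for n, d in zip(counts_per_m3, diameters_mm))
    return 10.0 * math.log10(z)

# Illustrative DSD bins: drop counts per m^3 at bin-center diameters in mm
counts = [300.0, 120.0, 30.0, 5.0]
diams = [0.5, 1.0, 2.0, 3.0]
print(round(reflectivity_dbz(counts, diams), 1))
```

Note how the D^6 weighting makes the few large drops dominate Z, which is why volume-scale DSD estimates matter for Z-R work.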
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Beaty, Lynne E; Salice, Christopher J
2013-10-01
Invasive species are costly and difficult to control. In order to gain a mechanistic understanding of potential control measures, individual-based models uniquely parameterized to reflect the salient life-history characteristics of invasive species are useful. Using invasive Australian Rhinella marina as a case study, we constructed a cohort- and individual-based population simulation that incorporates growth and body size of terrestrial stages. We used this allometric approach to examine the efficacy of nontraditional control methods (i.e., tadpole alarm chemicals and native meat ants) that may have indirect effects on population dynamics mediated by effects on body size. We compared population estimates resulting from these control methods with traditional hand removal. We also conducted a sensitivity analysis to investigate the effect that model parameters, specifically those associated with growth and body size, had on adult population estimates. Incremental increases in hand removal of adults and juveniles caused nonlinear decreases in adult population estimates, suggesting less return with increased investment in hand-removal efforts. Applying tadpole alarm chemicals or meat ants decreased adult population estimates on the same level as removing 15-25% of adults and juveniles by hand. The combined application of tadpole alarm chemicals and meat ants resulted in approximately 80% decrease in adult abundance, the largest of any applied control method. In further support of the nontraditional control methods, which greatly affected the metamorph stage, our model was most sensitive to changes in metamorph survival, juvenile survival, metamorph growth rate, and adult survival. Our results highlight the use and insights that can be gained from individual-based models that incorporate growth and body size and the potential success that nontraditional control methods could have in controlling established, invasive Rhinella marina populations.
Assessing relative abundance and reproductive success of shrubsteppe raptors
Lehman, Robert N.; Carpenter, L.B.; Steenhof, Karen; Kochert, Michael N.
1998-01-01
From 1991-1994, we quantified relative abundance and reproductive success of the Ferruginous Hawk (Buteo regalis), Northern Harrier (Circus cyaneus), Burrowing Owl (Speotytoc unicularia), and Short-eared Owl (Asio flammeus) on the shrubsteppe plateaus (benchlands) in and near the Snake River Birds of Prey National Conservation Area in southwestern Idaho. To assess relative abundance, we searched randomly selected plots using four sampling methods: point counts, line transects, and quadrats of two sizes. On a persampling-effort basis, transects were slightly more effective than point counts and quadrats for locating raptor nests (3.4 pairs detected/100 h of effort vs. 2.2-3.1 pairs). Random sampling using quadrats failed to detect a Short-eared Owl population increase from 1993 to 1994. To evaluate nesting success, we tried to determine reproductive outcome for all nesting attempts located during random, historical, and incidental nest searches. We compared nesting success estimates based on all nesting attempts, on attempts found during incubation, and the Mayfield model. Most pairs used to evaluate success were pairs found incidentally. Visits to historical nesting areas yielded the highest number of pairs per sampling effort (14.6/100 h), but reoccupancy rates for most species decreased through time. Estimates based on all attempts had the highest sample sizes but probably overestimated success for all species except the Ferruginous Hawk. Estimates of success based on nesting attempts found during incubation had the lowest sample sizes. All three methods yielded biased nesting snccess estimates for the Northern Harrier and Short-eared Owl. The estimate based on pairs found during incubation probably provided the least biased estimate for the Burrowing Owl. Assessments of nesting success were hindered by difficulties in confirming egg laying and nesting success for all species except the Ferruginous hawk.
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background: In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables, and based on its maximum likelihood estimation, we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods: We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results: For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified confidence interval width are novel contributions to the literature for the binary variable. Conclusion: Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
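For the one-way random-effects layout described here, the within-subject coefficient of variation is the square root of the pooled within-subject variance divided by the overall mean. A sketch with hypothetical repeated measurements (not the MRI data from the paper):

```python
from statistics import mean, variance

def within_subject_cv(measurements):
    """Within-subject coefficient of variation for a one-way random-effects
    layout: sqrt(pooled within-subject variance) / grand mean.
    `measurements` is a list of per-subject lists of repeated measurements
    (balanced design assumed for this simple pooling)."""
    within_var = mean(variance(subj) for subj in measurements)
    grand_mean = mean(x for subj in measurements for x in subj)
    return (within_var ** 0.5) / grand_mean

# Hypothetical data: 4 subjects, 3 repeated measurements each
data = [[10.1, 9.8, 10.3], [12.0, 11.6, 11.9], [9.5, 9.9, 9.7], [11.2, 11.0, 11.4]]
print(round(within_subject_cv(data), 3))  # prints 0.02
```

A small value indicates that repeated measurements on the same subject vary little relative to the overall scale, i.e., good reliability.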
ERIC Educational Resources Information Center
Chen, Mo; Hyppa-Martin, Jolene K.; Reichle, Joe E.; Symons, Frank J.
2016-01-01
Meaningfully synthesizing single case experimental data from intervention studies comprised of individuals with low incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with…
Comparison of methods for estimating the attributable risk in the context of survival analysis.
Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M
2017-01-23
The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating the AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for the nonparametric methods, especially at the end of follow-up for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort.
In practice, our study suggests using the semiparametric or parametric approach to estimate AR as a function of time in cohort studies when the proportional hazards assumption appears appropriate.
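The "simpler method" compared above combines baseline exposure prevalence with the Cox hazard ratio in a Levin-type formula. A sketch with illustrative numbers (not the E3N estimates):

```python
def attributable_risk(prevalence: float, hazard_ratio: float) -> float:
    """Levin-type attributable risk from baseline exposure prevalence p and
    the Cox model hazard ratio HR: AR = p*(HR - 1) / (1 + p*(HR - 1)).
    This is the 'simpler method' in the comparison above; the survival-based
    definitions it is compared against are time-dependent."""
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)

# Illustrative numbers: 30% of the population exposed, HR = 1.5
print(round(attributable_risk(0.30, 1.5), 3))  # prints 0.13
```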
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.
2009-01-01
Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.
Energy Content Estimation by Collegians for Portion Standardized Foods Frequently Consumed in Korea
Kim, Jin; Lee, Hee Jung; Lee, Hyun Jung; Lee, Sun Ha; Yun, Jee-Young; Choi, Mi-Kyeong
2014-01-01
The purpose of this study is to estimate Korean collegians' knowledge of energy content in the standard portion size of foods frequently consumed in Korea and to investigate the differences in knowledge between gender groups. A total of 600 collegians participated in this study. Participants' knowledge was assessed based on their estimation on the energy content of 30 selected food items with their actual-size photo images. Standard portion size of food was based on 2010 Korean Dietary Reference Intakes, and the percentage of participants who accurately estimated (that is, within 20% of the true value) the energy content of the standard portion size was calculated for each food item. The food for which the most participants provided the accurate estimation was ramyun (instant noodles) (67.7%), followed by cooked rice (57.8%). The proportion of students who overestimated the energy content was highest for vegetables (68.8%) and beverages (68.1%). The proportion of students who underestimated the energy content was highest for grains and starches (42.0%) and fruits (37.1%). Female students were more likely to check energy content of foods that they consumed than male students. From these results, it was concluded that the knowledge on food energy content was poor among collegians, with some gender difference. Therefore, in the future, nutrition education programs should give greater attention to improving knowledge on calorie content and to helping them apply this knowledge in order to develop effective dietary plans. PMID:24527417
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency below the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, can estimate the spot size itself. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications, helping such systems optimize their focusing performance.
Sobel Leonard, Ashley; Weissman, Daniel B; Greenbaum, Benjamin; Ghedin, Elodie; Koelle, Katia
2017-07-15
The bottleneck governing infectious disease transmission describes the size of the pathogen population transferred from the donor to the recipient host. Accurate quantification of the bottleneck size is particularly important for rapidly evolving pathogens such as influenza virus, as narrow bottlenecks reduce the amount of transferred viral genetic diversity and, thus, may decrease the rate of viral adaptation. Previous studies have estimated bottleneck sizes governing viral transmission by using statistical analyses of variants identified in pathogen sequencing data. These analyses, however, did not account for variant calling thresholds and stochastic viral replication dynamics within recipient hosts. Because these factors can skew bottleneck size estimates, we introduce a new method for inferring bottleneck sizes that accounts for these factors. Through the use of a simulated data set, we first show that our method, based on beta-binomial sampling, accurately recovers transmission bottleneck sizes, whereas other methods fail to do so. We then apply our method to a data set of influenza A virus (IAV) infections for which viral deep-sequencing data from transmission pairs are available. We find that the IAV transmission bottleneck size estimates in this study are highly variable across transmission pairs, while the mean bottleneck size of 196 virions is consistent with a previous estimate for this data set. Furthermore, regression analysis shows a positive association between estimated bottleneck size and donor infection severity, as measured by temperature. These results support findings from experimental transmission studies showing that bottleneck sizes across transmission events can be variable and influenced in part by epidemiological factors. IMPORTANCE The transmission bottleneck size describes the size of the pathogen population transferred from the donor to the recipient host and may affect the rate of pathogen adaptation within host populations. 
Recent advances in sequencing technology have enabled bottleneck size estimation from pathogen genetic data, although the statistical methods used have not yet been standardized. Here, we introduce a new approach to infer the bottleneck size that accounts for variant identification protocols and noise during pathogen replication. We show that failing to account for these factors leads to an underestimation of bottleneck sizes. We apply this method to an existing data set of human influenza virus infections, showing that transmission is governed by a loose, but highly variable, transmission bottleneck whose size is positively associated with the severity of infection of the donor. Beyond advancing our understanding of influenza virus transmission, we hope that this work will provide a standardized statistical approach for bottleneck size estimation for viral pathogens. Copyright © 2017 Sobel Leonard et al.
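The inference described above can be caricatured as follows: the number of variant virions among the N founders is binomial in the donor frequency, and recipient read counts are then sampled around the resulting founder frequency. This simplified compound-binomial sketch (the published method uses beta-binomial sampling and models variant-calling thresholds and replication stochasticity, all omitted here) scans candidate bottleneck sizes for a single variant with invented read counts:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def transmission_log_lik(n_bottleneck, donor_freq, var_reads, total_reads):
    """Log-likelihood of observing `var_reads` variant reads out of
    `total_reads` in the recipient, given the donor variant frequency and a
    bottleneck of `n_bottleneck` virions. Compound-binomial simplification."""
    lik = 0.0
    for k in range(n_bottleneck + 1):  # k = variant virions among the founders
        founder_freq = k / n_bottleneck
        lik += binom_pmf(k, n_bottleneck, donor_freq) * \
               binom_pmf(var_reads, total_reads, founder_freq)
    return math.log(lik) if lik > 0 else float("-inf")

# Toy scan over bottleneck sizes for one variant (hypothetical read counts:
# donor frequency 0.4, recipient shows 35 variant reads out of 100)
best = max(range(1, 51), key=lambda n: transmission_log_lik(n, 0.4, 35, 100))
print(best)
```

In a real analysis the likelihood would be a product over many variant sites, which is what makes narrow versus loose bottlenecks statistically distinguishable.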
Calleja, Jesus Maria Garcia; Zhao, Jinkou; Reddy, Amala; Seguy, Nicole
2014-01-01
Problem: Size estimates of key populations at higher risk of HIV exposure are recognized as critical for understanding the trajectory of the HIV epidemic and planning and monitoring an effective response, especially for countries with concentrated and low epidemics such as those in Asia. Context: To help countries estimate population sizes of key populations, global guidelines were updated in 2011 to reflect new technical developments and recent field experiences in applying these methods. Action: In September 2013, a meeting of programme managers and experts experienced with population size estimates (PSE) for key populations was held for 13 Asian countries. This article summarizes the key results presented, shares practical lessons learnt and reviews the methodological approaches from implementing PSE in 13 countries. Lessons learnt: It is important to build capacity to collect, analyse and use PSE data; establish a technical review group; and implement a transparent, well documented process. Countries should adapt global PSE guidelines and maintain operational definitions that are more relevant and useable for country programmes. Development of methods for non-venue-based key populations requires more investment and collaborative efforts between countries and among partners. PMID:25320676
On soil textural classifications and soil-texture-based estimations
NASA Astrophysics Data System (ADS)
Ángel Martín, Miguel; Pachepsky, Yakov A.; García-Gutiérrez, Carlos; Reyes, Miguel
2018-02-01
The soil texture representation with the standard textural fraction triplet sand-silt-clay is commonly used to estimate soil properties. The objective of this work was to test the hypothesis that other fraction sizes in the triplets may provide a better representation of soil texture for estimating some soil parameters. We estimated the cumulative particle size distribution and bulk density from an entropy-based representation of the textural triplet with experimental data for 6240 soil samples. The results supported the hypothesis. For example, simulated distributions were not significantly different from the original ones in 25 and 85% of cases when the sand-silt-clay triplet and the (very coarse + coarse + medium sand)-(fine + very fine sand)-(silt + clay) triplet were used, respectively. When the same standard and modified triplets were used to estimate the average bulk density, the coefficients of determination were 0.001 and 0.967, respectively. Overall, the textural triplet selection appears to be application and data specific.
Warren, L.P.; Church, P.E.; Turtora, Michael
1996-01-01
Hydraulic conductivities of a sand and gravel aquifer were estimated by three methods: constant-head multiport-permeameter tests, grain-size analyses (with the Hazen approximation method), and slug tests. Sediment cores from 45 boreholes were undivided or divided into two or three vertical sections to estimate hydraulic conductivity based on permeameter tests and grain-size analyses. The cores were collected from depth intervals in the screened zone of the aquifer in each observation well. Slug tests were performed on 29 observation wells installed in the boreholes. Hydraulic conductivities of 35 sediment cores estimated by use of permeameter tests ranged from 0.9 to 86 meters per day, with a mean of 22.8 meters per day. Hydraulic conductivities of 45 sediment cores estimated by use of grain-size analyses ranged from 0.5 to 206 meters per day, with a mean of 40.7 meters per day. Hydraulic conductivities of aquifer material at 29 observation wells estimated by use of slug tests ranged from 0.6 to 79 meters per day, with a mean of 32.9 meters per day. The repeatability of the estimated hydraulic conductivities was estimated to be within 30 percent for the permeameter method, 12 percent for the grain-size method, and 9.5 percent for the slug test method. Statistical tests determined that the medians of estimates resulting from the slug tests and grain-size analyses were not significantly different but were significantly higher than the median of estimates resulting from the permeameter tests. Because the permeameter test is the only method considered which estimates vertical hydraulic conductivity, the difference in estimates may be attributed to vertical or horizontal anisotropy. The difference in the average hydraulic conductivities estimated by use of each method was less than 55 percent when compared to the estimated hydraulic conductivity determined from an aquifer test conducted near the study area.
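Of the three methods, the grain-size approach uses the Hazen approximation, which relates hydraulic conductivity to the effective grain size d10 (the sieve size passing 10% of the sample by weight). A sketch in which the d10 value and coefficient are illustrative, not from the study:

```python
def hazen_k(d10_mm: float, c: float = 1.0) -> float:
    """Hazen approximation: K (cm/s) ~= C * d10^2, with d10 in mm and C an
    empirical coefficient (commonly quoted around 0.4-1.2 for clean sands)."""
    return c * d10_mm ** 2

CM_PER_S_TO_M_PER_DAY = 864.0  # 1 cm/s = 0.01 m/s * 86400 s/day

# Illustrative example: medium sand with d10 = 0.2 mm
k_cm_s = hazen_k(0.2)
print(round(k_cm_s * CM_PER_S_TO_M_PER_DAY, 1))  # prints 34.6 (m/day)
```

The result falls inside the 0.5-206 m/day range reported above for the grain-size method, though it is not one of the study's estimates.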
Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of the different estimators is assessed in terms of total mean square error and total relative deviation. The modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic provides more efficient and precise parameter estimates than the other estimators considered. The simulation results were further confirmed using two real-life examples where maximum likelihood and moment estimators were also considered.
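A minimal sketch of the traditional percentile approach that the modified estimators build on, assuming the standard Pareto CDF F(x) = 1 - (xm/x)^a; the quartile choice and the simulation settings are illustrative and not the article's:

```python
import math
import random

def pareto_percentile_fit(x, p1=0.25, p2=0.75):
    """Traditional percentile estimator for Pareto(shape a, scale xm).

    Solves the CDF F(x) = 1 - (xm/x)**a at two sample percentiles:
    a  = ln((1-p1)/(1-p2)) / ln(q2/q1),  xm = q1 * (1-p1)**(1/a).
    Illustrative only; the article's modified estimators replace the
    percentile points with median / geometric-mean based quantities.
    """
    xs = sorted(x)
    q1 = xs[int(p1 * len(xs))]
    q2 = xs[int(p2 * len(xs))]
    a = math.log((1 - p1) / (1 - p2)) / math.log(q2 / q1)
    xm = q1 * (1 - p1) ** (1 / a)
    return a, xm

# Simulate a Pareto(shape=3, scale=2) sample by inverse-CDF sampling.
random.seed(1)
xm_true, a_true = 2.0, 3.0
sample = [xm_true / (1 - random.random()) ** (1 / a_true) for _ in range(5000)]
a_hat, xm_hat = pareto_percentile_fit(sample)  # close to (3, 2)
```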
Is there a single best estimator? Selection of home range estimators using area-under-the-curve
Walter, W. David; Onorato, Dave P.; Fischer, Justin W.
2015-01-01
Comparisons of the fit of home range contours with collected locations suggest that VHF technology is not as accurate as GPS technology for estimating home range size in large mammals. Home range estimators applied to GPS data performed better than those applied to VHF data, regardless of the estimator used. Furthermore, estimators that incorporate a temporal component (third-generation estimators) appeared to be the most reliable, whether kernel-based or Brownian bridge-based algorithms were used, in comparison to first- and second-generation estimators. We defined third-generation estimators of home range as any estimator that incorporates time, space, animal-specific parameters, and habitat. Such estimators include movement-based kernel density, Brownian bridge movement models, and dynamic Brownian bridge movement models, among others that have yet to be evaluated.
Effective population size of Korean populations.
Park, Leeyoung
2014-12-01
Recently, new methods have been developed for estimating current and recent changes in effective population sizes. Based on these methods, the effective population sizes of Korean populations were estimated using data from the Korean Association Resource (KARE) project. The overall changes in population size were similar to those of CHB (Han Chinese in Beijing, China) and JPT (Japanese in Tokyo, Japan) of the HapMap project. There were no differences in past population size changes between an urban area and a rural area. Age-dependent current and recent effective population sizes reflect the modern history of Korean populations, including the effects of World War II, the Korean War, and urbanization. The oldest age group showed that the population growth of Koreans had already been substantial at least since the end of the 19th century.
Estimation of methacrylate monolith binding capacity from pressure drop data.
Podgornik, Aleš; Smrekar, Vida; Krajnc, Peter; Strancar, Aleš
2013-01-11
Convective chromatographic media comprising membranes and monoliths represent an important group of chromatographic supports owing to their flow-unaffected chromatographic properties and consequently fast separation and purification even of large biological macromolecules. Because a monolith consists of a single piece of material, common characterization procedures based on analysis of a small sample assumed to be representative of the entire batch cannot be applied; non-invasive characterization methods are therefore preferred. In this work, pressure drop was investigated for estimating the dynamic binding capacity (DBC) of proteins and plasmid DNA for monoliths with different pore sizes. It was demonstrated that methacrylate monolith surface area is inversely proportional to pore diameter and that the pressure drop across a monolith is inversely proportional to the square of the pore size, indicating that the methacrylate monolith microstructure is preserved as pore size changes. Based on these facts, a mathematical formalism was derived predicting that DBC is linearly correlated with the square root of the pressure drop. This was experimentally confirmed for ion-exchange and hydrophobic interactions for proteins and plasmid DNA. Furthermore, pressure drop was also applied to estimate DBC in grafted layers of different thicknesses, as estimated from the pressure drop data. It was demonstrated that the capacity is proportional to the estimated grafted-layer thickness. Copyright © 2012 Elsevier B.V. All rights reserved.
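The predicted linear DBC versus square-root-of-pressure-drop correlation can be illustrated with a small least-squares fit through the origin; the (pressure drop, DBC) pairs below are hypothetical numbers, not measurements from the article:

```python
import math

# Hypothetical (pressure drop [bar], DBC [mg/mL]) pairs for monoliths of
# different pore sizes; illustrative values only.
data = [(0.5, 10.0), (1.0, 14.4), (2.0, 20.1), (4.0, 28.6)]

# Fit DBC = a * sqrt(dP) through the origin by least squares:
# a = sum(DBC_i * sqrt(dP_i)) / sum(dP_i), since (sqrt(dP))**2 = dP.
num = sum(dbc * math.sqrt(dp) for dp, dbc in data)
den = sum(dp for dp, _ in data)
a = num / den
predicted = [a * math.sqrt(dp) for dp, _ in data]
```

A near-constant ratio of DBC to the square root of the pressure drop across pore sizes is what the derived formalism predicts.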
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
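The reported sample sizes follow from the success-run theorem, n = ln(1 - C)/ln(R), at 95% confidence; the sketch below reproduces the three risk levels:

```python
import math

def success_run_n(reliability, confidence=0.95):
    """Sample size from the success-run theorem: n = ln(1-C)/ln(R).

    All n sampled units must pass (zero failures) to claim
    "reliability >= R with confidence C".
    """
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Reproduces the high/medium/low risk sample sizes in the abstract.
print([success_run_n(r) for r in (0.99, 0.95, 0.90)])  # [299, 59, 29]
```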
Accuracy of Standing-Tree Volume Estimates Based on McClure Mirror Caliper Measurements
Noel D. Cost
1971-01-01
The accuracy of standing-tree volume estimates, calculated from diameter measurements taken by a mirror caliper and with sectional aluminum poles for height control, was compared with volume estimates calculated from felled-tree measurements. Twenty-five trees which varied in species, size, and form were used in the test. The results showed that two estimates of total...
Post-stratified estimation of forest area and growing stock volume using lidar-based stratifications
Ronald E. McRoberts; Terje Gobakken; Erik Næsset
2012-01-01
National forest inventories report estimates of parameters related to forest area and growing stock volume for geographic areas ranging in size from municipalities to entire countries. Landsat imagery has been shown to be a source of auxiliary information that can be used with stratified estimation to increase the precision of estimates, although the increase is...
ERIC Educational Resources Information Center
Lafferty, Mark T.
2010-01-01
The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…
Gear fatigue crack prognosis using embedded model, gear dynamic model and fracture mechanics
NASA Astrophysics Data System (ADS)
Li, C. James; Lee, Hyungdae
2005-07-01
This paper presents a model-based method that predicts the remaining useful life of a gear with a fatigue crack. The method consists of an embedded model to identify gear meshing stiffness from measured gear torsional vibration; an inverse method to estimate crack size from the estimated meshing stiffness; a gear dynamic model to simulate gear meshing dynamics and determine the dynamic load on the cracked tooth; and a fast crack propagation model to forecast the remaining useful life based on the estimated crack size and dynamic load. The fast crack propagation model was established to avoid repeated FEM calculations and to facilitate field deployment of the proposed method. Experimental studies were conducted to validate and demonstrate the feasibility of the proposed method for prognosis of a cracked gear.
Development of a Frequency-based Measure of Syntactic Difficulty for Estimating Readability.
ERIC Educational Resources Information Center
Selden, Ramsay
Readability estimates are usually based on measures of word difficulty and measures of sentence difficulty. Word difficulty is measured in two ways: by the structural size and complexity of words or by reference to phenomena of language use, such as word-list frequency or the regularity of spelling patterns. Sentence difficulty is measured only in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, K; Bostani, M; Cagnon, C
Purpose: AAPM Task Group 204 described size specific dose estimates (SSDE) for body scans. The purpose of this work is to use a similar approach to develop patient-specific, scanner-independent organ dose estimates for head CT exams using an attenuation-based size metric. Methods: For eight patient models from the GSF family of voxelized phantoms, dose to brain and lens of the eye was estimated using Monte Carlo simulations of contiguous axial scans for 64-slice MDCT scanners from four major manufacturers. Organ doses were normalized by scanner-specific 16 cm CTDIvol values and averaged across all scanners to obtain scanner-independent CTDIvol-to-organ-dose conversion coefficients for each patient model. Head size was measured at the first slice superior to the eyes; patient perimeter and effective diameter (ED) were measured directly from the GSF data. Because the GSF models use organ identification codes instead of Hounsfield units, water equivalent diameter (WED) was estimated indirectly. Using the image data from 42 patients ranging from 2 weeks old to adult, the perimeter, ED and WED size metrics were obtained and correlations between each metric were established. Applying these correlations to the GSF perimeter and ED measurements, WED was calculated for each model. The relationship between the various patient size metrics and CTDIvol-to-organ-dose conversion coefficients was then described. Results: The analysis of patient images demonstrated the correlation between WED and ED across a wide range of patient sizes. When applied to the GSF patient models, an exponential relationship between CTDIvol-to-organ-dose conversion coefficients and the WED size metric was observed with correlation coefficients of 0.93 and 0.77 for the brain and lens of the eye, respectively. Conclusion: Strong correlation exists between CTDIvol normalized brain dose and WED. For the lens of the eye, a lower correlation is observed, primarily due to surface dose variations.
Funding Support: Siemens-UCLA Radiology Master Research Agreement; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski.
Labra, Fabio A; Hernández-Miranda, Eduardo; Quiñones, Renato A
2015-01-01
We study the temporal variation in the empirical relationships among body size (S), species richness (R), and abundance (A) in a shallow marine epibenthic faunal community in Coliumo Bay, Chile. We also extend previous analyses by calculating individual energy use (E) and test whether its bivariate and trivariate relationships with S and R are in agreement with expectations derived from the energetic equivalence rule. Carnivorous and scavenger species representing over 95% of sample abundance and biomass were studied. For each individual, body size (g) was measured and E was estimated following published allometric relationships. Data for each sample were tabulated into exponential body size bins, comparing species-averaged values with individual-based estimates which allow species to potentially occupy multiple size classes. For individual-based data, both the number of individuals and species across body size classes are fit by a Weibull function rather than by a power law scaling. Species richness is also a power law of the number of individuals. Energy use shows a piecewise scaling relationship with body size, with energetic equivalence holding true only for size classes above the modal abundance class. Species-based data showed either weak linear or no significant patterns, likely due to the decrease in the number of data points across body size classes. Hence, for individual-based size spectra, the SRA relationship seems to be general despite seasonal forcing and strong disturbances in Coliumo Bay. The unimodal abundance distribution results in a piecewise energy scaling relationship, with small individuals showing a positive scaling and large individuals showing energetic equivalence. Hence, strict energetic equivalence should not be expected for unimodal abundance distributions. 
On the other hand, while species-based data do not show unimodal SRA relationships, energy use across body size classes did not show significant trends, supporting energetic equivalence. PMID:25691966
Mills, Britain A.; Harris, T. Robert
2012-01-01
Objective: This study was conducted to examine discrepancies in alcohol consumption estimates between a self-reported standard quantity-frequency measure and an adjusted version based on respondents’ typically used container size. Method: Using a multistage cluster sample design, 5,224 Hispanic individuals 18 years of age and older were selected from the household population in five metropolitan areas of the United States: Miami, New York, Philadelphia, Houston, and Los Angeles. The survey-weighted response rate was 76%. Personal interviews lasting an average of 1 hour were conducted in respondents’ homes in either English or Spanish. Results: The overall effect of container adjustment was to increase estimates of ethanol consumption by 68% for women (range across Hispanic groups: 17%–99%) and 30% for men (range: 14%–42%). With the exception of female Cuban American, Mexican American, and South/Central American beer drinkers and male Cuban American wine drinkers, all percentage differences between unadjusted and container-adjusted estimates were positive. Second, container adjustments produced the largest change for volume of distilled spirits, followed by wine and beer. Container size adjustments generally produced larger percentage increases in consumption estimates for the higher volume drinkers, especially the upper tertile of female drinkers. Conclusions: Self-reported alcohol consumption based on standard drinks underreports consumption when compared with reports based on the amount of alcohol poured into commonly used containers. PMID:22152669
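A simplified sketch of the container-size adjustment idea, assuming US standard serving volumes (355/148/44 mL for beer/wine/spirits); the survey's actual pour-based protocol is more involved than this:

```python
# US standard serving sizes in mL (12 oz beer, 5 oz wine, 1.5 oz spirits);
# each corresponds to roughly one "standard drink" of ethanol.
STD_ML = {"beer": 355, "wine": 148, "spirits": 44}

def container_adjusted_drinks(drinks_per_week, beverage, container_ml):
    """Scale reported drinks by (actual container / standard serving)."""
    return drinks_per_week * container_ml / STD_ML[beverage]

# Five 'beers' per week from a 473 mL (16 oz) cup count as ~6.7 standard
# drinks, a ~33% upward adjustment.
print(round(container_adjusted_drinks(5, "beer", 473), 1))  # 6.7
```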
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
Angular-domain scattering interferometry.
Shipp, Dustin W; Qian, Ruobing; Berger, Andrew J
2013-11-15
We present an angular-scattering optical method that is capable of measuring the mean size of scatterers in static ensembles within a field of view less than 20 μm in diameter. Using interferometry, the method overcomes the inability of intensity-based models to tolerate the large speckle grains associated with such small illumination areas. By first estimating each scatterer's location, the method can model between-scatterer interference as well as traditional single-particle Mie scattering. Direct angular-domain measurements provide finer angular resolution than digitally transformed image-plane recordings. This increases sensitivity to size-dependent scattering features, enabling more robust size estimates. The sensitivity of these angular-scattering measurements to various sizes of polystyrene beads is demonstrated. Interferometry also allows recovery of the full complex scattered field, including a size-dependent phase profile in the angular-scattering pattern.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
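A minimal sketch of two of the three compared approaches (the robust method is omitted), using mean ± 1.96 SD for the parametric interval and empirical percentiles for the non-parametric one:

```python
import random
import statistics

def reference_interval(values, method="parametric"):
    """Central 95% reference interval by two common approaches.

    parametric:     mean +/- 1.96 SD (assumes Gaussian data);
    nonparametric:  empirical 2.5th and 97.5th percentiles.
    Illustrative sketch only; robust estimators are not shown.
    """
    if method == "parametric":
        m, s = statistics.mean(values), statistics.stdev(values)
        return m - 1.96 * s, m + 1.96 * s
    xs = sorted(values)
    lo = xs[int(0.025 * (len(xs) - 1))]
    hi = xs[int(0.975 * (len(xs) - 1))]
    return lo, hi

# On Gaussian data the two methods agree closely (true interval: 80.4-119.6);
# on skewed data they diverge, which is where method choice matters.
random.seed(0)
data = [random.gauss(100, 10) for _ in range(2000)]
lo_p, hi_p = reference_interval(data, "parametric")
lo_n, hi_n = reference_interval(data, "nonparametric")
```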
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
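The size-bias mechanism described above can be demonstrated with a short simulation, assuming each individual in a cluster is detected independently (so larger clusters are more likely to be seen at all); the size distribution and per-individual detection probability are illustrative:

```python
import random

random.seed(42)

# Population of clusters: a right-skewed mix of cluster sizes.
pop_sizes = [random.choice([1, 1, 1, 2, 2, 3, 4, 6, 10]) for _ in range(20000)]

# Per-individual detection probability; a cluster is detected if any
# of its members is detected: P(detect) = 1 - (1 - p)**size.
p_ind = 0.3
detected = [s for s in pop_sizes if random.random() < 1 - (1 - p_ind) ** s]

mean_pop = sum(pop_sizes) / len(pop_sizes)
mean_det = sum(detected) / len(detected)
# Detected clusters are larger on average than clusters in the population,
# so a naive estimate based on sample mean cluster size is biased high.
assert mean_det > mean_pop
```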
Dropout Rates in Texas School Districts: Influences of School Size and Ethnic Group.
ERIC Educational Resources Information Center
Toenjes, Laurence A.
Longitudinal dropout rates (LDR's) for public school students and LDR's of pupil membership by ethnic group based on two Texas Education Agency reports are estimated. LDR's are calculated for the state, by school district size, for the 21 largest districts, and by average high school size. Findings dispel the prevalent perception of the dropout…
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Population entropies estimates of proteins
NASA Astrophysics Data System (ADS)
Low, Wai Yee
2017-05-01
The Shannon entropy equation provides a way to estimate variability of amino acids sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus, correction for alignment size bias is needed. In the current work, an R based package named EntropyCorrect that enables estimation of population entropy is presented and an empirical study on how well this new algorithm performs on simulated dataset of various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
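One plausible reading of the regression-and-subsampling scheme (the exact EntropyCorrect algorithm is not specified in the abstract, so this is an assumption-laden sketch): estimate plug-in entropy at several subsample sizes, regress on 1/n, and extrapolate the intercept at 1/n → 0 as the population entropy:

```python
import math
import random

def shannon(seq):
    """Plug-in Shannon entropy (bits) of a sequence of symbols."""
    n = len(seq)
    counts = {}
    for s in seq:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def extrapolated_entropy(column, sizes=(50, 100, 200, 400), reps=200, seed=7):
    """Regress mean subsample entropy on 1/n; the OLS intercept is the
    extrapolated population-entropy estimate (bias shrinks as n grows)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for n in sizes:
        h = sum(shannon(rng.sample(column, n)) for _ in range(reps)) / reps
        xs.append(1 / n)
        ys.append(h)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx  # intercept at 1/n = 0

# Alignment column with residue frequencies 0.4/0.3/0.2/0.1:
# true entropy is about 1.846 bits, which the extrapolation approaches.
pop = ["A"] * 400 + ["G"] * 300 + ["T"] * 200 + ["C"] * 100
est = extrapolated_entropy(pop)
```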
NASA Astrophysics Data System (ADS)
Solanki, Rekha Garg; Rajaram, Poolla; Bajpai, P. K.
2018-05-01
This work is based on the growth, characterization and estimation of lattice strain and crystallite size in CdS nanoparticles by X-ray peak profile analysis. The CdS nanoparticles were synthesized by a non-aqueous solvothermal method and were characterized by powder X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman and UV-visible spectroscopy. XRD confirms that the CdS nanoparticles have the hexagonal structure. The Williamson-Hall (W-H) method was used to study the X-ray peak profile analysis. The strain-size plot (SSP) was used to study the individual contributions of crystallite size and lattice strain from the X-ray peaks. The physical parameters such as strain, stress and energy density values were calculated using various models, namely, the isotropic strain model, the anisotropic strain model and the uniform deformation energy density model. The particle size was estimated from the TEM images to be in the range of 20-40 nm. The Raman spectrum shows the characteristic optical 1LO and 2LO vibrational modes of CdS. UV-visible absorption studies show that the band gap of the CdS nanoparticles is 2.48 eV. The results show that the crystallite size estimated from Scherrer's formula, W-H plots, and the SSP, and the particle size calculated from the TEM images, are in close agreement.
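A sketch of the Scherrer estimate mentioned in the comparison, D = Kλ/(β cos θ); the peak position, width, shape factor, and Cu Kα wavelength below are assumed for illustration, and instrumental broadening is ignored:

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size from the Scherrer equation D = K*lambda/(beta*cos(theta)).

    beta is the peak FWHM converted to radians; theta is half the 2-theta
    diffraction angle. Cu K-alpha wavelength and K = 0.9 are assumed.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# A 0.35-degree-wide peak at 2theta = 26.5 degrees gives about 23 nm,
# consistent with the 20-40 nm TEM range reported for CdS.
print(round(scherrer_size_nm(26.5, 0.35), 1))  # 23.3
```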
SU-F-207-16: CT Protocols Optimization Using Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, H; Fan, J; Kupinski, M
2015-06-15
Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the tasks of size and contrast estimation of different iodine concentration rods inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to have the same dose level (CTDIvol) but different X-ray tube voltage settings (kVp). Methods: Different concentrations of iodine objects inserted in a head size phantom and a body size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, which output the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulated images with four different sizes and five different contrasts of iodine objects. For each type of object, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which can be applied to estimate both size and contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with the 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance of different CT protocols. Different protocols at the same CTDIvol setting can result in different image quality. The relationship between the absorbed dose and diagnostic image quality is not linear.
Reum, Jonathan C P; Jennings, Simon; Hunsicker, Mary E
2015-11-01
Nitrogen stable isotope ratios (δ15N) may be used to estimate community-level relationships between trophic level (TL) and body size in size-structured food webs and hence the mean predator to prey body mass ratio (PPMR). In turn, PPMR is used to estimate mean food chain length, trophic transfer efficiency and rates of change in abundance with body mass (usually reported as slopes of size spectra) and to calibrate and validate food web models. When estimating TL, researchers had assumed that fractionation of δ15N (Δδ15N) did not change with TL. However, a recent meta-analysis indicated that this assumption was not as well supported by data as the assumption that Δδ15N scales negatively with the δ15N of prey. We collated existing fish community δ15N-body size data for the Northeast Atlantic and tropical Western Arabian Sea with new data from the Northeast Pacific. These data were used to estimate TL-body mass relationships and PPMR under constant and scaled Δδ15N assumptions, and to assess how the scaled Δδ15N assumption affects our understanding of the structure of these food webs. Adoption of the scaled Δδ15N approach markedly reduces the previously reported differences in TL at body mass among fish communities from different regions. With scaled Δδ15N, TL-body mass relationships became more positive and PPMR fell. Results implied that realized prey size in these size-structured fish communities are less variable than previously assumed and food chains potentially longer. The adoption of generic PPMR estimates for calibration and validation of size-based fish community models is better supported than hitherto assumed, but predicted slopes of community size spectra are more sensitive to a given change or error in realized PPMR when PPMR is small. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
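The constant versus scaled Δδ15N assumptions can be contrasted in a few lines. The scaled form below follows the commonly used linear model Δ = β0 − β1·δ15N_prey, which yields a closed-form TL; the β coefficients are meta-analysis values assumed here for illustration, not numbers from the article:

```python
import math

def tl_constant(d15n, d15n_base, tl_base=2.0, delta=3.4):
    """Trophic level under constant fractionation (3.4 per mill per TL)."""
    return tl_base + (d15n - d15n_base) / delta

def tl_scaled(d15n, d15n_base, tl_base=2.0, b0=5.92, b1=0.27):
    """Trophic level when fractionation scales with prey d15N:
    Delta = b0 - b1*d15N_prey. Iterating the per-step enrichment gives
    d15N saturating at d_lim = b0/b1, hence the closed form below.
    b0, b1 are assumed illustrative meta-analysis coefficients."""
    d_lim = b0 / b1
    k = -math.log(1 - b1)
    return tl_base + (math.log(d_lim - d15n_base) - math.log(d_lim - d15n)) / k

# For a fish at d15N = 16 over a baseline of 6, the scaled assumption
# yields a higher trophic level than the constant one.
tl_c = tl_constant(16.0, 6.0)  # ~4.94
tl_s = tl_scaled(16.0, 6.0)
```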
Estimation of homogeneous nucleation flux via a kinetic model
NASA Technical Reports Server (NTRS)
Wilcox, C. F.; Bauer, S. H.
1991-01-01
The proposed kinetic model for condensation under homogeneous conditions, and for the onset of unidirectional cluster growth in supersaturated gases, does not suffer from the conceptual flaws that characterize classical nucleation theory. When a full set of simultaneous rate equations is solved, a characteristic time emerges, for each cluster size, at which the production rate of clusters of size n and their rate of conversion to the next size (n + 1) are equal. Procedures for estimating the essential parameters are proposed; steady-state condensation fluxes J(kin)^ss are evaluated. Since there are practical limits to the cluster size that can be incorporated in the set of simultaneous first-order differential equations, a code was developed for computing an approximate J(th)^ss based on estimates of a 'constrained equilibrium' distribution, and identification of its minimum.
Problems with sampling desert tortoises: A simulation analysis based on field data
Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.
2005-01-01
The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from two 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that the poor performance and bias of both sampling procedures were driven by insufficient sample size, suggesting that all efforts must be directed to increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
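Schnabel's multiple-census estimator, used here to analyze the subsampled plots, has a simple closed form. A sketch with invented counts follows; the +1 in the denominator is Chapman's small-sample correction, one common variant of the estimator:

```python
def schnabel(catches, recaptures):
    """Schnabel multiple-census estimate of population size.

    catches    -- animals caught on each occasion, C_t
    recaptures -- how many of those were already marked, R_t
    M_t, the number of marked animals at large before occasion t, is
    accumulated as newly marked animals are released.
    """
    marked = 0
    numerator = 0
    for c, r in zip(catches, recaptures):
        numerator += c * marked            # C_t * M_t
        marked += c - r                    # newly marked and released
    return numerator / (sum(recaptures) + 1)

# toy survey over three occasions
print(round(schnabel([30, 40, 35], [0, 12, 20]), 1))  # 97.9
```

When total recaptures are very small, as in the 0.25-km2 subsamples above, this ratio becomes unstable and strongly biased, which is the failure mode the simulations exposed.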
Can blind persons accurately assess body size from the voice?
Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka
2016-04-01
Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).
Kery, M.; Gregg, K.B.
2003-01-01
1. Most plant demographic studies follow marked individuals in permanent plots. Plots tend to be small, so detectability is assumed to be one for every individual. However, detectability could be affected by factors such as plant traits, time, space, observer, previous detection, biotic interactions, and especially by life-state. 2. We used a double-observer survey and closed population capture-recapture modelling to estimate state-specific detectability of the orchid Cleistes bifaria in a long-term study plot of 41.2 m2. Based on AICc model selection, detectability was different for each life-state and for tagged vs. previously untagged plants. There were no differences in detectability between the two observers. 3. Detectability estimates (SE) for one-leaf vegetative, two-leaf vegetative, and flowering/fruiting states correlated with mean size of these states and were 0.76 (0.05), 0.92 (0.06), and 1 (0.00), respectively, for previously tagged plants, and 0.84 (0.08), 0.75 (0.22), and 0 (0.00), respectively, for previously untagged plants. (We had insufficient data to obtain a satisfactory estimate of previously untagged flowering plants). 4. Our estimates are for a medium-sized plant in a small and intensively surveyed plot. It is possible that detectability is even lower for larger plots and smaller plants or smaller life-states (e.g. seedlings) and that detectabilities < 1 are widespread in plant demographic studies. 5. State-dependent detectabilities are especially worrying since they will lead to a size- or state-biased sample from the study plot. Failure to incorporate detectability into demographic estimation methods introduces a bias into most estimates of population parameters such as fecundity, recruitment, mortality, and transition rates between life-states. We illustrate this by a simple example using a matrix model, where a hypothetical population was stable but, due to imperfect detection, wrongly projected to be declining at a rate of 8% per year. 
6. Almost all plant demographic studies are based on models for discrete states. State and size are important predictors both for demographic rates and detectability. We suggest that even in studies based on small plots, state- or size-specific detectability should be estimated at least at some point to avoid biased inference about the dynamics of the population sampled.
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
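Assuming a normal population with known σ, this probability has a closed form: the sample mean has standard error σ/√n, so P(|x̄ − μ| ≤ kσ) = 2Φ(k√n) − 1. A sketch follows; this normal simplification is an assumption of the sketch, and the published method may differ in detail:

```python
import math

def prob_within(k, n):
    """P(|sample mean - true mean| <= k * sigma) for a normal population.

    The sample mean is normal with standard error sigma / sqrt(n), so the
    probability is 2 * Phi(k * sqrt(n)) - 1 = erf(k * sqrt(n) / sqrt(2)).
    """
    return math.erf(k * math.sqrt(n) / math.sqrt(2))

# very small sample: chance that the mean of n = 5 observations lies
# within 0.5 standard deviations of the true mean
print(round(prob_within(0.5, 5), 2))  # 0.74
```

The same expression shows why the probability rises quickly with n: at n = 100 the chance of being within 0.5 sd exceeds 0.999.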
Estimating Radiation Dose Metrics for Patients Undergoing Tube Current Modulation CT Scans
NASA Astrophysics Data System (ADS)
McMillan, Kyle Lorin
Computed tomography (CT) has long been a powerful tool in the diagnosis of disease, identification of tumors and guidance of interventional procedures. With CT examinations comes the concern of radiation exposure and the associated risks. In order to properly understand those risks on a patient-specific level, organ dose must be quantified for each CT scan. Some of the most widely used organ dose estimates are derived from fixed tube current (FTC) scans of a standard-sized, idealized patient model. However, in current clinical practice, patient size varies from neonates weighing just a few kg to morbidly obese patients weighing over 200 kg, and nearly all CT exams are performed with tube current modulation (TCM), a scanning technique that adjusts scanner output according to changes in patient attenuation. Methods to account for TCM in CT organ dose estimates have been previously demonstrated, but these methods are limited in scope and/or restricted to idealized TCM profiles that are not based on physical observations and are not scanner specific (e.g., they do not account for tube limits or scanner-specific effects). The goal of this work was to develop methods to estimate organ doses to patients undergoing CT scans that take into account both patient size and the effects of TCM. This work started with the development and validation of methods to estimate scanner-specific TCM schemes for any voxelized patient model. An approach was developed to generate estimated TCM schemes that match the actual TCM schemes that would have been acquired on the scanner for any patient model. Using this approach, TCM schemes were then generated for a variety of body CT protocols for a set of reference voxelized phantoms for which TCM information does not currently exist. These are whole-body patient models, representing a variety of sizes, ages and genders, in which all radiosensitive organs are identified.
TCM schemes for these models facilitated Monte Carlo-based estimates of fully-, partially- and indirectly-irradiated organ dose from TCM CT exams. By accounting for the effects of patient size in the organ dose estimates, a comprehensive set of patient-specific dose estimates from TCM CT exams was developed. These patient-specific organ dose estimates from TCM CT exams will provide a more complete understanding of the dose impact and risks associated with modern body CT scanning protocols.
NASA Astrophysics Data System (ADS)
Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc
2012-11-01
Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. 
We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
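The incidence-based estimators evaluated above have simple closed forms. A sketch of Jack1 and, for comparison, the bias-corrected Chao2 on a 0/1 plot-by-species incidence matrix follows (synthetic data; ICE is omitted here because it additionally requires a rare/frequent species split):

```python
import numpy as np

def jackknife1(incidence):
    """First-order jackknife richness: S_obs + Q1 * (m - 1) / m,
    where Q1 counts species found in exactly one of the m plots."""
    m = incidence.shape[0]
    freq = incidence.sum(axis=0)
    s_obs = int((freq > 0).sum())
    q1 = int((freq == 1).sum())
    return s_obs + q1 * (m - 1) / m

def chao2(incidence):
    """Bias-corrected Chao2: S_obs + Q1 * (Q1 - 1) / (2 * (Q2 + 1)),
    where Q2 counts species found in exactly two plots."""
    freq = incidence.sum(axis=0)
    s_obs = int((freq > 0).sum())
    q1 = int((freq == 1).sum())
    q2 = int((freq == 2).sum())
    return s_obs + q1 * (q1 - 1) / (2 * (q2 + 1))

# synthetic incidence matrix: 50 plots x 30 species, sparse occurrences
rng = np.random.default_rng(0)
x = (rng.random((50, 30)) < 0.05).astype(int)
print(jackknife1(x), chao2(x))
```

Both estimators add a correction, driven by the rarest species, on top of the observed richness, which is why they keep climbing until sampling is adequate and the number of uniques stabilizes.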
Estimating population trends with a linear model
Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
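A minimal sketch of the design-based idea: fit an ordinary least-squares slope to each location's counts (skipping missed surveys) and combine the slopes with a simple mean and standard error, to which a t-interval can then be applied. The site data and the equal-weight combination below are illustrative simplifications, not the authors' program:

```python
import math

def trend_estimate(counts_by_site):
    """Per-site least-squares slope, averaged across sites.

    counts_by_site -- list of per-site count series; None marks a
    survey period in which that site was not visited.
    Returns (mean slope, standard error of the mean slope).
    """
    slopes = []
    for series in counts_by_site:
        pts = [(t, c) for t, c in enumerate(series) if c is not None]
        n = len(pts)
        mt = sum(t for t, _ in pts) / n
        mc = sum(c for _, c in pts) / n
        slope = (sum((t - mt) * (c - mc) for t, c in pts)
                 / sum((t - mt) ** 2 for t, _ in pts))
        slopes.append(slope)
    k = len(slopes)
    mean = sum(slopes) / k
    se = math.sqrt(sum((s - mean) ** 2 for s in slopes) / (k - 1) / k)
    return mean, se

# three sites surveyed over 5 years; one missing count (None)
sites = [[10, 12, 13, 15, 16], [8, 9, None, 11, 12], [20, 19, 21, 22, 24]]
mean_slope, se = trend_estimate(sites)
print(round(mean_slope, 2))  # 1.2
```

Because each site contributes its own slope, the combination stays design based: no model is assumed for how counts vary within a site beyond the fitted line.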
Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization
Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun
2015-01-01
A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on the tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data based on the mechanical properties of the tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated on a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed estimation method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. These results suggest that the proposed method has the potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306
Revisiting sample size: are big trials the answer?
Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J
2012-07-18
The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional to randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
Woie, Leik; Måløy, Frode; Eftestøl, Trygve; Engan, Kjersti; Edvardsen, Thor; Kvaløy, Jan Terje; Ørn, Stein
2014-02-01
Current methods for the estimation of infarct size by late-enhancement cardiac magnetic resonance imaging are based upon 2D analysis that first determines the size of the infarction in each slice and thereafter adds the infarct sizes from each slice to generate a volume. We present a novel, automatic 3D method that estimates infarct size by a simultaneous analysis of all pixels from all slices. In a population of 54 patients with ischemic scars, the infarct size estimated by the automatic 3D method was compared with four established 2D methods. The new 3D method defined scar as the sum of all pixels with signal intensity (SI) ≥ 35% of max SI from the complete myocardium, the border zone as SI 35-50% of max SI, and the core as SI ≥ 50% of max SI. The 3D method yielded smaller infarct size (-2.8 ± 2.3%) and core size (-3.0 ± 1.7%) than the 2D method most similar to ours. There was no difference in the size of the border zone (0.2 ± 1.4%). The 3D method demonstrated stronger correlations between scar size and left ventricular (LV) remodelling parameters (LV ejection fraction: r = -0.71, p < 0.0005; LV end-diastolic index: r = 0.54, p < 0.0005; and LV end-systolic index: r = 0.59, p < 0.0005) compared with conventional 2D methods. Infarct size estimation by our novel automatic 3D method requires no manual demarcation of the scar, is less time-consuming, and correlates more strongly with remodelling parameters than existing methods.
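The pixel classification at the heart of the 3D method uses one global intensity maximum across all slices rather than per-slice maxima. A sketch on a toy stack (the array stands in for real myocardial image data):

```python
import numpy as np

def infarct_masks(stack):
    """Classify myocardial pixels of a 3D stack, all slices at once.

    scar:   SI >= 35% of the single global maximum SI
    core:   SI >= 50% of the global maximum
    border: scar pixels that are not core (35-50% band)
    """
    si_max = stack.max()          # one maximum over all slices, not per slice
    scar = stack >= 0.35 * si_max
    core = stack >= 0.50 * si_max
    border = scar & ~core
    return scar, border, core

# toy two-slice "myocardium" with intensities 0..100
stack = np.array([[[10.0, 40.0], [55.0, 90.0]],
                  [[30.0, 36.0], [70.0, 100.0]]])
scar, border, core = infarct_masks(stack)
print(int(scar.sum()), int(border.sum()), int(core.sum()))  # 6 2 4
```

Because the thresholds are fractions of a single 3D maximum, a bright slice cannot shift the classification of a dim slice, which is how the simultaneous analysis differs from slice-by-slice 2D methods.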
Measurement of Flaw Size From Thermographic Data
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.
2015-01-01
Simple methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the delamination, since the heat diffuses in the plane parallel to the surface. The result is a temperature profile over the delamination which is larger than the delamination size. A variational approach is presented for reducing the thermographic data to produce an estimated size for a flaw that is much closer to the true size of the delamination. The method is based on an estimate for the thermal response that is a convolution of a Gaussian kernel with the shape of the flaw. The size is determined from both the temporal and spatial thermal response of the exterior surface above the delamination and constraints on the length of the contour surrounding the delamination. Examples of the application of the technique to simulation and experimental data are presented to investigate the limitations of the technique.
Users guide for STHARVEST: software to estimate the cost of harvesting small timber.
Roger D. Fight; Xiaoshan Zhang; Bruce R. Hartsough
2003-01-01
The STHARVEST computer application is Windows-based, public-domain software used to estimate costs for harvesting small-diameter stands or the small-diameter component of a mixed-sized stand. The equipment production rates were developed from existing studies. Equipment operating cost rates were based on November 1998 prices for new equipment and wage rates for the...
Fridman, M; Hodgkins, P S; Kahle, J S; Erder, M H
2015-06-01
There are few approved therapies for adults with attention-deficit/hyperactivity disorder (ADHD) in Europe. Lisdexamfetamine (LDX) is an effective treatment for ADHD; however, no clinical trials examining the efficacy of LDX specifically in European adults have been conducted. Therefore, to estimate the efficacy of LDX in European adults we performed a meta-regression of existing clinical data. A systematic review identified US- and Europe-based randomized efficacy trials of LDX, atomoxetine (ATX), or osmotic-release oral system methylphenidate (OROS-MPH) in children/adolescents and adults. A meta-regression model was then fitted to the published/calculated effect sizes (Cohen's d) using medication, geographical location, and age group as predictors. The LDX effect size in European adults was extrapolated from the fitted model. Sensitivity analyses performed included using adult-only studies and adding studies with placebo designs other than a standard pill-placebo design. Twenty-two of 2832 identified articles met inclusion criteria. The model-estimated effect size of LDX for European adults was 1.070 (95% confidence interval: 0.738, 1.401), larger than the 0.8 threshold for large effect sizes. The overall model fit was adequate (80%) and stable in the sensitivity analyses. This model predicts that LDX may have a large treatment effect size in European adults with ADHD. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Wang, Lutao; Xiao, Jun; Chai, Hua
2015-08-01
The successful suppression of clutter arising from stationary or slowly moving tissue is one of the key issues in medical ultrasound color blood-flow imaging. Remaining clutter may bias the mean blood frequency estimate and result in a potentially misleading description of blood flow. In this paper, based on the principle of the general wall filter, the design processes of three classes of filters, infinite impulse response with projection initialization (Prj-IIR), polynomial regression (Pol-Reg), and eigen-based filters, are reviewed and analyzed. The performance of the filters was assessed by calculating the bias and variance of the mean blood velocity using a standard autocorrelation estimator. Simulation results show that the performance of the Pol-Reg filter is similar to that of Prj-IIR filters. Both can offer accurate estimation of mean blood-flow speed under steady clutter conditions, and their clutter rejection ability can be enhanced by increasing the ensemble size of the Doppler vector. Eigen-based filters can effectively remove the non-stationary clutter component and further improve the estimation accuracy for low-speed blood-flow signals. There is also no significant increase in computational complexity for eigen-based filters when the ensemble size is less than 10.
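As an illustration of one of the three classes, a polynomial-regression wall filter projects each slow-time ensemble onto a low-order polynomial clutter subspace and subtracts the projection. In the sketch below a real-valued toy signal stands in for the complex Doppler ensemble, and the drifting "clutter" is chosen to lie exactly in the order-2 subspace:

```python
import numpy as np

def polyreg_wall_filter(ensemble, order=2):
    """Polynomial-regression clutter filter: project each slow-time
    ensemble (rows) onto span{1, t, ..., t**order} and subtract it."""
    n = ensemble.shape[-1]
    t = np.arange(n, dtype=float)
    a = np.vander(t, order + 1, increasing=True)  # clutter basis, n x (order+1)
    proj = a @ np.linalg.pinv(a)                  # symmetric projection matrix
    return ensemble - ensemble @ proj

# slow-time ensemble of 10 samples: strong, slowly drifting clutter plus a
# small, rapidly alternating blood component
t = np.arange(10, dtype=float)
clutter = 100.0 + 2.0 * t            # lies in the order-2 polynomial subspace
blood = np.cos(np.pi * t)            # alternates sign every sample
out = polyreg_wall_filter((clutter + blood)[None, :])
print(bool(np.abs(out).max() < 2.0))  # True: ~100x clutter reduced to ~1
```

Raising the polynomial order widens the stopband around zero frequency, at the cost of also attenuating genuinely slow blood flow, which is where the eigen-based filters above gain their advantage.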
Finch, Warren Irvin; McCammon, Richard B.
1987-01-01
Based on the Memorandum of Understanding (MOU) of September 20, 1984, between the U.S. Geological Survey of the U.S. Department of the Interior and the Energy Information Administration (EIA) of the U.S. Department of Energy (DOE), the U.S. Geological Survey began to make estimates of the undiscovered uranium endowment of selected areas of the United States in 1985. A modified NURE (National Uranium Resource Evaluation) method will be used in place of the standard NURE method of the DOE that was used for the national assessment reported in October 1980. The modified method, here named the 'deposit-size-frequency' (DSF) method, is presented for the first time, and calculations by the two methods are compared using an illustrative example based on preliminary estimates for the first area to be evaluated under the MOU. The results demonstrate that the estimate of the endowment using the DSF method is significantly larger and more uncertain than the estimate obtained by the NURE method. We believe that the DSF method produces a more realistic estimate because the principal factor estimated in the endowment equation is disaggregated into more parts and is more closely tied to specific geologic knowledge than in the NURE method. The DSF method consists of modifying the standard NURE estimation equation, U = A × F × T × G, by replacing the factors F × T with a single factor that represents the tonnage for the total number of deposits in all size classes. Use of the DSF method requires that the size frequency of deposits in a known or control area has been established and that the relation of the size-frequency distribution of deposits to probable controlling geologic factors has been determined. Using these relations, the principal scientist (PS) first estimates the number and range of size classes and then, for each size class, estimates the lower limit, most likely value, and upper limit of the numbers of deposits in the favorable area.
Once these probable estimates have been refined by elicitation of the PS, they are entered into the DSF equation, and the probability distribution of estimates of undiscovered uranium endowment is calculated using a slight modification of the program by Ford and McLaren (1980). The EIA study of the viability of the domestic uranium industry requires an annual appraisal of the U.S. uranium resource situation. During DOE's NURE Program, which was terminated in 1983, a thorough assessment of the Nation's resources was completed. A comprehensive reevaluation of uranium resource base for the entire United States is not possible for each annual appraisal. A few areas are in need of future study, however, because of new developments in either scientific knowledge, industry exploration, or both. Four geologic environments have been selected for study by the U.S. Geological Survey in the next several years: (1) surficial uranium deposits throughout the conterminous United States, (2) uranium in collapse-breccia pipes in the Grand Canyon region of Arizona, (3) uranium in Tertiary sedimentary rocks of the Northern Great Plains, and (4) uranium in metamorphic rocks of the Piedmont province in the eastern States. In addition to participation in the National uranium resource assessment, the U.S. Geological Survey will take part in activities of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development and those of the International Atomic Energy Agency.
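The disaggregation at the heart of the DSF method can be sketched as a small Monte Carlo calculation. The size classes, deposit counts, tonnages, and grade below are invented for illustration only; the actual computation follows the program of Ford and McLaren (1980):

```python
import random

def dsf_endowment(size_classes, grade, n_draws=20000, seed=1):
    """Monte Carlo sketch of the DSF idea: U = G * sum over size classes of
    (number of deposits * tonnage per deposit), with the deposit count in
    each class drawn from a triangular (lower, most-likely, upper) elicitation.
    Returns the median of the simulated endowment distribution."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        tons = sum(rng.triangular(lo, hi, mode) * t_per
                   for (lo, mode, hi, t_per) in size_classes)
        draws.append(grade * tons)
    draws.sort()
    return draws[n_draws // 2]

# invented elicitation: (lower, most-likely, upper counts, tons per deposit)
classes = [(5, 10, 20, 1e4), (1, 3, 8, 1e5), (0, 1, 3, 1e6)]
median_u = dsf_endowment(classes, grade=0.002)  # grade as a tonnage fraction
print(1000 < median_u < 10000)  # tons of endowed uranium, order of magnitude
```

Disaggregating into size classes, each with its own elicited range, is what widens the resulting distribution relative to the single F × T factor of the standard NURE equation.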
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Dong, Jing; Tearney, Guillermo J.; Pitris, Costas
2018-02-01
Catheter-based Optical Coherence Tomography (OCT) devices allow real-time and comprehensive imaging of the human esophagus. Hence, they provide the potential to overcome some of the limitations of endoscopy and biopsy, allowing earlier diagnosis and better prognosis for esophageal adenocarcinoma patients. However, the large number of images produced during every scan makes manual evaluation of the data exceedingly difficult. In this study, we propose a fully automated tissue characterization algorithm, capable of discriminating normal tissue from Barrett's Esophagus (BE) and dysplasia through entire three-dimensional (3D) data sets acquired in vivo. The method is based on both the estimation of the scatterer size of the esophageal epithelial cells, using the bandwidth of the correlation of the derivative (COD) method, and intensity-based characteristics. The COD method can effectively estimate the scatterer size of the esophageal epithelium cells in good agreement with the literature. As expected, both the mean scatterer size and its standard deviation increase with increasing severity of disease (i.e. from normal to BE to dysplasia). The differences in the distribution of scatterer size for each tissue type are statistically significant (p < 0.0001). However, scatterer size by itself cannot be used to accurately classify the various tissues. With the addition of intensity-based statistics, the correct classification rates for all three tissue types range from 83 to 100%, depending on lesion size.
A Comparative Study on Carbohydrate Estimation: GoCARB vs. Dietitians.
Vasiloglou, Maria F; Mougiakakou, Stavroula; Aubry, Emilie; Bokelmann, Anika; Fricker, Rita; Gomes, Filomena; Guntermann, Cathrin; Meyer, Alexa; Studerus, Diana; Stanga, Zeno
2018-06-07
GoCARB is a computer vision-based smartphone system designed for individuals with Type 1 Diabetes to estimate plated meals' carbohydrate (CHO) content. We aimed to compare the accuracy of GoCARB in estimating CHO with the estimations of six experienced dietitians. GoCARB was used to estimate the CHO content of 54 Central European plated meals, with each of them containing three different weighed food items. Ground truth was calculated using the USDA food composition database. Dietitians were asked to visually estimate the CHO content based on meal photographs. GoCARB and dietitians achieved comparable accuracies. The mean absolute error of the dietitians was 14.9 (SD 10.12) g of CHO versus 14.8 (SD 9.73) g of CHO for the GoCARB ( p = 0.93). No differences were found between the estimations of dietitians and GoCARB, regardless the meal size. The larger the size of the meal, the greater were the estimation errors made by both. Moreover, the higher the CHO content of a food category was, the more challenging its accurate estimation. GoCARB had difficulty in estimating rice, pasta, potatoes, and mashed potatoes, while dietitians had problems with pasta, chips, rice, and polenta. GoCARB may offer diabetic patients the option of an easy, accurate, and almost real-time estimation of the CHO content of plated meals, and thus enhance diabetes self-management.
Linkage map of the honey bee, Apis mellifera, based on RAPD markers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, G.J.; Page, R.E. Jr.
A linkage map was constructed for the honey bee based on the segregation of 365 random amplified polymorphic DNA (RAPD) markers in haploid male progeny of a single female bee. The X locus for sex determination and genes for black body color and malate dehydrogenase were mapped to separate linkage groups. RAPD markers were very efficient for mapping, with an average of about 2.8 loci mapped for each 10-nucleotide primer that was used in polymerase chain reactions. The mean interval size between markers on the map was 9.1 cM. The map covered 3110 cM of linked markers on 26 linkage groups. We estimate the total genome size to be approximately 3450 cM. The size of the map indicated a very high recombination rate for the honey bee. The relationship of physical to genetic distance was estimated at 52 kb/cM, suggesting that map-based cloning of genes will be feasible for this species. 71 refs., 6 figs., 1 tab.
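The physical-to-genetic ratio is a one-line check; the ~179 Mb physical genome size below is an assumed value chosen to be consistent with the quoted 52 kb/cM, not a figure from the abstract:

```python
# Physical-to-genetic distance ratio implied by the honey bee map.
map_length_cm = 3450      # estimated total map length, cM
physical_kb = 179_400     # assumed physical genome size in kb (~179 Mb)

kb_per_cm = physical_kb / map_length_cm
print(round(kb_per_cm))   # 52
```

A small kb/cM ratio means a given physical interval spans many map units, which is why the high recombination rate favors map-based cloning.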
State-space modeling of population sizes and trends in Nihoa Finch and Millerbird
Gorresen, P. Marcos; Brinck, Kevin W.; Camp, Richard J.; Farmer, Chris; Plentovich, Sheldon M.; Banko, Paul C.
2016-01-01
Both of the 2 passerines endemic to Nihoa Island, Hawai‘i, USA—the Nihoa Millerbird (Acrocephalus familiaris kingi) and Nihoa Finch (Telespiza ultima)—are listed as endangered by federal and state agencies. Their abundances have been estimated by irregularly implemented fixed-width strip-transect sampling from 1967 to 2012, from which area-based extrapolation of the raw counts produced highly variable abundance estimates for both species. To evaluate an alternative survey method and improve abundance estimates, we conducted variable-distance point-transect sampling between 2010 and 2014. We compared our results to those obtained from strip-transect samples. In addition, we applied state-space models to derive improved estimates of population size and trends from the legacy time series of strip-transect counts. Both species were fairly evenly distributed across Nihoa and occurred in all or nearly all available habitat. Population trends for Nihoa Millerbird were inconclusive because of high within-year variance. Trends for Nihoa Finch were positive, particularly since the early 1990s. Distance-based analysis of point-transect counts produced mean estimates of abundance similar to those from strip-transects but was generally more precise. However, both survey methods produced biologically unrealistic variability between years. State-space modeling of the long-term time series of abundances obtained from strip-transect counts effectively reduced uncertainty in both within- and between-year estimates of population size, and allowed short-term changes in abundance trajectories to be smoothed into a long-term trend.
Cup Implant Planning Based on 2-D/3-D Radiographic Pelvis Reconstruction-First Clinical Results.
Schumann, Steffen; Sato, Yoshinobu; Nakanishi, Yuki; Yokota, Futoshi; Takao, Masaki; Sugano, Nobuhiko; Zheng, Guoyan
2015-11-01
We present a newly developed X-ray calibration phantom and its integration for 2-D/3-D pelvis reconstruction and subsequent automatic cup planning. Two different cup planning strategies were applied and evaluated with clinical data. The first planning strategy is based on a combined pelvis and cup statistical atlas: the pelvis part of the combined atlas is matched to the reconstructed pelvis model, resulting in an optimized cup plan. The second planning strategy analyzes the morphology of the reconstructed pelvis model to determine the best-fitting cup implant. The first planning strategy was compared to 3-D CT-based planning. Digitally reconstructed radiographs of THA patients with pathologies of differing severity were used to evaluate the accuracy of predicting the cup size and position. Within a discrepancy of one cup size, the size was correctly identified in 100% of the cases for Crowe type I datasets and in 77.8% of the cases for Crowe type II, III, and IV datasets. The second planning strategy was analyzed with respect to the eventually implanted cup size. In seven patients, the estimated cup diameter was correct within one cup size, while the estimation for the remaining five patients differed by two cup sizes. While both planning strategies showed the same prediction rate within a discrepancy of one cup size (87.5%), the prediction of the exact cup size was higher for the statistical atlas-based strategy (56%) than for the anatomically driven approach (37.5%). The proposed approach demonstrated the clinical validity of using a 2-D/3-D reconstruction technique for cup planning.
Estimation of anomaly location and size using electrical impedance tomography.
Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu
2003-01-01
We developed a new algorithm that estimates locations and sizes of anomalies in electrically conducting medium based on electrical impedance tomography (EIT) technique. When only the boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of locations and sizes of anomalies with different conductivity values compared with the background tissues. We showed the performance of the algorithm from experimental results using a 32-channel EIT system and saline phantom. With about 1.73% measurement error in boundary current-voltage data, we found that the minimal size (area) of the detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither any forward solver nor time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.
Home range characteristics of Mexican Spotted Owls in the canyonlands of Utah
Willey, D.W.; van Riper, Charles
2007-01-01
We studied home-range characteristics of adult Mexican Spotted Owls (Strix occidentalis lucida) in southern Utah. Twenty-eight adult owls were radio-tracked using a ground-based telemetry system during 1991-95. Five males and eight females molted tail feathers and dropped transmitters within 4 wk. We estimated cumulative home ranges for 15 Spotted Owls (12 males, 3 females). The mean estimate of cumulative home-range size was not statistically different between the minimum convex polygon and adaptive kernel (AK) 95% isopleth. Both estimators yielded relatively high SD, and male and female range sizes varied widely. For 12 owls tracked during both the breeding and nonbreeding seasons, the mean size of the AK 95% nonbreeding home range was 49% larger than the breeding home-range size. The median AK 75% home-range isopleth (272 ha) we observed was similar in size to Protected Activity Centers (PACs) recommended by a recovery team. Our results lend support to the PAC concept and we support continued use of PACs to conserve Spotted Owl habitat in Utah. © 2007 The Raptor Research Foundation, Inc.
A standardized mean difference effect size for multiple baseline designs across individuals.
Hedges, Larry V; Pustejovsky, James E; Shadish, William R
2013-12-01
Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single-case designs have focused attention on methods for summarizing and meta-analyzing findings and on the need for effect size indices that are comparable to those used in between-subjects designs. In previous work, we discussed how to define and estimate an effect size that is directly comparable to the standardized mean difference often used in between-subjects research, based on data from a particular type of single-case design, the treatment reversal or (AB)(k) design. This paper extends the effect size measure to another type of single-case study, the multiple baseline design. We propose estimation methods for the effect size and its variance, study the estimators using simulation, and demonstrate the approach in two applications. Copyright © 2013 John Wiley & Sons, Ltd.
Pérez-Figueroa, A; Fernández, C; Amaro, R; Hermida, M; San Miguel, E
2015-08-01
Variability at 20 microsatellite loci was examined to assess the population genetic structure, gene flow, and effective population size (N(e)) in three populations of three-spined stickleback (Gasterosteus aculeatus) from the upper basin of the Miño River in Galicia, NW Spain, where this species is threatened. The three populations showed similar levels of genetic diversity. There is a significant genetic differentiation between the three populations, but also significant gene flow. N(e) estimates based on linkage disequilibrium yielded values of 355 for the Miño River population and 241 and 311 for the Rato and Guisande Rivers, respectively, although we expect that these are overestimates. N(e) estimates based on temporal methods, considering gene flow or not, for the tributaries yielded values of 30-56 and 47-56 for the Rato and Guisande Rivers, respectively. Estimated census size (N(c)) for the Rato River was 880 individuals. This yielded a N(e)/N(c) estimate of 3-6 % for temporal estimation of N(e), which is within the empirical range observed in freshwater fishes. We suggest that the three populations analyzed have a sufficient level of genetic diversity with some genetic structure. Additionally, the absence of physical barriers suggests that conservation efforts and monitoring should focus on the whole basin as a unit.
Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi
2011-04-01
Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after confirming left ventricular hypertrophy using a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline and at 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data to estimate sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated using the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup ANCOVA model analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned in the interim efficacy analysis.
If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percent of missing week 48 data might decrease the parameter estimation accuracy, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. The application of combining a group sequential design with a sample-size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring integrity of the study.
ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING
A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (expressed as relative tolerable error), level of confidence (90 or 95%...
Rico, María; Andrés-Costa, María Jesús; Picó, Yolanda
2017-02-05
Wastewater can provide a wealth of epidemiologic data on common drugs consumed and on health and nutritional problems based on the biomarkers excreted into community sewage systems. One of the biggest uncertainties of these studies is the estimation of the number of inhabitants served by the treatment plants. Twelve human urine biomarkers, namely 5-hydroxyindoleacetic acid (5-HIAA), acesulfame, atenolol, caffeine, carbamazepine, codeine, cotinine, creatinine, hydrochlorothiazide (HCTZ), naproxen, salicylic acid (SA), and hydroxycotinine (OHCOT), were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to estimate population size. The results reveal that populations calculated from cotinine, 5-HIAA and caffeine are commonly in agreement with those calculated by the hydrochemical parameters. Creatinine is too unstable to be applicable. HCTZ, naproxen, codeine, OHCOT and carbamazepine under- or overestimate the population compared to the hydrochemical population estimates but showed consistent results across weekdays. The consumption of cannabis, cocaine, heroin and bufotenine in Valencia was estimated for a week using different population calculations. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan
2012-01-01
The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example the -6dB contours of measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined where one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6dB rule for delamination sizing.
Almiron-Roig, Eva; Aitken, Amanda; Galloway, Catherine
2017-01-01
Context: Dietary assessment in minority ethnic groups is critical for surveillance programs and for implementing effective interventions. A major challenge is the accurate estimation of portion sizes for traditional foods and dishes. Objective: The aim of this systematic review was to assess records published up to 2014 describing a portion-size estimation element (PSEE) applicable to the dietary assessment of UK-residing ethnic minorities. Data sources, selection, and extraction: Electronic databases, internet sites, and theses repositories were searched, generating 5683 titles, from which 57 eligible full-text records were reviewed. Data analysis: Forty-two publications about minority ethnic groups (n = 20) or autochthonous populations (n = 22) were included. The most common PSEEs (47%) were combination tools (eg, food models and portion-size lists), followed by portion-size lists in questionnaires/guides (19%) and image-based and volumetric tools (17% each). Only 17% of PSEEs had been validated against weighed data. Conclusions: When developing ethnic-specific dietary assessment tools, it is important to consider customary portion sizes by sex and age, traditional household utensil usage, and population literacy levels. Combining multiple PSEEs may increase accuracy, but such methods require validation. PMID:28340101
A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions
Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.
2005-01-01
Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is found to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average 2D/3D ratio of the means is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
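The recommended conversion above reduces to multiplying the measured 2D mean by the reciprocal factor 1/(π/4) = 4/π ≈ 1.273. A minimal sketch; the function name and sample value are illustrative, not from the paper:

```python
# Illustrative sketch of the recommended conversion: multiply the measured
# mean 2D section radius by 4/pi (about 1.273) to estimate the 3D mean.
import math

def estimate_3d_mean_radius(mean_2d_radius):
    """Estimate mean sphere radius from the mean radius of random 2D sections."""
    return mean_2d_radius * 4.0 / math.pi

print(round(estimate_3d_mean_radius(0.5), 3))  # 0.637
```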
The Petersen-Lincoln estimator and its extension to estimate the size of a shared population.
Chao, Anne; Pan, H-Y; Chiang, Shu-Chuan
2008-12-01
The Petersen-Lincoln estimator has been used to estimate the size of a population in a single mark-release experiment. However, the estimator is not valid when the capture sample and recapture sample are not independent. We provide an intuitive interpretation for "independence" between samples based on 2 × 2 categorical data formed by capture/non-capture in each of the two samples. From the interpretation, we review a general measure of "dependence" and quantify the correlation bias of the Petersen-Lincoln estimator when two types of dependence (local list dependence and heterogeneity of capture probability) exist. An important implication in the census undercount problem is that instead of using a post enumeration sample to assess the undercount of a census, one should conduct a prior enumeration sample to avoid correlation bias. We extend the Petersen-Lincoln method to the case of two populations. A new estimator of the size of the shared population is proposed and its variance derived. We discuss a special case where the correlation bias of the proposed estimator due to dependence between samples vanishes. The proposed method is applied to a study of the relapse rate of illicit drug use in Taiwan. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim).
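The basic single-population estimator the abstract builds on is n1·n2/m. A minimal sketch, with the common Chapman bias-corrected variant shown for comparison; the paper's two-population extension and variance formula are not reproduced here, and all counts below are invented:

```python
# Illustrative sketch of the Petersen-Lincoln estimate and Chapman's
# bias-corrected variant. Counts are invented for illustration.
def petersen_lincoln(n1, n2, m):
    """n1 marked in first sample, n2 caught in second, m recaptured."""
    return n1 * n2 / m

def chapman(n1, n2, m):
    """Bias-corrected variant, defined even when no recaptures occur."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(petersen_lincoln(200, 150, 30))   # 1000.0
print(round(chapman(200, 150, 30), 1))  # 978.1
```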
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
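As a rough illustration of the normal-distribution setting described above: given assumed marker distributions for the two groups, an optimum threshold can be found by minimising an expected-cost function numerically. The distributions, cost weights, prevalence, and grid below are assumptions for illustration, not values from the paper:

```python
# Illustrative sketch only: optimal threshold for two normal marker
# distributions by numerical minimisation of expected decision cost.
from statistics import NormalDist

def expected_cost(t, healthy, diseased, c_fp=1.0, c_fn=1.0, prev=0.5):
    fp = 1 - healthy.cdf(t)  # false-positive rate: non-diseased above threshold
    fn = diseased.cdf(t)     # false-negative rate: diseased below threshold
    return (1 - prev) * c_fp * fp + prev * c_fn * fn

healthy, diseased = NormalDist(0, 1), NormalDist(2, 1)
candidates = [t / 100 for t in range(-200, 400)]  # crude grid, -2.00 to 3.99
best = min(candidates, key=lambda t: expected_cost(t, healthy, diseased))
print(round(best, 2))  # 1.0 (midpoint, for equal costs, prevalence, variances)
```

With unequal misclassification costs or prevalence, the minimum shifts toward the cheaper error, which is the point of optimising a cost function rather than fixing the threshold at the distribution midpoint.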
Chandra, Hukum; Aditya, Kaustav; Sud, U. C.
2018-01-01
Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202
The interrupted power law and the size of shadow banking.
Fiaschi, Davide; Kondor, Imre; Marsili, Matteo; Volpati, Valerio
2014-01-01
Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range, that is "interrupted" by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale. This makes our findings of an "interrupted" Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate, which we propose as a shadow banking index, compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity.
Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan
2017-02-01
To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
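The repeated-sampling procedure described above can be sketched as follows: repeatedly draw samples of a given size from a large cohort and record how an estimate such as sensitivity varies. The synthetic cohort proportions below are invented, not the study's data; the point is only that the spread of estimates narrows as sample size grows:

```python
# Illustrative sketch of the repeated-sampling idea: estimates from small
# contingency tables vary widely, stabilising as sample size grows.
# Synthetic cohort: 50% condition prevalence, ~80% sensitivity,
# ~10% false-positive rate (all assumptions).
import random

random.seed(1)
cohort = ([(True, random.random() < 0.8) for _ in range(4000)] +
          [(False, random.random() < 0.1) for _ in range(4000)])

def sensitivity(sample):
    positives = [test for diseased, test in sample if diseased]
    return sum(positives) / len(positives)

spreads = {}
for n in (100, 400, 1600):
    estimates = [sensitivity(random.sample(cohort, n)) for _ in range(100)]
    spreads[n] = max(estimates) - min(estimates)
    print(n, round(spreads[n], 2))  # range of estimates narrows as n grows
```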
Using flow cytometry to estimate pollen DNA content: improved methodology and applications
Kron, Paul; Husband, Brian C.
2012-01-01
Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81 % of species and the higher best practice standards for cell cycle analysis in 51 %. In 41 % of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1·5 % or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2·5 %. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. 
PMID:22875815
In vivo lateral blood flow velocity measurement using speckle size estimation.
Xu, Tiantian; Hozan, Mohsen; Bashford, Gregory R
2014-05-01
In previous studies, we proposed blood flow measurement using speckle size estimation, which estimates the lateral component of blood flow within a single image frame based on the observation that the speckle pattern corresponding to blood reflectors (typically red blood cells) stretches (i.e., is "smeared") if blood flow is in the same direction as the electronically controlled transducer line selection in a 2-D image. In this observational study, the clinical viability of ultrasound blood flow velocity measurement using speckle size estimation was investigated and compared with that of conventional spectral Doppler of carotid artery blood flow data collected from human patients in vivo. Ten patients (six male, four female) were recruited. Right carotid artery blood flow data were collected in an interleaved fashion (alternating Doppler and B-mode A-lines) with an Antares Ultrasound Imaging System and transferred to a PC via the Axius Ultrasound Research Interface. The scanning velocity was 77 cm/s, and a 4-s interval of flow data was collected from each subject to cover three to five complete cardiac cycles. Conventional spectral Doppler data were collected simultaneously to compare with estimates made by speckle size estimation. The results indicate that the peak systolic velocities measured with the two methods are comparable (within ±10%) if the scan velocity is greater than or equal to the flow velocity. When scan velocity is slower than peak systolic velocity, the speckle stretch method asymptotes to the scan velocity. Thus, the speckle stretch method is able to accurately measure pure lateral flow, which conventional Doppler cannot do. In addition, an initial comparison of the speckle size estimation and color Doppler methods with respect to computational complexity and data acquisition time indicated potential time savings in blood flow velocity estimation using speckle size estimation.
Further studies are needed for calculation of the speckle stretch method across a field of view and combination with an appropriate axial flow estimator. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Two ways of estimating the euro value of the illicit market for cannabis in France.
Legleye, Stephane; Ben Lakhdar, Christian; Spilka, Stanislas
2008-09-01
The most recent general-population health surveys are used to estimate the annual market size for cannabis in France in 2005. Two estimation methods are proposed: the first based on reported consumption, the other on reported expenditure on cannabis. The annual sales figure for cannabis in France is between 746 and 832 million euro. Men's expenditure accounts for between 80 and 85% of total expenditure, and those aged between 15 and 24 years account for the greatest share of the cannabis market, between 57 and 60% depending upon the method. According to these estimates, consumers' average annual expenditure on cannabis in France is around 202 euro, compared to estimates of 124 euro for New Zealand and Holland and 362 euro for the United States.
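Both estimation routes reduce to simple arithmetic. A minimal sketch in Python; only the ~202 euro average annual spend comes from the abstract, while the user count, quantity, and price figures below are hypothetical illustrations:

```python
def market_consumption_based(users, mean_annual_grams, price_per_gram_eur):
    """Consumption route: number of users x mean annual quantity x street price."""
    return users * mean_annual_grams * price_per_gram_eur

def market_expenditure_based(users, mean_annual_spend_eur):
    """Expenditure route: number of users x mean reported annual spend."""
    return users * mean_annual_spend_eur

# Hypothetical illustration: 4 million consumers spending ~202 euro/year
# lands near the 746-832 million euro range reported for 2005.
```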
SpotCaliper: fast wavelet-based spot detection with accurate size estimation.
Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael
2016-04-15
SpotCaliper is a novel wavelet-based image-analysis software package providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user can edit the results by modifying the measurements (in a semi-automated way) and extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. SpotCaliper is available at http://bigwww.epfl.ch/algorithms/spotcaliper/. Contact: zsuzsanna.puspoki@epfl.ch. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Loanwords and Vocabulary Size Test Scores: A Case of Different Estimates for Different L1 Learners
ERIC Educational Resources Information Center
Laufer, Batia; McLean, Stuart
2016-01-01
The article investigated how the inclusion of loanwords in vocabulary size tests affected the test scores of two L1 groups of EFL learners: Hebrew and Japanese. New BNC- and COCA-based vocabulary size tests were constructed in three modalities: word form recall, word form recognition, and word meaning recall. Depending on the test modality, the…
O'Brien, Jake William; Banks, Andrew Phillip William; Novic, Andrew Joseph; Mueller, Jochen F; Jiang, Guangming; Ort, Christoph; Eaglesham, Geoff; Yuan, Zhiguo; Thai, Phong K
2017-04-04
A key uncertainty of wastewater-based epidemiology is the size of the population that contributed to a given wastewater sample. We previously developed and validated a Bayesian inference model to estimate population size based on 14 population markers which: (1) are easily measured and (2) have mass loads that correlate with population size. However, the potential uncertainty of the model prediction due to in-sewer degradation of these markers was not evaluated. In this study, we addressed this gap by testing their stability under sewer conditions and assessed whether degradation impacts the model estimates. Five markers, which formed the core of our model, were stable in the sewers while the others were not. Our evaluation showed that the presence of unstable population markers in the model did not decrease the precision of the population estimates, provided that stable markers such as acesulfame remained in the model. However, to achieve the minimum uncertainty in population estimates, we propose that the core markers to be included in population models for other sites should meet two additional criteria: (3) negligible degradation in wastewater, to ensure the stability of chemicals during sample collection; and (4) less than 10% in-sewer degradation during the mean residence time of the sewer network.
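The core of any marker-based population estimate is a mass-balance division; the paper's Bayesian model combines many markers, but the single-marker step can be sketched as below. The simple averaging across markers is a crude stand-in for the Bayesian combination, and all parameter names and values are hypothetical:

```python
def population_from_marker(mass_load_mg_per_day, per_capita_mg_per_day):
    """Point estimate of the contributing population from one stable marker:
    the measured daily mass load in the wastewater sample divided by the
    assumed per-capita daily excretion of that marker."""
    return mass_load_mg_per_day / per_capita_mg_per_day

def population_estimate(marker_loads, per_capita_rates):
    """Combine several markers by averaging their single-marker estimates
    (a crude stand-in for the paper's Bayesian inference model)."""
    ests = [load / rate for load, rate in zip(marker_loads, per_capita_rates)]
    return sum(ests) / len(ests)
```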
Sampling effort and estimates of species richness based on prepositioned area electrofisher samples
Bowen, Z.H.; Freeman, Mary C.
1998-01-01
Estimates of species richness based on electrofishing data are commonly used to describe the structure of fish communities. One electrofishing method for sampling riverine fishes that has become popular in the last decade is the prepositioned area electrofisher (PAE). We investigated the relationship between sampling effort and fish species richness at seven sites in the Tallapoosa River system, USA, based on 1,400 PAE samples collected during 1994 and 1995. First, we estimated species richness at each site using the first-order jackknife and compared observed values for species richness and jackknife estimates of species richness to estimates based on historical collection data. Second, we used a permutation procedure and nonlinear regression to examine rates of species accumulation. Third, we used regression to predict the number of PAE samples required to collect the jackknife estimate of species richness at each site during 1994 and 1995. We found that jackknife estimates of species richness generally were less than or equal to estimates based on historical collection data. The relationship between PAE electrofishing effort and species richness in the Tallapoosa River was described by a positive asymptotic curve, as found in other studies using different electrofishing gears in wadable streams. Results from nonlinear regression analyses indicated that rates of species accumulation were variable among sites and between years. Across sites and years, predictions of sampling effort required to collect jackknife estimates of species richness suggested that doubling sampling effort (to 200 PAEs) would typically increase observed species richness by not more than six species. However, sampling effort beyond about 60 PAE samples typically increased observed species richness by < 10%. We recommend using historical collection data in conjunction with a preliminary sample size of at least 70 PAE samples to evaluate estimates of species richness in medium-sized rivers.
Seventy PAE samples should provide enough information to describe the relationship between sampling effort and species richness and thus facilitate evaluation of a sampling effort.
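For reference, the first-order jackknife used above adds to the observed richness a term driven by species detected in exactly one sample. A minimal sketch of the standard estimator (not the authors' code):

```python
def jackknife1_richness(sample_species):
    """First-order jackknife estimate of species richness.

    sample_species: list of sets, one set of observed species per sample.
    S_jack = S_obs + f1 * (k - 1) / k, where f1 is the number of species
    seen in exactly one sample and k is the number of samples.
    """
    k = len(sample_species)
    counts = {}
    for sample in sample_species:
        for sp in sample:
            counts[sp] = counts.get(sp, 0) + 1
    s_obs = len(counts)                                  # observed richness
    f1 = sum(1 for c in counts.values() if c == 1)       # "uniques"
    return s_obs + f1 * (k - 1) / k
```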
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
How Much Can Remotely-Sensed Natural Resource Inventories Benefit from Finer Spatial Resolutions?
NASA Astrophysics Data System (ADS)
Hou, Z.; Xu, Q.; McRoberts, R. E.; Ståhl, G.; Greenberg, J. A.
2017-12-01
For remote sensing facilitated natural resource inventories, the effects of spatial resolution in the form of pixel size and the effects of subpixel information on estimates of population parameters were evaluated by comparing results obtained using Landsat 8 and RapidEye auxiliary imagery. The study area was in Burkina Faso, and the variable of interest was the stem volume (m3/ha) convertible to the woodland aboveground biomass. A sample consisting of 160 field plots was selected and measured from the population following a two-stage sampling design. Models were fit using weighted least squares; the population mean, μ, and the variance of the estimator of the population mean, Var(μ̂), were estimated in two inferential frameworks, model-based and model-assisted, and compared; for each framework, Var(μ̂) was estimated both analytically and empirically. Empirical variances were estimated with bootstrapping that takes clustering effects into account for resampling. The primary results were twofold. First, for the effects of spatial resolution and subpixel information, four conclusions are relevant: (1) finer spatial resolution imagery indeed contributes to greater precision for estimators of population parameters, but this increase is slight (at most 20%), considering that RapidEye data are 36 times finer in resolution than Landsat 8 data; (2) subpixel information on texture is marginally beneficial when it comes to making inference for populations of large areas; (3) cost-effectiveness is more favorable for the free-of-charge Landsat 8 imagery than for RapidEye imagery; and (4) for a given plot size, candidate remote sensing auxiliary datasets are more cost-effective when their spatial resolutions are similar to the plot size than with much finer alternatives.
Second, for the comparison between estimators, three conclusions are relevant: (1) model-based variance estimates are consistent with each other and about half as large as stabilized model-assisted estimates, suggesting superior effectiveness of model-based inference to model-assisted inference; (2) bootstrapping is an effective alternative to analytical variance estimators; and (3) prediction accuracy expressed by RMSE is useful for screening candidate models to be used for population inferences.
Lyons, James E.; Kendall, William L.; Royle, J. Andrew; Converse, Sarah J.; Andres, Brad A.; Buchanan, Joseph B.
2016-01-01
We present a novel formulation of a mark–recapture–resight model that allows estimation of population size, stopover duration, and arrival and departure schedules at migration areas. Estimation is based on encounter histories of uniquely marked individuals and relative counts of marked and unmarked animals. We use a Bayesian analysis of a state–space formulation of the Jolly–Seber mark–recapture model, integrated with a binomial model for counts of unmarked animals, to derive estimates of population size and arrival and departure probabilities. We also provide a novel estimator for stopover duration that is derived from the latent state variable representing the interim between arrival and departure in the state–space model. We conduct a simulation study of field sampling protocols to understand the impact of superpopulation size, proportion marked, and number of animals sampled on bias and precision of estimates. Simulation results indicate that relative bias of estimates of the proportion of the population with marks was low for all sampling scenarios and never exceeded 2%. Our approach does not require enumeration of all unmarked animals detected or direct knowledge of the number of marked animals in the population at the time of the study. This provides flexibility and potential application in a variety of sampling situations (e.g., migratory birds, breeding seabirds, sea turtles, fish, pinnipeds, etc.). Application of the methods is demonstrated with data from a study of migratory sandpipers.
Modelling size-fractionated primary production in the Atlantic Ocean from remote sensing
NASA Astrophysics Data System (ADS)
Brewin, Robert J. W.; Tilstone, Gavin H.; Jackson, Thomas; Cain, Terry; Miller, Peter I.; Lange, Priscila K.; Misra, Ankita; Airs, Ruth L.
2017-11-01
Marine primary production influences the transfer of carbon dioxide between the ocean and atmosphere, and the availability of energy for the pelagic food web. Both the rate and the fate of organic carbon from primary production are dependent on phytoplankton size. A key aim of the Atlantic Meridional Transect (AMT) programme has been to quantify biological carbon cycling in the Atlantic Ocean and measurements of total primary production have been routinely made on AMT cruises, as well as additional measurements of size-fractionated primary production on some cruises. Measurements of total primary production collected on the AMT have been used to evaluate remote-sensing techniques capable of producing basin-scale estimates of primary production. Though models exist to estimate size-fractionated primary production from satellite data, these have not been well validated in the Atlantic Ocean, and have been parameterised using measurements of phytoplankton pigments rather than direct measurements of phytoplankton size structure. Here, we re-tune a remote-sensing primary production model to estimate production in three size fractions of phytoplankton (<2 μm, 2-10 μm and >10 μm) in the Atlantic Ocean, using measurements of size-fractionated chlorophyll and size-fractionated photosynthesis-irradiance experiments conducted on AMT 22 and 23 using sequential filtration-based methods. The performance of the remote-sensing technique was evaluated using: (i) independent estimates of size-fractionated primary production collected on a number of AMT cruises using 14C on-deck incubation experiments and (ii) Monte Carlo simulations. Considering uncertainty in the satellite inputs and model parameters, we estimate an average model error of between 0.27 and 0.63 for log10-transformed size-fractionated production, with lower errors for the small size class (<2 μm), higher errors for the larger size classes (2-10 μm and >10 μm), and errors generally higher in oligotrophic waters. 
Application to satellite data in 2007 suggests the contribution of cells <2 μm and >2 μm to total primary production is approximately equal in the Atlantic Ocean.
Density estimates of monarch butterflies overwintering in central Mexico
Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena
2017-01-01
Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
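The mixture-of-estimates idea can be sketched with a small Monte Carlo: draw with equal weight from each component study's sampling distribution and summarize the pooled draws by mean, median, and a 95% interval. The component (mean, SD) pairs below are hypothetical stand-ins, not the six published density estimates:

```python
import random
import statistics

def mixture_summary(components, draws=100_000, seed=1):
    """Monte Carlo summary of an equal-weight mixture of study estimates.

    components: list of (mean, sd) pairs, one per study.
    Returns (mean, median, 2.5th percentile, 97.5th percentile).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(draws):
        mu, sd = rng.choice(components)                  # pick a study at random
        samples.append(max(0.0, rng.gauss(mu, sd)))      # densities cannot be negative
    cuts = statistics.quantiles(samples, n=40)           # cuts[0] ~ 2.5%, cuts[-1] ~ 97.5%
    return (statistics.fmean(samples), statistics.median(samples),
            cuts[0], cuts[-1])
```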
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
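The conditional-versus-marginal distinction can be illustrated with a small simulation under a hypothetical two-stage rule (stop at n1 if the interim mean exceeds a fixed boundary): the sample average looks clearly biased within each stopping branch, while its marginal bias is small. The rule and all numeric values below are illustrative, not the authors' setup:

```python
import random

def simulate_two_stage(mu=0.0, sigma=1.0, n1=50, n2=100, boundary=0.2,
                       reps=20_000, seed=7):
    """Sample average after a hypothetical two-stage group sequential trial:
    stop at n1 if the interim mean exceeds `boundary`, otherwise continue
    to n2 and average all the data.  Returns (marginal mean of the
    estimator, mean given early stopping, mean given continuation)."""
    rng = random.Random(seed)
    overall, early, late = [], [], []
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(n2)]
        interim = sum(x[:n1]) / n1
        est = interim if interim > boundary else sum(x) / n2
        (early if interim > boundary else late).append(est)
        overall.append(est)
    mean = lambda v: sum(v) / len(v)
    return mean(overall), mean(early), mean(late)
```

With a true mean of zero, the early-stopping branch overestimates and the continuation branch slightly underestimates, yet the marginal average stays close to the truth.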
Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence
Claggett, B.; Lagakos, S.W.; Wang, R.
2011-01-01
Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this paper, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. PMID:21668904
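For orientation, the classic cross-sectional ("snapshot") incidence estimator that these designs build on divides the count of recent infections by the mean window period times the number at risk. A minimal sketch of that textbook form, not the AF estimator itself:

```python
def snapshot_incidence(n_at_risk, n_recent, mean_window_years):
    """Classic cross-sectional HIV incidence estimator: annualized
    incidence ~ n_recent / (mean window period x number at risk).
    The AF estimator in the paper avoids needing mean_window_years."""
    return n_recent / (mean_window_years * n_at_risk)
```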
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimation of the particle-size distribution and particle injection height when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation for the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
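The regression idea (vary the lognormal size parameters until modeled and measured values agree in a least-squares sense) can be sketched with a toy forward model standing in for the ADPIC dispersion code. The bins, grids, and grid-search fit below are hypothetical simplifications of the paper's nonlinear regression:

```python
import math

def lognormal_mass_fraction(d_lo, d_hi, mmd, gsd):
    """Mass fraction in the size interval [d_lo, d_hi] for a lognormal
    distribution with mass-median diameter mmd and geometric standard
    deviation gsd."""
    def cdf(d):
        return 0.5 * (1 + math.erf(math.log(d / mmd) /
                                   (math.sqrt(2) * math.log(gsd))))
    return cdf(d_hi) - cdf(d_lo)

def fit_size_distribution(bins, observed, mmd_grid, gsd_grid):
    """Least-squares grid search for (mmd, gsd); a crude stand-in for the
    paper's nonlinear regression over dispersion-model input parameters."""
    best = None
    for mmd in mmd_grid:
        for gsd in gsd_grid:
            sse = sum((lognormal_mass_fraction(lo, hi, mmd, gsd) - obs) ** 2
                      for (lo, hi), obs in zip(bins, observed))
            if best is None or sse < best[0]:
                best = (sse, mmd, gsd)
    return best[1], best[2]
```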
Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun
2014-09-01
In this study, we have demonstrated the feasibility of a semi-quantitative approach for the estimation of cellular SiO2 nanoparticles (NPs), which is based on flow cytometry measurements of their normalized side scattering intensity. In order to improve our understanding of the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed on HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, a semi-quantitative method for estimating cellular SiO2 NPs from their normalized side scattering and core diameters was proposed, which can be applied to the determination of cellular SiO2 NPs within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.
Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar
2014-01-01
An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg that resulted in <2% error in ground distance rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.
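The final pixel-to-height conversion reduces to a scale factor plus a bias correction; a simplified sketch using the ~0.24 cm pixel size and ~0.5 cm underestimation reported above. The linear altitude rescaling and all parameter names are hypothetical simplifications of the paper's distortion-corrected procedure:

```python
def shell_height_cm(pixel_height, pixel_size_cm=0.24,
                    altitude_m=None, ref_altitude_m=None, bias_cm=0.5):
    """Estimate scallop shell height from a pixel count (simplified sketch).

    pixel_size_cm: ground size of one pixel at the reference altitude
    (~0.24 cm, from the paper).  Heights are rescaled linearly by the
    ratio of actual to reference camera altitude (hypothetical form),
    then the reported ~0.5 cm underestimation is added back.
    """
    h = pixel_height * pixel_size_cm
    if altitude_m is not None and ref_altitude_m:
        h *= altitude_m / ref_altitude_m   # farther from bottom -> larger ground pixel
    return h + bias_cm
```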
Lincoln estimates of mallard (Anas platyrhynchos) abundance in North America.
Alisauskas, Ray T; Arnold, Todd W; Leafloor, James O; Otis, David L; Sedinger, James S
2014-01-01
Estimates of range-wide abundance, harvest, and harvest rate are fundamental for sound inferences about the role of exploitation in the dynamics of free-ranging wildlife populations, but the reliability of existing survey methods for abundance estimation is rarely assessed using alternative approaches. North American mallard populations have been surveyed each spring since 1955 using internationally coordinated aerial surveys, but population size can also be estimated with Lincoln's method using banding and harvest data. We estimated late summer population size of adult and juvenile male and female mallards in western, midcontinent, and eastern North America using Lincoln's method of dividing (i) the total estimated harvest, Ĥ, by the estimated harvest rate, ĥ, calculated as (ii) the direct band recovery rate, f̂, divided by (iii) the band reporting rate, ρ̂. Our goal was to compare estimates based on Lincoln's method with traditional estimates based on aerial surveys. Lincoln estimates of adult males and females alive in the period June-September were 4.0 (range: 2.5-5.9), 1.8 (range: 0.6-3.0), and 1.8 (range: 1.3-2.7) times larger than respective aerial survey estimates for the western, midcontinent, and eastern mallard populations, and the two population estimates were only modestly correlated with each other (western: r = 0.70, 1993-2011; midcontinent: r = 0.54, 1961-2011; eastern: r = 0.50, 1993-2011). Higher Lincoln estimates are predictable given that the geographic scope of inference from Lincoln estimates is the entire population range, whereas sampling frames for aerial surveys are incomplete. Although each estimation method has a number of important potential biases, our review suggests that underestimation of total population size by aerial surveys is the most likely explanation.
In addition to providing measures of total abundance, Lincoln's method provides estimates of fecundity and population sex ratio and could be used in integrated population models to provide greater insights about population dynamics and management of North American mallards and most other harvested species.
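As an illustration of the arithmetic behind Lincoln's method (with invented numbers, not the paper's data), the estimator divides total estimated harvest by a harvest rate derived from the band recovery and reporting rates:

```python
# Lincoln's method: N-hat = H-hat / h-hat, where the harvest rate
# h-hat = f-hat / rho-hat (direct band recovery rate over band reporting rate).
def lincoln_estimate(total_harvest, band_recovery_rate, band_reporting_rate):
    harvest_rate = band_recovery_rate / band_reporting_rate
    return total_harvest / harvest_rate

# Hypothetical example: 1.2 M birds harvested, 6% direct band recovery,
# 60% band reporting rate -> harvest rate of 10%, so ~12 M birds.
print(round(lincoln_estimate(1_200_000, 0.06, 0.60)))  # 12000000
```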
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
Using Grain-Size Distribution Methods for Estimation of Air Permeability.
Wang, Tiejun; Huang, Yuanyang; Chen, Xunhong; Chen, Xi
2016-01-01
Knowledge of air permeability (ka) at dry conditions is critical for the use of air flow models in porous media; however, it is usually difficult and time-consuming to measure ka at dry conditions. It is thus desirable to estimate ka at dry conditions from other readily obtainable properties. In this study, the feasibility of using information derived from grain-size distributions (GSDs) for estimating ka at dry conditions was examined. Fourteen GSD-based equations originally developed for estimating saturated hydraulic conductivity were tested using ka measured at dry conditions in both undisturbed and disturbed river sediment samples. On average, the estimated ka from all the equations, except for the method of Slichter, differed by less than ±4 times from the measured ka for both undisturbed and disturbed groups. In particular, for the two sediment groups, the results given by the methods of Terzaghi and Hazen-modified were comparable to the measured ka. In addition, two methods (Barr and Beyer) for the undisturbed samples and one method (Hazen-original) for the disturbed samples were also able to produce comparable ka estimates. Moreover, after adjusting the values of the coefficient C in the GSD-based equations, the estimation of ka was significantly improved, with the differences between the measured and estimated ka less than ±4% on average (except for the method of Barr). As demonstrated by this study, GSD-based equations may provide a promising and efficient way to estimate ka at dry conditions. © 2015, National Ground Water Association.
Low is large: spatial location and pitch interact in voice-based body size estimation.
Pisanski, Katarzyna; Isenstein, Sari G E; Montano, Kelyn J; O'Connor, Jillian J M; Feinberg, David R
2017-05-01
The binding of incongruent cues poses a challenge for multimodal perception. Indeed, although taller objects emit sounds from higher elevations, low-pitched sounds are perceptually mapped both to large size and to low elevation. In the present study, we examined how these incongruent vertical spatial cues (up is more) and pitch cues (low is large) to size interact, and whether similar biases influence size perception along the horizontal axis. In Experiment 1, we measured listeners' voice-based judgments of human body size using pitch-manipulated voices projected from a high versus a low, and a right versus a left, spatial location. Listeners associated low spatial locations with largeness for lowered-pitch but not for raised-pitch voices, demonstrating that pitch overrode vertical-elevation cues. Listeners associated rightward spatial locations with largeness, regardless of voice pitch. In Experiment 2, listeners performed the task while sitting or standing, allowing us to examine self-referential cues to elevation in size estimation. Listeners associated vertically low and rightward spatial cues with largeness more for lowered- than for raised-pitch voices. These correspondences were robust to sex (of both the voice and the listener) and head elevation (standing or sitting); however, horizontal correspondences were amplified when participants stood. Moreover, when participants were standing, their judgments of how much larger men's voices sounded than women's increased when the voices were projected from the low speaker. Our results provide novel evidence for a multidimensional spatial mapping of pitch that is generalizable to human voices and that affects performance in an indirect, ecologically relevant spatial task (body size estimation). These findings suggest that crossmodal pitch correspondences evoke both low-level and higher-level cognitive processes.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
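For context (a sketch of the classic estimator that the paper generalizes, not of the paper's effort-based model), the two-sample change-in-ratio estimator recovers pre-removal population size from the change in subclass proportions caused by known removals:

```python
def cir_estimate(p1, p2, removals_x, removals_total):
    """Classic two-sample change-in-ratio estimator (assumes equal
    encounter probabilities among subclasses): N = (Rx - R*p2) / (p1 - p2)."""
    return (removals_x - removals_total * p2) / (p1 - p2)

# Hypothetical example: subclass x falls from 50% to 40% of the population
# after removing 300 x-individuals out of 400 total removals.
print(round(cir_estimate(0.50, 0.40, 300, 400)))  # 1400 animals pre-removal
```

Sanity check: with 1400 animals (700 of each subclass), removing 300 x and 100 others leaves 400 of 1000, i.e. exactly the observed 40%.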
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
Eklöf, Johan; Austin, Åsa; Bergström, Ulf; Donadi, Serena; Eriksson, Britas D.H.K.; Hansen, Joakim; Sundblad, Göran
2017-01-01
Background: Organism biomass is one of the most important variables in ecological studies, making biomass estimation one of the most common laboratory tasks. Biomass of small macroinvertebrates is usually estimated as dry mass or ash-free dry mass (hereafter 'DM' and 'AFDM') per sample; a laborious and time-consuming process that can often be sped up using easily measured and reliable proxy variables such as body size or wet (fresh) mass. Another common way of estimating AFDM (one of the most accurate but also most time-consuming measures of biologically active tissue mass) is the use of AFDM/DM ratios as conversion factors. So far, however, these ratios typically ignore the possibility that the relative mass of biologically active vs. non-active support tissue (e.g., protective exoskeleton or shell), and therefore also the AFDM/DM ratio, may change with body size, as previously shown for taxa such as spiders, vertebrates and trees. Methods: We collected aquatic, epibenthic macroinvertebrates (>1 mm) in 32 shallow bays along a 360 km stretch of the Swedish coast of the Baltic Sea, one of the largest brackish water bodies on Earth. We then estimated statistical relationships between body size (length or height in mm), body dry mass and ash-free dry mass for 14 of the most common taxa: five gastropods, three bivalves, three crustaceans and three insect larvae. Finally, we statistically estimated the potential influence of body size on the AFDM/DM ratio per taxon. Results: For most taxa, non-linear regression models describing the power relationship between body size and (i) DM and (ii) AFDM fit the data well (as indicated by low SE and high R²). Moreover, for more than half of the taxa studied (including the vast majority of the shelled molluscs), body size had a negative influence on organism AFDM/DM ratios.
Discussion: The good fit of the modelled power relationships suggests that the constants reported here can be used to quickly estimate organism dry and ash-free dry mass based on body size, thereby freeing up considerable work resources. However, the considerable differences in constants between taxa emphasize the need for taxon-specific relationships, and the potential dangers associated with ignoring body size. The negative influence of body size on the AFDM/DM ratio found in a majority of the molluscs could be caused by increasingly thick shells with organism age, and/or spawning-induced loss of biologically active tissue in adults. Consequently, future studies utilizing AFDM/DM (and presumably also AFDM/wet mass) ratios should carefully assess the potential influence of body size to ensure more reliable estimates of organism body mass. PMID:28149685
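A minimal sketch (simulated data, invented constants — not the taxon-specific values from the study) of fitting the kind of power relationship DM = a·L^b described above, via ordinary least squares on the log-log scale:

```python
import numpy as np

# Simulate body lengths (mm) and dry masses following DM = a * L^b
# with multiplicative lognormal noise; a_true and b_true are invented.
rng = np.random.default_rng(0)
lengths = np.linspace(2.0, 20.0, 30)
a_true, b_true = 0.05, 2.8
dry_mass = a_true * lengths**b_true * rng.lognormal(0.0, 0.05, lengths.size)

# log(DM) = log(a) + b * log(L): a straight line on the log-log scale.
b_fit, log_a_fit = np.polyfit(np.log(lengths), np.log(dry_mass), 1)
a_fit = np.exp(log_a_fit)
print(a_fit, b_fit)  # recovers values close to a_true and b_true
```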
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
Area under the curve (AUC) C-peptide values from 2-h mixed-meal tolerance tests, from 481 individuals enrolled in 5 prior TrialNet studies of recent-onset type 1 diabetes and followed from baseline to 12 months after enrollment, were modelled to produce estimates of the rate of C-peptide loss and its variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies yields a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in observed-vs.-expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
The minimal important difference of exercise tests in severe COPD
Puhan, M.A.; Chandra, D.; Mosenifar, Z.; Ries, A.; Make, B.; Hansel, N.N.; Wise, R.A.; Sciurba, F.
2017-01-01
Our aim was to determine the minimal important difference (MID) for 6-min walk distance (6MWD) and maximal cycle exercise capacity (MCEC) in patients with severe chronic obstructive pulmonary disease (COPD). 1,218 patients enrolled in the National Emphysema Treatment Trial completed exercise tests before and after 4–6 weeks of pre-trial rehabilitation, and 6 months after randomisation to surgery or medical care. The St George’s Respiratory Questionnaire (domain and total scores) and University of California San Diego Shortness of Breath Questionnaire (total score) served as anchors for anchor-based MID estimates. In order to calculate distribution-based estimates, we used the standard error of measurement, Cohen’s effect size and the empirical rule effect size. Anchor-based estimates for the 6MWD were 18.9 m (95% CI 18.1–20.1 m), 24.2 m (95% CI 23.4–25.4 m), 24.6 m (95% CI 23.4–25.7 m) and 26.4 m (95% CI 25.4–27.4 m), which were similar to distribution-based MID estimates of 25.7, 26.8 and 30.6 m. For MCEC, anchor-based estimates for the MID were 2.2 W (95% CI 2.0–2.4 W), 3.2 W (95% CI 3.0–3.4 W), 3.2 W (95% CI 3.0–3.4 W) and 3.3 W (95% CI 3.0–3.5 W), while distribution-based estimates were 5.3 and 5.5 W. We suggest a MID of 26±2 m for 6MWD and 4±1 W for MCEC for patients with severe COPD. PMID:20693247
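Two of the distribution-based MID formulas named above can be sketched as follows; the SD and reliability values are assumptions chosen for illustration, not the trial's data:

```python
import math

def mid_sem(sd_baseline, reliability):
    """MID as one standard error of measurement: SD * sqrt(1 - r)."""
    return sd_baseline * math.sqrt(1.0 - reliability)

def mid_half_sd(sd_baseline):
    """MID as a moderate (Cohen) effect size of 0.5 * SD."""
    return 0.5 * sd_baseline

# Assumed values: baseline 6MWD SD of 60 m, test-retest reliability 0.85.
print(round(mid_sem(60.0, 0.85), 1), mid_half_sd(60.0))  # 23.2 30.0
```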
Location- and lesion-dependent estimation of mammographic background tissue complexity.
Avanaki, Ali; Espig, Kathryn; Kimpe, Tom
2017-01-01
We specify a notion of perceived background tissue complexity (BTC) that varies with lesion shape, lesion size, and lesion location in the image. We propose four unsupervised BTC estimators based on: perceived pre and postlesion similarity of images, lesion border analysis (LBA; conspicuous lesion should be brighter than its surround), tissue anomaly detection, and local energy. The latter two are existing methods adapted for location- and lesion-dependent BTC estimation. For evaluation, we ask human observers to measure BTC (threshold visibility amplitude of a given lesion inserted) at specified locations in a mammogram. As expected, both human measured and computationally estimated BTC vary with lesion shape, size, and location. BTCs measured by different human observers are correlated (ρ=0.67). BTC estimators are correlated to each other (0.84<ρ<0.95) and less so to human observers (ρ≤0.81). With change in lesion shape or size, LBA estimated BTC changes in the same direction as human measured BTC. Proposed estimators can be generalized to other modalities (e.g., breast tomosynthesis) and used as-is or customized to a specific human observer, to construct BTC-aware model observers with applications, such as optimization of contrast-enhanced medical imaging systems and creation of a diversified image dataset with characteristics of a desired population.
A Model-Based Approach to Inventory Stratification
Ronald E. McRoberts
2006-01-01
Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities to counties to States and Provinces. Classified satellite imagery has been shown to be an effective source of ancillary data that, when used with stratified estimation techniques, contributes to increased precision with little corresponding increase...
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations, in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed- and random-effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat
Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.
2016-01-01
Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using the perimeter-based protocol fell within 10% of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20% of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial cover and vegetation height. The median time required to complete perimeter-based methods was less than 7% of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to those from more intensive methods.
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiao, Hongzhu; Rao, N.S.V.; Protopopescu, V.
Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness properties than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup-norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.
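For readers unfamiliar with the estimator, here is a minimal Nadaraya-Watson sketch in its classic Gaussian-kernel form (not the orthonormal-system variant analyzed in the record above):

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth):
    """Kernel-weighted average: y(x) = sum_i K((x-x_i)/h) * y_i / sum_i K((x-x_i)/h)."""
    weights = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)  # Gaussian kernel
    return np.sum(weights * y_train) / np.sum(weights)

# Recover f(x) = x^2 at x = 0.5 from noisy samples on [-1, 1].
rng = np.random.default_rng(1)
xs = np.linspace(-1.0, 1.0, 200)
ys = xs**2 + rng.normal(0.0, 0.05, xs.size)
estimate = nadaraya_watson(0.5, xs, ys, bandwidth=0.1)
print(estimate)  # close to the true value 0.25
```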
Near-Earth-object survey progress and population of small near-Earth asteroids
NASA Astrophysics Data System (ADS)
Harris, A.
2014-07-01
Estimating the total population of NEAs as a function of size and estimating the completion of surveys are equivalent tasks, since the total population is simply the number discovered divided by the estimated completion. I review the method of completion estimation based on the ratio of re-detected objects to total detections (known objects plus new discoveries). The method is quite general and can be used for population estimation of all sorts, from wildlife to various classes of solar system bodies. Since 2001, I have been making estimates of population and survey progress approximately every two years; my latest estimate includes NEA discoveries up to August 2012, and I plan to present an update at the meeting. Not all asteroids of a given size are equally easy to detect, because of specific orbital geometries. Thus a model of the orbital distribution is necessary, and computer simulations using those orbits are needed to establish the relation between the raw re-detection ratio and the actual completion fraction. This can be done for any sub-group of the population, allowing estimation of the population of a subgroup and the expected current completion. Once a reliable survey computer model has been developed and "calibrated" against actual survey re-detections versus size, it can be extrapolated to smaller sizes to estimate completion even at very small sizes where re-detections are rare or even zero. I have recently investigated the subgroup of extremely low encounter velocity NEAs, the class of interest for the Asteroid Redirect Mission (ARM) recently proposed by NASA. I found that asteroids of diameter ~10 m with Earth encounter velocities lower than 2.5 km/s are detected by current surveys nearly 1,000 times more efficiently than the general background of NEAs of that size. Thus the current completion for these slow relative velocity objects may be around 1%, compared to 10^-6 for objects of that size in the general velocity distribution.
Current surveys are nowhere near complete, but there may be fewer such objects than has been suggested. This conclusion is reinforced by the fact that at least a couple of such discovered objects are known to be not real asteroids but spent rocket bodies in heliocentric orbit, of which there are only on the order of a hundred. Brown et al. (Nature 503, 238-241, 2013) recently suggested that the population of small NEAs in the size range from roughly 5 to 50 meters in diameter may have been substantially under-estimated. To be sure, the greatest uncertainty in population estimates is in that range, since there are very few bolide events to use for estimation and the surveys are extremely incomplete there, so a discrepancy of a factor of 3 or so is not significant. However, the survey-based population estimate, carried to still smaller sizes where the bolide frequency becomes more secure, disagrees with the bolide estimate by even less than a factor of 3 and in fact intersects it at about 3 m diameter. On the other hand, the shallow-sloping size-frequency distribution derived from the sparse large-bolide data diverges badly from the survey estimates at sizes where the survey estimates become increasingly reliable, even by 100-200 m diameter. It appears that the bolide data provide a good "anchor" for the population in the size range up to about 5 m diameter, but above that one might do better simply connecting that population with a straight line (on a log-log plot) to the survey-determined population at larger sizes, 50-100 m diameter or so.
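The completion arithmetic described above reduces to a one-line calculation (the numbers below are illustrative, not actual survey statistics):

```python
def estimated_population(num_known, redetections, total_detections):
    """Total population = number discovered / completion, where completion
    is estimated from the ratio of re-detections to total detections."""
    completion = redetections / total_detections
    return num_known / completion

# Hypothetical survey year: 900 of 1,000 detections are re-detections of
# known objects, so completion is ~90% and the inferred total is ~1,111.
print(round(estimated_population(1000, 900, 1000)))  # 1111
```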
ServAR: An augmented reality tool to guide the serving of food.
Rollo, Megan E; Bucher, Tamara; Smith, Shamus P; Collins, Clare E
2017-05-12
Accurate estimation of food portion size is a difficult task. Visual cues are important mediators of portion size, and therefore technology-based aids may assist consumers when serving and estimating food portions. The current study evaluated the usability, and the impact on estimation error for standard food servings, of a novel augmented reality food serving aid, ServAR. Participants were randomised into one of three groups: 1) no information/aid (control); 2) verbal information on standard serving sizes; or 3) ServAR, an aid which overlaid virtual food servings onto a plate using a tablet computer. Participants were asked to estimate the standard serving sizes of nine foods (broccoli, carrots, cauliflower, green beans, kidney beans, potato, pasta, rice, and sweetcorn) using validated food replicas. Wilcoxon signed-rank tests compared median served weights of each food to reference standard serving size weights. Percentage error was used to compare serving size estimation accuracy between the three groups. All participants also performed a usability test using the ServAR tool to guide the serving of one randomly selected food. Ninety adults (78.9% female; mean (95% CI) age 25.8 (24.9-26.7) years; BMI 24.2 (23.2-25.2) kg/m²) completed the study. The median servings were significantly different from the reference portions for five foods in the ServAR group, compared to eight foods in the information-only group and seven foods in the control group. The cumulative proportion of total estimations per group within ±10%, ±25% and ±50% of the reference portion was greater for those using ServAR (30.7, 65.2 and 90.7%, respectively) than for the information-only group (19.6, 47.4 and 77.4%) and the control group (10.0, 33.7 and 68.9%). Participants generally found the ServAR tool easy to use and agreed that it showed potential to support optimal portion size selection. However, some refinements to the ServAR tool are required to improve the user experience.
Use of the augmented reality tool improved accuracy and consistency of estimating standard serve sizes compared to the information only and control conditions. ServAR demonstrates potential as a practical tool to guide the serving of food. Further evaluation across a broad range of foods, portion sizes and settings is warranted.
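The percentage-error and tolerance-band accuracy measures used above can be sketched as follows (the serving weights are invented for illustration):

```python
def percentage_error(served_g, reference_g):
    """Signed error of a served weight relative to the reference serving."""
    return 100.0 * (served_g - reference_g) / reference_g

def within_tolerance(errors, tol):
    """Proportion of estimates with absolute error within tol percent."""
    return len([e for e in errors if abs(e) <= tol]) / len(errors)

# Five hypothetical serving estimates (g) against an 80 g reference serving.
errors = [percentage_error(s, 80.0) for s in [73.0, 84.0, 100.0, 60.0, 81.0]]
print([within_tolerance(errors, t) for t in (10, 25, 50)])  # [0.6, 1.0, 1.0]
```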
A conceptual guide to detection probability for point counts and other count-based survey methods
D. Archibald McCallum
2005-01-01
Accurate and precise estimates of numbers of animals are vitally needed both to assess population status and to evaluate management decisions. Various methods exist for counting birds, but most of those used with territorial landbirds yield only indices, not true estimates of population size. The need for valid density estimates has spawned a number of models for...
Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen
2012-01-01
Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...
Selecting the optimum plot size for a California design-based stream and wetland mapping program.
Lackey, Leila G; Stein, Eric D
2014-04-01
Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.
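The design-based logic the study relies on, inferring total extent from a random subset of plots, can be sketched with a plain simple-random-sampling expansion estimator. This is a generic illustration under textbook assumptions, not the study's simulation protocol:

```python
import math

def estimate_total_extent(plot_areas, n_frame):
    """Simple SRS expansion estimator: total = N * sample mean,
    with its standard error (including the finite population
    correction). plot_areas: mapped wetland km^2 in each sampled
    plot; n_frame: number of plots in the sampling frame."""
    n = len(plot_areas)
    mean = sum(plot_areas) / n
    var = sum((y - mean) ** 2 for y in plot_areas) / (n - 1)
    total = n_frame * mean
    se = n_frame * math.sqrt(var / n) * math.sqrt(1 - n / n_frame)
    return total, se

# Three sampled plots out of a 100-plot frame (invented values):
print(estimate_total_extent([1.0, 2.0, 3.0], 100))
```

Smaller plots mean more sampling units for the same mapped area, which is one route to the precision gains the simulations report.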
Nilsen, Erlend B; Strand, Olav
2018-01-01
We developed a model for estimating demographic rates and population abundance based on multiple data sets revealing information about population age- and sex structure. Such models have previously been described in the literature as change-in-ratio models, but we extend the applicability of the models by i) using time series data allowing the full temporal dynamics to be modelled, by ii) casting the model in an explicit hierarchical modelling framework, and by iii) estimating parameters based on Bayesian inference. Based on sensitivity analyses we conclude that the approach developed here is able to obtain estimates of demographic rates with high precision whenever unbiased data on population structure are available. Our simulations revealed that this was true also when data on population abundance are not available or not included in the modelling framework. Nevertheless, when data on population structure are biased due to different observability of different age- and sex categories, this will affect estimates of all demographic rates. Estimates of population size are particularly sensitive to such biases, whereas demographic rates can be relatively precisely estimated even with biased observation data as long as the bias is not severe. We then use the models to estimate demographic rates and population abundance for two Norwegian reindeer (Rangifer tarandus) populations where age-sex data were available for all harvested animals, and where population structure surveys were carried out in early summer (after calving) and late fall (after hunting season), and population size is counted in winter. We found that demographic rates were similar regardless of whether we included population count data in the modelling, but that the estimated population size is affected by this decision.
This suggests that monitoring programs that focus on population age- and sex structure will benefit from collecting additional data that allow estimation of observability for different age- and sex classes. In addition, our sensitivity analysis suggests that focusing monitoring on changes in demographic rates might be more feasible than monitoring abundance in many situations where data on population age- and sex structure can be collected.
Mądra-Bielewicz, Anna; Frątczak-Łagiewska, Katarzyna; Matuszewski, Szymon
2017-09-01
The estimation of postmortem interval (PMI) based on successional patterns of adult insects is largely limited, due to the lack of potential PMI markers. Sex and size of adult insects could be easily used for such estimation. In this study, sex- and size-related patterns of carrion attendance by adult insects were analyzed in Necrodes littoralis (Coleoptera: Silphidae) and Creophilus maxillosus (Coleoptera: Staphylinidae). For both species, abundance of males and females changed similarly during decomposition. A slightly female-biased sex ratio was recorded in N. littoralis. Females of N. littoralis started visiting carcasses, on average, one day earlier than males. There was a rise in size of males of N. littoralis at the end of decomposition, whereas for females of both species and males of C. maxillosus, no size-related patterns of carrion visitation were found. Current results demonstrate that size and sex of adult carrion beetles are poor indicators of PMI. © 2016 American Academy of Forensic Sciences.
Association between inaccurate estimation of body size and obesity in schoolchildren.
Costa, Larissa da Cunha Feio; Silva, Diego Augusto Santos; Almeida, Sebastião de Sousa; de Vasconcelos, Francisco de Assis Guedes
2015-01-01
To investigate the prevalence of inaccurate estimation of own body size among Brazilian schoolchildren of both sexes aged 7-10 years, and to test whether overweight/obesity, excess body fat, and central obesity are associated with inaccuracy. Accuracy of body size estimation was assessed using the Figure Rating Scale for Brazilian Children. Multinomial logistic regression was used to analyze associations. The overall prevalence of inaccurate body size estimation was 76%, with 34% of the children underestimating their body size and 42% overestimating their body size. Obesity measured by body mass index was associated with underestimation of body size in both sexes, while central obesity was only associated with overestimation of body size among girls. The results of this study suggest there is a high prevalence of inaccurate body size estimation and that inaccurate estimation is associated with obesity. Accurate estimation of own body size is important among obese schoolchildren because it may be the first step towards adopting healthy lifestyle behaviors.
A 500-kiloton airburst over Chelyabinsk and an enhanced hazard from small impactors
NASA Astrophysics Data System (ADS)
Brown, P. G.; Assink, J. D.; Astiz, L.; Blaauw, R.; Boslough, M. B.; Borovička, J.; Brachet, N.; Brown, D.; Campbell-Brown, M.; Ceranna, L.; Cooke, W.; de Groot-Hedlin, C.; Drob, D. P.; Edwards, W.; Evers, L. G.; Garces, M.; Gill, J.; Hedlin, M.; Kingery, A.; Laske, G.; Le Pichon, A.; Mialle, P.; Moser, D. E.; Saffer, A.; Silber, E.; Smets, P.; Spalding, R. E.; Spurný, P.; Tagliaferri, E.; Uren, D.; Weryk, R. J.; Whitaker, R.; Krzeminski, Z.
2013-11-01
Most large (over a kilometre in diameter) near-Earth asteroids are now known, but recognition that airbursts (or fireballs resulting from nuclear-weapon-sized detonations of meteoroids in the atmosphere) have the potential to do greater damage than previously thought has shifted an increasing portion of the residual impact risk (the risk of impact from an unknown object) to smaller objects. Above the threshold size of impactor at which the atmosphere absorbs sufficient energy to prevent a ground impact, most of the damage is thought to be caused by the airburst shock wave, but owing to lack of observations this is uncertain. Here we report an analysis of the damage from the airburst of an asteroid about 19 metres (17 to 20 metres) in diameter southeast of Chelyabinsk, Russia, on 15 February 2013, estimated to have an energy equivalent of approximately 500 (±100) kilotons of trinitrotoluene (TNT, where 1 kiloton of TNT = 4.185 × 10¹² joules). We show that a widely referenced technique of estimating airburst damage does not reproduce the observations, and that the mathematical relations based on the effects of nuclear weapons (almost always used with this technique) overestimate blast damage. This suggests that earlier damage estimates near the threshold impactor size are too high. We performed a global survey of airbursts of a kiloton or more (including Chelyabinsk), and find that the number of impactors with diameters of tens of metres may be an order of magnitude higher than estimates based on other techniques. This suggests a non-equilibrium (if the population were in a long-term collisional steady state, the size-frequency distribution would either follow a single power law or there must be a size-dependent bias in other surveys) in the near-Earth asteroid population for objects 10 to 50 metres in diameter, and shifts more of the residual impact risk to these sizes.
Titanium and advanced composite structures for a supersonic cruise arrow wing configuration
NASA Technical Reports Server (NTRS)
Turner, M. J.; Hoy, J. M.
1976-01-01
Structural design studies were made, based on current technology and on an estimate of technology to be available in the mid 1980's, to assess the relative merits of structural concepts and materials for an advanced arrow wing configuration cruising at Mach 2.7. Preliminary studies were made to insure compliance of the configuration with general design criteria, integrate the propulsion system with the airframe, and define an efficient structural arrangement. Material and concept selection, detailed structural analysis, structural design and airplane mass analysis were completed based on current technology. Based on estimated future technology, structural sizing for strength and a preliminary assessment of the flutter of a strength designed composite structure were completed. An advanced computerized structural design system was used, in conjunction with a relatively complex finite element model, for detailed analysis and sizing of structural members.
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
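For context, the standard way the intraclass correlation coefficient enters sample-size planning is through the design effect 1 + (m - 1) * ICC. The sketch below uses that textbook inflation, not the simulation-based empirical Bayes procedure the authors propose:

```python
import math

def cluster_trial_size(n_individual, cluster_size, icc):
    """Inflate an individually randomized sample size (per arm) by
    the design effect 1 + (m - 1) * ICC, then convert to whole
    clusters per arm. A standard textbook formula, shown only to
    illustrate why the ICC matters for planning."""
    deff = 1 + (cluster_size - 1) * icc
    n_total = n_individual * deff
    clusters = math.ceil(n_total / cluster_size)
    return n_total, clusters

# e.g. 200 pupils per arm under individual randomization,
# classes of 25, ICC = 0.05:
print(cluster_trial_size(200, 25, 0.05))
```

Even a modest ICC of 0.05 more than doubles the required sample here, which is why reliable ICC values (and uncertainty about them) dominate the planning problem.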
Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)
ERIC Educational Resources Information Center
Steyn, H. S., Jr.; Ellis, S. M.
2009-01-01
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
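The univariate eta-squared described here is simply the between-group sum of squares over the total sum of squares; a minimal sketch with invented data:

```python
def eta_squared(groups):
    """Univariate effect size: proportion of total variation in the
    dependent variable explained by group membership
    (SS_between / SS_total). The multivariate measures discussed in
    the abstract generalize this via MANOVA statistics."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((x - grand) ** 2 for x in all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    return ss_between / ss_total

print(eta_squared([[1, 2, 3], [4, 5, 6]]))
```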
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
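A generic asymptotic power calculation for a single Wald test (a simplified sketch, not the paper's DIF-specific derivations, which involve the full information matrix of the logistic model) looks like this:

```python
import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def wald_power(beta, se, alpha_z=1.959963984540054):
    """Approximate two-sided power of a Wald test for one
    coefficient: P(|beta_hat / SE| > z_crit) under the alternative,
    using the asymptotic normal distribution of the ML estimator.
    beta: true effect; se: standard error at the planned n."""
    z = abs(beta) / se
    return normal_cdf(z - alpha_z) + normal_cdf(-z - alpha_z)

# A hypothetical DIF effect of 0.5 on the logit scale whose
# estimator has SE 0.15 at the planned sample size:
print(round(wald_power(0.5, 0.15), 3))
```

Since the SE shrinks roughly as 1/sqrt(n), inverting this relation gives the required sample size for a target power.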
The Misdirection of Public Policy: Comparing and Combining Standardised Effect Sizes
ERIC Educational Resources Information Center
Simpson, Adrian
2017-01-01
Increased attention on "what works" in education has led to an emphasis on developing policy from evidence based on comparing and combining a particular statistical summary of intervention studies: the standardised effect size. It is assumed that this statistical summary provides an estimate of the educational impact of interventions and…
Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor
2016-11-01
The statistical reform movement and the American Psychological Association (APA) defend the use of estimators of the effect size and its confidence intervals, as well as the interpretation of the clinical significance of the findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 years (SD= 9.27). The use of effect-size estimators is becoming generalized, as well as the consideration of meta-analytic studies. However, several inadequate practices still persist. A traditional model of methodological behavior based on statistical significance tests is maintained, based on the predominance of Cohen’s d and the unadjusted R2/η2, which are not immune to outliers or departure from normality and the violations of statistical assumptions, and the under-reporting of confidence intervals of effect-size statistics. The paper concludes with recommendations for improving statistical practice.
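Cohen's d, the estimator the survey found dominant, is a pooled-SD standardized mean difference; the sketch below (invented data) also shows why a single outlier moves it, illustrating the robustness concern raised above:

```python
import math

def cohens_d(x, y):
    """Standardized mean difference with pooled SD. As the abstract
    notes, this estimator is not robust to outliers or departures
    from normality."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

print(cohens_d([5, 6, 7], [3, 4, 5]))
```

Appending one extreme value (say 30) to the second group inflates the pooled SD and shrinks |d| dramatically, even though most observations are unchanged.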
Mars Propellant Liquefaction Modeling in Thermal Desktop
NASA Technical Reports Server (NTRS)
Desai, Pooja; Hauser, Dan; Sutherlin, Steven
2017-01-01
NASAs current Mars architectures are assuming the production and storage of 23 tons of liquid oxygen on the surface of Mars over a duration of 500+ days. In order to do this in a mass efficient manner, an energy efficient refrigeration system will be required. Based on previous analysis NASA has decided to do all liquefaction in the propulsion vehicle storage tanks. In order to allow for transient Martian environmental effects, a propellant liquefaction and storage system for a Mars Ascent Vehicle (MAV) was modeled using Thermal Desktop. The model consisted of a propellant tank containing a broad area cooling loop heat exchanger integrated with a reverse turbo Brayton cryocooler. Cryocooler sizing and performance modeling was conducted using MAV diurnal heat loads and radiator rejection temperatures predicted from a previous thermal model of the MAV. A system was also sized and modeled using an alternative heat rejection system that relies on a forced convection heat exchanger. Cryocooler mass, input power, and heat rejection for both systems were estimated and compared against sizing based on non-transient sizing estimates.
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because there is often no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy
NASA Astrophysics Data System (ADS)
Wada, Ken; Hyodo, Toshio
2013-06-01
Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model in the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime to pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with modification involving the intrinsic Ps annihilation rate) applicable to very small pores. The combined model, i.e., the modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics based extended Tao-Eldrup model, which deals with Ps trapped in, and in thermal equilibrium with, a rectangular pore.
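The small-pore regime being extended here is commonly written as the classic Tao-Eldrup relation between the ortho-Ps lifetime and a spherical pore radius. The sketch below uses a common textbook form with the usual electron-layer thickness of about 1.66 Å; it is an illustration of that baseline, not the paper's modified or shape-free model:

```python
import math

def tao_eldrup_lifetime(radius_angstrom, delta_r=1.66):
    """Ortho-Ps pickoff lifetime (ns) in a spherical pore under the
    classic Tao-Eldrup model:
        tau = 0.5 / (1 - R/R0 + sin(2*pi*R/R0) / (2*pi)),
    with R0 = R + delta_r and delta_r ~ 1.66 angstrom (empirical
    electron-layer thickness). Valid only for small pores; the
    abstract's combined model takes over at larger sizes."""
    r0 = radius_angstrom + delta_r
    x = radius_angstrom / r0
    return 0.5 / (1 - x + math.sin(2 * math.pi * x) / (2 * math.pi))

print(round(tao_eldrup_lifetime(3.0), 2))
```

As expected, the lifetime grows monotonically with pore radius, which is the basis for inverting measured lifetimes into pore-size estimates.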
Small area estimation (SAE) model: Case study of poverty in West Java Province
NASA Astrophysics Data System (ADS)
Suhartini, Titin; Sadik, Kusman; Indahwati
2016-02-01
This paper compared direct estimation with an indirect small area estimation (SAE) model. Model selection addressed multicollinearity among the auxiliary variables, either by retaining only non-collinear variables or by applying principal components (PC). The parameters of interest were the area-level proportions of poor agricultural-venture households and poor agricultural households in West Java Province. These parameters can be estimated either directly or via SAE. Direct estimation was problematic: in three areas the estimate was zero or could not be computed at all because of small sample sizes. The estimated proportion of poor agricultural-venture households was 19.22% and that of poor agricultural households 46.79%. The best model for agricultural-venture poor households retained only non-collinear variables, while the best model for agricultural poor households used PC. SAE outperformed direct estimation for both proportions, overcoming the small-sample problem and yielding small-area estimates with higher accuracy and better precision than the direct estimator.
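The core idea that lets SAE beat direct estimation in small areas is shrinkage: combine the noisy direct estimate with a model-based synthetic estimate, weighted by their relative variances. A generic area-level (Fay-Herriot-style) sketch, not the exact model fitted for West Java:

```python
def composite_sae(direct, synthetic, model_var, sampling_var):
    """Area-level composite estimate: shrink the direct survey
    estimate toward a regression-synthetic estimate. gamma is the
    share of total variance attributable to the model (between-area)
    component; areas with large sampling variance (small samples)
    lean on the synthetic part."""
    gamma = model_var / (model_var + sampling_var)
    return gamma * direct + (1 - gamma) * synthetic

# Area with a tiny sample: noisy direct estimate gets little weight.
# (Proportions and variances below are invented for illustration.)
print(composite_sae(direct=0.30, synthetic=0.20,
                    model_var=0.001, sampling_var=0.009))
```

When the sampling variance is effectively infinite (no usable sample, as in the three problem areas), the composite collapses to the purely synthetic estimate.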
Usami, Satoshi
2017-03-01
Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
Effective size of two feral domestic cat populations (Felis catus L): effect of the mating system.
Kaeuffer, R; Pontier, D; Devillard, S; Perrin, N
2004-02-01
A variety of behavioural traits have substantial effects on the gene dynamics and genetic structure of local populations. The mating system is a plastic trait that varies with environmental conditions in the domestic cat (Felis catus), allowing an intraspecific comparison of the impact of this feature on genetic characteristics of the population. To assess the potential effect of the heterogeneity of males' contribution to the next generation on variance effective size, we applied the ecological approach of Nunney & Elam (1994) based upon a demographic and behavioural study, and the genetic 'temporal methods' of Waples (1989) and Berthier et al. (2002) using microsatellite markers. The two cat populations studied were nearly closed, similar in size and survival parameters, but differed in their mating system. Immigration appeared extremely restricted in both cases due to environmental and social constraints. As expected, the ratio of effective size to census number (Ne/N) was higher in the promiscuous cat population (harmonic mean = 42%) than in the polygynous one (33%), when Ne was calculated from the ecological method. Only the genetic results based on Waples' estimator were consistent with the ecological results, but they failed to detect an effect of the mating system. Results based on the estimator of Berthier et al. (2002) were extremely variable, with Ne sometimes exceeding census size. Such low reliability in the genetic results deserves attention for conservation purposes.
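The temporal method referenced above infers effective size from how much allele frequencies drift between two samples taken some generations apart. A simplified moment-estimator sketch in the spirit of Waples (1989), with the usual correction for finite sample sizes (the published plans for sampling with and without replacement differ in detail):

```python
def temporal_ne(f_hat, t_gen, s0, st):
    """Moment-based temporal estimator of variance effective size:
        Ne ~= t / (2 * (F - 1/(2*S0) - 1/(2*St))),
    where f_hat is the standardized variance in allele frequencies
    between samples taken t_gen generations apart, and s0, st are
    the two sample sizes (diploid individuals). A simplified sketch
    for illustration only."""
    f_prime = f_hat - 1.0 / (2 * s0) - 1.0 / (2 * st)
    return t_gen / (2 * f_prime)

# Hypothetical values: F = 0.05 over 4 generations, 50 cats sampled
# at each time point.
print(temporal_ne(f_hat=0.05, t_gen=4, s0=50, st=50))
```

Because the sampling-noise terms are subtracted from F, small or noisy samples can drive the denominator toward zero, which is one source of the extreme variability (Ne exceeding census size) reported above.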
The effects of delay duration on visual working memory for orientation.
Shin, Hongsup; Zou, Qijia; Ma, Wei Ji
2017-12-01
We used a delayed-estimation paradigm to characterize the joint effects of set size (one, two, four, or six) and delay duration (1, 2, 3, or 6 s) on visual working memory for orientation. We conducted two experiments: one with delay durations blocked, another with delay durations interleaved. As dependent variables, we examined four model-free metrics of dispersion as well as precision estimates in four simple models. We tested for effects of delay time using analyses of variance, linear regressions, and nested model comparisons. We found significant effects of set size and delay duration on both model-free and model-based measures of dispersion. However, the effect of delay duration was much weaker than that of set size, dependent on the analysis method, and apparent in only a minority of subjects. The highest forgetting slope found in either experiment at any set size was a modest 1.14°/s. As secondary results, we found a low rate of nontarget reports, and significant estimation biases towards oblique orientations (but no dependence of their magnitude on either set size or delay duration). Relative stability of working memory even at higher set sizes is consistent with earlier results for motion direction and spatial frequency. We compare with a recent study that performed a very similar experiment.
Offshore Wind Plant Balance-of-Station Cost Drivers and Sensitivities (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saur, G.; Maples, B.; Meadows, B.
2012-09-01
With Balance of System (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects, as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs associated with offshore turbine sizes ranging from 3 MW to 6 MW and offshore wind plant sizes ranging from 100 MW to 1000 MW has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from US offshore wind plants.
Ellison, Aaron M.; Jackson, Scott
2015-01-01
Herpetologists and conservation biologists frequently use convenient and cost-effective, but less accurate, abundance indices (e.g., number of individuals collected under artificial cover boards or during natural objects surveys) in lieu of more accurate, but costly and destructive, population size estimators to detect and monitor size, state, and trends of amphibian populations. Although there are advantages and disadvantages to each approach, reliable use of abundance indices requires that they be calibrated with accurate population estimators. Such calibrations, however, are rare. The red back salamander, Plethodon cinereus, is an ecologically useful indicator species of forest dynamics, and accurate calibration of indices of salamander abundance could increase the reliability of abundance indices used in monitoring programs. We calibrated abundance indices derived from surveys of P. cinereus under artificial cover boards or natural objects with a more accurate estimator of their population size in a New England forest. Average densities/m2 and capture probabilities of P. cinereus under natural objects or cover boards in independent, replicate sites at the Harvard Forest (Petersham, Massachusetts, USA) were similar in stands dominated by Tsuga canadensis (eastern hemlock) and deciduous hardwood species (predominantly Quercus rubra [red oak] and Acer rubrum [red maple]). The abundance index based on salamanders surveyed under natural objects was significantly associated with density estimates of P. cinereus derived from depletion (removal) surveys, but underestimated true density by 50%. In contrast, the abundance index based on cover-board surveys overestimated true density by a factor of 8 and the association between the cover-board index and the density estimates was not statistically significant. We conclude that when calibrated and used appropriately, some abundance indices may provide cost-effective and reliable measures of P. 
cinereus abundance that could be used in conservation assessments and long-term monitoring at Harvard Forest and other northeastern USA forests.
Alter, S. Elizabeth; Newsome, Seth D.; Palumbi, Stephen R.
2012-01-01
Commercial whaling decimated many whale populations, including the eastern Pacific gray whale, but little is known about how population dynamics or ecology differed prior to these removals. Of particular interest is the possibility of a large population decline prior to whaling, as such a decline could explain the ∼5-fold difference between genetic estimates of prior abundance and estimates based on historical records. We analyzed genetic (mitochondrial control region) and isotopic information from modern and prehistoric gray whales using serial coalescent simulations and Bayesian skyline analyses to test for a pre-whaling decline and to examine prehistoric genetic diversity, population dynamics and ecology. Simulations demonstrate that significant genetic differences observed between ancient and modern samples could be caused by a large, recent population bottleneck, roughly concurrent with commercial whaling. Stable isotopes show minimal differences between modern and ancient gray whale foraging ecology. Using rejection-based Approximate Bayesian Computation, we estimate the size of the population bottleneck at its minimum abundance and the pre-bottleneck abundance. Our results agree with previous genetic studies suggesting the historical size of the eastern gray whale population was roughly three to five times its current size.
Growth and mortality of larval sunfish in backwaters of the upper Mississippi River
Zigler, S.J.; Jennings, C.A.
1993-01-01
The authors estimated the growth and mortality of larval sunfish Lepomis spp. in backwater habitats of the upper Mississippi River with an otolith-based method and a length-based method. Fish were sampled with plankton nets at one station in Navigation Pools 8 and 14 in 1989 and at two stations in Pool 8 in 1990. For both methods, growth was modeled with an exponential equation, and instantaneous mortality was estimated by regressing the natural logarithm of fish catch for each 1-mm size-group against the estimated age of the group, which was derived from the growth equations. At two of the stations, the otolith-based method provided more precise estimates of sunfish growth than the length-based method. We were able to compare length-based and otolith-based estimates of sunfish mortality only at the two stations where we caught the largest numbers of sunfish. Estimates of mortality were similar for both methods in Pool 14, where catches were higher, but the length-based method gave significantly higher estimates in Pool 8, where the catches were lower. The otolith-based method required more laboratory analysis, but provided better estimates of growth and mortality than the length-based method when catches were low. However, the length-based method was more cost-effective for estimating growth and mortality when catches were large.
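The catch-curve step described above, regressing ln(catch) on estimated age and taking the negative slope as instantaneous mortality Z, can be sketched directly (the age and catch values below are invented, not the study's data):

```python
import math

def instantaneous_mortality(ages, catches):
    """Estimate instantaneous mortality Z as the negative slope of an
    ordinary least-squares regression of ln(catch) on age, as in the
    length- and otolith-based methods described above."""
    logs = [math.log(c) for c in catches]
    n = len(ages)
    mean_a = sum(ages) / n
    mean_l = sum(logs) / n
    slope = (sum((a - mean_a) * (l - mean_l) for a, l in zip(ages, logs))
             / sum((a - mean_a) ** 2 for a in ages))
    return -slope

# Hypothetical catches declining roughly as exp(-0.3 * age):
ages = [5, 10, 15, 20]
catches = [1000 * math.exp(-0.3 * a) for a in ages]
print(instantaneous_mortality(ages, catches))
```

The precision of Z hinges on how well the growth equation converts size-groups to ages, which is why the otolith-based aging gave better estimates at low catches.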
Yamashita, Shinpei; Takigahira, Tomohiro; Takahashi, Kazuo H
2018-06-01
Accumulating evidence suggests that genotype of host insects influences the development of koinobiont endoparasitoids. Although there are many potential genetic variations that lead to the internal body environmental variations of host insects, association between the host genotype and the parasitoid development has not been examined in a genome-wide manner. In the present study, we used highly inbred whole genome sequenced strains of Drosophila melanogaster to associate single nucleotide polymorphisms (SNPs) of host flies with morphological traits of Asobara japonica, a larval-pupal parasitoid wasp that infected those hosts. We quantified the outline shape of the forewings of A. japonica with two major principal components (PC1 and PC2) calculated from Fourier coefficients obtained from elliptic Fourier analysis. We also quantified wing size and estimated wasp survival. We then examined the association between the PC scores, wing size and 1,798,561 SNPs and the association between the estimated wasp survival and 1,790,544 SNPs. As a result, we obtained 22, 24 and 14 SNPs for PC1, PC2 and wing size and four SNPs for the estimated survival with P values smaller than 10⁻⁵. Based on the location of the SNPs, 12, 17, 11 and five protein coding genes were identified as potential candidates for PC1, PC2, wing size and the estimated survival, respectively. Based on the function of the candidate genes, it is suggested that the host genetic variation associated with cell growth and morphogenesis may influence the wasp's morphogenetic variation.
Is overestimation of body size associated with neuropsychological weaknesses in anorexia nervosa?
Øverås, Maria; Kapstad, Hilde; Brunborg, Cathrine; Landrø, Nils Inge; Rø, Øyvind
2017-03-01
Recent research indicates some evidence of neuropsychological weaknesses in visuospatial memory, central coherence and set-shifting in adults with anorexia nervosa (AN). The growing interest in neuropsychological functioning of patients with AN is based upon the assumption that neuropsychological weaknesses contribute to the clinical features of the illness. However, due to a paucity of research on the connection between neuropsychological difficulties and the clinical features of AN, this link remains hypothetical. The main objective of this study was to explore the association between specific areas of neuropsychological functioning and body size estimation in patients with AN and healthy controls. The sample consisted of 36 women diagnosed with AN and 34 healthy female controls. Participants were administered the Continuous Visual Memory Test and the recall trials of the Rey Complex Figure Test to assess visual memory. Central coherence was assessed using the copy trial of the Rey Complex Figure Test, and the Wisconsin Card Sorting Test was used to assess set-shifting. Body size estimation was assessed with a computerized morphing programme. The analyses showed no significant correlations between any of the neuropsychological measures and body size estimation. The results suggest that there is no association between these areas of neuropsychological difficulties and body size estimation among patients with AN. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
78 FR 54722 - Reports, Forms and Record Keeping Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-05
... submission requesting confidential treatment. This estimate will vary based on the size of the submission, with smaller and voluntary submissions taking considerably less time to prepare. The agency based this... approximately 460 requests for confidential treatment annually. This figure is based on the average number of...
Estimation of population size using open capture-recapture models
McDonald, T.L.; Amstrup, Steven C.
2001-01-01
One of the most important needs for wildlife managers is an accurate estimate of population size. Yet, for many species, including most marine species and large mammals, accurate and precise estimation of numbers is one of the most difficult of all research challenges. Open-population capture-recapture models have proven useful in many situations to estimate survival probabilities but typically have not been used to estimate population size. We show that open-population models can be used to estimate population size by developing a Horvitz-Thompson-type estimate of population size and an estimator of its variance. Our population size estimate keys on the probability of capture at each trap occasion and therefore is quite general and can be made a function of external covariates measured during the study. Here we define the estimator and investigate its bias, variance, and variance estimator via computer simulation. Computer simulations make extensive use of real data taken from a study of polar bears (Ursus maritimus) in the Beaufort Sea. The population size estimator is shown to be useful because it was negligibly biased in all situations studied. The variance estimator is shown to be useful in all situations, but caution is warranted in cases of extreme capture heterogeneity.
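The Horvitz-Thompson-type estimator referred to above sums, over the animals actually captured, the inverse of each animal's probability of being captured at least once during the study. A minimal sketch, assuming those capture probabilities have already been estimated (the values below are invented, not from the polar bear study):

```python
import numpy as np

def horvitz_thompson_n(p_capture):
    """Horvitz-Thompson-type population size estimate.

    p_capture: for each *captured* individual, the estimated probability
    of being detected at least once during the study (in the paper this
    probability can be a function of external covariates).
    N_hat = sum_i 1 / p_i.
    """
    p = np.asarray(p_capture, dtype=float)
    return float(np.sum(1.0 / p))

# 50 captured animals, each with an assumed 0.25 chance of ever being
# caught: each one "stands for" 4 animals, so N_hat = 200.
print(horvitz_thompson_n([0.25] * 50))  # → 200.0
```

Capture heterogeneity enters through unequal p_i values, which is why the abstract warns that extreme heterogeneity degrades the variance estimator.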
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim was to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
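The multiplier calculation the abstract describes, N = M / P, can be sketched together with a delta-method interval whose width is inflated by an assumed design effect; all numbers below are hypothetical:

```python
import math

def multiplier_estimate(M, p_hat, n, design_effect=1.0):
    """Multiplier-method population size estimate N = M / P.

    M: count of unique objects distributed (treated as known exactly).
    p_hat: proportion of survey respondents reporting receipt.
    n: survey sample size; design_effect inflates the variance of p_hat
    relative to simple random sampling (RDS surveys typically have
    design effects above 1).  Returns (N_hat, 95% CI) where the CI
    comes from the delta method: se(M/p) ≈ M * se(p) / p^2.
    """
    N_hat = M / p_hat
    var_p = design_effect * p_hat * (1 - p_hat) / n
    se_N = M * math.sqrt(var_p) / p_hat**2
    return N_hat, (N_hat - 1.96 * se_N, N_hat + 1.96 * se_N)

# Hypothetical numbers: 500 objects distributed, 10% of a 400-person
# survey reported receipt, assumed design effect of 2.
N_hat, ci = multiplier_estimate(500, 0.10, 400, design_effect=2.0)
print(round(N_hat))  # → 5000
```

Note how a small p_hat both blows up N_hat and sits in the denominator squared of the standard error, which is the abstract's argument for designs that push P higher.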
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally representative surveys are the main source of data for establishing a scientific evidence base and for monitoring and evaluation of health metrics. However, little is known about the optimal precision of various population-level health and development indicators, which remains unquantified in nationally representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled from nine sub-Saharan African countries with at least two nationally representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use, in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator, with associated uncertainty. Results suggest that the sample sizes required of nationally representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible interval 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
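The role of between-cluster variability in these sample size requirements can be illustrated with the standard design-effect formula, deff = 1 + (m - 1) * ICC; the prevalence, margin, ICC, and cluster size below are assumed for illustration, not values from the surveys analysed:

```python
import math

def cluster_survey_n(p, margin, icc, cluster_size, z=1.96):
    """Sample size to estimate a prevalence p to within +/- margin
    in a cluster survey.

    The design effect 1 + (m - 1) * icc inflates the simple-random-
    sampling requirement z^2 * p * (1 - p) / margin^2.  Holding the
    *relative* margin fixed, lower prevalence drives the required
    sample size up, as the abstract reports for malaria indicators.
    """
    deff = 1 + (cluster_size - 1) * icc
    n_srs = z**2 * p * (1 - p) / margin**2
    return math.ceil(deff * n_srs)

# Hypothetical: 10% parasite prevalence, +/-2% absolute margin,
# ICC of 0.05, 25 children sampled per cluster.
print(cluster_survey_n(0.10, 0.02, 0.05, 25))  # → 1902
```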
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately detect 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
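The CI-overlap detection criterion used in those simulations can be sketched in a greatly simplified form, with binomial survival and normal-approximation intervals; this is a toy stand-in for, not a reproduction of, the Burnham joint live/dead-encounter model the authors actually simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_ci_overlap(n, s1, s2, sims=1000):
    """Fraction of simulations in which the 95% CIs for two annual
    survival rates fail to overlap (year 1 entirely above year 2).

    Survival is simulated as binomial and intervals use the normal
    approximation, ignoring imperfect capture and recovery entirely.
    """
    detected = 0
    for _ in range(sims):
        p1 = rng.binomial(n, s1) / n
        p2 = rng.binomial(n, s2) / n
        half1 = 1.96 * np.sqrt(p1 * (1 - p1) / n)
        half2 = 1.96 * np.sqrt(p2 * (1 - p2) / n)
        if p1 - half1 > p2 + half2:
            detected += 1
    return detected / sims

# A 50% drop in survival (0.80 -> 0.40) is detected almost always even
# with 100 marked individuals; a 2.5% drop (0.80 -> 0.78) rarely is.
print(power_ci_overlap(100, 0.80, 0.40) > 0.95)   # → True
print(power_ci_overlap(100, 0.80, 0.78) < 0.20)   # → True
```

Even this toy version shows why small proportional declines demand sample sizes orders of magnitude larger than dramatic ones.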
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. 
We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Estimating the breeding population of long-billed curlew in the United States
Stanley, T.R.; Skagen, S.K.
2007-01-01
Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.
Computational methods for a three-dimensional model of the petroleum-discovery process
Schuenemeyer, J.H.; Bawiec, W.J.; Drew, L.J.
1980-01-01
A discovery-process model devised by Drew, Schuenemeyer, and Root can be used to predict the amount of petroleum to be discovered in a basin from some future level of exploratory effort; the predictions are based on historical drilling and discovery data. Because marginal costs of discovery and production are a function of field size, the model can be used to make estimates of future discoveries within deposit size classes. The modeling approach is a geometric one in which the area searched is a function of the size and shape of the targets being sought. A high correlation is assumed between the surface-projection area of the fields and the volume of petroleum. To predict how much oil remains to be found, the area searched must be computed, and the basin size and discovery efficiency must be estimated. The basin is assumed to be explored randomly rather than by pattern drilling. The model may be used to compute independent estimates of future oil at different depth intervals for a play involving multiple producing horizons. We have written FORTRAN computer programs that are used with Drew, Schuenemeyer, and Root's model to merge the discovery and drilling information and perform the necessary computations to estimate undiscovered petroleum. These programs may easily be modified to estimate remaining quantities of commodities other than petroleum. © 1980.
Graphic comparison of reserve-growth models for conventional oil and accumulation
Klett, T.R.
2003-01-01
The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recoverable oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term "reserve growth" refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on datasets used to construct the models, field type, modeling method, and length of forecast span.
Comparisons were also made between forecast functions based on field-level and reservoir-level growth, and between forecast functions based on older and newer data. The reserve-growth model used in the 1995 USGS National Assessment and the model currently used in the NOGA project provide forecast functions that yield similar estimates of potential additions to reserves. Both models are based on the Oil and Gas Integrated Field File from the Energy Information Administration (EIA), but on different vintages of data (from 1977 through 1991 and 1977 through 1996, respectively). The model based on newer data can be used in place of the previous model, providing similar estimates of potential additions to reserves. Forecast functions for oil fields vary little from those for gas fields in these models; therefore, a single function may be used for both oil and gas fields, like that used in the USGS World Petroleum Assessment 2000. Forecast functions based on the field-level reserve-growth model derived from the NRG Associates databases (from 1982 through 1998) differ from those derived from EIA databases (from 1977 through 1996). However, the difference may not be enough to preclude the use of the forecast functions derived from NRG data in place of those derived from EIA data. Should the model derived from NRG data be used, separate forecast functions for oil fields and gas fields must be employed. The forecast function for oil fields from the model derived from NRG data varies significantly from that for gas fields, and a single function for both oil and gas fields may not be appropriate.
NASA Astrophysics Data System (ADS)
Stiefenhofer, Johann; Thurston, Malcolm L.; Bush, David E.
2018-04-01
Microdiamonds offer several advantages as a resource estimation tool, such as access to deeper parts of a deposit which may be beyond the reach of large diameter drilling (LDD) techniques, the recovery of the total diamond content in the kimberlite, and a cost benefit due to the cheaper treatment cost compared to large diameter samples. In this paper we take the first step towards local estimation by showing that microdiamond samples can be treated as a regionalised variable suitable for use in geostatistical applications, and we show examples of such output. Examples of microdiamond variograms are presented, the variance-support relationship for microdiamonds is demonstrated, and consistency of the diamond size frequency distribution (SFD) is shown with the aid of real datasets. The focus therefore is on why local microdiamond estimation should be possible, not how to generate such estimates. Data from our case studies and examples demonstrate a positive correlation between micro- and macrodiamond sample grades as well as block estimates. This relationship can be demonstrated repeatedly across multiple mining operations. The smaller sample support size for microdiamond samples is a key difference between micro- and macrodiamond estimates, and this aspect must be taken into account during the estimation process. We discuss three methods which can be used to validate or reconcile the estimates against macrodiamond data, either as estimates or in the form of production grades: (i) reconciliation using production data, (ii) comparison of LDD-based grade estimates against microdiamond-based estimates, and (iii) simulation techniques.
Camera traps and activity signs to estimate wild boar density and derive abundance indices.
Massei, Giovanna; Coats, Julia; Lambert, Mark Simon; Pietravalle, Stephane; Gill, Robin; Cowan, Dave
2018-04-01
Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. PAIs obtained using different methods showed similar patterns. We found significant between-year differences in abundance of wild boar using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km². Increasing the density of camera traps above nine per km² did not increase the precision of the estimate of wild boar density. PAIs based on number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km² are sufficient to estimate the mean density of wild boar. © 2017 Crown copyright. Pest Management Science © 2017 Society of Chemical Industry.
Sun, Y.; Goldberg, D.; Collett, T.; Hunter, R.
2011-01-01
A dielectric logging tool, the electromagnetic propagation tool (EPT), was deployed in 2007 in the BPXA-DOE-USGS Mount Elbert Gas Hydrate Stratigraphic Test Well (Mount Elbert Well), North Slope, Alaska. The measured dielectric properties in the Mount Elbert well, combined with density log measurements, result in a vertical high-resolution (cm-scale) estimate of gas hydrate saturation. Two hydrate-bearing sand reservoirs about 20 m thick were identified using the EPT log and exhibited gas-hydrate saturation estimates ranging from 45% to 85%. In hydrate-bearing zones where variation of hole size and oil-based mud invasion are minimal, EPT-based gas hydrate saturation estimates on average agree well with lower vertical resolution estimates from the nuclear magnetic resonance logs; however, saturation and porosity estimates based on EPT logs are not reliable in intervals with substantial variations in borehole diameter and oil-based mud invasion. EPT log interpretation reveals many thin-bedded layers at various depths, both above and below the thick continuous hydrate occurrences, which range from 30 cm to about 1 m thick. Such thin layers are not indicated in other well logs, or from the visual observation of core, with the exception of the image log recorded by the oil-base microimager. We also observe that EPT dielectric measurements can be used to accurately detect fine-scale changes in lithology and pore fluid properties of hydrate-bearing sediments where variation of hole size is minimal. EPT measurements may thus provide high-resolution in-situ hydrate saturation estimates for comparison and calibration with laboratory analysis. © 2010 Elsevier Ltd.
Enterprise size and return to work after stroke.
Hannerz, Harald; Ferm, Linnea; Poulsen, Otto M; Pedersen, Betina Holbæk; Andersen, Lars L
2012-12-01
It has been hypothesised that return to work rates among sick-listed workers increases with enterprise size. The aim of the present study was to estimate the effect of enterprise size on the odds of returning to work among previously employed stroke patients in Denmark, 2000-2006. We used a prospective design with a 2 year follow-up period. The study population consisted of 13,178 stroke patients divided into four enterprise sizes categories, according to the place of their employment prior to the stroke: micro (1-9 employees), small (10-49 employees), medium (50-249 employees) and large (>250 employees). The analysis was based on nationwide data on enterprise size from Statistics Denmark merged with data from the Danish occupational hospitalisation register. We found a statistically significant association (p = 0.034); each increase in enterprise size category was followed by an increase in the estimated odds of returning to work. The chances of returning to work after stroke increases as the size of enterprise increases. Preventive efforts and research aimed at finding ways of mitigating the effect are warranted.
Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study du...
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
Model-based estimation of individual fitness
Link, W.A.; Cooch, E.G.; Cam, E.
2002-01-01
Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).
U.S. Balance-of-Station Cost Drivers and Sensitivities (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maples, B.
2012-10-01
With balance-of-system (BOS) costs contributing up to 70% of the installed capital cost, it is essential to understand the BOS costs of offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from U.S. offshore wind plants.
Estimation of size of cord blood inventory based on high-resolution typing of HLAs.
Song, E Y; Huh, J Y; Kim, S Y; Kim, T G; Oh, S; Yoon, J H; Roh, E Y; Park, M H; Kang, M S; Shin, S
2014-07-01
Methods for estimating the cord blood (CB) inventory size required vary according to the ethnic diversity of HLA, the degree of HLA matching, and HLA-typing resolution. We estimated the CB inventory size required using 7190 stored CB units (CBU) and 2450 patients who were awaiting or underwent allogeneic hematopoietic stem cell transplantation. With high-resolution typing of HLA-A, B and DRB1, 94.6% of Korean patients could find CBUs in 100,000 CBUs with a 5/6 match, and 95.7% could find CBUs in 5000 CBUs with a 4/6 match. With low-resolution typing of HLA-A and B and high-resolution typing of HLA-DRB1, 95% of patients could find CBUs in 50,000 CBUs with a 5/6 match, and 96.7% could find CBUs in 3000 CBUs with a 4/6 match. With additional high-resolution typing for HLA-A and B, which could improve transplantation outcome, the size of the CB inventory would need to increase twofold for Koreans.
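The qualitative relationship between inventory size and the chance of finding a matched unit can be illustrated with a simple independence model: if a random CBU matches a given patient at the required level with probability p, then P(at least one match in n units) = 1 - (1 - p)^n. The per-unit probability below is an invented round number, not a figure derived from the Korean HLA data, which in reality depend on haplotype frequencies rather than independent draws:

```python
import math

def match_probability(p_single, inventory_size):
    """P(>= 1 matched unit), assuming units match independently,
    each with probability p_single."""
    return 1.0 - (1.0 - p_single) ** inventory_size

def inventory_for_target(p_single, target):
    """Smallest inventory size reaching the target match probability:
    n = ceil(ln(1 - target) / ln(1 - p_single))."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))

# With a hypothetical 3-in-100,000 per-unit match probability, an
# inventory of 100,000 units gives roughly a 95% chance of a match.
print(round(match_probability(3e-5, 100_000), 2))  # → 0.95
print(inventory_for_target(3e-5, 0.95))
```

The model also shows why stricter matching (smaller p) multiplies the required inventory rather than adding to it.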
Estimating the number of sex workers in South Africa: rapid population size estimation.
Konstant, Tracey L; Rangasami, Jerushah; Stacey, Maria J; Stewart, Michelle L; Nogoduka, Coceka
2015-02-01
Although sex workers are recognized as a vulnerable population, there is no national population size estimate for sex workers in South Africa. A rapid sex worker enumeration exercise was undertaken in twelve locations across the country based on principles of participatory mapping and Wisdom of the Crowd. Sites with a range of characteristics were selected, focusing on level of urbanisation, trucking, mining and borders. At each site, sex worker focus groups mapped local hotspots. Interviews with sex workers at identified hotspots were used to estimate the numbers and genders of sex workers working in each. Estimates provided in the literature were combined with enumeration exercise results to define assumptions that could be applied to a national extrapolation. A working estimate was reached of between 131,000 and 182,000 sex workers in South Africa, or between 0.76 and 1% of the adult female population. The success of the exercise depended on integral involvement of sex worker peer educators and strong ethical considerations.
Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S
2015-01-01
Background: Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods: Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results: Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions: SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve (AUC) of C-peptide following a 2-h mixed meal tolerance test, measured from baseline to 12 months after enrolment in 498 individuals enrolled in five prior TrialNet studies of recent onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
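The planning gain described above, where covariate adjustment shrinks residual variance and with it the target sample size, can be sketched with a standard normal-approximation sample-size formula; all numbers below are hypothetical, not taken from the TrialNet data.

```python
from math import erf, sqrt

def phi_inv(p, lo=-10.0, hi=10.0):
    # Bisection inverse of the standard normal CDF (avoids SciPy).
    cdf = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_arm(sigma, delta, alpha=0.05, power=0.9):
    # Two-sample normal-approximation sample size per arm.
    z = phi_inv(1.0 - alpha / 2.0) + phi_inv(power)
    return 2.0 * (z * sigma / delta) ** 2

# Hypothetical planning numbers: an ANCOVA adjustment that halves the
# residual variance roughly halves the required sample size.
n_raw = n_per_arm(sigma=1.0, delta=0.3)
n_adj = n_per_arm(sigma=sqrt(0.5), delta=0.3)
print(round(n_raw), round(n_adj))  # adjusted n is about half the unadjusted n
```

Because the required n scales with the residual variance, any covariate set that explains half the outcome variance delivers the roughly 50% reduction the abstract reports.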
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from their ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, only approximate estimates are generally obtained by this approach, owing to the restrictive Gaussian prior and noise assumptions imposed in these methods. This contribution evaluates the effectiveness of an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method, the latter employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion to evaluate the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order.
A full analysis of both methods, in the context of coastal ocean models, suggests that an ensemble Kalman filter with an appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
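A minimal sketch of the ensemble Kalman update used for such parameter estimation, with a toy scalar forward model standing in for ADCIRC; all values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar setup: estimate a parameter theta from one observation y of a
# forward model h(theta). The quadratic h stands in for ADCIRC; all values
# here are invented for illustration.
def h(theta):
    return theta ** 2

theta_true = 0.03            # "true" Manning's-n-like parameter
obs_sd = 1e-4
y = h(theta_true) + rng.normal(0.0, obs_sd)

# Prior ensemble centred on a biased first guess.
n_ens = 200
theta = rng.normal(0.05, 0.02, n_ens)
pred = h(theta)

# Stochastic EnKF update: Kalman gain from ensemble covariances, and each
# member assimilates a perturbed copy of the observation.
gain = np.cov(theta, pred)[0, 1] / (np.var(pred, ddof=1) + obs_sd ** 2)
theta_post = theta + gain * (y + rng.normal(0.0, obs_sd, n_ens) - pred)

print(theta.mean(), theta_post.mean())  # posterior mean moves toward theta_true
```

The gain is a regression of the parameter on the predicted observation, which is why the filter relies only on ensemble statistics and scales to many parameters; inflation (not shown) would counteract the spread collapse visible in the posterior ensemble.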
Towner, Alison V; Wcisel, Michelle A; Reisinger, Ryan R; Edwards, David; Jewell, Oliver J D
2013-01-01
South Africa is reputed to host the world's largest remaining population of white sharks, yet no studies have accurately determined a population estimate based on mark-recapture of live individuals. We used dorsal fin photographs (fin IDs) to identify white sharks in Gansbaai, South Africa, from January 2007-December 2011. We used the computer programme DARWIN to catalogue and match fin IDs of individuals; this is the first study to successfully use the software for white shark identification. The programme performed well despite a number of individual fins showing drastic changes in dorsal fin shape over time. Of 1682 fin IDs used, 532 unique individuals were identified. We estimated population size using the open-population POPAN parameterisation in Program MARK, which estimated the superpopulation size at 908 (95% confidence interval 808-1008). This estimated population size is considerably larger than those described at other aggregation areas of the species and is comparable to a previous South African population estimate conducted 16 years prior. Our assessment suggests the species has not made a marked recovery since being nationally protected in 1991. As such, additional international protection may prove vital for the long-term conservation of this threatened species.
An extension of the Saltykov method to quantify 3D grain size distributions in mylonites
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio
2016-12-01
The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be systematically used in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions, with an absolute uncertainty of ±5 volume percentage points in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and provides a precision in the estimate typically better than 5%. The novel method provides an MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the drawbacks of using apparent GSDs for such tasks.
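The MSD parameter used by the two-step method can be illustrated with a quick simulation; the real method first unfolds apparent 2D sections via the Saltykov method, a step skipped here, and the median and MSD values below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3D grain diameters drawn from a lognormal GSD. The two-step
# method summarises the GSD shape with the multiplicative standard
# deviation, MSD = exp(sigma), where sigma is the SD of log(size).
median_d, msd_true = 30.0, 1.5   # microns; assumed values
sizes = rng.lognormal(np.log(median_d), np.log(msd_true), 5000)

mu_hat = np.mean(np.log(sizes))
sigma_hat = np.std(np.log(sizes), ddof=1)
median_hat, msd_hat = np.exp(mu_hat), np.exp(sigma_hat)

print(median_hat, msd_hat)  # close to the assumed 30 and 1.5
```

Reporting the pair (median, MSD) fully specifies a lognormal GSD, which is why a single shape value suffices once lognormality is accepted.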
Utilizing the Vertical Variability of Precipitation to Improve Radar QPE
NASA Technical Reports Server (NTRS)
Gatlin, Patrick N.; Petersen, Walter A.
2016-01-01
Characteristics of the melting layer and raindrop size distribution can be exploited to further improve radar quantitative precipitation estimation (QPE). Using dual-polarimetric radar and disdrometers, we found that the characteristic size of raindrops reaching the ground in stratiform precipitation often varies linearly with the depth of the melting layer. As a result, a radar rainfall estimator was formulated using D(sub m) that can be employed by polarimetric as well as dual-frequency radars (e.g., space-based radars such as the GPM DPR) to lower the bias and uncertainty of conventional single-parameter radar rainfall estimates by as much as 20%. Polarimetric radar also suffers from issues associated with sampling the vertical distribution of precipitation. Hence, we characterized the effect of the vertical profile of polarimetric parameters (VP3), a radar manifestation of the evolving size and shape of hydrometeors as they fall to the ground, on dual-polarimetric rainfall estimation. The VP3 analysis revealed that the profile of ZDR in stratiform rainfall can bias dual-polarimetric rainfall estimators by as much as 50%, even after correction for the vertical profile of reflectivity (VPR). The VP3 correction technique that we developed can improve operational dual-polarimetric rainfall estimates by 13% beyond that offered by a VPR correction alone.
Capello, Katia; Bortolotti, Laura; Lanari, Manuela; Baioni, Elisa; Mutinelli, Franco; Vascellari, Marta
2015-01-01
The knowledge of the size and demographic structure of animal populations is a necessary prerequisite for any population-based epidemiological study, especially to ascertain and interpret prevalence data, to implement surveillance plans for controlling zoonotic diseases and, moreover, to provide accurate estimates of tumour incidence from population-based registries. The main purpose of this study was to provide an accurate estimate of the size and structure of the canine population in the Veneto region (north-eastern Italy), using the Lincoln-Petersen version of the capture-recapture methodology. The Regional Canine Demographic Registry (BAC) and a sample survey of households of the Veneto region were the capture and recapture sources, respectively. The secondary purpose was to estimate the size and structure of the feline population in the same region, using the same survey applied to the dog population. A sample of 2465 randomly selected households was drawn and administered a questionnaire using the CATI technique, in order to obtain information about the ownership of dogs and cats. If a dog was declared to be identified, the owner's information was used to recapture the dog in the BAC. The study was conducted in the Veneto region during 2011, when the dog population recorded in the BAC was 605,537. Overall, 616 households (25%) declared owning at least one dog, with a total of 805 dogs and an average of 1.3 per household. The capture-recapture analysis showed that 574 dogs (71.3%, 95% CI: 68.04-74.40%) had been recaptured in both sources, providing a dog population estimate of 849,229 (95% CI: 814,747-889,394), 40% higher than that registered in the BAC. Concerning cats, 455 of 2465 households (18%, 95% CI: 17-20%) declared owning at least one cat at the time of the telephone interview, with a total of 816 cats.
The mean number of cats per household was 1.8, providing an estimate of the cat population in the Veneto region of 663,433 (95% CI: 626,585-737,159). The estimates of the size and structure of the owned canine and feline populations in the Veneto region provide useful data for epidemiological studies and monitoring plans in this area. Copyright © 2014 Elsevier B.V. All rights reserved.
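The dog estimate above can be reproduced directly from the reported counts with the Lincoln-Petersen formula:

```python
# Lincoln-Petersen two-source estimate using the figures reported above:
# n1 dogs registered in the BAC, n2 dogs found by the household survey,
# and m dogs present in both sources.
def lincoln_petersen(n1, n2, m):
    return n1 * n2 / m

n_hat = lincoln_petersen(n1=605_537, n2=805, m=574)
print(round(n_hat))  # 849229, matching the estimate in the abstract
```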
Rossi, Carla
2013-06-01
The size of the illicit drug market is an important indicator for assessing the societal impact of an important part of the illegal economy and for evaluating drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods, based on indirect measures and on data from various sources, such as administrative data sets and surveys. The combined use of several methodologies and data sets reduces the biases and inaccuracies of estimates obtained on the basis of each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods. Several data sets have been used, both administrative and survey data sets. First, the retail dealer prevalence was estimated on the basis of administrative data, then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost were estimated, which allows estimating the size of the drug market. The estimates have been obtained using a supply-side approach and a demand-side approach and have been compared. These results are in turn used for estimating the interception rate for the different substances in terms of the value of the substance seized with respect to the total value of the substance to be sold at retail prices.
Bussières, Philippe
2014-05-12
Because it is difficult to obtain transverse views of plant phloem sieve plate pores, which are short tubes, to estimate their number and diameters, a method based on longitudinal views is proposed. This method uses recent techniques for estimating the number and sizes of approximately circular objects from their images, given by slices perpendicular to the objects. Moreover, because such longitudinal views are obtained from slices that are rather close to the plate centres, whereas pore size may vary with the distance of a pore from the plate edge, a sieve plate reconstruction model was developed and incorporated in the method to account for this bias. The method was successfully tested with published longitudinal views of soybean phloem and an exceptional entire transverse view from the same tissue. The method was also validated with simulated slices through two sieve plates from Cucurbita and Phaseolus. This method will likely be useful for estimating and modelling the hydraulic conductivity and architecture of the plant phloem, and it could have applications for other materials with approximately cylindrical structures.
3D brain tumor localization and parameter estimation using thermographic approach on GPU.
Bousselham, Abdelmajid; Bouattane, Omar; Youssfi, Mohamed; Raihani, Abdelhadi
2018-01-01
The aim of this paper is to present a GPU parallel algorithm for brain tumor detection that estimates tumor size and location from the surface temperature distribution obtained by thermography. The normal brain tissue is modeled as a rectangular cube containing a spherical tumor. The temperature distribution is calculated using the forward three-dimensional Pennes bioheat transfer equation, which is solved using a massively parallel Finite Difference Method (FDM) implemented on a Graphics Processing Unit (GPU). A Genetic Algorithm (GA) was used to solve the inverse problem and estimate the tumor size and location by minimizing an objective function comparing measured surface temperatures to those obtained by numerical simulation. The parallel implementation of the Finite Difference Method significantly reduces the time of the bioheat transfer simulation and greatly accelerates the inverse identification of the brain tumor's thermophysical and geometrical properties. Experimental results show significant gains in computational speed on the GPU, achieving a speedup of around 41 compared to the CPU. A performance analysis of the estimation as a function of tumor size inside the brain tissue is also presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling misidentification errors that result from use of genetic tags in capture-recapture studies
Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.
2011-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
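The overestimation mechanism can be illustrated with a deterministic expected-value example using hypothetical numbers: ghost identities created by misidentification remove recaptures, which inflates a simple Lincoln-Petersen estimate.

```python
# Expected-value illustration, with hypothetical numbers, of why ignoring
# misidentification inflates a Lincoln-Petersen estimate: each misread
# genetic tag creates a "ghost" identity that can never be recaptured.
def lincoln_petersen(n1, n2, m):
    return n1 * n2 / m

N_true, p = 100, 0.5              # true size and per-occasion capture probability
n1 = n2 = N_true * p              # expected captures on each occasion
m_clean = N_true * p * p          # expected recaptures with perfect identification

alpha = 0.2                       # fraction of occasion-2 samples misread as new animals
m_misid = m_clean * (1 - alpha)   # misread recaptures are lost as matches

print(lincoln_petersen(n1, n2, m_clean))   # 100.0, unbiased
print(lincoln_petersen(n1, n2, m_misid))   # 125.0, an overestimate
```

Since the estimate scales as 1/m, even a modest misidentification rate translates directly into upward bias, which is the effect the authors' models correct for.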
A Method for Estimating Noise from Full-Scale Distributed Exhaust Nozzles
NASA Technical Reports Server (NTRS)
Kinzie, Kevin W.; Schein, David B.
2004-01-01
A method to estimate the full-scale noise suppression from a scale model distributed exhaust nozzle (DEN) is presented. For a conventional scale model exhaust nozzle, Strouhal number scaling using a scale factor related to the nozzle exit area is typically applied, which shifts model scale frequency in proportion to the geometric scale factor. However, model scale DEN designs have two inherent length scales. One is associated with the mini-nozzles, whose size does not change in going from model scale to full scale. The other is associated with the overall nozzle exit area, which is much smaller than full size. Consequently, lower frequency energy that is generated by the coalesced jet plume should scale to lower frequency, but higher frequency energy generated by individual mini-jets does not shift frequency. In addition, jet-jet acoustic shielding by the array of mini-nozzles is a significant noise reduction effect that may change with DEN model size. A technique has been developed to scale laboratory model spectral data based on the premise that high and low frequency content must be treated differently during the scaling process. The model-scale distributed exhaust spectra are divided into low and high frequency regions that are then adjusted to full scale separately based on different physics-based scaling laws. The regions are then recombined to create an estimate of the full-scale acoustic spectra. These spectra can then be converted to perceived noise levels (PNL). The paper presents the details of this methodology and provides an example of the estimated noise suppression by a distributed exhaust nozzle compared to a round conic nozzle.
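The two-regime scaling logic can be sketched as follows; the split frequency, scale factor and spectrum values are assumptions for illustration, not the paper's data or its exact procedure.

```python
import numpy as np

# Sketch of the two-regime scaling idea (assumed form, not the paper's
# actual procedure): low-frequency content from the coalesced plume is
# Strouhal-shifted down by the geometric scale factor, while
# high-frequency mini-jet content keeps its model-scale frequencies.
def scale_spectrum(freqs, levels, scale_factor, f_split):
    freqs = np.asarray(freqs, dtype=float)
    out_f = np.where(freqs < f_split, freqs / scale_factor, freqs)
    return out_f, np.asarray(levels, dtype=float)

f_model = [500.0, 1000.0, 2000.0, 8000.0, 16000.0]   # Hz, model scale
spl = [80.0, 85.0, 83.0, 78.0, 70.0]                  # dB, hypothetical levels
f_full, spl_full = scale_spectrum(f_model, spl, scale_factor=10.0, f_split=4000.0)
print(f_full)  # low-frequency bins shift down by 10x; high-frequency bins unchanged
```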
Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.
Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L
2008-04-01
The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdfs). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 × 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of the edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (sigma(2)/mu) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved the sharpness of the edges. In conclusion, a trade-off between preservation of edge sharpness and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
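The moment-based estimators mentioned above follow from the Gamma distribution's first two moments (mean = shape × scale, variance = shape × scale²); here is a sketch on simulated window data, not real echo amplitudes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Moment-based Gamma-pdf parameter estimates for an image window, as in the
# parametric images described above: scale = var/mean (the sigma(2)/mu
# statistic) and shape = mean^2/var. Window contents are simulated here,
# not real echo amplitudes.
shape_true, scale_true = 4.0, 2.5
window = rng.gamma(shape_true, scale_true, size=(64, 64))

mu = window.mean()
var = window.var(ddof=1)
scale_hat = var / mu
shape_hat = mu ** 2 / var

print(shape_hat, scale_hat)  # close to the true 4.0 and 2.5
```

The same two statistics computed in a sliding window produce the parametric images described above; the shape estimate is noisier, which matches the abstract's finding that it needed larger windows.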
Can high resolution topographic surveys provide reliable grain size estimates?
NASA Astrophysics Data System (ADS)
Pearson, Eleanor; Smith, Mark; Klaar, Megan; Brown, Lee
2017-04-01
High resolution topographic surveys contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-grid scale topographic variability (or 'surface roughness') to particle grain size by deriving empirical relationships between the two. Such relationships would permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing data to drive distributed hydraulic models and revolutionising monitoring of river restoration projects. However, comparison of previous roughness-to-grain-size relationships shows substantial variability between field sites, and these relationships do not take into account differences in patch-scale facies. This study explains this variability by identifying the factors that influence roughness-to-grain-size relationships. Using 275 laboratory and field-based Structure-from-Motion (SfM) surveys, we investigate the influence of: inherent survey error; irregularity of natural gravels; particle shape; grain packing structure; sorting; and form roughness on roughness-to-grain-size relationships. A suite of empirical relationships is presented in the form of a decision tree which improves estimates of grain size. Results indicate that the survey technique itself is capable of providing accurate grain size estimates. By accounting for differences in patch facies, R2 was seen to improve from 0.769 to R2 > 0.9 for certain facies. However, at present, the method is unsuitable for poorly sorted gravel patches. In future, a combination of a surface roughness proxy with photosieving techniques using SfM-derived orthophotos may offer improvements on using either technique individually.
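A roughness-to-grain-size relationship of the sort described can be calibrated by simple regression; the sketch below uses synthetic data under an assumed linear link, not the study's surveys.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of calibrating an empirical roughness-to-grain-size relation
# (synthetic data; the study's actual relationships are facies-specific).
d50_true = np.linspace(10, 100, 40)                 # grain size, mm
roughness = 0.4 * d50_true + rng.normal(0, 2, 40)   # assumed linear link, mm

slope, intercept = np.polyfit(roughness, d50_true, 1)
d50_pred = slope * roughness + intercept
r2 = 1 - np.sum((d50_true - d50_pred) ** 2) / np.sum((d50_true - d50_true.mean()) ** 2)
print(slope, r2)  # slope near 1/0.4; high R2 for this clean synthetic case
```

In the study, separate fits per facies are what lift R² above 0.9; pooling sites with different packing and sorting is exactly what degrades a single global relation.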
NASA Astrophysics Data System (ADS)
Wang, Xubo; Li, Qi; Yu, Hong; Kong, Lingfeng
2016-12-01
Four successive mass selection lines of the Pacific oyster, Crassostrea gigas, selected for faster growth in breeding programs in China, were examined at ten polymorphic microsatellite loci to assess the level of allelic diversity and estimate the effective population size. These data were compared with those of their base population. The results showed that the genetic variation of the four generations was maintained at a high level, with an average allelic richness of 18.8-20.6 and a mean expected heterozygosity of 0.902-0.921, not reduced compared with the base population. Effective population sizes estimated from temporal variances in microsatellite frequencies were smaller than the sex ratio-corrected broodstock count estimates. Using a relatively large number of broodstock and keeping an equal sex ratio in the broodstock each generation may have contributed to retaining the original genetic diversity and maintaining a relatively large effective population size. The results obtained in this study showed that genetic variation was not greatly affected by the mass selection process and that high genetic variation still exists in the mass selection lines, suggesting that there is still potential for increasing gains in future generations of C. gigas. The present study provides important information for future genetic improvement by selective breeding, and for the design of suitable management guidelines for the genetic breeding of C. gigas.
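A common way to turn temporal variance in allele frequencies into an effective-size estimate is the temporal method; the sketch below uses the standard bias-corrected formula with invented inputs, since the abstract does not report its F values.

```python
# Sketch of the temporal method for effective population size (after
# Waples 1989), with invented inputs: F is the standardised variance in
# allele frequency between samples, S0 and St are the sample sizes, and
# t is the number of generations between them.
def ne_temporal(F, S0, St, t):
    # Bias-corrected denominator accounting for sampling error in both samples.
    return t / (2.0 * (F - 1.0 / (2 * S0) - 1.0 / (2 * St)))

ne_hat = ne_temporal(F=0.018, S0=100, St=100, t=4)
print(round(ne_hat, 1))  # 250.0
```

Subtracting the 1/(2S) terms removes the allele-frequency variance contributed by finite sampling, so only genuine drift inflates F and shrinks the Ne estimate.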
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning of interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert the parameters (mean and standard deviation (SD)) of a normally distributed variable to a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using Middle-Upper Arm Circumference (MUAC). We assessed the performance of PROBIT Method I, with the mean MUAC from the survey sample and the MUAC SD from a database of previous surveys, and PROBIT Method II, with the mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for each of eight sample sizes. Overall, the methods were tested on 681,600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage over the classic method in the assessment of acute malnutrition prevalence based on MUAC. Their use would require much lower sample sizes, enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
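The core PROBIT conversion is just the normal CDF evaluated at the cut-off; here is a sketch with hypothetical MUAC summary statistics.

```python
from math import erf, sqrt

# PROBIT-style prevalence sketch: convert the sample mean and SD of MUAC
# into the probability of falling below a cut-off, assuming MUAC is
# normally distributed. The summary statistics below are hypothetical.
def probit_prevalence(mean, sd, cutoff):
    z = (cutoff - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# A commonly used MUAC cut-off for acute malnutrition is 125 mm.
prev = probit_prevalence(mean=140.0, sd=12.0, cutoff=125.0)
print(round(prev, 4))  # 0.1056, i.e. ~10.6% estimated prevalence
```

Because the whole sample informs the mean and SD rather than only the children below the cut-off, the estimate is far more precise at small sample sizes than a direct count, which is the advantage the study quantifies.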
Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew
2016-03-01
It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different levels of data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have a viral load <500 copies/ml. In the pseudo-epidemic example, the greater the data availability to calibrate the model, the narrower the plausibility ranges of the HIV estimates and the closer they are to the true number. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider to reflect the greater uncertainty of the data used to fit the model.
The Distributional Impact of In-Kind Public Benefits in European Countries
ERIC Educational Resources Information Center
Paulus, Alari; Sutherland, Holly; Tsakloglou, Panos
2010-01-01
International comparisons of inequality based on measures of disposable income may not be valid if the size and incidence of publicly provided in-kind benefits differ across the countries considered. The benefits that are financed by taxation in one country may need to be purchased out of disposable income in another. We estimate the size and…
NASA Astrophysics Data System (ADS)
Adirosi, E.; Baldini, L.; Roberto, N.; Gatlin, P.; Tokay, A.
2016-03-01
A measurement scheme based on collocated disdrometer and profiling instruments is used in many experimental campaigns to investigate precipitation properties. The raindrop size distribution (RSD) estimated by a disdrometer refers to ground level; the collocated profiling instrument provides complementary estimates at different heights in the precipitation column above the instruments. As part of Special Observation Period 1 of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) project, conducted between 5 September and 6 November 2012, a K-band vertically pointing micro rain radar (MRR) and a 2D video disdrometer (2DVD) were installed close to each other at a site in the historic center of Rome (Italy). The raindrop size distributions collected by the 2D video disdrometer are considered fairly accurate within the typical range of drop sizes. Vertical profiles of raindrop sizes up to 1085 m are estimated from the Doppler spectra measured by the micro rain radar with a height resolution of 35 m. Several issues related to vertical winds, attenuation correction, Doppler spectrum aliasing, and range-Doppler ambiguity limit the performance of the MRR in heavy precipitation or in convection, conditions that frequently occur in late summer or autumn in Mediterranean regions. In this paper, MRR Doppler spectra are reprocessed, exploiting the 2DVD measurements at ground level to estimate the effects of vertical winds at 105 m (the lowest reliable MRR height), in order to provide a better estimation of vertical profiles of the raindrop size distribution from MRR spectra. Results show that the reprocessing procedure leads to better agreement between the reflectivity computed at 105 m from the reprocessed MRR spectra and that obtained from the 2DVD data.
Finally, vertical profiles of MRR-estimated RSDs and their relevant moments (namely median volume diameter and reflectivity) are presented and discussed in order to investigate the microstructure of rain in both stratiform and convective conditions.
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
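Analogy-based estimation of the kind described above is commonly implemented as a k-nearest-neighbor lookup over a historical project database. A hedged sketch, not the NASA model itself (the single size feature and the min-max normalization are assumptions for illustration):

```python
import numpy as np

def knn_effort_estimate(history_features, history_effort, query, k=3):
    """Analogy-based estimate: mean effort of the k most similar
    past projects. Features are min-max normalized so that no single
    attribute dominates the Euclidean distance."""
    hist = np.asarray(history_features, dtype=float)
    q = np.asarray(query, dtype=float)
    lo, hi = hist.min(axis=0), hist.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    hist_n = (hist - lo) / span
    q_n = (q - lo) / span
    dist = np.linalg.norm(hist_n - q_n, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(np.mean(np.asarray(history_effort, dtype=float)[nearest]))
```

For example, with past projects of 10, 20, 30, and 100 KLOC, a 25 KLOC query averages the effort of its three closest analogues.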
Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S
2013-06-01
Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
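The attenuation factors reported above translate into a concrete sample size penalty: if measurement error shrinks the observed exposure-disease slope by a factor lambda, detecting it with the same power requires roughly 1/lambda^2 times as many subjects. A rough sketch of this standard first-order rule (an approximation, not a calculation taken from the study):

```python
import math

def inflated_sample_size(n_if_measured_exactly, attenuation):
    """If measurement error attenuates the regression slope by a factor
    `attenuation` (0 < attenuation <= 1), keeping the same power to
    detect the association requires roughly n / attenuation**2 subjects."""
    return math.ceil(n_if_measured_exactly / attenuation ** 2)
```

With the attenuation factors of 0.43-0.73 observed in OPEN, the required sample size would inflate by a factor of roughly 2 to 5.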
Improved Sizing of Impact Damage in Composites Based on Thermographic Response
NASA Technical Reports Server (NTRS)
Winfree, William P.; Howell, Patricia A.; Leckey, Cara A.; Rogge, Matthew D.
2013-01-01
Impact damage in thin carbon fiber reinforced polymer composites often results in a relatively small region of damage at the front surface, with increasing damage near the back surface. Conventional methods for reducing the pulsed thermographic responses of the composite tend to underestimate the size of the back surface damage, since the smaller near surface damage gives the largest thermographic indication. A method is presented for reducing the thermographic data to produce an estimated size for the impact damage that is much closer to the size of the damage estimated from other NDE techniques such as microfocus x-ray computed tomography and pulse echo ultrasonics. Examples of the application of the technique to experimental data acquired on specimens with impact damage are presented. The method is also applied to the results of thermographic simulations to investigate the limitations of the technique.
Sizing and Lifecycle Cost Analysis of an Ares V Composite Interstage
NASA Technical Reports Server (NTRS)
Mann, Troy; Smeltzer, Stan; Grenoble, Ray; Mason, Brian; Rosario, Sev; Fairbairn, Bob
2012-01-01
The Interstage Element of the Ares V launch vehicle was sized using a commercially available structural sizing software tool. Two different concepts were considered, a metallic design and a composite design. Both concepts were sized using similar levels of analysis fidelity and included the influence of design details on each concept. Additionally, the impact of the different manufacturing techniques and failure mechanisms for composite and metallic construction were considered. Significant details were included in analysis models of each concept, including penetrations for human access, joint connections, as well as secondary loading effects. The designs and results of the analysis were used to determine lifecycle cost estimates for the two Interstage designs. Lifecycle cost estimates were based on industry provided cost data for similar launch vehicle components. The results indicated that significant mass as well as cost savings are attainable for the chosen composite concept as compared with a metallic option.
Using e-mail recruitment and an online questionnaire to establish effect size: A worked example.
Kirkby, Helen M; Wilson, Sue; Calvert, Melanie; Draper, Heather
2011-06-09
Sample size calculations require effect size estimations. Sometimes, effect size estimations and standard deviations may not be readily available, particularly if efficacy is unknown because the intervention is new or developing, or the trial targets a new population. In such cases, one way to estimate the effect size is to gather expert opinion. This paper reports the use of a simple strategy to gather expert opinion to estimate a suitable effect size to use in a sample size calculation. Researchers involved in the design and analysis of clinical trials were identified at the University of Birmingham and via the MRC Hubs for Trials Methodology Research. An email invited them to participate. An online questionnaire was developed using the free online tool 'Survey Monkey©'. The questionnaire described an intervention, an electronic participant information sheet (e-PIS), which may increase recruitment rates to a trial. Respondents were asked how much of an increase in recruitment rate they would need to see, based on 90%, 70%, 50%, and 30% baseline rates (in a hypothetical study), before they would consider using an e-PIS in their research. Analyses comprised simple descriptive statistics. The invitation to participate was sent to 122 people; 7 responded to say they were not involved in trial design and could not complete the questionnaire, 64 attempted it, and 26 failed to complete it. Thirty-eight people completed the questionnaire and were included in the analysis (response rate 33%; 38/115). Of those who completed the questionnaire, 44.7% (17/38) were at the academic grade of research fellow, 26.3% (10/38) senior research fellow, and 28.9% (11/38) professor. Depending on the baseline recruitment rate presented in the questionnaire, participants wanted the recruitment rate to increase by 6.9% to 28.9% before they would consider using the intervention. 
This paper has shown that in situations where effect size estimations cannot be collected from previous research, opinions from researchers and trialists can be quickly and easily collected by conducting a simple study using email recruitment and an online questionnaire. The results collected from the survey were successfully used in sample size calculations for a PhD research study protocol.
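Once a target increase in recruitment rate has been elicited this way, it feeds a standard sample size calculation for two proportions. A sketch of the usual normal-approximation formula (illustrative only, not the exact calculation used in the PhD protocol):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test of two independent
    proportions: pooled variance under the null, unpooled under the
    alternative (normal approximation)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance quantile
    z_b = z(power)           # power quantile
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)
```

For example, detecting an increase from a 50% to a 60% recruitment rate at 5% two-sided significance and 80% power requires roughly 388 participants per arm.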
Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Wahi, A. K.
2003-12-01
Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. 
These limitations reduce the set of all possible well combinations by 98 percent and show that size alone, as defined by triangle area, is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
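A three-point estimator of this kind reduces to fitting the unique plane through three head measurements. A minimal sketch (the coordinate and azimuth conventions are assumptions for illustration):

```python
import math

def three_point_gradient(p1, p2, p3):
    """Hydraulic gradient from three head measurements (x, y, h).
    Fits the unique plane h = a + b*x + c*y through the points; the
    gradient of decreasing head is (-b, -c), since flow goes from high
    to low head. Returns (magnitude, direction in degrees from +x)."""
    (x1, y1, h1), (x2, y2, h2), (x3, y3, h3) = p1, p2, p3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    b = ((h2 - h1) * (y3 - y1) - (h3 - h1) * (y2 - y1)) / det
    c = ((x2 - x1) * (h3 - h1) - (x3 - x1) * (h2 - h1)) / det
    mag = math.hypot(b, c)
    direction = math.degrees(math.atan2(-c, -b))
    return mag, direction
```

Three wells at (0, 0), (100, 0), and (0, 100) m with heads 10, 9, and 10 m yield a gradient of 0.01 directed along +x, the sort of quantity whose sensitivity to measurement error and triangle shape the study examines.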
Estimating snow leopard population abundance using photography and capture-recapture techniques
Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.
2006-01-01
Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March 2003-2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures, resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 individuals per 100 km2 (SE = 0.22) in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to differences in trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary to generate a statistically robust estimate of population density and abundance based on CMR models.
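For the simplest two-occasion design, the abundance calculation that programs like CAPTURE generalize can be sketched with Chapman's bias-corrected Lincoln-Petersen estimator (a much simpler model than the closed-population models fit in the study, shown here only to make the idea concrete):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population
    size from two capture occasions: n1 animals marked on occasion 1,
    n2 caught on occasion 2, of which m2 are recaptures.
    Returns (N_hat, standard error)."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5
```

With photographic identification, "marks" are the individually distinct pelage patterns, so n1, n2, and m2 come directly from matched photographs.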
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. 
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. 
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
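The distance-sampling logic behind program DISTANCE can be illustrated with its simplest case, a half-normal detection function on a line transect (the trapping-web estimators compared in the study are more involved; this is a sketch of the underlying idea only):

```python
import math

def line_transect_density(distances, total_line_length):
    """Half-normal line-transect density estimate. Detection probability
    g(x) = exp(-x^2 / (2 s^2)); the MLE of s^2 is the mean squared
    perpendicular distance, the effective strip half-width is
    mu = s * sqrt(pi/2), and density is D = n / (2 * L * mu)."""
    n = len(distances)
    s2 = sum(x * x for x in distances) / n
    mu = math.sqrt(s2 * math.pi / 2)
    return n / (2 * total_line_length * mu)
```

Densities estimated this way come directly from the distances, with no separate effective-area adjustment of the kind the grid-based estimators require.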
Richter, Jacob T.; Sloss, Brian L.; Isermann, Daniel A.
2016-01-01
Previous research has generally ignored the potential effects of spawning habitat availability and quality on recruitment of Walleye Sander vitreus, largely because information on spawning habitat is lacking for many lakes. Furthermore, traditional transect-based methods used to describe habitat are time and labor intensive. Our objectives were to determine if side-scan sonar could be used to accurately classify Walleye spawning habitat in the nearshore littoral zone and provide lakewide estimates of spawning habitat availability similar to estimates obtained from a transect–quadrat-based method. Based on assessments completed on 16 northern Wisconsin lakes, interpretation of side-scan sonar images resulted in correct identification of substrate size-class for 93% (177 of 191) of selected locations and all incorrect classifications were within ± 1 class of the correct substrate size-class. Gravel, cobble, and rubble substrates were incorrectly identified from side-scan images in only two instances (1% misclassification), suggesting that side-scan sonar can be used to accurately identify preferred Walleye spawning substrates. Additionally, we detected no significant differences in estimates of lakewide littoral zone substrate compositions estimated using side-scan sonar and a traditional transect–quadrat-based method. Our results indicate that side-scan sonar offers a practical, accurate, and efficient technique for assessing substrate composition and quantifying potential Walleye spawning habitat in the nearshore littoral zone of north temperate lakes.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimating stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
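Once stratum variances are in hand, optimum allocation follows Neyman's rule: sample each stratum in proportion to the product of its size and its standard deviation. A minimal sketch (the naive rounding is a simplification; production allocations usually use a largest-remainder scheme):

```python
def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Neyman optimum allocation: n_h proportional to N_h * S_h,
    where N_h is the stratum population size and S_h the stratum
    standard deviation."""
    weights = [n_h * s_h for n_h, s_h in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]
```

Two equal-sized strata whose standard deviations differ threefold receive a 1:3 split of the sample, which is exactly why good initial variance estimates matter.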
A universal approximation to grain size from images of non-cohesive sediment
Buscombe, D.; Rubin, D.M.; Warrick, J.A.
2010-01-01
The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a “universal approximation” because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.
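As a loose illustration of the spectral idea (not the authors' algorithm, which uses the full two-dimensional spectrum rather than a single peak), a characteristic length scale can be read off the power spectrum of a sediment image:

```python
import numpy as np

def dominant_scale(image, resolution=1.0):
    """Very rough spectral grain-size proxy: the wavelength (in units of
    `resolution` per pixel) of the strongest peak in the radially
    collapsed 2D power spectrum. Illustrative only."""
    im = image - image.mean()
    power = np.abs(np.fft.fft2(im)) ** 2
    fy = np.fft.fftfreq(im.shape[0])
    fx = np.fft.fftfreq(im.shape[1])
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    power[fr == 0] = 0            # ignore the DC component
    f_peak = fr.flat[np.argmax(power)]
    return resolution / f_peak
```

On a synthetic image of period-8 stripes this recovers a scale of 8 pixels; real sediment images have broadband spectra, which is why the published method works with the whole spectrum and reports statistical error bounds.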
Spatial and temporal variation of body size among early Homo.
Will, Manuel; Stock, Jay T
2015-05-01
The estimation of body size among the earliest members of the genus Homo (2.4-1.5 Myr [millions of years ago]) is central to interpretations of their biology. It is widely accepted that Homo ergaster possessed increased body size compared with Homo habilis and Homo rudolfensis, and that this may have been a factor involved with the dispersal of Homo out of Africa. The study of taxonomic differences in body size, however, is problematic. Postcranial remains are rarely associated with craniodental fossils, and taxonomic attributions frequently rest upon the size of skeletal elements. Previous body size estimates have been based upon well-preserved specimens with a more reliable species assessment. Since these samples are small (n < 5) and disparate in space and time, little is known about geographical and chronological variation in body size within early Homo. We investigate temporal and spatial variation in body size among fossils of early Homo using a 'taxon-free' approach, considering evidence for size variation from isolated and fragmentary postcranial remains (n = 39). To render the size of disparate fossil elements comparable, we derived new regression equations for common parameters of body size from a globally representative sample of hunter-gatherers and applied them to available postcranial measurements from the fossils. The results demonstrate chronological and spatial variation but no simple temporal or geographical trends for the evolution of body size among early Homo. Pronounced body size increases within Africa take place only after hominin populations were established at Dmanisi, suggesting that migrations into Eurasia were not contingent on larger body sizes. The primary evidence for these marked changes among early Homo is based upon material from Koobi Fora after 1.7 Myr, indicating regional size variation. 
The significant body size differences between specimens from Koobi Fora and Olduvai support the cranial evidence for at least two co-existing morphotypes in the Early Pleistocene of eastern Africa. Copyright © 2015 Elsevier Ltd. All rights reserved.
Breast surface estimation for radar-based breast imaging systems.
Williams, Trevor C; Sill, Jeff M; Fear, Elise C
2008-06-01
Radar-based microwave breast-imaging techniques typically require the antennas to be placed at a certain distance from or on the breast surface. This requires prior knowledge of the breast location, shape, and size. The method proposed in this paper for obtaining this information is based on a modified tissue sensing adaptive radar algorithm. First, a breast surface detection scan is performed. Data from this scan are used to localize the breast by creating an estimate of the breast surface. If required, the antennas may then be placed at specified distances from the breast surface for a second tumor-sensing scan. This paper introduces the breast surface estimation and antenna placement algorithms. Surface estimation and antenna placement results are demonstrated on three-dimensional breast models derived from magnetic resonance images.
Cornelissen, Katri K; Cornelissen, Piers L; Hancock, Peter J B; Tovée, Martin J
2016-05-01
A core feature of anorexia nervosa (AN) is an over-estimation of body size. Women with AN have a different pattern of eye-movements when judging bodies, but it is unclear whether this is specific to their diagnosis or whether it is found in anyone who over-estimates body size. To address this question, we compared the eye movement patterns of three participant groups while they carried out a body size estimation task: (i) 20 women with recovering/recovered anorexia (rAN) who had concerns about body shape and weight and who over-estimated body size, (ii) 20 healthy controls who had normative levels of concern about body shape and who estimated body size accurately, and (iii) 20 healthy controls who had normative levels of concern about body shape but who over-estimated body size. Comparisons between the three groups showed that: (i) accurate body size estimators tended to look more in the waist region, and this was independent of clinical diagnosis; (ii) there is a pattern of looking at images of bodies, particularly viewing the upper parts of the torso and face, which is specific to participants with rAN but which is independent of accuracy in body size estimation. Since the over-estimating controls did not share the same body image concerns that women with rAN report, their over-estimation cannot be explained by attitudinal concerns about body shape and weight. These results suggest that a distributed fixation pattern is associated with over-estimation of body size and should be addressed in treatment programs. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:507-518). © 2016 The Authors. International Journal of Eating Disorders published by Wiley Periodicals, Inc.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
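The Bernstein-inequality interval described above can be sketched directly: for observations bounded in a range of width B, the tail bound 2·exp(−n t² / (2v + 2Bt/3)) is set equal to alpha and solved for t, a quadratic in t. Plugging in the sample variance is an extra approximation here, not the authors' exact construction:

```python
import math

def bernstein_ci(sample_mean, sample_var, n, value_range, alpha=0.05):
    """Finite-sample CI for a mean of observations bounded in a range of
    width B, from Bernstein's inequality:
    P(|mean - mu| >= t) <= 2 exp(-n t^2 / (2 v + 2 B t / 3)).
    Setting the bound equal to alpha gives n t^2 - (2B/3) L t - 2 v L = 0
    with L = log(2/alpha); we take the positive root."""
    B = value_range
    L = math.log(2 / alpha)
    a, b, c = n, -(2 * B / 3) * L, -2 * sample_var * L
    t = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return sample_mean - t, sample_mean + t
```

With n = 100, unit variance, and B = 1, the half-width is about 0.28, noticeably wider than the 0.196 of the normal-approximation interval, which is exactly the width-versus-robustness trade-off the paper discusses.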
Conducting Meta-Analyses Based on p Values
van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.
2016-01-01
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
Sample allocation balancing overall representativeness and stratum precision.
Diaz-Quijano, Fredi Alexander
2018-05-07
In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata, and researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to the stratum population; equal samples for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, for a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, while the dispersion index of stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. The balance and efficiency of the strategies changed depending on the overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in order: equal samples for each stratum; proportional to the logarithm, to the cubic root, and to the square root of the stratum population; and proportional to the stratum population itself. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
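The allocation rules compared in the study can be sketched as one parameterized function (fractional allocations are returned as-is; rounding to integers is left out for clarity):

```python
import math

def allocate(n_total, stratum_pops, rule="proportional"):
    """Distribute a preset total sample among strata: proportional to
    the stratum population, to its square or cubic root, to its natural
    logarithm, or as equal shares."""
    f = {
        "proportional": lambda N: N,
        "sqrt": lambda N: math.sqrt(N),
        "cbrt": lambda N: N ** (1 / 3),
        "log": lambda N: math.log(N),
        "equal": lambda N: 1.0,
    }[rule]
    w = [f(N) for N in stratum_pops]
    total = sum(w)
    return [n_total * wi / total for wi in w]
```

The root and logarithm rules interpolate between the two extremes: they shrink the advantage of large strata without ignoring stratum size entirely, which is the balance the study quantifies with the two dispersion indices.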
Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori
2017-10-01
In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with an "unstructured" (UN) covariance structure is increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis using the UN covariance. However, the MMRM analysis with the UN covariance can lead to convergence problems in numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with small sample sizes. We investigated the performance of the sandwich covariance estimator and covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134) fitting simpler covariance structures through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure.
The model-based covariance estimator with the UN structure without adjustment of the degrees of freedom, which is frequently used in applications, resulted in substantial inflation of the type 1 error rate. We recommend using Mancl and DeRouen's estimator in MMRM analysis if the number of subjects completing the trial is (n + 5) or less, where n is the number of planned visits. Otherwise, Kenward and Roger's method with the UN structure should be used.
Udevitz, Mark S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.
2002-01-01
Change-in-ratio (CIR) methods are used to estimate parameters for ecological populations subject to differential removals from population subclasses. Subclasses can be defined according to criteria such as sex, age, or size of individuals. Removals are generally in the form of closely monitored sport or commercial harvests. Estimation is based on observed changes in subclass proportions caused by the removals.
Udevitz, Mark S.
2014-01-01
Change-in-ratio (CIR) methods are used to estimate parameters for ecological populations subject to differential removals from population subclasses. Subclasses can be defined according to criteria such as sex, age, or size of individuals. Removals are generally in the form of closely monitored sport or commercial harvests. Estimation is based on observed changes in subclass proportions caused by the removals.
A profile of wood use in nonresidential building construction
H. N. Spelter; R. G. Anderson
This report presents estimates of the amounts of lumber, glued-laminated lumber, trusses, plywood, particleboard, hardboard, and wood shingles used in new nonresidential building construction in the United States. Use of wood products is shown for several building types, project sizes, and building components. The estimates are based on a survey of 489 projects under...
Geometric k-nearest neighbor estimation of entropy and mutual information
NASA Astrophysics Data System (ADS)
Lord, Warren M.; Sun, Jie; Bollt, Erik M.
2018-03-01
Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
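For context, the classical geometrically regular k-NN approach that g-knn generalizes can be sketched as follows. This is a minimal, illustrative pure-Python Kozachenko-Leonenko entropy estimator in one dimension (brute-force neighbor search), not the authors' g-knn implementation:

```python
import math
import random

def knn_entropy_1d(samples, k=3):
    # Kozachenko-Leonenko differential entropy estimate (nats) in 1D:
    #   H ~ psi(N) - psi(k) + log(2) + (1/N) * sum_i log(eps_i),
    # where eps_i is the distance from point i to its k-th nearest neighbour
    # and log(2) is the log-volume of the unit "ball" (interval) in 1D.
    def psi(m):
        # Digamma at a positive integer: H_{m-1} - Euler-Mascheroni gamma.
        return -0.5772156649015329 + sum(1.0 / i for i in range(1, m))
    n = len(samples)
    log_eps_sum = 0.0
    for i, x in enumerate(samples):
        dists = sorted(abs(y - x) for j, y in enumerate(samples) if j != i)
        log_eps_sum += math.log(dists[k - 1])
    return psi(n) - psi(k) + math.log(2.0) + log_eps_sum / n

random.seed(1)
data = [random.random() for _ in range(500)]
h = knn_entropy_1d(data)  # true differential entropy of U(0,1) is 0 nats
```

The geometrically regular volume element here is just a symmetric interval around each point; g-knn replaces it with an element (e.g., an ellipse) adapted to the local geometry of the measure.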
Sevelius, Jae M.
2017-01-01
Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. 
The large variety in questions used to ask about transgender identity may account for residual heterogeneity in our models. Public health implications. Under- or nonrepresentation of transgender individuals in population surveys is a barrier to understanding social determinants and health disparities faced by this population. We recommend using standardized questions to identify respondents with transgender and nonbinary gender identities, which will allow a more accurate population size estimate. PMID:28075632
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Moreira, M. A.
1983-01-01
Using digitally processed MSS/LANDSAT data as an auxiliary variable, a methodology to estimate wheat (Triticum aestivum L.) area by means of sampling techniques was developed. To perform this research, aerial photographs covering 720 sq km in the Cruz Alta test site in the NW of Rio Grande do Sul State were visually analyzed. LANDSAT digital data were analyzed using non-supervised and supervised classification algorithms; as post-processing, the classification was submitted to spatial filtering. To estimate wheat area, the regression estimation method was applied and different sample sizes and various sampling units (10, 20, 30, 40 and 60 sq km) were tested. Based on the four decision criteria established for this research, it was concluded that: (1) as the size of sampling units decreased, the percentage of sampled area required to obtain similar estimation performance also decreased; (2) the lowest percentage of the area sampled for wheat estimation with relatively high precision and accuracy through regression estimation was 90% using 10 sq km as the sampling unit; and (3) wheat area estimation by direct expansion (using only aerial photographs) was less precise and accurate when compared to those obtained by means of regression estimation.
Worley, A C; Barrett, S C
2000-10-01
Trade-offs between flower size and number seem likely to influence the evolution of floral display and are an important assumption of several theoretical models. We assessed floral trade-offs by imposing two generations of selection on flower size and number in a greenhouse population of bee-pollinated Eichhornia paniculata. We established a control line and two replicate selection lines of 100 plants each for large flowers (S+), small flowers (S-), and many flowers per inflorescence (N+). We compared realized heritabilities and genetic correlations with estimates based on restricted-maximum-likelihood (REML) analysis of pedigrees. Responses to selection confirmed REML heritability estimates (flower size, h2 = 0.48; daily flower number, h2 = 0.10; total flower number, h2 = 0.23). Differences in nectar, pollen, and ovule production between S+ and S- lines supported an overall divergence in investment per flower. Both realized and REML estimates of the genetic correlation between daily and total flower number were r = 1.0. However, correlated responses to selection were inconsistent in their support of a trade-off. In both S- lines, correlated increases in flower number indicated a genetic correlation of r = -0.6 between flower size and number. In contrast, correlated responses in N+ and S+ lines were not significant, although flower size decreased in one N+ line. In addition, REML estimates of genetic correlations between flower size and number were positive, and did not differ from zero when variation in leaf area and age at first flowering were taken into account. These results likely reflect the combined effects of variation in genes controlling the resources available for flowering and genes with opposing effects on flower size and number. Our results suggest that the short-term evolution of floral display is not necessarily constrained by trade-offs between flower size and number, as is often assumed.
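The realized heritabilities reported above follow from the breeder's equation, R = h²S. A minimal sketch with invented numbers (not values from the study):

```python
def realized_heritability(cumulative_response, cumulative_selection_differential):
    # Breeder's equation R = h^2 * S, rearranged to give the realized
    # heritability estimated from an artificial-selection experiment.
    return cumulative_response / cumulative_selection_differential

# Invented example: a 1.2 mm total response to a cumulative selection
# differential of 2.5 mm gives h^2 = 0.48 (numerically matching the
# study's flower-size h^2, but these inputs are illustrative only).
h2 = realized_heritability(1.2, 2.5)  # 0.48
```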
SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, R; Block, A; Harkenrider, M
2015-06-15
Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor was varied in size from a diameter of 0.1 to 30 mm in increments of 0.1 mm. From our previous studies using dual energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of the tracking accuracy on the margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique. The margin decreased exponentially with target size. An increase in tracking accuracy expectedly showed a decrease in margin size as well. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the margin required is less than 5 mm. This simple simulation can provide physicians with a guideline estimation of the margin necessary for clinical use of MMT based on the accuracy of their tracking and the size of the tumor.
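The Monte Carlo procedure described in the Methods can be sketched as follows. This is an illustrative reimplementation in Python (the original used Matlab); trial counts and step sizes are chosen for the example rather than taken from the study:

```python
import math
import random

def _acos(x):
    # Clamp to [-1, 1] to guard against floating-point rounding.
    return math.acos(max(-1.0, min(1.0, x)))

def covered_fraction(r_ap, r_t, d):
    # Fraction of a circular target (radius r_t) covered by an aperture
    # circle (radius r_ap) whose centre is offset by d: lens-area formula.
    if d <= r_ap - r_t:
        return 1.0          # target fully inside the aperture
    if d >= r_ap + r_t:
        return 0.0          # circles disjoint
    a1 = _acos((d * d + r_ap * r_ap - r_t * r_t) / (2 * d * r_ap))
    a2 = _acos((d * d + r_t * r_t - r_ap * r_ap) / (2 * d * r_t))
    lens = (r_ap * r_ap * (a1 - math.sin(2 * a1) / 2)
            + r_t * r_t * (a2 - math.sin(2 * a2) / 2))
    return lens / (math.pi * r_t * r_t)

def required_margin(radius_mm, sigma_mm=2.0, n_trials=5000, cover=0.95, seed=0):
    # Smallest uniform margin (searched in 0.1 mm steps) such that at least
    # `cover` of the target area is covered in at least `cover` of the trials.
    rng = random.Random(seed)
    offsets = [math.hypot(rng.gauss(0, sigma_mm), rng.gauss(0, sigma_mm))
               for _ in range(n_trials)]
    for step in range(201):  # margins 0.0 to 20.0 mm
        margin = step / 10
        n_ok = sum(1 for d in offsets
                   if covered_fraction(radius_mm + margin, radius_mm, d) >= cover)
        if n_ok / n_trials >= cover:
            return margin
    return None

m_1mm  = required_margin(0.5)    # margin for a 1 mm diameter target
m_30mm = required_margin(15.0)   # margin for a 30 mm diameter target
```

With a 2 mm tracking sigma, the required margin shrinks as the target grows, reproducing the inverse size-margin relationship reported in the Results.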
Estimating the Grain Size Distribution of Mars based on Fragmentation Theory and Observations
NASA Astrophysics Data System (ADS)
Charalambous, C.; Pike, W. T.; Golombek, M.
2017-12-01
We present here a fundamental extension to the fragmentation theory [1] which yields estimates of the distribution of particle sizes of a planetary surface. The model is valid within the size regimes of surfaces whose genesis is best reflected by the evolution of fragmentation phenomena governed by either the process of meteoritic impacts, or by a mixture with aeolian transportation at the smaller sizes. The key parameter of the model, the regolith maturity index, can be estimated as an average of that observed at a local site using cratering size-frequency measurements, orbital and surface image-detected rock counts and observations of sub-mm particles at landing sites. Through validation of ground truth from previous landed missions, the basis of this approach has been used at the InSight landing ellipse on Mars to extrapolate rock size distributions in HiRISE images down to 5 cm rock size, both to determine the landing safety risk and the subsequent probability of obstruction by a rock of the deployed heat flow mole down to 3-5 m depth [2]. Here we focus on a continuous extrapolation down to 600 µm coarse sand particles, the upper size limit that may be present through aeolian processes [3]. The parameters of the model are first derived for the fragmentation process that has produced the observable rocks via meteorite impacts over time, and therefore extrapolation into a size regime that is affected by aeolian processes has limited justification without further refinement. Incorporating thermal inertia estimates, size distributions observed by the Spirit and Opportunity Microscopic Imager [4] and Atomic Force and Optical Microscopy from the Phoenix Lander [5], the model's parameters in combination with synthesis methods are quantitatively refined further to allow transition within the aeolian transportation size regime. 
In addition, due to the nature of the model emerging in fractional mass abundance, the percentage of material by volume or mass that resides within the transported fraction on Mars can be estimated. The parameters of the model thus allow for a better understanding of the regolith's history which has implications to the origin of sand on Mars. [1] Charalambous, PhD thesis, ICL, 2015 [2] Golombek et al., Space Science Reviews, 2016 [3] Kok et al., ROPP, 2012 [4] McGlynn et al., JGR, 2011 [5] Pike et al., GRL, 2011
Improving size estimates of open animal populations by incorporating information on age
Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.
2003-01-01
Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
Robust range estimation with a monocular camera for vision-based forward collision warning system.
Park, Ki-Yeong; Hwang, Sun-Young
2014-01-01
We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
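The range recovery underlying such systems rests on flat-road pinhole geometry: a road point imaged below the horizon line maps to a range determined by the focal length and camera height. A minimal sketch under that assumption (parameter values invented, and not the authors' implementation):

```python
def estimate_range(y_bottom_px, y_horizon_px, focal_px, camera_height_m):
    # Flat-road pinhole geometry: a road point imaged v pixels below the
    # horizon lies at range Z = f * H / v, with f the focal length in
    # pixels and H the camera mounting height in metres.
    v = y_bottom_px - y_horizon_px
    if v <= 0:
        raise ValueError("vehicle bottom must lie below the estimated horizon")
    return focal_px * camera_height_m / v

# A vehicle whose bottom edge is 60 px below the estimated virtual horizon,
# seen by a 1000 px focal-length camera mounted 1.2 m above the road:
z = estimate_range(460, 400, 1000, 1.2)  # 20.0 m
```

Because range scales with 1/(y − y_horizon), an error in the horizon row translates directly into range error, which is why the paper's run-time horizon estimation matters.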
Reddy, Sheila M W; Wentz, Allison; Aburto-Oropeza, Octavio; Maxey, Martin; Nagavarapu, Sriniketh; Leslie, Heather M
2013-06-01
Market demand is often ignored or assumed to lead uniformly to the decline of resources. Yet little is known about how market demand influences natural resources in particular contexts, or the mediating effects of biological or institutional factors. Here, we investigate this problem by examining the Pacific red snapper (Lutjanus peru) fishery around La Paz, Mexico, where medium or "plate-sized" fish are sold to restaurants at a premium price. If higher demand for plate-sized fish increases the relative abundance of the smallest (recruit size class) and largest (most fecund) fish, this may be a market mechanism to increase stocks and fishermen's revenues. We tested this hypothesis by estimating the effect of prices on the distribution of catch across size classes using daily records of prices and catch. We linked predictions from this economic choice model to a staged-based model of the fishery to estimate the effects on the stock and revenues from harvest. We found that the supply of plate-sized fish increased by 6%, while the supply of large fish decreased by 4% as a result of a 13% price premium for plate-sized fish. This market-driven size selection increased revenues (14%) but decreased total fish biomass (-3%). However, when market-driven size selection was combined with limited institutional constraints, both fish biomass (28%) and fishermen's revenue (22%) increased. These results show that the direction and magnitude of the effects of market demand on biological populations and human behavior can depend on both biological attributes and institutional constraints. Fisheries management may capitalize on these conditional effects by implementing size-based regulations when economic and institutional incentives will enhance compliance, as in the case we describe here, or by creating compliance enhancing conditions for existing regulations.
What is the effect of area size when using local area practice style as an instrument?
Brooks, John M; Tang, Yuexin; Chapman, Cole G; Cook, Elizabeth A; Chrischilles, Elizabeth A
2013-08-01
Discuss the tradeoffs inherent in choosing a local area size when using a measure of local area practice style as an instrument in instrumental variable estimation when assessing treatment effectiveness. Assess the effectiveness of angiotensin converting-enzyme inhibitors and angiotensin receptor blockers on survival after acute myocardial infarction for Medicare beneficiaries using practice style instruments based on different-sized local areas around patients. We contrasted treatment effect estimates using different local area sizes in terms of the strength of the relationship between local area practice styles and individual patient treatment choices; and indirect assessments of the assumption violations. Using smaller local areas to measure practice styles exploits more treatment variation and results in smaller standard errors. However, if treatment effects are heterogeneous, the use of smaller local areas may increase the risk that local practice style measures are dominated by differences in average treatment effectiveness across areas and bias results toward greater effectiveness. Local area practice style measures can be useful instruments in instrumental variable analysis, but the use of smaller local area sizes to generate greater treatment variation may result in treatment effect estimates that are biased toward higher effectiveness. Assessment of whether ecological bias can be mitigated by changing local area size requires the use of outside data sources. Copyright © 2013 Elsevier Inc. All rights reserved.
Caught Ya! A School-Based Practical Activity to Evaluate the Capture-Mark-Release-Recapture Method
ERIC Educational Resources Information Center
Kingsnorth, Crawford; Cruickshank, Chae; Paterson, David; Diston, Stephen
2017-01-01
The capture-mark-release-recapture method provides a simple way to estimate population size. However, when used as part of ecological sampling, this method does not easily allow an opportunity to evaluate the accuracy of the calculation because the actual population size is unknown. Here, we describe a method that can be used to measure the…
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
USDA-ARS?s Scientific Manuscript database
The objective of the study is to assess the accuracy of portion-size estimates and participant preferences using various presentations of digital images. Two observational feeding studies were conducted. In both, each participant selected and consumed foods for breakfast and lunch, buffet style, se...
First Nuclear DNA Amounts in more than 300 Angiosperms
ZONNEVELD, B. J. M.; LEITCH, I. J.; BENNETT, M. D.
2005-01-01
• Background and Aims Genome size (DNA C-value) data are key biodiversity characters of fundamental significance used in a wide variety of biological fields. Since 1976, Bennett and colleagues have made scattered published and unpublished genome size data more widely accessible by assembling them into user-friendly compilations. Initially these were published as hard copy lists, but since 1997 they have also been made available electronically (see the Plant DNA C-values database www.kew.org/cval/homepage.html). Nevertheless, at the Second Plant Genome Size Meeting in 2003, Bennett noted that as many as 1000 DNA C-value estimates were still unpublished and hence unavailable. Scientists were strongly encouraged to communicate such unpublished data. The present work combines the databasing experience of the Kew-based authors with the unpublished C-values produced by Zonneveld to make a large body of valuable genome size data available to the scientific community. • Methods C-values for angiosperm species, selected primarily for their horticultural interest, were estimated by flow cytometry using the fluorochrome propidium iodide. The data were compiled into a table whose form is similar to previously published lists of DNA amounts by Bennett and colleagues. • Key Results and Conclusions The present work contains C-values for 411 taxa including first values for 308 species not listed previously by Bennett and colleagues. Based on a recent estimate of the global published output of angiosperm DNA C-value data (i.e. 200 first C-value estimates per annum) the present work equals 1·5 years of average global published output; and constitutes over 12 % of the latest 5-year global target set by the Second Plant Genome Size Workshop (see www.kew.org/cval/workshopreport.html). Hopefully, the present example will encourage others to unveil further valuable data which otherwise may lie forever unpublished and unavailable for comparative analyses. PMID:15905300
Measurement of Average Aggregate Density by Sedimentation and Brownian Motion Analysis.
Cavicchi, Richard E; King, Jason; Ripple, Dean C
2018-05-01
The spatially averaged density of protein aggregates is an important parameter that can be used to relate size distributions measured by orthogonal methods, to characterize protein particles, and perhaps to estimate the amount of protein in aggregate form in a sample. We obtained a series of images of protein aggregates exhibiting Brownian diffusion while settling under the influence of gravity in a sealed capillary. The aggregates were formed by stir-stressing a monoclonal antibody (NISTmAb). Image processing yielded particle tracks, which were then examined to determine settling velocity and hydrodynamic diameter down to 1 μm based on mean square displacement analysis. Measurements on polystyrene calibration microspheres ranging in size from 1 to 5 μm showed that the mean square displacement diameter had improved accuracy over the diameter derived from imaged particle area, suggesting a future method for correcting size distributions based on imaging. Stokes' law was used to estimate the density of each particle. It was found that the aggregates were highly porous, with density decreasing from 1.080 to 1.028 g/cm³ as the size increased from 1.37 to 4.9 μm. Published by Elsevier Inc.
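Stokes' law, inverted for particle density given a measured settling velocity and hydrodynamic diameter, can be sketched as follows. The fluid properties below are illustrative values for a water-like buffer, not the study's conditions:

```python
def stokes_density(v_settle, diameter, fluid_density=998.0,
                   viscosity=1.0e-3, g=9.81):
    # Stokes' law for a sphere settling at low Reynolds number:
    #   v = (rho_p - rho_f) * g * d**2 / (18 * mu)
    # rearranged to give the particle density rho_p in kg/m^3.
    return fluid_density + 18.0 * viscosity * v_settle / (g * diameter ** 2)

# Round trip: a 3 um particle of density 1050 kg/m^3 (1.050 g/cm^3) in a
# water-like buffer settles at v, and the inversion recovers its density.
d = 3.0e-6
v = (1050.0 - 998.0) * 9.81 * d ** 2 / (18.0 * 1.0e-3)
rho = stokes_density(v, d)  # ~1050 kg/m^3
```

The d² in the denominator is why density resolution degrades rapidly for the smallest trackable particles: a small diameter error from the Brownian-motion sizing is squared in the density estimate.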
Spatially explicit dynamic N-mixture models
Zhao, Qing; Royle, Andy; Boomer, G. Scott
2017-01-01
Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.
NASA Astrophysics Data System (ADS)
Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui
2018-05-01
A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. By using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that the results of the image rendered based on frequency response measurement are in accordance with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.
Automatic portion estimation and visual refinement in mobile dietary assessment
Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.
2011-01-01
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198
Are rapid population estimates accurate? A field trial of two different assessment methods.
Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent
2006-09-01
Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
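A minimal sketch of the Quadrat-method extrapolation described above: the mean population counted in sampled blocks of known area is scaled to the full site surface. The block counts and areas below are invented for illustration, not the Beira field data.

```python
# Hypothetical Quadrat-method population estimate: extrapolate the mean
# population counted in sampled blocks of known area to the whole site.
def quadrat_estimate(block_counts, block_area_m2, site_area_m2):
    mean_per_block = sum(block_counts) / len(block_counts)
    density = mean_per_block / block_area_m2        # people per square metre
    return density * site_area_m2

# Illustrative numbers (not the study's data): 10 blocks of 625 m^2 each,
# on a 50-hectare site.
counts = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]
print(round(quadrat_estimate(counts, 625.0, 500000.0)))
```

The T-Square method replaces the block counts with distances between housing units, which is harder to implement in the field but, as the trial found, can yield a better estimate.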
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
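A toy sketch of the stratified random-sampling scheme described above, with the nation split into three strata as in the abstract. Stratum sizes, true proportions, and the sampled quantity (here, a single prevalence) are invented assumptions for illustration.

```python
# Sketch of a stratified random-sampling estimate of a proportion using
# proportional allocation across three assumed strata. All numbers are
# illustrative, not the survey's.
import random

random.seed(0)
strata = {  # stratum -> (population size, assumed true proportion)
    "metropolitan": (700000, 0.55),
    "urban":        (450000, 0.50),
    "rural":        (190362, 0.45),
}
total_N = sum(N for N, _ in strata.values())

def stratified_estimate(sample_total=4000):
    est = 0.0
    for N, p in strata.values():
        n = round(sample_total * N / total_N)   # proportional allocation
        hits = sum(random.random() < p for _ in range(n))
        est += (N / total_N) * (hits / n)       # weight by stratum share
    return est

print(round(stratified_estimate(), 3))
```

Repeating such a simulation many times, as the study did 1,000 times, characterizes the sampling error achievable at a given total sample size.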
Understanding Past Population Dynamics: Bayesian Coalescent-Based Modeling with Covariates
Gill, Mandev S.; Lemey, Philippe; Bennett, Shannon N.; Biek, Roman; Suchard, Marc A.
2016-01-01
Effective population size characterizes the genetic variability in a population and is a parameter of paramount importance in population genetics and evolutionary biology. Kingman’s coalescent process enables inference of past population dynamics directly from molecular sequence data, and researchers have developed a number of flexible coalescent-based models for Bayesian nonparametric estimation of the effective population size as a function of time. Major goals of demographic reconstruction include identifying driving factors of effective population size, and understanding the association between the effective population size and such factors. Building upon Bayesian nonparametric coalescent-based approaches, we introduce a flexible framework that incorporates time-varying covariates that exploit Gaussian Markov random fields to achieve temporal smoothing of effective population size trajectories. To approximate the posterior distribution, we adapt efficient Markov chain Monte Carlo algorithms designed for highly structured Gaussian models. Incorporating covariates into the demographic inference framework enables the modeling of associations between the effective population size and covariates while accounting for uncertainty in population histories. Furthermore, it can lead to more precise estimates of population dynamics. We apply our model to four examples. We reconstruct the demographic history of raccoon rabies in North America and find a significant association with the spatiotemporal spread of the outbreak. Next, we examine the effective population size trajectory of the DENV-4 virus in Puerto Rico along with viral isolate count data and find similar cyclic patterns. We compare the population history of the HIV-1 CRF02_AG clade in Cameroon with HIV incidence and prevalence data and find that the effective population size is more reflective of incidence rate. 
Finally, we explore the hypothesis that the population dynamics of musk ox during the Late Quaternary period were related to climate change. [Coalescent; effective population size; Gaussian Markov random fields; phylodynamics; phylogenetics; population genetics.] PMID:27368344
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
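A minimal one-dimensional sketch of subpixel offset estimation by interpolating the correlation peak, the step at which the stair-step artifact described above arises: the interpolated offset is biased toward integer-pixel alignments. The toy correlation curve below is an assumption, not GOES-R/Landsat processing code.

```python
# Subpixel peak location by fitting a parabola through the correlation
# maximum and its two neighbours (a common subpixel registration step).
def parabolic_subpixel_peak(corr):
    i = max(range(len(corr)), key=corr.__getitem__)   # integer peak index
    ym, y0, yp = corr[i - 1], corr[i], corr[i + 1]
    # vertex of the parabola through the three samples
    return i + 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)

corr = [0.1, 0.4, 0.9, 1.0, 0.6, 0.2]   # toy correlation curve
print(parabolic_subpixel_peak(corr))
```

Correlating against a finer-resolution truth map reduces the bias of this interpolation, which is the trade-off against computation time that the paper's simulations explore.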
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models, specific scan protocols, and patient models based on accurate patient anatomy and representing a range of patient sizes. Organ doses were estimated for clinical MDCT exam protocols that pose a specific concern for radiosensitive organs or regions. These estimates include fetal dose for pregnant patients undergoing abdomen/pelvis CT exams or exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce dose to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms which can be used in simulations estimating organ dose for any patient model. 
The generalizable model is a significant contribution of this work, as it lays the foundation for future simulation of TCM using Monte Carlo methods. As a result of this research, organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel dose reduction techniques in CT and their effect on organ dose.
Performance of a large building rainwater harvesting system.
Ward, S; Memon, F A; Butler, D
2012-10-15
Rainwater harvesting is increasingly becoming an integral part of the sustainable water management toolkit. Despite a plethora of studies modelling the feasibility of the utilisation of rainwater harvesting (RWH) systems in particular contexts, there remains a significant gap in knowledge in relation to detailed empirical assessments of performance. Domestic systems have been investigated to a limited degree in the literature, including in the UK, but there are few recent longitudinal studies of larger non-domestic systems. Additionally, there are few studies comparing estimated and actual performance. This paper presents the results of a longitudinal empirical performance assessment of a non-domestic RWH system located in an office building in the UK. Furthermore, it compares actual performance with the estimated performance based on two methods recommended by the British Standards Institute - the Intermediate (simple calculations) and Detailed (simulation-based) Approaches. Results highlight that the average measured water saving efficiency (amount of mains water saved) of the office-based RWH system was 87% across an 8-month period, due to the system being over-sized for the actual occupancy level. Consequently, a similar level of performance could have been achieved using a smaller-sized tank. Estimated cost savings resulted in capital payback periods of 11 and 6 years for the actual over-sized tank and the smaller optimised tank, respectively. However, more detailed cost data on maintenance and operation is required to perform whole life cost analyses. These findings indicate that office-scale RWH systems potentially offer significant water and cost savings. They also emphasise the importance of monitoring data and that a transition to the use of Detailed Approaches (particularly in the UK) is required to (a) minimise over-sizing of storage tanks and (b) build confidence in RWH system performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
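Back-of-envelope sketches of the two performance figures discussed above: water saving efficiency (the fraction of mains demand met by rainwater) and a simple capital payback period. All input values are illustrative assumptions, not the monitored system's cost data.

```python
# Water saving efficiency: rainwater supplied as a fraction of demand.
def water_saving_efficiency(rainwater_supplied_m3, total_demand_m3):
    return rainwater_supplied_m3 / total_demand_m3

# Simple payback: capital cost divided by annual mains-water cost saving.
def payback_years(capital_cost, annual_water_saving_m3, water_price_per_m3):
    return capital_cost / (annual_water_saving_m3 * water_price_per_m3)

print(water_saving_efficiency(870.0, 1000.0))        # 87%, as measured in the study
print(round(payback_years(11000.0, 500.0, 2.0), 1))  # hypothetical inputs
```

A whole-life cost analysis would additionally need maintenance and operation costs, which, as the paper notes, were not available in sufficient detail.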
Subsampling program for the estimation of fish impingement
NASA Astrophysics Data System (ADS)
Beauchamp, John J.; Kumar, K. D.
1984-11-01
Federal regulations require operators of nuclear and coal-fired power-generating stations to estimate the number of fish impinged on intake screens. During winter months, impingement may range into the hundreds of thousands for certain species, making it impossible to count all intake screens completely. We present graphs for determining the appropriate “optimal” subsample that must be obtained to estimate the total number impinged. Since the number of fish impinged tends to change drastically within a short time period, the subsample size is determined based on the most recent data. This allows for the changing nature of the species-age composition of the impinged fish. These graphs can also be used for subsampling fish catches in an aquatic system when the size of the catch is too large to sample completely.
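The expansion step underlying any such subsampling program can be sketched simply: count fish on a random subset of screens and scale the mean count to all screens. The counts and screen numbers below are made up; the paper's contribution is the graphs for choosing how many screens to subsample.

```python
# Expand a subsample of screen counts to an estimate of total impingement.
def expand_subsample(subsample_counts, n_screens_total):
    mean_per_screen = sum(subsample_counts) / len(subsample_counts)
    return mean_per_screen * n_screens_total

counts = [420, 515, 380, 465]     # fish counted on 4 of 12 intake screens
print(expand_subsample(counts, 12))
```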
In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2012-12-01
Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. 
Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
Are Antarctic minke whales unusually abundant because of 20th century whaling?
Ruegg, Kristen C; Anderson, Eric C; Scott Baker, C; Vant, Murdoch; Jackson, Jennifer A; Palumbi, Stephen R
2010-01-01
Severe declines in megafauna worldwide illuminate the role of top predators in ecosystem structure. In the Antarctic, the Krill Surplus Hypothesis posits that the killing of more than 2 million large whales led to competitive release for smaller krill-eating species like the Antarctic minke whale. If true, the current size of the Antarctic minke whale population may be unusually high as an indirect result of whaling. Here, we estimate the long-term population size of the Antarctic minke whale prior to whaling by sequencing 11 nuclear genetic markers from 52 modern samples purchased in Japanese meat markets. We use coalescent simulations to explore the potential influence of population substructure and find that even though our samples are drawn from a limited geographic area, our estimate reflects ocean-wide genetic diversity. Using Bayesian estimates of the mutation rate and coalescent-based analyses of genetic diversity across loci, we calculate the long-term population size of the Antarctic minke whale to be 670,000 individuals (95% confidence interval: 374,000-1,150,000). Our estimate of long-term abundance is similar to, or greater than, contemporary abundance estimates, suggesting that managing Antarctic ecosystems under the assumption that Antarctic minke whales are unusually abundant is not warranted.
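The coalescent point estimate behind such long-term abundance figures reduces, for a diploid nuclear locus, to theta = 4 * Ne * mu, so Ne = theta / (4 * mu). The diversity and mutation-rate values below are illustrative assumptions chosen to land near the abstract's headline figure, not the paper's actual posterior inputs.

```python
# Long-term effective population size from nucleotide diversity (theta)
# and the per-site, per-generation mutation rate (mu), for diploids.
def effective_size(theta, mu_per_site_per_generation):
    return theta / (4.0 * mu_per_site_per_generation)

theta = 0.00268          # per-site nucleotide diversity (assumed)
mu = 1.0e-9              # mutation rate per site per generation (assumed)
print(round(effective_size(theta, mu)))
```

In practice the paper combines such estimates across 11 loci with coalescent simulations and Bayesian mutation-rate estimates, which widen the confidence interval substantially.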
Zhao, Bin; Yang, Tianxi; Zhang, Zhiyun; Hickey, Michael E; He, Lili
2018-03-06
The large-scale manufacturing and use of titanium dioxide (TiO2) particles in food and consumer products significantly increase the likelihood of human exposure and release into the environment. We present a simple and innovative approach to rapidly identify the type (anatase or rutile) and to estimate the size and concentration of TiO2 particles using Raman spectroscopy and surface-enhanced Raman spectroscopy (SERS). The identification and discrimination of rutile and anatase were based on their intrinsic Raman signatures. The concentration of the TiO2 particles was determined based on Raman peak intensity. Particle sizes were estimated based on the ratio between the Raman intensity of TiO2 and the SERS intensity of myricetin bound to the nanoparticles (NPs), which was proven to be independent of TiO2 nanoparticle concentration. The ratio calculated from the 100 nm particles was used as a cutoff value when estimating the presence of nanosized particles within a mixture. We also demonstrated the practical use of this approach when determining the type, concentration, and size of E171, a mixture that contains TiO2 particles of various sizes and is commonly used in many food products as a food additive. The presence of TiO2 anatase NPs in E171 was confirmed using the developed approach and was validated by transmission electron micrographs. TiO2 presence in pond water was also demonstrated to be within the analytical capability of this method. Our approach shows great promise for the rapid screening of nanosized rutile and anatase TiO2 particles in complex matrixes. This approach will strongly improve the measurement of TiO2 quality during production, as well as the survey capacity and risk assessment of TiO2 NPs in food, consumer goods, and environmental samples.
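The ratio-based size screen can be sketched as a simple threshold test. The cutoff value, the sample intensities, and the direction of the comparison (smaller particles assumed to give a lower TiO2-to-SERS ratio because more myricetin binds per unit mass) are all assumptions here, not values from the paper.

```python
# Threshold test on the TiO2 Raman / myricetin SERS intensity ratio,
# using an assumed reference ratio measured for 100 nm particles.
CUTOFF_RATIO_100NM = 2.4   # hypothetical reference ratio

def contains_nanosized_tio2(raman_intensity, sers_intensity):
    ratio = raman_intensity / sers_intensity
    # Assumption: smaller particles -> relatively stronger SERS -> lower ratio.
    return ratio < CUTOFF_RATIO_100NM

print(contains_nanosized_tio2(1200.0, 900.0))
```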
NASA Astrophysics Data System (ADS)
Costabel, Stephan; Weidner, Christoph; Müller-Petke, Mike; Houben, Georg
2018-03-01
The capability of nuclear magnetic resonance (NMR) relaxometry to characterise hydraulic properties of iron-oxide-coated sand and gravel was evaluated in a laboratory study. Past studies have shown that the presence of paramagnetic iron oxides and large pores in coarse sand and gravel disturbs the otherwise linear relationship between relaxation time and pore size. Consequently, the commonly applied empirical approaches fail when deriving hydraulic quantities from NMR parameters. Recent research demonstrates that higher relaxation modes must be taken into account to relate the size of a large pore to its NMR relaxation behaviour in the presence of significant paramagnetic impurities at its pore wall. We performed NMR relaxation experiments with water-saturated natural and reworked sands and gravels, coated with natural and synthetic ferric oxides (goethite, ferrihydrite), and show that the impact of the higher relaxation modes increases significantly with increasing iron content. Since the investigated materials exhibit narrow pore size distributions, and can thus be described by a virtual bundle of capillaries with identical apparent pore radius, recently presented inversion approaches allow for estimation of a unique solution yielding the apparent capillary radius from the NMR data. We found the NMR-based apparent radii to correspond well to the effective hydraulic radii estimated from the grain size distributions of the samples for the entire range of observed iron contents. Consequently, they can be used to estimate the hydraulic conductivity using the well-known Kozeny-Carman equation without any calibration that is otherwise necessary when predicting hydraulic conductivities from NMR data. Our future research will focus on the development of relaxation time models that consider pore size distributions. 
Furthermore, we plan to establish a measurement system based on borehole NMR for localising iron clogging and controlling its remediation in the gravel pack of groundwater wells.
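The final step described above can be sketched with the parallel-capillary form of the Kozeny-Carman relation, k = phi * r^2 / 8, converted to hydraulic conductivity via water's properties. The exact prefactor used in the paper is not reproduced here, and the porosity and radius values are assumptions.

```python
# Hydraulic conductivity from an NMR-derived apparent capillary radius
# using a capillary-bundle (Kozeny-Carman-type) relation.
RHO_G_OVER_MU = 9.81e3 / 1.0e-3   # water: rho*g/mu in 1/(m*s), ~20 degC

def hydraulic_conductivity(porosity, radius_m):
    permeability = porosity * radius_m ** 2 / 8.0    # m^2
    return permeability * RHO_G_OVER_MU              # m/s

print(f"{hydraulic_conductivity(0.35, 50e-6):.2e}")  # assumed 50-micron pores
```

The point of the paper is that the radius fed into such a relation can be obtained from NMR without the empirical calibration that standard relaxation-time approaches require.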
Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall
NASA Astrophysics Data System (ADS)
Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate
2016-11-01
The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling, and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water, and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume, and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
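A minimal sketch of the ESM conversion described above: volume, power, cooling, and crewtime are mapped to mass units via mission-specific equivalency factors and summed with the hardware mass. The factor values below are placeholders, not ALS Program figures.

```python
# Equivalent system mass: hardware mass plus non-mass requirements
# converted to mass units through equivalency factors.
def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_hr, factors):
    return (mass_kg
            + volume_m3 * factors["kg_per_m3"]
            + power_kw * factors["kg_per_kw_power"]
            + cooling_kw * factors["kg_per_kw_cooling"]
            + crewtime_hr * factors["kg_per_crewtime_hr"])

factors = {"kg_per_m3": 66.7, "kg_per_kw_power": 237.0,   # assumed values
           "kg_per_kw_cooling": 60.0, "kg_per_crewtime_hr": 0.465}
print(round(esm(1500.0, 10.0, 5.0, 5.0, 100.0, factors), 1))
```

In the dynamic setting the paper argues for, the power, cooling, and volume inputs would be the peak values from simulation rather than nominal design points.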
Body size estimation of self and others in females varying in BMI.
Thaler, Anne; Geuss, Michael N; Mölbert, Simone C; Giel, Katrin E; Streuber, Stephan; Romero, Javier; Black, Michael J; Mohler, Betty J
2018-01-01
Previous literature suggests that a disturbed ability to accurately identify own body size may contribute to overweight. Here, we investigated the influence of personal body size, indexed by body mass index (BMI), on body size estimation in a non-clinical population of females varying in BMI. We attempted to disentangle general biases in body size estimates and attitudinal influences by manipulating whether participants believed the body stimuli (personalized avatars with realistic weight variations) represented their own body or that of another person. Our results show that the accuracy of own body size estimation is predicted by personal BMI, such that participants with lower BMI underestimated their body size and participants with higher BMI overestimated their body size. Further, participants with higher BMI were less likely to notice the same percentage of weight gain than participants with lower BMI. Importantly, these results were only apparent when participants were judging a virtual body that was their own identity (Experiment 1), but not when they estimated the size of a body with another identity and the same underlying body shape (Experiment 2a). The different influences of BMI on accuracy of body size estimation and sensitivity to weight change for self and other identity suggests that effects of BMI on visual body size estimation are self-specific and not generalizable to other bodies.
Facente, Shelley N; Grebe, Eduard; Burk, Katie; Morris, Meghan D; Murphy, Edward L; Mirzazadeh, Ali; Smith, Aaron A; Sanchez, Melissa A; Evans, Jennifer L; Nishimura, Amy; Raymond, Henry F
2018-01-01
Initiated in 2016, End Hep C SF is a comprehensive initiative to eliminate hepatitis C (HCV) infection in San Francisco. The introduction of direct-acting antivirals to treat and cure HCV provides an opportunity for elimination. To properly measure progress, an estimate of baseline HCV prevalence, and of the number of people in various subpopulations with active HCV infection, is required to target and measure the impact of interventions. Our analysis was designed to incorporate multiple relevant data sources and estimate HCV burden for the San Francisco population as a whole, including specific key populations at higher risk of infection. Our estimates are based on triangulation of data found in case registries, medical records, observational studies, and published literature from 2010 through 2017. We examined subpopulations based on sex, age and/or HCV risk group. When multiple sources of data were available for subpopulation estimates, we calculated a weighted average using inverse variance weighting. Credible ranges (CRs) were derived from 95% confidence intervals of population size and prevalence estimates. We estimate that 21,758 residents of San Francisco are HCV seropositive (CR: 10,274-42,067), representing an overall seroprevalence of 2.5% (CR: 1.2%-4.9%). Of these, 16,408 are estimated to be viremic (CR: 6,505-37,407), though this estimate includes treated cases; up to 12,257 of these (CR: 2,354-33,256) are people who are untreated and infectious. People who injected drugs in the last year represent 67.9% of viremic HCV infections. We estimated approximately 7,400 (51%) more HCV seropositive cases than are included in San Francisco's HCV surveillance case registry. Our estimate provides a useful baseline against which the impact of End Hep C SF can be measured.
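The pooling step described above can be sketched directly: when several sources give an estimate of the same subpopulation prevalence, combine them with inverse-variance weights. The estimates and variances below are invented, not the study's data.

```python
# Inverse-variance weighted mean of several estimates of one quantity;
# also returns the variance of the pooled estimate.
def inverse_variance_mean(estimates, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

est, var = inverse_variance_mean([0.024, 0.031, 0.022],
                                 [0.00004, 0.00009, 0.00002])
print(round(est, 4), var)
```

The pooled variance is always smaller than the smallest input variance, which is why triangulating multiple data sources narrows the credible range.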
Estimating numbers of greater prairie-chickens using mark-resight techniques
Clifton, A.M.; Krementz, D.G.
2006-01-01
Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km² and 87 (CI 82-94) on 73.6 km². The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
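A generic mark-resight sketch (the Chapman-modified Lincoln-Petersen estimator), shown only to illustrate estimating abundance from marked-to-unmarked sighting ratios; the study itself used the more elaborate IEJHE, and the counts below are invented.

```python
# Chapman-modified Lincoln-Petersen abundance estimate from one
# resighting survey of a population containing known marked animals.
def chapman_estimate(n_marked, n_sighted, n_marked_sighted):
    return ((n_marked + 1) * (n_sighted + 1)) / (n_marked_sighted + 1) - 1

# Illustrative counts: 20 radio-marked birds, 60 birds sighted in a
# survey, 21 of the sightings were of marked birds.
print(round(chapman_estimate(20, 60, 21)))
```

Because detectability enters through the resighting ratio, such estimators avoid the systematic undercount that the lek-survey index showed at both sites.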
García-Gómez, Joaquín; Rosa-Zurera, Manuel; Romero-Camacho, Antonio; Jiménez-Garrido, Jesús Antonio; García-Benavides, Víctor
2018-01-01
Pipeline inspection is a topic of particular interest to operating companies, especially defect sizing, which allows them to avoid subsequent costly repairs to their equipment. One solution is to use ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is that it does not require direct contact with the surface of the material under investigation, which must be conductive. Of particular interest is meander-line-coil based Lamb wave generation, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, variation in defect size changes the behavior of the signal as it passes through the pipeline. It is therefore necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimates of the depth of the defect, and to select the features that best estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE). PMID:29518927
de Monchy, Romain; Rouyer, Julien; Destrempes, François; Chayer, Boris; Cloutier, Guy; Franceschini, Emilie
2018-04-01
Quantitative ultrasound techniques based on the backscatter coefficient (BSC) have been commonly used to characterize red blood cell (RBC) aggregation. Specifically, a scattering model is fitted to measured BSC and estimated parameters can provide a meaningful description of the RBC aggregates' structure (i.e., aggregate size and compactness). In most cases, scattering models assumed monodisperse RBC aggregates. This study proposes the Effective Medium Theory combined with the polydisperse Structure Factor Model (EMTSFM) to incorporate the polydispersity of aggregate size. From the measured BSC, this model allows estimating three structural parameters: the mean radius of the aggregate size distribution, the width of the distribution, and the compactness of the aggregates. Two successive experiments were conducted: a first experiment on blood sheared in a Couette flow device coupled with an ultrasonic probe, and a second experiment, on the same blood sample, sheared in a plane-plane rheometer coupled to a light microscope. Results demonstrated that the polydisperse EMTSFM provided the best fit to the BSC data when compared to the classical monodisperse models for the higher levels of aggregation at hematocrits between 10% and 40%. Fitting the polydisperse model yielded aggregate size distributions that were consistent with direct light microscope observations at low hematocrits.
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Samei, Ehsan
2012-03-01
The purpose of this work was twofold: (a) to estimate patient- and cohort-specific radiation dose and cancer risk index for abdominopelvic computed tomography (CT) scans; (b) to evaluate the effects of patient anatomical characteristics (size, age, and gender) and CT scanner model on dose and risk conversion coefficients. The study included 100 patient models (42 pediatric models, 58 adult models) and multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). A previously validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which DLP-normalized effective dose (k factor) and DLP-normalized risk index (q factor) values were derived. The k factor showed exponential decrease with increasing patient size. For a given gender, q factor showed exponential decrease with both increasing patient size and patient age. The discrepancies in k and q factors across scanners were on average 8% and 15%, respectively. This study demonstrates the feasibility of estimating patient-specific organ dose and cohort-specific effective dose and risk index in abdominopelvic CT requiring only the knowledge of patient size, gender, and age.
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
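The ratio estimator evaluated above, with sampling-unit area as the auxiliary variable, can be sketched as follows (counts and areas are invented):

```python
def ratio_estimate(counts, areas, total_area):
    """Ratio estimator of abundance: animals per unit area in the sample,
    expanded to the total area of the study region."""
    r = sum(counts) / sum(areas)   # density estimated from sampled units
    return r * total_area

counts = [12, 0, 34, 5, 9]           # pronghorn counted in sampled units
areas = [4.0, 3.5, 6.0, 4.5, 5.0]    # unit areas, km^2
print(round(ratio_estimate(counts, areas, total_area=100.0), 1))  # → 260.9
```

As the abstract notes, this auxiliary variable only helps when counts scale with unit area; for a clumped distribution like pronghorn herds, the correlation is weak and the ratio estimator gains little over the simple expansion.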
Multichannel blind deconvolution of spatially misaligned images.
Sroubek, Filip; Flusser, Jan
2005-07-01
Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.
ERIC Educational Resources Information Center
Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.
2015-01-01
Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…
A physics-based algorithm for the estimation of bearing spall width using vibrations
NASA Astrophysics Data System (ADS)
Kogan, G.; Klein, R.; Bortman, J.
2018-05-01
Evaluation of the damage severity in a mechanical system is required for the assessment of its remaining useful life. In rotating machines, bearings are crucial components. Hence, the estimation of the size of spalls in bearings is important for prognostics of the remaining useful life. Recently, this topic has been extensively studied and many of the methods used for the estimation of spall size are based on the analysis of vibrations. A new tool is proposed in the current study for the estimation of the spall width on the outer ring raceway of a rolling element bearing. The understanding and analysis of the dynamics of the rolling element-spall interaction enabled the development of a generic and autonomous algorithm. The algorithm is generic in the sense that it does not require any human interference to make adjustments for each case. All of the algorithm's parameters are defined by analytical expressions describing the dynamics of the system. The required conditions, such as sampling rate, spall width and depth, defining the feasible region of such algorithms, are analyzed in the paper. The algorithm performance was demonstrated with experimental data for different spall widths.
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
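A minimal sketch of the bootstrap idea on synthetic data: fit a segmented regression, then residual-bootstrap the relative change in level. This simplified version omits the autocorrelation correction the paper applies, and all series values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tc = 24, 12                                   # 24 months, change at month 12
t = np.arange(n)
X = np.column_stack([np.ones(n), t, (t >= tc).astype(float),
                     np.clip(t - tc, 0, None)])
beta_true = np.array([100.0, 0.5, -15.0, -1.0])  # baseline level/trend + changes
y = X @ beta_true + rng.normal(0, 2, n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

def rel_level_change(b):
    counterfactual = b[0] + b[1] * tc            # level at tc had nothing changed
    return b[2] / counterfactual

boot = []
for _ in range(2000):                            # residual bootstrap
    yb = X @ beta + rng.choice(resid, size=n, replace=True)
    boot.append(rel_level_change(np.linalg.lstsq(X, yb, rcond=None)[0]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"level change {rel_level_change(beta):+.1%}, 95% CI ({lo:+.1%}, {hi:+.1%})")
```

The relative change is a nonlinear function of the regression coefficients, which is exactly why the paper needs either the delta method or bootstrapping to attach a CI to it.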
GONe: Software for estimating effective population size in species with generational overlap
Coombs, J.A.; Letcher, B.H.; Nislow, K.H.
2012-01-01
GONe is a user-friendly, Windows-based program for estimating effective size (Ne) in populations with overlapping generations. It uses the Jorde-Ryman modification to the temporal method to account for age structure in populations. This method requires estimates of age-specific survival and birth rate and allele frequencies measured in two or more consecutive cohorts. Allele frequencies are acquired by reading in genotypic data from files formatted for either GENEPOP or TEMPOFS. For each interval between consecutive cohorts, Ne is estimated at each locus and over all loci. Furthermore, Ne estimates are output for three different genetic drift estimators (Fs, Fc and Fk). Confidence intervals are derived from a chi-square distribution with degrees of freedom equal to the number of independent alleles. GONe has been validated over a wide range of Ne values, and for scenarios where survival and birth rates differ between sexes, sex ratios are unequal and reproductive variances differ. GONe is freely available for download at. © 2011 Blackwell Publishing Ltd.
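The temporal method underlying GONe can be sketched with the classic Nei-Tajima Fc estimator, without the Jorde-Ryman age-structure correction that GONe adds; the allele frequencies and sample sizes below are invented.

```python
def nei_tajima_ne(x, y, t, s0, st):
    """Temporal-method Ne from allele frequencies x, y at one locus in two
    samples t generations apart, with s0 and st diploid individuals sampled."""
    fc = sum((xi - yi) ** 2 / ((xi + yi) / 2 - xi * yi)
             for xi, yi in zip(x, y)) / len(x)
    fc_adj = fc - 1 / (2 * s0) - 1 / (2 * st)   # remove sampling-noise component
    return t / (2 * fc_adj)

x = [0.60, 0.40]   # cohort 1 allele frequencies (hypothetical biallelic locus)
y = [0.55, 0.45]   # cohort 2, four generations later
print(round(nei_tajima_ne(x, y, t=4, s0=200, st=200)))  # → 384
```

The sampling correction matters: with small samples, 1/(2s0) + 1/(2st) can exceed the raw Fc, which is why temporal estimates from few individuals are unstable.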
Prenatal air pollution exposure and ultrasound measures of fetal growth in Los Angeles, California.
Ritz, Beate; Qiu, Jiaheng; Lee, Pei-Chen; Lurmann, Fred; Penfold, Bryan; Erin Weiss, Robert; McConnell, Rob; Arora, Chander; Hobel, Calvin; Wilhelm, Michelle
2014-04-01
Few previous studies examined the impact of prenatal air pollution exposures on fetal development based on ultrasound measures during pregnancy. In a prospective birth cohort of more than 500 women followed during 1993-1996 in Los Angeles, California, we examined how air pollution impacts fetal growth during pregnancy. Exposure to traffic related air pollution was estimated using CALINE4 air dispersion modeling for nitrogen oxides (NOx) and a land use regression (LUR) model for nitrogen monoxide (NO), nitrogen dioxide (NO2) and NOx. Exposures to carbon monoxide (CO), NO2, ozone (O3) and particles <10 μm in aerodynamic diameter (PM10) were estimated using government monitoring data. We employed a linear mixed effects model to estimate changes in fetal size at approximately 19, 29 and 37 weeks gestation based on ultrasound. Exposure to traffic-derived air pollution during 29 to 37 weeks was negatively associated with biparietal diameter at 37 weeks gestation. For each interquartile range (IQR) increase in LUR-based estimates of NO, NO2 and NOx, or freeway CALINE4 NOx we estimated a reduction in biparietal diameter of 0.2-0.3 mm. For women residing within 5 km of a monitoring station, we estimated biparietal diameter reductions of 0.9-1.0 mm per IQR increase in CO and NO2. Effect estimates were robust to adjustment for a number of potential confounders. We did not observe consistent patterns for other growth endpoints we examined. Prenatal exposure to traffic-derived pollution was negatively associated with fetal head size measured as biparietal diameter in late pregnancy. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should be chosen to meet an acceptable level of uncertainty in the estimates. Also, a comparison of our results with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates from small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
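The Monte Carlo idea can be sketched by repeatedly fitting the intact-rock criterion σ1 = σ3 + σc·sqrt(m·σ3/σc + 1) (s = 1) to small noisy triaxial data sets and examining the spread of the fitted parameters; all numbers below are invented, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(s3, sc, m):
    """Intact-rock Hoek-Brown criterion (s = 1)."""
    return s3 + sc * np.sqrt(m * s3 / sc + 1.0)

rng = np.random.default_rng(1)
sc_true, m_true = 100.0, 10.0          # hypothetical "true" rock parameters
s3 = np.linspace(0.0, 30.0, 5)         # 5 confining pressures: a small sample

fits = []
for _ in range(500):                   # Monte Carlo replicates
    s1 = hoek_brown(s3, sc_true, m_true) * (1 + 0.05 * rng.standard_normal(5))
    popt, _ = curve_fit(hoek_brown, s3, s1, p0=[80.0, 5.0],
                        bounds=([1.0, 0.1], [1000.0, 100.0]))
    fits.append(popt)
fits = np.array(fits)
print("sc: mean %.1f, sd %.1f; m: mean %.1f, sd %.1f"
      % (fits[:, 0].mean(), fits[:, 0].std(), fits[:, 1].mean(), fits[:, 1].std()))
```

The standard deviations quantify the estimation uncertainty attributable purely to specimen scatter at a given sample size, which is the quantity the paper tracks across parameter combinations.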
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.
Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L
2013-08-13
United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
NASA Astrophysics Data System (ADS)
Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan
2015-01-01
Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements depends greatly on the processing of the raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated, and their sensitivity to the grayscale threshold value applied in the image segmentation was analysed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying effects on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold ranges produced different degrees of sensitivity for a given parameter. The estimated porosity and tortuosity were more sensitive than the surface area to volume ratio, and pore and neck size were less sensitive than particle size. Results also showed that the modulus was primarily sensitive to the porosity, which was largely controlled by the threshold value.
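The segmentation-threshold sensitivity can be illustrated on a synthetic grayscale volume; the intensities below are purely invented, standing in for a real FIB/SEM stack.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64, 64)
# Synthetic 8-bit-like "tomogram": dark pore phase (~35%) on a brighter solid
img = np.where(rng.random(shape) < 0.35,
               rng.normal(60, 15, shape),     # pore voxel intensities
               rng.normal(160, 15, shape))    # solid voxel intensities

# Porosity = fraction of voxels classified as pore at each threshold
porosities = {thr: float((img < thr).mean()) for thr in (90, 110, 130)}
for thr, p in porosities.items():
    print(f"threshold {thr}: porosity {p:.3f}")
```

Because the two intensity populations overlap, porosity rises monotonically with the chosen threshold, and every downstream quantity computed on the segmented volume inherits that dependence.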
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
Helicopter rotor and engine sizing for preliminary performance estimation
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Bowles, J. V.; Lee, H. C.
1986-01-01
Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
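As a minimal illustration of how these design variables connect, momentum theory links weight, rotor radius, disk loading and ideal hover power; a real sizing method adds figure of merit, profile power and installation losses. All values below are invented.

```python
import math

def hover_sizing(weight_n, rotor_radius_m, rho=1.225):
    """Momentum-theory first cut: disk loading and ideal hover power.
    Real rotors need roughly 15-25% more power (figure of merit < 1)."""
    area = math.pi * rotor_radius_m ** 2
    disk_loading = weight_n / area                       # N/m^2
    power_w = weight_n * math.sqrt(disk_loading / (2 * rho))
    return disk_loading, power_w

dl, p = hover_sizing(weight_n=20000.0, rotor_radius_m=5.0)
print(f"disk loading {dl:.0f} N/m^2, ideal hover power {p / 1000:.0f} kW")
```

This shows the basic trade the methods above formalize: halving the radius quadruples disk loading and roughly doubles the ideal hover power for the same weight.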
Estimating How Often Mass Extinctions Due to Impacts Occur on the Earth
NASA Technical Reports Server (NTRS)
Buratti, Bonnie J.
2013-01-01
This hands-on, inquiry-based activity has been taught at JPL's summer workshop "Teachers Touch the Sky" for the past two decades. Students act as mini-investigators as they gather and analyze data to estimate how often an impact large enough to cause a mass extinction occurs on the Earth. Large craters are counted on the Moon, and this number is extrapolated to the size of the Earth. Given the age of the Solar System, the students can then estimate how often large impacts occur on the Earth. This activity is based on an idea by Dr. David Morrison, NASA Ames Research Center.
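The activity's arithmetic can be written out directly; the crater count is a made-up classroom value, while the surface areas and age are standard round numbers.

```python
# Scale a lunar crater count to Earth by surface area, then divide the
# Solar System's age by the implied number of large impacts.
moon_area = 3.79e7        # km^2, whole Moon
earth_area = 5.10e8       # km^2
solar_system_age = 4.5e9  # years

n_craters_moon = 40       # hypothetical count of extinction-scale craters
n_craters_earth = n_craters_moon * earth_area / moon_area
interval = solar_system_age / n_craters_earth
print(f"~{n_craters_earth:.0f} impacts, one every ~{interval / 1e6:.0f} Myr")
```

This ignores real complications the workshop can discuss, such as Earth's atmosphere, erosion, and the fact that most lunar craters date from early heavy bombardment rather than accumulating at a constant rate.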
Extension of the thermal porosimetry method to high gas pressure for nanoporosimetry estimation
NASA Astrophysics Data System (ADS)
Jannot, Y.; Degiovanni, A.; Camus, M.
2018-04-01
Standard pore size determination methods like mercury porosimetry, nitrogen sorption, microscopy, or X-ray tomography are not suited to highly porous, low density, and thus very fragile materials. For this kind of material, a method based on thermal characterization was developed in a previous study. This method has been used with air pressure varying from 10⁻¹ to 10⁵ Pa for materials having a thermal conductivity of less than 0.05 W m⁻¹ K⁻¹ at atmospheric pressure, and enables the estimation of pore size distribution between 100 nm and 1 mm. In this paper, we present a new experimental device enabling thermal conductivity measurement under gas pressure up to 10⁶ Pa, enabling the estimation of the volume fraction of pores having a 10 nm diameter. It is also demonstrated that the main thermal conductivity models (parallel, series, Maxwell, Bruggeman, self-consistent) lead to the same estimation of the pore size distribution as the extended parallel model (EPM) presented in this paper and then used to process the experimental data. Three materials with thermal conductivities at atmospheric pressure ranging from 0.014 W m⁻¹ K⁻¹ to 0.04 W m⁻¹ K⁻¹ are studied. The thermal conductivity measurement results obtained with the three materials are presented, and the corresponding pore size distributions between 10 nm and 1 mm are discussed.
NASA Astrophysics Data System (ADS)
Watson, James R.; Stock, Charles A.; Sarmiento, Jorge L.
2015-11-01
Modeling the dynamics of marine populations at a global scale - from phytoplankton to fish - is necessary if we are to quantify how climate change and other broad-scale anthropogenic actions affect the supply of marine-based food. Here, we estimate the abundance and distribution of fish biomass using a simple size-based food web model coupled to simulations of global ocean physics and biogeochemistry. We focus on the spatial distribution of biomass, identifying highly productive regions - shelf seas, western boundary currents and major upwelling zones. In the absence of fishing, we estimate the total ocean fish biomass to be ∼2.84 × 10⁹ tonnes, similar to previous estimates. However, this value is sensitive to the choice of parameters, and further, allowing fish to move had a profound impact on the spatial distribution of fish biomass and the structure of marine communities. In particular, when movement is implemented the viable range of large predators is greatly increased, and the stunted biomass spectra characterizing large ocean regions in simulations without movement are replaced with expanded spectra that include large predators. These results highlight the importance of considering movement in global-scale ecological models.
Britton, Annie; O’Neill, Darragh; Bell, Steven
2016-01-01
Aims Increases in glass sizes and wine strength over the last 25 years in the UK are likely to have led to an underestimation of alcohol intake in population studies. We explore whether this probable misclassification affects the association between average alcohol intake and risk of mortality from all causes, cardiovascular disease and cancer. Methods Self-reported alcohol consumption in 1997–1999 among 7010 men and women in the Whitehall II cohort of British civil servants was linked to the risk of mortality until mid-2015. A conversion factor of 8 g of alcohol per wine glass (1 unit) was compared with a conversion of 16 g per wine glass (2 units). Results When applying a higher alcohol content conversion for wine consumption, the proportion of heavy/very heavy drinkers increased from 28% to 41% for men and 15% to 28% for women. Very heavy drinking was associated with a significantly increased risk of death from all causes and from cancer compared with moderate drinking, both before and after the change in wine conversion; however, the hazard ratios were reduced when the higher wine conversion was used. Conclusions In this population-based study, assuming higher alcohol content in wine glasses changed the estimates of mortality risk. We propose that investigator-led cohorts need to revisit conversion factors based on more accurate estimates of alcohol content in wine glasses. Prospectively, researchers need to collect more detailed information on alcohol including serving sizes and strength. Short summary The alcohol content in a wine glass is likely to be underestimated in population surveys as wine strength and serving size have increased in recent years. We demonstrate that in a large cohort study, this underestimation affects estimates of mortality risk. Investigator-led cohorts need to revisit conversion factors based on more accurate estimates of alcohol content in wine glasses. PMID:27261472
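The conversion at issue is simple arithmetic: UK units are grams of ethanol divided by 8, so doubling the assumed grams per glass doubles the estimated intake and reclassifies drinkers upward.

```python
def weekly_units(glasses_per_week, grams_per_glass):
    """UK alcohol units (1 unit = 8 g ethanol) from weekly glass counts."""
    return glasses_per_week * grams_per_glass / 8

# The same self-reported 14 glasses/week is 14 units under the historical
# 8 g/glass assumption, but 28 units if glasses actually hold 16 g:
print(weekly_units(14, 8), weekly_units(14, 16))  # → 14.0 28.0
```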
Assessing allowable take of migratory birds
Runge, M.C.; Sauer, J.R.; Avery, M.L.; Blackwell, B.F.; Koneff, M.D.
2009-01-01
Legal removal of migratory birds from the wild occurs for several reasons, including subsistence, sport harvest, damage control, and the pet trade. We argue that harvest theory provides the basis for assessing the impact of authorized take, advance a simplified rendering of harvest theory known as potential biological removal as a useful starting point for assessing take, and demonstrate this approach with a case study of depredation control of black vultures (Coragyps atratus) in Virginia, USA. Based on data from the North American Breeding Bird Survey and other sources, we estimated that the black vulture population in Virginia was 91,190 (95% credible interval = 44,520–212,100) in 2006. Using a simple population model and available estimates of life-history parameters, we estimated the intrinsic rate of growth (rmax) to be in the range 7–14%, with 10.6% a plausible point estimate. For a take program to seek an equilibrium population size on the conservative side of the yield curve, the rate of take needs to be less than that which achieves a maximum sustained yield (0.5 × rmax). Based on the point estimate for rmax and using the lower 60% credible interval for population size to account for uncertainty, these conditions would be met if the take of black vultures in Virginia in 2006 was < 3,533 birds. Based on regular monitoring data, allowable harvest should be adjusted annually to reflect changes in population size. To initiate discussion about how this assessment framework could be related to the laws and regulations that govern authorization of such take, we suggest that the Migratory Bird Treaty Act requires only that take of native migratory birds be sustainable in the long-term, that is, sustained harvest rate should be < rmax. Further, the ratio of desired harvest rate to 0.5 × rmax may be a useful metric for ascertaining the applicability of specific requirements of the National Environmental Protection Act.
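The take criterion reduces to one line of arithmetic; n_min below is an illustrative conservative population bound chosen to reproduce the abstract's figure, not the paper's exact lower 60% credible interval value.

```python
def allowable_take(r_max, n_min):
    """Simplified potential-biological-removal-style limit: hold the harvest
    rate below half the intrinsic growth rate, applied to a conservative
    (lower credible bound) population size."""
    return 0.5 * r_max * n_min

# rmax point estimate 10.6% from the abstract; n_min is illustrative
print(round(allowable_take(r_max=0.106, n_min=66660)))  # → 3533
```

Using the lower credible bound rather than the point estimate of population size is what makes the limit robust to the wide uncertainty in the survey-based abundance estimate.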
[Study on accuracy of endoscopic polyp size measurement by disposable graduated biopsy forceps].
Liu, Ping; Zhang, Xiu; Lin, Hui-ping; Jin, Hei-jing; Leng, Qiang; Zhang, Jin-hao; Zhang, Yang; Yao, Hang; Wu, Kun-lan
2013-12-01
To study the accuracy of endoscopic polyp size measurement by disposable graduated biopsy forceps (DGBF). Accurate gradation of 1 mm was made in the wire of disposable graduated biopsy forceps, which was used to measure the size of tumors under endoscopy. Fifty-eight polyps from 43 patients underwent endoscopy in our department from May to June 2013 were enrolled. Size of polyp was measured and compared among DGBF, routine estimation and direct measurement after resection. The accuracy of polyp size measurement was investigated by four colonoscopists who had finished at least 2000 procedures of colonoscopy. The mean diameter of post-polypectomy measurement was (1.02±0.84) cm. Diameter was less than 1 cm in 36 polyps, 1 to 2 cm in 15, and over 2 cm in 7. The mean diameter of visual estimation was (1.29±1.07) cm, and the difference was significant as compared with actual size (P=0.000). The mean diameter measured by DGBF was (1.02±0.82) cm, and the difference was not significant as compared with actual size (P=0.775). The ratio of visual estimation to actual size was 1.29±0.31, and DGBF estimation to actual size was 1.02±0.11 with significant difference (P=0.000). The accurate rate of DGBF in estimating polyp size was 77.6% (45/58), which was obviously higher as compared to visual estimation [19.0% (11/58), P=0.000]. The accuracy of DGBF as a scale in the estimation of poly size increases as compared to visual estimation.
Mathieu, Julie; Bootsma, Reinoud J; Berthelon, Catherine; Montagne, Gilles
2017-02-01
Using a fixed-base driving simulator we compared the effects of the size and type of traffic vehicles (i.e., normal-sized or double-sized cars or motorcycles) approaching an intersection in two different tasks. In the perceptual judgment task, passively moving participants estimated when a traffic vehicle would reach the intersection for actual arrival times (ATs) of 1, 2, or 3s. In line with earlier findings, ATs were generally underestimated, the more so the longer the actual AT. Results revealed that vehicle size affected judgments in particular for the larger actual ATs (2 and 3s), with double-sized vehicles then being judged as arriving earlier than normal-sized vehicles. Vehicle type, on the other hand, affected judgments at the smaller actual ATs (1 and 2s), with cars then being judged as arriving earlier than motorcycles. In the behavioral task participants actively drove the simulator to cross the intersection by passing through a gap in a train of traffic. Analyses of the speed variations observed during the active intersection-crossing task revealed that the size and type of vehicles in the traffic train did not affect driving behavior in the same way as in the AT judgment task. First, effects were considerably smaller, affecting driving behavior only marginally. Second, effects were opposite to expectations based on AT judgments: driver approach speeds were smaller (rather than larger) when confronted with double-sized vehicles as compared to their normal-sized counterparts and when confronted with cars as compared to motorcycles. Finally, the temporality of the effects was different on the two tasks: vehicle size affected driver approach speed in the final stages of approach rather than early on, while vehicle type affected driver approach speed early on rather than later. Overall, we conclude that the active control of approach to the intersection is not based on successive judgments of traffic vehicle arrival times. 
These results thereby question the general belief that arrival time estimates are crucial for safe interaction with traffic. Copyright © 2016 Elsevier B.V. All rights reserved.
Managing numerical errors in random sequential adsorption
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Nowak, Aleksandra
2016-09-01
The aim of this study is to examine the influence of finite surface size and finite simulation time on the packing fraction estimated from random sequential adsorption simulations. Of particular interest is providing guidance on the simulation setup needed to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous, flat surfaces of different sizes.
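The finite-size and finite-time effects examined in the study can be reproduced with a minimal random sequential adsorption simulation. The sketch below is an illustration of the method, not the authors' code: unit-diameter disks, a hypothetical square surface, and a fixed attempt budget standing in for finite simulation time.

```python
import math
import random

def rsa_packing_fraction(surface_size=10.0, radius=0.5,
                         attempts=50_000, seed=1):
    """Random sequential adsorption of equal disks on a square surface.

    Disks land at uniformly random positions and are rejected if they
    overlap an already-adsorbed disk; the finite attempt budget plays
    the role of finite simulation time."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        x = rng.uniform(radius, surface_size - radius)
        y = rng.uniform(radius, surface_size - radius)
        if all((x - px) ** 2 + (y - py) ** 2 >= (2 * radius) ** 2
               for px, py in placed):
            placed.append((x, y))
    return len(placed) * math.pi * radius ** 2 / surface_size ** 2

# Estimates approach the disk jamming limit (~0.547) from below as the
# attempt budget grows; small surfaces also bias the estimate through
# edge effects, which is the effect the study quantifies.
for n in (1_000, 10_000, 50_000):
    print(n, round(rsa_packing_fraction(attempts=n), 3))
```

The O(n²) overlap check is fine at this scale; a production simulation would use a neighbor grid so that saturation can be detected exactly rather than approximated by an attempt budget.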
Single bubble of an electronegative gas in transformer oil in the presence of an electric field
NASA Astrophysics Data System (ADS)
Gadzhiev, M. Kh.; Tyuftyaev, A. S.; Il'ichev, M. V.
2017-10-01
The influence of the electric field on a single air bubble in transformer oil has been studied. It has been shown that, depending on its size, the bubble may initiate breakdown. The sizes of air and sulfur hexafluoride bubbles at which breakdown will not be observed have been estimated based on the condition for the avalanche-to-streamer transition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Udevitz, M.S.; Bodkin, J.L.; Costa, D.P.
1995-05-01
Boat-based surveys were used to monitor the Prince William Sound sea otter population before and after the Exxon Valdez oil spill. Population and loss estimates could be obtained from these surveys by direct expansion from the counts in the surveyed transects under the assumption that all otters in those transects were observed. The authors conducted a pilot study using ground-based observers in conjunction with the August 1990 survey of marine mammals and birds to investigate the validity of this assumption. The proportion of otters detected by boat crews was estimated by comparing boat and ground-based observations on 22 segments of shoreline transects. Overall, the authors estimated that only 70% of the otters in surveyed shoreline transects were detected by the boat crews. These results suggest that unadjusted expansions of boat survey transect counts will underestimate sea otter population size and that loss estimates based on comparisons of unadjusted population estimates will be biased.
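The bias described here amounts to expanding raw counts without dividing by the detection probability. A small sketch with hypothetical numbers (the 70% detection rate is from the study; the count and expansion factor are invented):

```python
def expanded_estimate(transect_count, detection_prob, expansion_factor):
    """Expand a transect count to a population estimate, first
    correcting for imperfect detection (count / p)."""
    if not 0 < detection_prob <= 1:
        raise ValueError("detection probability must be in (0, 1]")
    return transect_count / detection_prob * expansion_factor

# Hypothetical survey: 140 otters counted in transects covering 1/5 of
# the area.  Assuming perfect detection underestimates the population.
naive = expanded_estimate(140, 1.0, 5)       # -> 700
adjusted = expanded_estimate(140, 0.70, 5)   # ~ 1000
print(naive, adjusted)
```

The unadjusted expansion is biased low by exactly the undetected fraction (30% here), which is the study's central point.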
Cross Validation of Rain Drop Size Distribution between GPM and Ground Based Polarmetric radar
NASA Astrophysics Data System (ADS)
Chandra, C. V.; Biswas, S.; Le, M.; Chen, H.
2017-12-01
Dual-frequency precipitation radar (DPR) on board the Global Precipitation Measurement (GPM) core satellite has reflectivity measurements at two independent frequencies, Ku- and Ka-band. Dual-frequency retrieval algorithms have been developed traditionally through forward, backward, and recursive approaches. However, these algorithms suffer from the "dual-value" problem when they retrieve medium volume diameter from the dual-frequency ratio (DFR) in the rain region. To this end, a hybrid method has been proposed to perform raindrop size distribution (DSD) retrieval for GPM using a linear constraint of DSD along the rain profile to avoid the "dual-value" problem (Le and Chandrasekar, 2015). In the current GPM level 2 algorithm (Iguchi et al. 2017, Algorithm Theoretical Basis Document), the Solver module retrieves a vertical profile of drop size distribution from dual-frequency observations and path-integrated attenuations; the algorithm details can be found in Seto et al. (2013). On the other hand, ground-based polarimetric radars have long been used to estimate drop size distributions (e.g., Gorgucci et al. 2002). In addition, coincident GPM and ground-based observations have been cross validated using careful overpass analysis. In this paper, we perform cross validation of raindrop size distribution retrievals from three sources, namely the hybrid method, the standard products from the Solver module, and DSD retrievals from ground polarimetric radars. The results are presented from two NEXRAD radars located in Dallas-Fort Worth, Texas (i.e., KFWS radar) and Melbourne, Florida (i.e., KMLB radar). The results demonstrate the ability of DPR observations to produce DSD estimates, which can be used subsequently to generate global DSD maps. References: Seto, S., T. Iguchi, T. Oki, 2013: The basic performance of a precipitation retrieval algorithm for the Global Precipitation Measurement mission's single/dual-frequency radar measurements.
IEEE Transactions on Geoscience and Remote Sensing, 51(12), 5239-5251. Gorgucci, E., Chandrasekar, V., Bringi, V. N., and Scarchilli, G.: Estimation of Raindrop Size Distribution Parameters from Polarimetric Radar Measurements, J. Atmos. Sci., 59, 2373-2384, doi:10.1175/1520-0469(2002)0592.0.CO;2, 2002.
NASA Astrophysics Data System (ADS)
Avanaki, Ali R. N.; Espig, Kathryn; Knippel, Eddie; Kimpe, Tom R. L.; Xthona, Albert; Maidment, Andrew D. A.
2016-03-01
In this paper, we specify a notion of background tissue complexity (BTC) as perceived by a human observer that is suited for use with model observers. This notion of BTC is a function of image location and lesion shape and size. We propose four unsupervised BTC estimators based on: (i) perceived pre- and post-lesion similarity of images, (ii) lesion border analysis (LBA; a conspicuous lesion should be brighter than its surround), (iii) tissue anomaly detection, and (iv) mammogram density measurement. The latter two are existing methods we adapt for location- and lesion-dependent BTC estimation. To validate the BTC estimators, we asked human observers to measure BTC as the visibility threshold amplitude of an inserted lesion at specified locations in a mammogram. Both human-measured and computationally estimated BTC varied with lesion shape (from circular to oval), size (from small circular to larger circular), and location (different points across a mammogram). BTCs measured by different human observers are correlated (ρ = 0.67). The BTC estimators are highly correlated with each other (0.84
Intra-specific competition (crowding) of giant sequoias (Sequoiadendron giganteum)
Stohlgren, Thomas J.
1993-01-01
Information on the size and location of 1,916 giant sequoias (Sequoiadendron giganteum (Lindl.) Buchholz) in Muir Grove, Sequoia National Park, in the southern Sierra Nevada of California was used to assess intra-specific crowding. Study objectives were to: (1) determine which parameters associated with intra-specific competition (i.e. size of and distance to the nearest neighbor, crowding/root system area overlap, or number of neighbors) might be important in spatial pattern development, growth, and survivorship of established giant sequoias; (2) quantify the level of intra-specific crowding of different sized live sequoias based on a model of estimated overlapping root system areas (i.e. an index of relative crowding); and (3) compare the level of intra-specific crowding of similarly sized live and dead giant sequoias (less than 30 cm diameter at breast height (dbh)) at the time of inventory (1969). Mean distances to the nearest live giant sequoia neighbor were not significantly different (at α = 0.05) for live and dead sequoias in similar size classes. A zone-of-influence competition model (i.e. index of crowding) based on horizontal overlap of estimated root system areas was developed for 1,753 live sequoias. The model, based only on the spatial arrangement of live sequoias, was then tested on dead sequoias of less than 30 cm dbh (n = 163 trees; also recorded in 1969). The dead sequoias had a significantly higher crowding index than 561 live trees of similar diameter. Results showed that dead sequoias of less than 16.6 cm dbh had a significantly greater mean number of live neighbors and mean crowding index than live sequoias of similar size. Intra-specific crowding may be an important mechanism in determining the spatial distribution of sequoias in old-growth forests.
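A zone-of-influence crowding index of this kind can be computed from circle overlaps. The sketch below is a generic version: the tree coordinates and root-system radii are hypothetical, and the study's exact radius-from-dbh model is not reproduced.

```python
import math

def lens_area(r1, r2, d):
    """Overlap area of two circles of radii r1, r2 with centers d apart."""
    if d >= r1 + r2:
        return 0.0                       # disjoint zones
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2  # one zone inside the other
    a1 = r1 ** 2 * math.acos((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1))
    a2 = r2 ** 2 * math.acos((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2))
    k = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                        * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - k

def crowding_index(focal, neighbors):
    """Summed overlap of neighbors' root-system zones with the focal
    tree's zone, as a fraction of the focal zone's own area.
    Each tree is a (x, y, root_radius) tuple (units arbitrary)."""
    fx, fy, fr = focal
    own_area = math.pi * fr ** 2
    total = sum(lens_area(fr, nr, math.hypot(fx - nx, fy - ny))
                for nx, ny, nr in neighbors)
    return total / own_area

# Hypothetical stand: one small tree hemmed in by two large neighbors.
print(round(crowding_index((0, 0, 2), [(3, 0, 4), (-3, 0, 4)]), 2))
```

An index of 0 means no neighbor zone touches the focal zone; larger values mean more of the focal root area is contested, which is the quantity the study compares between live and dead trees.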
Ketz, Alison C; Johnson, Therese L; Monello, Ryan J; Mack, John A; George, Janet L; Kraft, Benjamin R; Wild, Margaret A; Hooten, Mevin B; Hobbs, N Thompson
2018-04-01
Accurate assessment of abundance forms a central challenge in population ecology and wildlife management. Many statistical techniques have been developed to estimate population sizes, because populations change over time and space, and to correct for the bias resulting from animals that are present in a study area but not observed. The mobility of individuals makes it difficult to design sampling procedures that account for movement into and out of areas with fixed jurisdictional boundaries. Aerial surveys are the gold standard used to obtain data on large mobile species in geographic regions with harsh terrain, but these surveys can be prohibitively expensive and dangerous. Estimating abundance with ground-based census methods has practical advantages, but it can be difficult to simultaneously account for temporary emigration and observer error to avoid biased results. Contemporary research in population ecology increasingly relies on telemetry observations of the states and locations of individuals to gain insight on vital rates, animal movements, and population abundance. However, analytical models that use observations of movements to improve estimates of abundance have not been developed. Here we build upon existing multi-state mark-recapture methods using a hierarchical N-mixture model with multiple sources of data, including telemetry data on locations of individuals, to improve estimates of population sizes. We used a state-space approach to model animal movements to approximate the number of marked animals present within the study area at any observation period, thereby accounting for a frequently changing number of marked individuals. We illustrate the approach using data on a population of elk (Cervus elaphus nelsoni) in Northern Colorado, USA. We demonstrate substantial improvement compared to existing abundance estimation methods and corroborate our results from the ground-based surveys with estimates from aerial surveys during the same seasons.
We develop a hierarchical Bayesian N-mixture model using multiple sources of data on abundance, movement and survival to estimate the population size of a mobile species that uses remote conservation areas. The model improves accuracy of inference relative to previous methods for estimating abundance of open populations. © 2018 by the Ecological Society of America.
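The observation model at the core of any N-mixture approach treats each repeated count as a binomial draw from the unknown abundance. A deliberately simplified, single-site sketch (grid-search maximum likelihood; no movement, survival, or hierarchical components, and made-up counts) illustrates the idea:

```python
import math

def nmixture_loglik(n, p, counts):
    """Log-likelihood of repeated counts at one closed site when each
    count ~ Binomial(n, p), with abundance n and detection p."""
    ll = 0.0
    for y in counts:
        if y > n:
            return -math.inf
        ll += (math.log(math.comb(n, y))
               + y * math.log(p) + (n - y) * math.log(1 - p))
    return ll

def estimate_abundance(counts, n_max=400):
    """Crude grid-search MLE of (abundance, detection) from counts."""
    best_n, best_p, best_ll = None, None, -math.inf
    for n in range(max(counts), n_max + 1):
        for p in (i / 100 for i in range(1, 100)):
            ll = nmixture_loglik(n, p, counts)
            if ll > best_ll:
                best_n, best_p, best_ll = n, p, ll
    return best_n, best_p

# Four repeat counts of the same closed population: because the counts
# vary less than Poisson noise would, both n and p are identifiable.
print(estimate_abundance([48, 52, 46, 50]))
```

The paper's contribution is to let telemetry and movement models replace the closed-site assumption; this sketch only shows why repeated counts carry information about both abundance and detectability.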
A mass-density model can account for the size-weight illusion.
Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut
2018-01-01
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model that describes the illusion as the weighted average of two heaviness estimates with correlated noise: one estimate derived from the object's mass, and the other from the object's density, with the estimates' weights based on their relative reliabilities. While information about mass can be perceived directly, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments are biased by the object's density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: objects of the same density were perceived as more similar, and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced-choice heaviness experiment, we replicated the finding that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
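At its core, the proposed model is reliability-weighted cue combination. A stripped-down sketch (independent noise is assumed here for simplicity, whereas the paper's model allows the two estimates' noise to be correlated; all numeric values are invented):

```python
def combined_heaviness(mass_est, density_est, rel_mass, rel_density):
    """Weighted average of a mass-based and a density-based heaviness
    estimate, with weights proportional to reliability (1/variance)."""
    w_mass = rel_mass / (rel_mass + rel_density)
    return w_mass * mass_est + (1.0 - w_mass) * density_est

# Two objects of equal mass; the smaller one is denser, so its
# density-based estimate is higher.  Better volume information raises
# the density estimate's reliability, pulling the percept toward
# density and strengthening the illusion.
poor_volume = combined_heaviness(1.0, 1.6, rel_mass=4.0, rel_density=1.0)
good_volume = combined_heaviness(1.0, 1.6, rel_mass=4.0, rel_density=4.0)
print(round(poor_volume, 2), round(good_volume, 2))  # 1.12 1.3
```

This reproduces the qualitative prediction tested in Experiments 1-3: degrading volume information (lower density reliability) shrinks the bias toward density.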
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
Using Light Curves to Characterize Size and Shape of Pseudo-Debris
NASA Technical Reports Server (NTRS)
Rodriquez, Heather M.; Abercromby, Kira J.; Jarvis, Kandy S.; Barker, Edwin
2006-01-01
Photometric measurements were collected for a new study aimed at estimating orbital debris sizes based on object brightness. To obtain a size from optical measurements, the current practice is to assume an albedo and use a normalized magnitude to calculate optical size. However, assuming a single albedo value may not be valid for all objects or orbit types; material type and orientation can mask an object's true optical cross section. This experiment used a CCD camera to record data, a 300 W Xenon, Ozone Free collimated light source to simulate solar illumination, and a robotic arm with five degrees of freedom to move the piece of simulated debris through various orientations. The pseudo-debris pieces used in this experiment originate from the European Space Operations Centre's ESOC2 ground test explosion of a mock satellite. A uniformly illuminated white ping-pong ball was used as a zero-magnitude reference. Each debris piece was then moved through specific orientations and rotations to generate a light curve. This paper discusses the results of five different object-based light curves as measured through an x-rotation. Intensity measurements, from which each light curve was generated, were recorded in five-degree increments from zero to 180 degrees. Comparing light curves of different shaped and sized pieces against their characteristic length establishes the start of a database from which an optical size estimation model will be derived in the future.
Statistical Estimation of Orbital Debris Populations with a Spectrum of Object Size
NASA Technical Reports Server (NTRS)
Xu, Y. -l; Horstman, M.; Krisko, P. H.; Liou, J. -C; Matney, M.; Stansbery, E. G.; Stokely, C. L.; Whitlock, D.
2008-01-01
Orbital debris is a real concern for the safe operations of satellites. In general, the hazard of debris impact is a function of the size and spatial distributions of the debris populations. To describe and characterize the debris environment as reliably as possible, the current NASA Orbital Debris Engineering Model (ORDEM2000) is being upgraded to a new version based on new and better quality data. The data-driven ORDEM model covers a wide range of object sizes, from 10 μm to greater than 1 m. This paper reviews the statistical process for the estimation of the debris populations in the new ORDEM upgrade, and discusses the representation of large-size (≥1 m and ≥10 cm) populations by SSN catalog objects and the validation of the statistical approach. Also, it presents results for the populations with sizes ≥3.3 cm, ≥1 cm, ≥100 μm, and ≥10 μm. The orbital debris populations used in the new version of ORDEM are inferred from data based upon appropriate reference (or benchmark) populations instead of the binning of the multi-dimensional orbital-element space. This paper describes all of the major steps used in the population-inference procedure for each size range. Detailed discussions on data analysis, parameter definition, the correlation between parameters and data, and uncertainty assessment are included.
In vivo imaging of cancer cell size and cellularity using temporal diffusion spectroscopy.
Jiang, Xiaoyu; Li, Hua; Xie, Jingping; McKinley, Eliot T; Zhao, Ping; Gore, John C; Xu, Junzhong
2017-07-01
A temporal diffusion MRI spectroscopy based approach has been developed to quantify cancer cell size and density in vivo. A novel method, imaging microstructural parameters using limited spectrally edited diffusion (IMPULSED), selects a specific limited diffusion spectral window for accurate quantification of cell sizes ranging from 10 to 20 μm in common solid tumors. In practice, this is achieved by a combination of a single long diffusion time pulsed gradient spin echo (PGSE) acquisition and three low-frequency oscillating gradient spin echo (OGSE) acquisitions. To validate our approach, hematoxylin and eosin staining and immunostaining of cell membranes, in concert with whole slide imaging, were used to visualize nuclei and cell boundaries, and hence enabled accurate estimates of cell size and cellularity. Based on a two-compartment model (incorporating intra- and extracellular spaces), accurate estimates of cell sizes were obtained in vivo for three types of human colon cancers. The IMPULSED-derived apparent cellularities showed a stronger correlation (r = 0.81; P < 0.0001) with histology-derived cellularities than conventional ADCs (r = -0.69; P < 0.03). The IMPULSED approach samples a specific region of temporal diffusion spectra with enhanced sensitivity to length scales of 10-20 μm, and enables measurements of cell sizes and cellularities in solid tumors in vivo. Magn Reson Med 78:156-164, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Gasca-Pineda, Jaime; Cassaigne, Ivonne; Alonso, Rogelio A.; Eguiarte, Luis E.
2013-01-01
The amount of genetic diversity in a finite biological population mostly depends on the interactions among evolutionary forces and the effective population size (Ne), as well as the time since population establishment. Because Ne estimation helps to explore population demographic history, and allows one to predict the behavior of genetic diversity through time, Ne is a key parameter for the genetic management of small and isolated populations. Here, we explored an Ne-based approach using a bighorn sheep population on Tiburon Island, Mexico (TI) as a model. We estimated the current (Ncrnt) and ancestral stable (Nstbl) inbreeding effective population sizes as well as summary statistics to assess genetic diversity and the demographic scenarios that could explain such diversity. Then, we evaluated the feasibility of using TI as a source population for reintroduction programs. We also included data from other bighorn sheep and artiodactyl populations in the analysis to compare their inbreeding effective size estimates. The TI population showed high levels of genetic diversity with respect to other managed populations. However, our analysis suggested that TI has been under a genetic bottleneck, indicating that using individuals from this population as the only source for reintroduction could lead to a severe genetic diversity reduction. Analyses of the published data did not show a strict correlation between HE and Ncrnt estimates. Moreover, we detected that ancient anthropogenic and climatic pressures affected all studied populations. We conclude that the estimates of Ncrnt and Nstbl are informative genetic diversity estimators and should be used in addition to summary statistics for conservation and population management planning. PMID:24147115
Comparison and assessment of aerial and ground estimates of waterbird colonies
Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.
2008-01-01
Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for overall estimated number of breeding pairs and for both dark and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs made from ground and helicopter surveys were very similar for all species we observed. Ground counts by one observer resulted in underestimates of the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft.
We recommend managers employ ground counts using double observers for surveying waterbird colonies when feasible. Fixed-wing aerial surveys may be suitable to determine colony activity and composition of common waterbird species. The most appropriate combination of survey approaches will be based on the need for precise and unbiased estimates, balanced with financial and logistical constraints.
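The marked-subsample correction described above can be written in a few lines: the fraction of pre-marked nests that the counting crew re-finds estimates detectability, and the raw count is divided by it. The numbers below are hypothetical (the ~20% miss rate mirrors the single-observer result reported in the study):

```python
def marked_subsample_estimate(counted_total, marked_placed, marked_refound):
    """Adjust a ground count for missed nests: the fraction of marked
    nests re-found by the counting crew estimates detectability."""
    if marked_refound == 0:
        raise ValueError("no marked nests re-found; detectability unknown")
    detection = marked_refound / marked_placed
    return counted_total / detection

# Hypothetical colony: 480 nests counted, and 40 of 50 marked nests
# were re-found, so about 20% of nests were missed.
print(marked_subsample_estimate(480, 50, 40))  # ~ 600
```

Because the marked subsample also yields a binomial standard error for the detection fraction, it provides the precision estimate the authors highlight as an advantage over aerial counts.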
Estimating thermal performance curves from repeated field observations
Childress, Evan; Letcher, Benjamin H.
2017-01-01
Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
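The practical consequence (a field-derived curve peaking at a lower temperature predicts a larger decline under warming) can be illustrated with a generic Gaussian performance curve. The curve shape and every parameter value below are illustrative assumptions, not the fitted model from the study:

```python
import math

def performance(temp_c, p_max, t_opt, width):
    """Generic Gaussian thermal performance curve: performance peaks
    at t_opt and falls off symmetrically with the given width."""
    return p_max * math.exp(-((temp_c - t_opt) ** 2) / (2 * width ** 2))

# Hypothetical lab- vs field-derived optima for the same species.
lab = dict(p_max=1.0, t_opt=17.0, width=3.0)
field = dict(p_max=1.0, t_opt=14.0, width=3.0)

# Ratio of performance after warming (18 C) to current (15 C): the lab
# curve predicts improvement, the field curve a steep decline.
for label, pars in (("lab", lab), ("field", field)):
    ratio = performance(18.0, **pars) / performance(15.0, **pars)
    print(label, round(ratio, 2))
```

Real thermal performance curves are usually left-skewed with a sharp drop above the optimum, which would make the lab/field divergence under warming even larger than this symmetric sketch suggests.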
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating, and for multidisciplinary design optimization.
Horowitz, Arthur J.; Clarke, Robin T.; Merten, Gustavo Henrique
2015-01-01
Since the 1970s, there has been both continuing and growing interest in developing accurate estimates of the annual fluvial transport (fluxes and loads) of suspended sediment and sediment-associated chemical constituents. This study provides an evaluation of the effects of manual sample numbers (from 4 to 12 year⁻¹) and sample scheduling (random-based, calendar-based and hydrology-based) on the precision, bias and accuracy of annual suspended sediment flux estimates. The evaluation is based on data from selected US Geological Survey daily suspended sediment stations in the USA and covers basins ranging in area from just over 900 km² to nearly 2 million km² and annual suspended sediment fluxes ranging from about 4 kt year⁻¹ to about 200 Mt year⁻¹. The results appear to indicate that there is a scale effect for random-based and calendar-based sampling schemes, with larger sample numbers required as basin size decreases. All the sampling schemes evaluated display some level of positive (overestimates) or negative (underestimates) bias. The study further indicates that hydrology-based sampling schemes are likely to generate the most accurate annual suspended sediment flux estimates with the fewest number of samples, regardless of basin size. This type of scheme seems most appropriate when the determination of suspended sediment concentrations, sediment-associated chemical concentrations, annual suspended sediment and annual suspended sediment-associated chemical fluxes only represent a few of the parameters of interest in multidisciplinary, multiparameter monitoring programmes.
The results are just as applicable to the calibration of autosamplers/suspended sediment surrogates currently used to measure/estimate suspended sediment concentrations and ultimately, annual suspended sediment fluxes, because manual samples are required to adjust the sample data/measurements generated by these techniques so that they provide depth-integrated and cross-sectionally representative data.
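A flow-weighted ("ratio") flux estimator makes the sampling-scheme comparison concrete. The sketch below uses an entirely synthetic ten-day record and a made-up concentration-discharge power-law rating; the scheme labels echo the study's terminology, but the data and the outcome are only illustrative:

```python
def ratio_flux_estimate(samples, total_discharge):
    """Flow-weighted estimator of total sediment flux from a few
    (concentration, discharge) sample pairs."""
    flow_weighted_conc = (sum(c * q for c, q in samples)
                          / sum(q for _, q in samples))
    return flow_weighted_conc * total_discharge

# Synthetic record with one flood; concentration follows a simple
# power-law rating C = 0.5 * Q**1.3 (assumed, not from the study).
discharges = [10, 12, 11, 200, 80, 30, 15, 12, 11, 10]
concs = [0.5 * q ** 1.3 for q in discharges]
true_flux = sum(c * q for c, q in zip(concs, discharges))

# Hydrology-based sampling (targeting the highest flows) vs a fixed
# calendar pick of every third day.
high_flow = sorted(zip(concs, discharges), key=lambda s: s[1])[-3:]
calendar = list(zip(concs, discharges))[::3]
for label, s in (("hydrology", high_flow), ("calendar", calendar)):
    est = ratio_flux_estimate(s, sum(discharges))
    print(label, round(est / true_flux, 2))
```

With a convex rating both small samples overestimate here, but the hydrology-based pick lands closer to the true flux because the flood dominates the load; sampling every day recovers the true flux exactly, which is the daily-station benchmark the study evaluates against.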
Transport Loss Estimation of Fine Particulate Matter in Sampling Tube Based on Numerical Computation
NASA Astrophysics Data System (ADS)
Luo, L.; Cheng, Z.
2016-12-01
In-situ measurement of PM2.5 physical and chemical properties is one substantial approach for investigating the mechanisms of PM2.5 pollution. Minimizing PM2.5 transport loss in the sampling tube is essential for ensuring the accuracy of the measurement result. In order to estimate the integrated PM2.5 transport efficiency in sampling tubes and optimize tube designs, the effects of different tube factors (length, bore size and bend number) on PM2.5 transport were analyzed based on numerical computation. The results show that the PM2.5 mass concentration transport efficiency of a vertical tube with flowrate at 20.0 L·min⁻¹, bore size at 4 mm and length at 1.0 m was 89.6%; the transport efficiency increases to 98.3% when the bore size is increased to 14 mm. The PM2.5 mass concentration transport efficiency of a horizontal tube with flowrate at 1.0 L·min⁻¹, bore size at 4 mm and length at 10.0 m is 86.7%, increasing to 99.2% when the length is reduced to 0.5 m. A low transport efficiency of 85.2% for PM2.5 mass concentration is estimated in a bend with flowrate at 20.0 L·min⁻¹, bore size at 4 mm and curvature angle of 90°. Keeping the air flow in the tube laminar, by holding the ratio of flowrate (L·min⁻¹) to bore size (mm) below 1.4, helps to decrease the PM2.5 transport loss. For the target of PM2.5 transport efficiency higher than 97%, it is advised to use vertical sampling tubes with length less than 6.0 m for flowrates of 2.5, 5.0 and 10.0 L·min⁻¹, and bore size larger than 12 mm for flowrates of 16.7 or 20.0 L·min⁻¹. For horizontal sampling tubes, tube length is decided by the ratio of flowrate and bore size. Meanwhile, it is suggested to minimize the number of bends in tubes with turbulent flow.
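The flowrate-to-bore ratio criterion quoted above is essentially a laminar-flow condition, and it can be checked directly with a pipe-flow Reynolds number. In the sketch below, the air viscosity value and the 2100 transition threshold are the conventional textbook figures, assumed rather than taken from the study:

```python
import math

AIR_KINEMATIC_VISCOSITY = 1.5e-5  # m^2/s, air at roughly 20 C (assumed)

def tube_reynolds(flow_l_min, bore_mm):
    """Reynolds number of air flow in a circular sampling tube."""
    q = flow_l_min / 1000 / 60          # L/min -> m^3/s
    d = bore_mm / 1000                  # mm -> m
    velocity = q / (math.pi * (d / 2) ** 2)
    return velocity * d / AIR_KINEMATIC_VISCOSITY

def is_laminar(flow_l_min, bore_mm, threshold=2100):
    return tube_reynolds(flow_l_min, bore_mm) < threshold

# A flowrate/bore ratio of 1.4 (e.g. 14 L/min in a 10 mm tube) gives
# Re just below the usual laminar-turbulent transition, consistent
# with the criterion quoted in the abstract.
print(round(tube_reynolds(14.0, 10.0)))
print(is_laminar(20.0, 4.0))  # 20 L/min in a 4 mm bore is turbulent
```

Because Re scales linearly with the flowrate/bore ratio for a fixed gas, a single ratio cutoff is a compact way to state the laminar condition across tube sizes.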
Dangerous Near-Earth Asteroids and Meteorites
NASA Astrophysics Data System (ADS)
Mickaelian, A. M.; Grigoryan, A. E.
2015-07-01
The problem of Near-Earth Objects (NEOs; asteroids and meteorites) is discussed. To understand the probability of encounters with such objects, one may use two different approaches: 1) historical, based on the statistics of existing large meteorite craters on the Earth, estimation of the source meteorite sizes and the ages of these craters to derive the frequency of encounters with meteorites of a given size, and 2) astronomical, based on the study and cataloging of all medium-size and large bodies in the Earth's neighbourhood and their orbits to estimate the probability, angles and other parameters of encounters. Therefore, we discuss both aspects and give our present knowledge on both phenomena. Though dangerous NEOs are one of the main sources of cosmic catastrophes, we also focus on other possible dangers, such as even slight changes of Solar irradiance or Earth's orbit, change of the Moon's impact on Earth, Solar flares or other manifestations of Solar activity, transit of comets (with impact on Earth's atmosphere), global climate change, dilution of Earth's atmosphere, damage to the ozone layer, explosion of nearby Supernovae, and even an attack by extraterrestrial intelligence.
Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively.
In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.
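The two quantitative claims above reduce to simple calculations: a paired t statistic over per-patient SNR pairs, and a dose reduction ratio from the mean CTDIvol values. A minimal sketch with hypothetical paired readings (the SNR values below are illustrative, not the study's data):

```python
import math

def paired_t(x, y):
    """Paired t statistic: mean of within-pair differences over its standard error."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical per-patient liver SNR pairs (illustrative only):
snr_mbir = [10.5, 11.2, 9.8, 12.0, 10.1]
snr_asir = [7.9, 8.1, 7.2, 8.5, 7.0]
t_stat = paired_t(snr_mbir, snr_asir)

# The reported "up to 59%" figure follows from the mean volume CT dose indexes:
reduction = 1 - 6.2 / 15.2   # ≈ 0.59
```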
Mapped Plot Patch Size Estimates
Paul C. Van Deusen
2005-01-01
This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...
Overcoming the winner's curse: estimating penetrance parameters from case-control data.
Zollner, Sebastian; Pritchard, Jonathan K
2007-04-01
Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
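The winner's curse itself is easy to reproduce by simulation: draw effect estimates around a true value, keep only the "discovered" (significant) ones, and the conditional mean among the survivors exceeds the truth. A minimal sketch under an assumed normal model (not the authors' correction method, which inverts this ascertainment):

```python
import random
import statistics

random.seed(1)
beta_true, se = 0.2, 0.1   # hypothetical true effect and standard error (low power)

# Effect estimates across many independent studies; keep only those
# reaching two-sided significance (|z| > 1.96), i.e. the "winners":
estimates = [random.gauss(beta_true, se) for _ in range(200_000)]
significant = [b for b in estimates if abs(b / se) > 1.96]

naive = statistics.mean(significant)   # biased upward relative to beta_true
```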
Long-term effective population size dynamics of an intensively monitored vertebrate population
Mueller, A-K; Chakarov, N; Krüger, O; Hoffman, J I
2016-01-01
Long-term genetic data from intensively monitored natural populations are important for understanding how effective population sizes (Ne) can vary over time. We therefore genotyped 1622 common buzzard (Buteo buteo) chicks sampled over 12 consecutive years (2002–2013 inclusive) at 15 microsatellite loci. This data set allowed us to both compare single-sample with temporal approaches and explore temporal patterns in the effective number of parents that produced each cohort in relation to the observed population dynamics. We found reasonable consistency between linkage disequilibrium-based single-sample and temporal estimators, particularly during the latter half of the study, but no clear relationship between annual Ne estimates and census sizes. We also documented a 14-fold increase in Ne between 2008 and 2011, a period during which the census size doubled, probably reflecting a combination of higher adult survival and immigration from further afield. Our study thus reveals appreciable temporal heterogeneity in the effective population size of a natural vertebrate population, confirms the need for long-term studies and cautions against drawing conclusions from a single sample. PMID:27553455
Grey, Jeremy A; Bernstein, Kyle T; Sullivan, Patrick S; Purcell, David W; Chesson, Harrell W; Gift, Thomas L; Rosenberg, Eli S
2016-01-01
Background In the United States, male-to-male sexual transmission accounts for the greatest number of new human immunodeficiency virus (HIV) diagnoses and a substantial number of sexually transmitted infections (STI) annually. However, the prevalence and annual incidence of HIV and other STIs among men who have sex with men (MSM) cannot be estimated in local contexts because demographic data on sexual behavior, particularly same-sex behavior, are not routinely collected by large-scale surveys that allow analysis at state, county, or finer levels, such as the US decennial census or the American Community Survey (ACS). Therefore, techniques for indirectly estimating population sizes of MSM are necessary to supply denominators for rates at various geographic levels. Objective Our objectives were to indirectly estimate MSM population sizes at the county level to incorporate recent data estimates and to aggregate county-level estimates to states and core-based statistical areas (CBSAs). Methods We used data from the ACS to calculate a weight for each county in the United States based on its relative proportion of households that were headed by a male who lived with a male partner, compared with the overall proportion among counties at the same level of urbanicity (ie, large central metropolitan county, large fringe metropolitan county, medium/small metropolitan county, or nonmetropolitan county). We then used this weight to adjust the urbanicity-stratified percentage of adult men who had sex with a man in the past year, according to estimates derived from the National Health and Nutrition Examination Survey (NHANES), for each county. We multiplied the weighted percentages by the number of adult men in each county to estimate its number of MSM, summing county-level estimates to create state- and CBSA-level estimates. Finally, we scaled our estimated MSM population sizes to a meta-analytic estimate of the percentage of US MSM in the past 5 years (3.9%). 
Results We found that the percentage of MSM among adult men ranged from 1.5% (Wyoming) to 6.0% (Rhode Island) among states. Over one-quarter of MSM in the United States resided in 1 of 13 counties. Among counties with over 300,000 residents, the five highest county-level percentages of MSM were San Francisco County, California at 18.5% (66,586/359,566); New York County, New York at 13.8% (87,556/635,847); Denver County, Colorado at 10.5% (25,465/243,002); Multnomah County, Oregon at 9.9% (28,949/292,450); and Suffolk County, Massachusetts at 9.1% (26,338/289,634). Although California (n=792,750) and Los Angeles County (n=251,521) had the largest MSM populations of states and counties, respectively, the New York City-Newark-Jersey City CBSA had the most MSM of all CBSAs (n=397,399). Conclusions We used a new method to generate small-area estimates of MSM populations, incorporating prior work, recent data, and urbanicity-specific parameters. We also used an imputation approach to estimate MSM in rural areas, where same-sex sexual behavior may be underreported. Our approach yielded estimates of MSM population sizes within states, counties, and metropolitan areas in the United States, which provide denominators for calculation of HIV and STI prevalence and incidence at those geographic levels. PMID:27227149
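The county-level estimator described in the Methods is a simple product: a county weight (the county's proportion of male-male partner households relative to its urbanicity stratum), times the NHANES-derived behavioral percentage for that stratum, times the county's adult male population. A minimal sketch with hypothetical inputs (all numbers below are illustrative, not the study's):

```python
# Hypothetical county inputs:
county_ssh = 0.012    # county proportion of households headed by a male with a male partner
stratum_ssh = 0.008   # same proportion across counties of that urbanicity level
nhanes_pct = 0.029    # urbanicity-stratified % of men with past-year same-sex behavior
adult_men = 250_000   # adult male population of the county

weight = county_ssh / stratum_ssh          # county's relative concentration (here 1.5)
msm_estimate = weight * nhanes_pct * adult_men
```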
On estimation of time-dependent attributable fraction from population-based case-control studies.
Zhao, Wei; Chen, Ying Qing; Hsu, Li
2017-09-01
Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
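For orientation, the classical (time-fixed) PAF compares the disease probability in the population with the probability under elimination of exposure; Levin's formula rewrites it via the exposure prevalence p_e and relative risk RR, and the time-varying version replaces probabilities by cumulative incidences up to time t. This is a standard formalization for intuition, not necessarily the paper's exact estimand:

```latex
\mathrm{PAF} \;=\; \frac{P(D) - P(D \mid \bar{E})}{P(D)}
\;=\; \frac{p_e(\mathrm{RR}-1)}{1 + p_e(\mathrm{RR}-1)},
\qquad
\mathrm{PAF}(t) \;=\; 1 - \frac{\Pr(T \le t \mid \text{exposure eliminated})}{\Pr(T \le t)}.
```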
Wellek, Stefan
2017-02-28
In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
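Neither the conditional nor the nonconditional formula of the abstract is reproduced here, but the classical Noether (1987) approximation gives a useful ballpark for WMW sample sizes in the untied case: for equal group sizes and a two-sided test, the total N depends only on p = P(X < Y) under the alternative. A hedged sketch:

```python
import math

def wmw_total_n(p_alt, z_alpha=1.959964, z_beta=0.841621):
    """Noether's approximation for the total sample size of a two-sided
    Wilcoxon-Mann-Whitney test with equal group sizes, where p_alt = P(X < Y)
    under the alternative (ties ignored). Defaults: alpha = 0.05, power = 0.80."""
    return (z_alpha + z_beta) ** 2 / (3 * (p_alt - 0.5) ** 2)

n_total = math.ceil(wmw_total_n(0.70))   # 66 in total, 33 per group
```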
Robustness of methods for blinded sample size re-estimation with overdispersed count data.
Schneider, Simon; Schmidli, Heinz; Friede, Tim
2013-09-20
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and very recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding their operating characteristics, such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
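To see why the two nuisance parameters drive the sample size, a textbook-style approximation for comparing two negative binomial event rates on the log scale (variance = rate + k·rate²) is sketched below. This is a common planning formula under assumed unit follow-up time, not the blinded re-estimation procedures studied in the paper; all numerical inputs are illustrative:

```python
import math

def nb_n_per_arm(rate_c, rate_t, k, z_alpha=1.959964, z_beta=0.841621):
    """Approximate per-arm sample size for a two-sided comparison of two
    negative binomial rates (log-rate-ratio scale), with common
    overdispersion parameter k and unit follow-up time."""
    var_log = (1 / rate_c + k) + (1 / rate_t + k)
    effect = math.log(rate_c / rate_t) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * var_log / effect)

# Illustrative planning inputs: control rate 1.0, treatment rate 0.7, k = 0.8
n = nb_n_per_arm(rate_c=1.0, rate_t=0.7, k=0.8)
```

Misjudging k at the design stage shifts `var_log` directly, which is exactly the uncertainty that blinded internal pilot re-estimation is meant to absorb.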
Di Maria, Francesco; Bianconi, Francesco; Micale, Caterina; Baglioni, Stefano; Marionni, Moreno
2016-02-01
The size distribution of aggregates has direct and important effects on fundamental properties of construction materials such as workability, strength and durability. The size distribution of aggregates from construction and demolition waste (C&D) is one of the parameters which determine the degree of recyclability and therefore the quality of such materials. Unfortunately, standard methods like sieving or laser diffraction can be either very time consuming (sieving) or possible only in laboratory conditions (laser diffraction). As an alternative we propose and evaluate the use of image analysis to estimate the size distribution of aggregates from C&D in a fast yet accurate manner. The effectiveness of the procedure was tested on aggregates generated by an existing C&D mechanical treatment plant. Experimental comparison with manual sieving showed agreement in the range 81-85%. The proposed technique demonstrated potential for being used on on-line systems within mechanical treatment plants of C&D. Copyright © 2015 Elsevier Ltd. All rights reserved.
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies, where a number of issues are to be addressed, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety. Such inferential problems have recently emerged also in genomic applications, however, exhibiting some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes), of which only a small portion has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows one to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries, and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
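The simplest frequentist baseline for the discovery probability is the classical Good-Turing estimator: the chance that the next draw is a previously unseen species is estimated by the fraction of singletons in the sample. This is a point of reference only, not the Bayesian nonparametric estimator derived in the paper:

```python
from collections import Counter

def discovery_prob(sample):
    """Good-Turing estimate of the probability that the next observation
    is a new, previously unseen species: (number of singletons) / n."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

sample = ["a", "a", "b", "c", "c", "c", "d", "e"]
p_new = discovery_prob(sample)   # 3 singletons (b, d, e) out of 8 draws
```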
Controls on carbon consumption during Alaskan wildland fires
Eric S. Kasischke; Elizabeth E. Hoy
2012-01-01
A method was developed to estimate carbon consumed during wildland fires in interior Alaska based on medium-spatial scale data (60 m cell size) generated on a daily basis. Carbon consumption estimates were developed for 41 fire events in the large fire year of 2004 and 34 fire events from the small fire years of 2006-2008. Total carbon consumed during the large fire...
Population trends for North American winter birds based on hierarchical models
Soykan, Candan U.; Sauer, John; Schuetz, Justin G.; LeBaron, Geoffrey S.; Dale, Kathy; Langham, Gary M.
2016-01-01
Managing widespread and persistent threats to birds requires knowledge of population dynamics at large spatial and temporal scales. For over 100 yrs, the Audubon Christmas Bird Count (CBC) has enlisted volunteers in bird monitoring efforts that span the Americas, especially southern Canada and the United States. We employed a Bayesian hierarchical model to control for variation in survey effort among CBC circles and, using CBC data from 1966 to 2013, generated early-winter population trend estimates for 551 species of birds. Selecting a subset of species that do not frequent bird feeders and have ≥25% range overlap with the distribution of CBC circles (228 species), we further estimated aggregate (i.e., across species) trends for the entire study region and at the level of states/provinces, Bird Conservation Regions, and Landscape Conservation Cooperatives. Moreover, we examined the relationship between ten biological traits—range size, population size, migratory strategy, habitat affiliation, body size, diet, number of eggs per clutch, age at sexual maturity, lifespan, and tolerance of urban/suburban settings—and CBC trend estimates. Our results indicate that 68% of the 551 species had increasing trends within the study area over the interval 1966–2013. When trends were examined across the subset of 228 species, the median population trend for the group was 0.9% per year at the continental level. At the regional level, aggregate trends were positive in all but a few areas. Negative population trends were evident in lower latitudes, whereas the largest increases were at higher latitudes, a pattern consistent with range shifts due to climate change. Nine of 10 biological traits were significantly associated with median population trend; however, none of the traits explained >34% of the deviance in the data, reflecting the indirect relationships between population trend estimates and species traits.
Trend estimates based on the CBC are broadly congruent with estimates based on the North American Breeding Bird Survey, another large-scale monitoring program. Both of these efforts, conducted by citizen scientists, will be required going forward to ensure robust inference about population dynamics in the face of climate and land cover changes.
Curtis L. VanderSchaaf; Harold E. Burkhart
2010-01-01
Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...
Yamaguchi, Sachi; Seki, Satoko; Sawada, Kota; Takahashi, Satoshi
2013-01-21
Sex change is known from various fish species. In many polygynous species, the largest female usually changes sex to male when the dominant male disappears, as predicted by the classical size-advantage model. However, in some fishes, the disappearance of the male often induces sex change by a smaller female, instead of the largest one. The halfmoon triggerfish Sufflamen chrysopterum is one such species. We conducted both field investigation and theoretical analysis to test the hypothesis that variation in female fecundity causes sex change by less-fertile females, even if they are not the largest. We estimated the effect of body length and residual body width (an indicator of nutrition status) on clutch size based on field data. Sex-specific growth rates were also estimated from our investigation and a previous study. We incorporated these estimated values into an evolutionarily stable strategy model for status-dependent size at sex change. As a result, we predict that rich females change sex at a larger size than poor ones, since a rich fish can achieve high reproductive success as a female. In some situations, richer females no longer change sex (i.e. lifelong females), and poorer fish change sex just after maturation (i.e. primary males). We also analyzed the effect of size-specific growth and mortality. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Interrupted Power Law and the Size of Shadow Banking
Fiaschi, Davide; Kondor, Imre; Marsili, Matteo; Volpati, Valerio
2014-01-01
Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range that is “interrupted” by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale. This makes our findings of an “interrupted” Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate, which we propose as a shadow banking index, compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity. PMID:24728096
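Detecting an "interrupted" tail presupposes estimating the Pareto exponent from the largest observations in the first place. A minimal sketch using the standard Hill estimator on synthetic Pareto data (this illustrates tail-exponent estimation only, not the paper's cut-off detection or the shadow banking index):

```python
import math
import random

def hill_alpha(data, k):
    """Hill estimator of the Pareto tail exponent from the k largest values."""
    top = sorted(data, reverse=True)[: k + 1]
    x_k = top[-1]                                   # the (k+1)-th order statistic
    return k / sum(math.log(x / x_k) for x in top[:-1])

random.seed(0)
alpha_true = 1.2
# Pareto(alpha, x_min = 1) samples via inverse-CDF: X = U ** (-1 / alpha)
sample = [(1 - random.random()) ** (-1 / alpha_true) for _ in range(50_000)]
alpha_hat = hill_alpha(sample, 2000)                # should be close to 1.2
```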
Significance of the model considering mixed grain-size for inverse analysis of turbidites
NASA Astrophysics Data System (ADS)
Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.
2016-12-01
A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). These inverse analyses require forward models, and most turbidity current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by the variation in their grain-size distributions. Although numerical models with mixed grain-size particles exist, their computational cost makes them difficult to apply to natural examples (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current) with a multi-point start, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that inverse analysis using the mixed grain-size model found the known initial conditions of the reference data even when optimization started far from the true solution, whereas inverse analysis using the uniform grain-size model succeeds only when the starting parameters lie within a quite narrow range near the solution.
The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
Karami, Manoochehr; Khazaei, Salman; Poorolajal, Jalal; Soltanian, Alireza; Sajadipoor, Mansour
2017-08-01
There is no reliable estimate of the population size of female sex workers (FSWs). This study aimed to estimate the size of the FSW population in the south of Tehran, Iran in 2016 using the direct capture-recapture method. In the capture phase, the hangouts of FSWs were mapped as their meeting places. FSWs who agreed to participate in the study were tagged with a T-shirt. The recapture phase was implemented at the same places, tagging FSWs with a blue bracelet. The total estimated size of the FSW population was 690 (95% CI 633, 747). About 89.43% of FSWs experienced sexual intercourse prior to age 20. The prevalence of human immunodeficiency virus infection among FSWs was 4.60%. The estimated population size of FSWs was much larger than we expected. This issue must be the focus of special attention when planning prevention strategies. However, alternative methods are required to estimate the number of FSWs reliably.
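Two-sample capture-recapture point estimates of this kind are usually computed with the Lincoln-Petersen estimator or Chapman's nearly unbiased correction of it. A minimal sketch with illustrative counts (not the study's raw field data):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased version of the Lincoln-Petersen two-sample
    capture-recapture estimator: n1 individuals tagged in the capture phase,
    n2 examined at recapture, m found tagged in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts for illustration:
N_hat = chapman_estimate(n1=300, n2=320, m=139)   # ≈ 689
```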
Treatment effect on biases in size estimation in spider phobia.
Shiban, Youssef; Fruth, Martina B; Pauli, Paul; Kinateder, Max; Reichenberger, Jonas; Mühlberger, Andreas
2016-12-01
The current study investigates biases in size estimations made by spider-phobic and healthy participants before and after treatment. Forty-one spider-phobic and 20 healthy participants received virtual reality (VR) exposure treatment and were then asked to rate the size of a real spider immediately before and, on average, 15 days after the treatment. During the VR exposure treatment, skin conductance response was assessed. Prior to the treatment, both groups tended to overestimate the size of the spider, but this size estimation bias was significantly larger in the phobic group than in the control group. The VR exposure treatment reduced this bias, which was reflected in a significantly smaller size rating post treatment. However, the size estimation bias was unrelated to the skin conductance response. Our results confirm the hypothesis that size estimation by spider-phobic patients is biased. This bias is not stable over time and can be decreased with adequate treatment. Copyright © 2016 Elsevier B.V. All rights reserved.
Braaten, P.J.; Fuller, D.B.; Lott, R.D.; Jordan, G.R.
2009-01-01
Juvenile pallid sturgeon Scaphirhynchus albus raised in hatcheries and stocked in the wild are used to augment critically imperiled populations of this federally endangered species in the United States. For pallid sturgeon in recovery priority management area 2 (RPMA 2) of the Missouri River and lower Yellowstone River where natural recruitment has not occurred for decades, restoration programs aim to stock an annual minimum of 9000 juvenile pallid sturgeon for 20 years to re-establish a minimum population of 1700 adults. However, establishment of this target was based on general guidelines for maintaining the genetic integrity of populations rather than pallid sturgeon-specific demographic information because data on the historical population size was lacking. In this study, information from a recent population estimate (158 wild adults in 2004, 95% confidence interval 129-193 adults) and an empirically derived adult mortality rate (5%) was used in a cohort population model to back-estimate the historic abundance of adult pallid sturgeon in RPMA 2. Three back-estimation age models were developed, and assumed that adults alive during 2004 were 30-, 40-, or 50-years old. Based on these age assumptions, population sizes [??95% confidence intervals; (CI)] were back-estimated to 1989, 1979, and 1969 to approximate size of the population when individuals would have been sexually mature (15 years old) and capable of spawning. Back-estimations yielded predictions of 344 adults in 1989 (95% CI 281-420), 577 adults in 1979 (95% CI 471-704), and 968 adults in 1969 (95% CI 790-1182) for the 30-, 40-, and 50-year age models, respectively. Although several assumptions are inherent in the back-estimation models, results suggest the juvenile stocking program for pallid sturgeon will likely re-establish an adult population that equals in the short-term and exceeds in the long-term the predicted population numbers that occurred during past decades in RPMA 2. 
However, re-establishment of a large population in RPMA 2 that exceeds populations present 40+ years ago should be considered conservatively, as this strategy will increase the number of reproductive adults and thereby increase the likelihood for natural recruitment in this recruitment-limited system. © 2009 Blackwell Verlag GmbH.
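The back-estimation described above reduces, in its simplest form, to exponential survival: with a constant annual mortality rate m and no recruitment (as in this recruitment-limited system), N adults observed today imply roughly N/(1 - m)^k adults k years earlier. A minimal sketch using the reported point estimate of 158 adults and 5% mortality; the published cohort model is fuller, so its values (344, 577, 968) differ slightly from this naive projection:

```python
# Naive survivor back-projection: N_k_years_ago ~= N_today / (1 - m)**k,
# valid only under constant mortality and no recruitment.
def back_estimate(n_observed: float, annual_mortality: float, years_back: int) -> float:
    """Back-project an adult count assuming exponential survival."""
    return n_observed / (1.0 - annual_mortality) ** years_back

# 158 wild adults estimated in 2004, 5% annual adult mortality:
for year, k in [(1989, 15), (1979, 25), (1969, 35)]:
    print(year, round(back_estimate(158, 0.05, k)))
```

This simple projection gives 341, 570, and 951 adults for 1989, 1979, and 1969, close to (but not identical with) the published model's 344, 577, and 968.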
Evaluating Satellite-based Rainfall Estimates for Basin-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Yilmaz, K. K.; Hogue, T. S.; Hsu, K.; Gupta, H. V.; Mahani, S. E.; Sorooshian, S.
2003-12-01
The reliability of any hydrologic simulation and basin outflow prediction effort depends primarily on the rainfall estimates. The problem of estimating rainfall becomes more acute in basins with scarce or no rain gauges. We present an evaluation of satellite-based rainfall estimates for basin-scale hydrologic modeling, with particular interest in ungauged basins. The initial phase of this study focuses on comparison of mean areal rainfall estimates from a ground-based rain gauge network, NEXRAD radar Stage-III, and satellite-based PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks), and on their influence on hydrologic model simulations over several basins in the U.S. Six-hourly accumulations of these competing mean areal rainfall estimates are used as input to the Sacramento Soil Moisture Accounting Model. Preliminary experiments for the Leaf River Basin in Mississippi, for the period March 2000 - June 2002, reveal that seasonality plays an important role in the comparison: satellite-based rainfall overestimates during the summer and underestimates during the winter with respect to the competing rainfall estimates. As a consequence, the hydrologic model's simulated discharge underestimates the major observed peak discharges during early spring for the basin under study. Future research will entail developing correction procedures for satellite-based rainfall estimates, depending on factors such as seasonality, geographic location, and basin size, over basins with a dense rain gauge network and/or radar coverage. Extension of these correction procedures to satellite-based rainfall estimates over ungauged basins with similar characteristics has the potential to reduce the input uncertainty in ungauged basin modeling efforts.
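A first cut at the correction procedures mentioned above could be a seasonal multiplicative bias factor: scale each satellite accumulation by the per-season ratio of gauge (or radar) totals to satellite totals. A hypothetical sketch with made-up numbers, not the study's actual (future-work) procedure:

```python
# Seasonal multiplicative bias correction for satellite rainfall.
# All accumulations below are toy values, chosen only to mirror the
# reported pattern (summer overestimation, winter underestimation).
from collections import defaultdict

def seasonal_bias_factors(records):
    """records: iterable of (season, gauge_mm, satellite_mm) accumulations."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for season, gauge, sat in records:
        totals[season][0] += gauge
        totals[season][1] += sat
    # factor < 1 means the satellite overestimates for that season
    return {s: g / sat for s, (g, sat) in totals.items() if sat > 0}

def correct(season, satellite_mm, factors):
    return satellite_mm * factors.get(season, 1.0)

obs = [("summer", 80.0, 100.0), ("summer", 40.0, 50.0),
       ("winter", 120.0, 100.0), ("winter", 60.0, 50.0)]
f = seasonal_bias_factors(obs)
print(f["summer"], f["winter"])     # 0.8 1.2
print(correct("summer", 10.0, f))   # 8.0
```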
Technique for estimating depth of floods in Tennessee
Gamble, C.R.
1983-01-01
Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
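Regional flood-depth equations of the kind described are typically power functions of drainage area, with a log-linear interpolation between the 10- and 100-year depths for intermediate recurrence intervals. The coefficients and the interpolation rule below are assumptions for illustration, not the published Tennessee values:

```python
import math

# Hypothetical power-law depth equation D = a * A**b, fitted per
# hydrologic area; a and b here are invented, not the USGS coefficients.
def depth(area_sq_mi: float, a: float, b: float) -> float:
    return a * area_sq_mi ** b

def depth_between(d10: float, d100: float, t_years: float) -> float:
    """Log-linear interpolation between the 10- and 100-year flood depths."""
    w = (math.log10(t_years) - 1.0) / (2.0 - 1.0)  # log10(10)=1, log10(100)=2
    return d10 + w * (d100 - d10)

d10 = depth(100.0, a=2.0, b=0.3)   # hypothetical 10-year coefficients
d100 = depth(100.0, a=3.0, b=0.3)  # hypothetical 100-year coefficients
print(round(depth_between(d10, d100, 50.0), 2))  # depth of the 50-year flood
```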
Jewett, Ethan M.; Steinrücken, Matthias; Song, Yun S.
2016-01-01
Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright–Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. PMID:27550904
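The deterministic (infinite-population) model that the "ignore drift" estimates rely on has a simple closed recursion for genic selection, p' = p(1 + s)/(1 + ps). A toy grid-search least-squares fit sketches how s can be estimated from a trajectory while ignoring drift; this is an illustrative stand-in, not the authors' maximum-likelihood machinery:

```python
# Deterministic allele-frequency trajectory under genic selection and a
# toy estimator of s that ignores drift entirely.
def trajectory(p0: float, s: float, generations: int):
    p, out = p0, [p0]
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)   # deterministic selection recursion
        out.append(p)
    return out

def estimate_s(observed, p0, grid=None):
    """Grid-search least-squares fit of s against an observed trajectory."""
    grid = grid or [i / 1000 for i in range(-200, 201)]
    def sse(s):
        pred = trajectory(p0, s, len(observed) - 1)
        return sum((o - q) ** 2 for o, q in zip(observed, pred))
    return min(grid, key=sse)

obs = trajectory(0.1, 0.05, 50)   # noise-free trajectory with s = 0.05
print(estimate_s(obs, p0=0.1))    # recovers 0.05
```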
NASA Astrophysics Data System (ADS)
Rackow, Thomas; Wesche, Christine; Timmermann, Ralph; Hellmer, Hartmut H.; Juricke, Stephan; Jung, Thomas
2017-04-01
We present a simulation of Antarctic iceberg drift and melting that includes small, medium-sized, and giant tabular icebergs with a realistic size distribution. For the first time, an iceberg model is initialized with a set of nearly 7000 observed iceberg positions and sizes around Antarctica. The study highlights the necessity to account for larger and giant icebergs in order to obtain accurate melt climatologies. We simulate drift and lateral melt using iceberg-draft averaged ocean currents, temperature, and salinity. A new basal melting scheme, originally applied in ice shelf melting studies, uses in situ temperature, salinity, and relative velocities at an iceberg's bottom. Climatology estimates of Antarctic iceberg melting based on simulations of small (≤2.2 km), "small-to-medium-sized" (≤10 km), and small-to-giant icebergs (including icebergs >10 km) exhibit differential characteristics: successive inclusion of larger icebergs leads to a reduced seasonality of the iceberg meltwater flux and a shift of the mass input to the area north of 58°S, while less meltwater is released into the coastal areas. This suggests that estimates of meltwater input solely based on the simulation of small icebergs introduce a systematic meridional bias; they underestimate the northward mass transport and are, thus, closer to the rather crude treatment of iceberg melting as coastal runoff in models without an interactive iceberg model. Future ocean simulations will benefit from the improved meridional distribution of iceberg melt, especially in climate change scenarios where the impact of iceberg melt is likely to increase due to increased calving from the Antarctic ice sheet.
Zhu, Yue-Shan; Yang, Wan-Dong; Li, Xiu-Wen; Ni, Hong-Gang; Zeng, Hui
2018-02-01
The quality of indoor environments has a significant impact on public health. Usually, an indoor environment is treated as a static box, in which physicochemical reactions of indoor air contaminants are negligible. This results in conservative estimates for primary indoor air pollutant concentrations, while also ignoring secondary pollutants. Thus, understanding the relationship between indoor and outdoor particles and particle-bound pollutants is of great significance. For this reason, we collected simultaneous indoor and outdoor measurements of the size distribution of airborne brominated flame retardant (BFR) congeners. The time-dependent concentrations of indoor particles and particle-bound BFRs were then estimated with the mass balance model, accounting for the outdoor concentration, indoor source strength, infiltration, penetration, deposition and indoor resuspension. Based on qualitative observation, the size distributions of ΣPBDE and ΣHBCD were characterized by bimodal peaks. According to our results, particle-bound BDE209 and γ-HBCD underwent degradation. Regardless of the surface adsorption capability of particles and the physicochemical properties of the target compounds, the concentration of BFRs in particles of different size fractions seemed to be governed by the particle distribution. Based on our estimations, for airborne particles and particle-bound BFRs, a window-open ventilated room only takes a quarter of the time to reach an equilibrium between the concentration of pollutants inside and outside compared to a closed room. Unfortunately, indoor pollutants and outdoor pollutants always exist simultaneously, which poses a window-open-or-closed dilemma to achieve proper ventilation. Copyright © 2017 Elsevier Ltd. All rights reserved.
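The single-zone mass balance referred to above can be sketched as an ODE in the indoor concentration, dC/dt = P·a·C_out − (a + k)·C + S/V, whose time constant 1/(a + k) shrinks as the air exchange rate a rises. The parameter values below are illustrative, not the study's, but they reproduce the qualitative result that a window-open room equilibrates in roughly a quarter of the time of a closed one:

```python
# Single-zone indoor mass balance: Euler integration of
# dC/dt = P*a*C_out - (a + k_dep)*C + source. All parameters illustrative.
def simulate(c0, c_out, a, penetration, k_dep, source, hours, dt=0.01):
    c, t = c0, 0.0
    series = [(t, c)]
    while t < hours:
        c += (penetration * a * c_out - (a + k_dep) * c + source) * dt
        t += dt
        series.append((t, c))
    return series

def equilibrium(c_out, a, penetration, k_dep, source):
    return (penetration * a * c_out + source) / (a + k_dep)

def time_to_equilibrium(series, c_eq, tol=0.05):
    """First time the concentration is within tol of its equilibrium."""
    for t, c in series:
        if abs(c - c_eq) <= tol * c_eq:
            return t
    return None

P, K, C_OUT = 0.8, 0.2, 100.0  # penetration, deposition (1/h), outdoor conc.
closed = simulate(0.0, C_OUT, a=0.3, penetration=P, k_dep=K, source=0.0, hours=12.0)
opened = simulate(0.0, C_OUT, a=1.8, penetration=P, k_dep=K, source=0.0, hours=12.0)
t_closed = time_to_equilibrium(closed, equilibrium(C_OUT, 0.3, P, K, 0.0))
t_open = time_to_equilibrium(opened, equilibrium(C_OUT, 1.8, P, K, 0.0))
print(round(t_closed / t_open, 1))  # ~4: window open equilibrates ~4x faster
```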
NASA Astrophysics Data System (ADS)
Fedorov, N. I.; Mikhailenko, O. I.; Zharkikh, T. L.; Bakirova, R. T.
2018-01-01
Mapping of the vegetation (1:25000) of the Pre-Urals Steppe area at the Orenburg State Nature Reserve was completed in 2016. A map created with the geoinformation system contains 1931 simple and complex polygons for 25 types of vegetation. In a drought year, the average stock of palatable vegetation of the whole area is estimated at 8380 tons dry weight. The estimation is based on the size of the areas covered by different types of vegetation, their grass production, and correction coefficients for the decrease in pasture forage stocks in winter and the decrease in grass community production in dry years. Based on pasture forage stocks, the area could support a maximum population of 1769 Przewalski horses, at an average density of 0.11 horses per ha. Yet, as watering places for animals are limited in the Pre-Urals Steppe, grazing pressure on the vegetation next to the water sources may increase in dry years. The calculated maximum population size and density must therefore be reduced at least by half until additional watering places are established and the grazing effect on the vegetation next to these places is monitored regularly. Thus, the maximum size of the population is estimated at 800 to 900 individuals, which is almost 1.5 times more than necessary to establish a self-sustained population of the Przewalski horse.
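The carrying-capacity arithmetic above reduces to dividing the usable forage stock by the annual intake per horse. The intake figure below is an assumption back-calculated from the reported numbers (8380 t / 1769 horses ≈ 4.74 t per horse per year), and the area is implied by the reported density (1769 / 0.11 ≈ 16,000 ha); neither value is stated directly in the abstract:

```python
# Carrying capacity from forage: maximum herd = forage stock / annual intake.
def max_population(forage_tons: float, intake_tons_per_horse_year: float) -> int:
    return int(forage_tons // intake_tons_per_horse_year)

def density_per_ha(population: float, area_ha: float) -> float:
    return population / area_ha

horses = max_population(8380.0, 4.74)   # drought-year forage, assumed intake
print(horses)                           # close to the reported 1769
print(round(density_per_ha(horses, 16000.0), 2))  # ~0.11 horses per ha
```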
Size distribution of magnetic iron oxide nanoparticles using Warren-Averbach XRD analysis
NASA Astrophysics Data System (ADS)
Mahadevan, S.; Behera, S. P.; Gnanaprakash, G.; Jayakumar, T.; Philip, J.; Rao, B. P. C.
2012-07-01
We use the Fourier transform based Warren-Averbach (WA) analysis to separate the contributions of X-ray diffraction (XRD) profile broadening due to crystallite size and microstrain for magnetic iron oxide nanoparticles. The profile shape of the column length distribution, obtained from WA analysis, is used to analyze the shape of the magnetic iron oxide nanoparticles. From the column length distribution, the crystallite size and its distribution are estimated for these nanoparticles, and are compared with the size distribution obtained from dynamic light scattering measurements. The crystallite size and size distribution obtained from WA analysis are explained based on the experimental parameters employed in the preparation of these magnetic iron oxide nanoparticles. The variation of the volume weighted diameter (Dv, from WA analysis) with saturation magnetization (Ms) fits well to a core-shell model wherein Ms = Mbulk(1 - 6g/Dv), with Mbulk the bulk magnetization of iron oxide and g the magnetic shell disorder thickness.
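The core-shell relation quoted above, Ms = Mbulk(1 − 6g/Dv), can also be inverted to estimate the magnetically disordered shell thickness g from a measured saturation magnetization. The numerical values below are illustrative, not the paper's:

```python
# Core-shell magnetization model: Ms = Mbulk * (1 - 6*g/Dv), and its
# inversion for the disordered shell thickness g. Values are illustrative.
def saturation_magnetization(m_bulk: float, g_nm: float, d_v_nm: float) -> float:
    return m_bulk * (1 - 6 * g_nm / d_v_nm)

def shell_thickness(m_s: float, m_bulk: float, d_v_nm: float) -> float:
    return d_v_nm * (1 - m_s / m_bulk) / 6

m_bulk = 92.0   # emu/g, illustrative bulk value for iron oxide
d_v = 10.0      # nm, volume-weighted diameter
ms = saturation_magnetization(m_bulk, g_nm=0.5, d_v_nm=d_v)
print(round(ms, 1))                                  # reduced below bulk
print(round(shell_thickness(ms, m_bulk, d_v), 2))    # recovers g = 0.5 nm
```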
Hawkins, Robert C; Badrick, Tony
2015-08-01
In this study we aimed to compare the reporting unit size used by Australian laboratories for routine chemistry and haematology tests to the unit size used by learned authorities and in standard laboratory textbooks, and to the justified unit size based on measurement uncertainty (MU) estimates from quality assurance program data. MU was determined from Royal College of Pathologists of Australasia (RCPA) - Australasian Association of Clinical Biochemists (AACB) and RCPA Haematology Quality Assurance Program survey reports. The reporting unit size implicitly suggested in authoritative textbooks, the RCPA Manual, and the General Serum Chemistry program itself was noted. We also used published data on Australian laboratory practices. The best performing laboratories could justify their chemistry unit size for 55% of analytes, while the comparable figures for the 50% and 90% laboratories were 14% and 8%, respectively. Reporting unit size was justifiable for all laboratories for red cell count, for >50% for haemoglobin, but only for the top 10% for haematocrit. Few, if any, could justify their mean cell volume (MCV) and mean cell haemoglobin concentration (MCHC) reporting unit sizes. The reporting unit size used by many laboratories is not justified by present analytical performance. Using MU estimates to determine the reporting interval for quantitative laboratory results ensures reporting practices match local analytical performance and recognises the inherent error of the measurement process.
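One simple way to tie a reporting unit to measurement uncertainty is sketched below. The heuristic used, that the smallest justified reporting unit is the largest power of ten not exceeding the expanded uncertainty U = 2·SD, is an assumption for illustration, not the authors' exact criterion, and the SD value is invented:

```python
import math

# MU-based reporting unit heuristic (an illustrative assumption):
# smallest justified unit = largest power of ten <= expanded uncertainty 2*SD.
def justified_unit(sd: float) -> float:
    expanded_u = 2.0 * sd
    return 10.0 ** math.floor(math.log10(expanded_u))

def unit_is_justified(lab_unit: float, sd: float) -> bool:
    return lab_unit >= justified_unit(sd)

# e.g. an analyte with a between-run SD of 1.1 units from QA survey data:
print(justified_unit(1.1))          # 1.0
print(unit_is_justified(0.1, 1.1))  # False: 0.1 steps finer than MU supports
```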