Sample records for small sample bias

  1. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates.
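
    The record applies the Firth correction, which penalizes the likelihood with Jeffreys' invariant prior so that small-sample estimates are pulled away from the boundary. Below is a minimal sketch of that penalization idea, shown for logistic regression rather than the Cox model used in the study; the data and settings are hypothetical, and the implementation follows the standard Firth (1993) score adjustment rather than anything from the paper itself.

      import numpy as np

      def firth_logistic(X, y, n_iter=100, tol=1e-8):
          """Firth-penalized logistic regression via Newton iterations.

          The score is adjusted with the hat-matrix correction
          U*(b) = X'(y - p + h*(0.5 - p)), which keeps estimates finite
          even in small or separated samples. X must contain an intercept.
          """
          n, k = X.shape
          beta = np.zeros(k)
          for _ in range(n_iter):
              p = 1.0 / (1.0 + np.exp(-(X @ beta)))
              W = p * (1.0 - p)                       # IRLS weights
              info = X.T @ (W[:, None] * X)           # Fisher information X'WX
              info_inv = np.linalg.inv(info)
              h = W * np.einsum('ij,jk,ik->i', X, info_inv, X)   # leverages
              score = X.T @ (y - p + h * (0.5 - p))   # Firth-adjusted score
              step = info_inv @ score
              beta += step
              if np.max(np.abs(step)) < tol:
                  break
          return beta

      # Hypothetical small sample (n = 12) where ordinary ML would be unstable
      rng = np.random.default_rng(0)
      X = np.column_stack([np.ones(12), rng.normal(size=12)])
      y = (X[:, 1] + rng.normal(scale=0.5, size=12) > 0).astype(float)
      print(firth_logistic(X, y))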

  2. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using Mplus

    ERIC Educational Resources Information Center

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  3. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.

  4. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias.

  5. Short-term memory for responses: the "choose-small" effect.

    PubMed Central

    Fetterman, J G; MacEwen, D

    1989-01-01

    Pigeons' short-term memory for fixed-ratio requirements was assessed using a delayed symbolic matching-to-sample procedure. Different choices were reinforced after fixed-ratio 10 and fixed-ratio 40 requirements, and delays of 0, 5, or 20 s were sometimes placed between sample ratios and choice. All birds made disproportionate numbers of responses to the small-ratio choice alternative when delays were interposed between ratios and choice, and this bias increased as a function of delay. Preference for the small fixed-ratio alternative was also observed on "no-sample" trials, during which the choice alternatives were presented without a prior sample ratio. This "choose-small" bias is analogous to results obtained by Spetch and Wilkie (1983) with event duration as the discriminative stimulus. The choose-small bias was attenuated when the houselight was turned on during delays, but overall accuracy was not influenced systematically by the houselight manipulation. PMID:2584917

  6. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    PubMed

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods: naïve Bayes, support vector machines, neural networks, logistic regression, and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.

  7. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
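
    The Jensen's inequality argument in a nutshell: lambda is a nonlinear (here concave) function of the vital rates, so plugging in noisy small-sample estimates of those rates gives a lambda estimate whose average differs from the lambda of the true rates, even when the rate estimates themselves are unbiased. The sketch below illustrates this with a hypothetical two-stage projection matrix, not the authors' plant data; the bias shrinks as the number of individuals used to estimate survival grows.

      import numpy as np

      rng = np.random.default_rng(1)

      def growth_rate(survival, fecundity):
          """Dominant eigenvalue (lambda) of a hypothetical 2-stage projection matrix."""
          A = np.array([[0.0,      fecundity],
                        [survival, 0.8]])        # adult survival fixed at 0.8
          return np.max(np.abs(np.linalg.eigvals(A)))

      true_surv, fecundity = 0.5, 1.2
      true_lambda = growth_rate(true_surv, fecundity)

      for n in (10, 50, 500):                    # individuals used to estimate survival
          est = [growth_rate(rng.binomial(n, true_surv) / n, fecundity)
                 for _ in range(20000)]
          print(f"n={n:4d}  mean(lambda_hat) - lambda = {np.mean(est) - true_lambda:+.4f}")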

  8. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  9. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  10. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    PubMed

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years. Analytical bias may prevent this harmonisation. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but also to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that of the 27 general chemistry analytes studied, 19 showed between-method biases small enough not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.

  11. The heterogeneity statistic I(2) can be biased in small meta-analyses.

    PubMed

    von Hippel, Paul T

    2015-04-14

    Estimated effects vary across studies, partly because of random sampling error and partly because of heterogeneity. In meta-analysis, the fraction of variance that is due to heterogeneity is estimated by the statistic I(2). We calculate the bias of I(2), focusing on the situation where the number of studies in the meta-analysis is small. Small meta-analyses are common; in the Cochrane Library, the median number of studies per meta-analysis is 7 or fewer. We use Mathematica software to calculate the expectation and bias of I(2). I(2) has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small, but the bias is typically negative when the true fraction of heterogeneity is large. For example, with 7 studies and no true heterogeneity, I(2) will overestimate heterogeneity by an average of 12 percentage points, but with 7 studies and 80 percent true heterogeneity, I(2) can underestimate heterogeneity by an average of 28 percentage points. Biases of 12-28 percentage points are not trivial when one considers that, in the Cochrane Library, the median I(2) estimate is 21 percent. The point estimate I(2) should be interpreted cautiously when a meta-analysis has few studies. In small meta-analyses, confidence intervals should supplement or replace the biased point estimate I(2).
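
    For reference, I(2) is computed from Cochran's Q as I(2) = max(0, (Q - df)/Q) with df = k - 1 for k studies. The sketch below reproduces the flavor of the bias described, using a simulation with assumed equal within-study variances rather than the authors' analytic Mathematica calculation: with 7 studies the average estimate is well above zero when there is no true heterogeneity and well below 80 percent when the true fraction is 80 percent.

      import numpy as np

      rng = np.random.default_rng(2)

      def I2(y, v):
          """I^2 = max(0, (Q - df) / Q) from study effects y and within-study variances v."""
          w = 1.0 / v
          ybar = np.sum(w * y) / np.sum(w)
          Q = np.sum(w * (y - ybar) ** 2)
          return max(0.0, (Q - (len(y) - 1)) / Q) if Q > 0 else 0.0

      k = 7
      v = np.full(k, 0.04)                 # assumed equal within-study variances
      for tau2 in (0.0, 0.16):             # between-study variance: none vs. 80% heterogeneity
          true_I2 = tau2 / (tau2 + v[0])
          sims = [I2(rng.normal(0.0, np.sqrt(tau2 + v)), v) for _ in range(20000)]
          print(f"true I2 = {true_I2:4.0%}   mean estimated I2 = {np.mean(sims):4.0%}")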

  12. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  13. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    PubMed

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  14. Bias in fallout data from nuclear surface shot SMALL BOY: an evaluation of sample perturbation by sieve sizing. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascual, J.N.

    1967-06-26

    Evaluation of sample bias introduced by the mechanical sieving of Small Boy fallout samples for 10 minutes revealed the following: Up to 20% of the mass and 30% of the gamma-ray activity can be lost from the large-particle (greater than 1400 microns) fraction. The pan fraction (less than 44 microns) can gain in weight by as much as 79%, and in activity by as much as 44%. The gamma-ray spectra of the fractions were not noticeably altered by the process. Examination of unbiased pan fractions (before mechanical sieving) indicated bimodality of the mass-size distribution in a sample collected 9,200 feet from ground zero, but not in a sample collected at 13,300 feet.

  15. GRADE guidelines: 5. Rating the quality of evidence--publication bias.

    PubMed

    Guyatt, Gordon H; Oxman, Andrew D; Montori, Victor; Vist, Gunn; Kunz, Regina; Brozek, Jan; Alonso-Coello, Pablo; Djulbegovic, Ben; Atkins, David; Falck-Ytter, Yngve; Williams, John W; Meerpohl, Joerg; Norris, Susan L; Akl, Elie A; Schünemann, Holger J

    2011-12-01

    In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias. Even when individual studies included in best-evidence summaries have a low risk of bias, publication bias can result in substantial overestimates of effect. Authors should suspect publication bias when available evidence comes from a number of small studies, most of which have been commercially funded. A number of approaches based on examination of the pattern of data are available to help assess publication bias. The most popular of these is the funnel plot; all, however, have substantial limitations. Publication bias is likely frequent, and caution in the face of early results, particularly with small sample size and number of events, is warranted.

  16. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.

  17. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
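
    The coverage issue can be seen with a few lines of simulation: if a 95% interval for a new patient's true value is built as measurement plus or minus 1.96 times the precision SD under a no-bias assumption, any fixed bias that is large relative to that precision erodes coverage. The numbers below are arbitrary assumptions chosen only to show the direction of the effect, not to reproduce the thresholds reported in the record, which depend on the precision profile the authors assumed.

      import numpy as np

      rng = np.random.default_rng(3)
      sigma = 1.0                                  # assumed precision (test-retest) SD
      true_value = 10.0

      for bias_pct in (0, 5, 12, 25):              # fixed bias as a percentage of the measurand
          bias = bias_pct / 100 * true_value
          meas = rng.normal(true_value + bias, sigma, size=100_000)
          lo, hi = meas - 1.96 * sigma, meas + 1.96 * sigma   # CI built assuming no bias
          coverage = np.mean((lo <= true_value) & (true_value <= hi))
          print(f"fixed bias {bias_pct:2d}%  ->  coverage of nominal 95% CI: {coverage:.3f}")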

  18. Is First-Order Vector Autoregressive Model Optimal for fMRI Data?

    PubMed

    Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain

    2015-09-01

    We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for the high-dimensional fMRI data typically with a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types (a resting state, an event-related design, and a block design data set) with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC) based on Kullback's symmetric divergence combining two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one but not necessarily one were chosen for the large dimensions of full-brain networks.

  19. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  20. Modeling bias and variation in the stochastic processes of small RNA sequencing

    PubMed Central

    Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-01-01

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
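
    The linear-quadratic mean-variance relation mentioned in the abstract is the familiar Var = mu + phi*mu^2 form of overdispersed counts. The sketch below is only a toy illustration under a negative-binomial assumption, not the GAMLSS distributional regression used in the paper: it simulates replicate counts for hypothetical small-RNA sequences and recovers the overdispersion phi from the sample means and variances.

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical counts: 300 sequences, 10 replicates, Var = mu + phi * mu^2
      phi = 0.05
      mu = rng.lognormal(mean=4.0, sigma=1.5, size=300)
      counts = rng.negative_binomial(n=1.0 / phi,
                                     p=1.0 / (1.0 + phi * mu[:, None]),
                                     size=(300, 10))

      m = counts.mean(axis=1)
      v = counts.var(axis=1, ddof=1)

      # Regress (sample variance - mean) on mean^2 through the origin to recover phi
      phi_hat = np.sum((v - m) * m**2) / np.sum(m**4)
      print(f"true phi = {phi}, estimated phi = {phi_hat:.3f}")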

  1. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scolnic, D.; Kessler, R.

    Simulations of Type Ia supernovae (SNe Ia) surveys are a critical tool for correcting biases in the analysis of SNe Ia to infer cosmological parameters. Large-scale Monte Carlo simulations include a thorough treatment of observation history, measurement noise, intrinsic scatter models, and selection effects. In this Letter, we improve simulations with a robust technique to evaluate the underlying populations of SN Ia color and stretch that correlate with luminosity. In typical analyses, the standardized SN Ia brightness is determined from linear “Tripp” relations between the light curve color and luminosity and between stretch and luminosity. However, this solution produces Hubble residual biases because intrinsic scatter and measurement noise result in measured color and stretch values that do not follow the Tripp relation. We find a 10σ bias (up to 0.3 mag) in Hubble residuals versus color and a 5σ bias (up to 0.2 mag) in Hubble residuals versus stretch in a joint sample of 920 spectroscopically confirmed SNe Ia from PS1, SNLS, SDSS, and several low-z surveys. After we determine the underlying color and stretch distributions, we use simulations to predict and correct the biases in the data. We show that removing these biases has a small impact on the low-z sample, but reduces the intrinsic scatter σ_int from 0.101 to 0.083 in the combined PS1, SNLS, and SDSS sample. Past estimates of the underlying populations were too broad, leading to a small bias in the equation of state of dark energy w of Δw = 0.005.

  3. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    PubMed

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.

  4. Evolutionary Trends and the Salience Bias (with Apologies to Oil Tankers, Karl Marx, and Others).

    ERIC Educational Resources Information Center

    McShea, Daniel W.

    1994-01-01

    Examines evolutionary trends, specifically trends in size, complexity, and fitness. Notes that documentation of these trends consists of either long lists of cases, or descriptions of a small number of salient cases. Proposes the use of random samples to avoid this "saliency bias." (SR)

  5. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    PubMed

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount.

  6. Influence of small DC bias field on the electrical behaviour of Sr- and Mg-doped lanthanum gallate

    NASA Astrophysics Data System (ADS)

    Raghvendra; Singh, Rajesh Kumar; Singh, Prabhakar

    2014-09-01

    One of the promising electrolyte materials for solid oxide fuel cell applications, Sr- and Mg-doped lanthanum gallate La0.9Sr0.1Ga0.8Mg0.2O3-δ (LSGM), is synthesized by a conventional solid state ceramic route. X-ray Rietveld analysis confirms the formation of the main orthorhombic phase at room temperature along with a few minor secondary phases. SEM micrographs reveal the grain and grain-boundary morphology of the system. Electrical conductivity of the LSGM sample is measured in the temperature range 573-873 K and in the frequency range 20 Hz-1 MHz at a few small DC bias fields (0.0, 0.5, 1.0, 1.5 and 2.0 V). The conductivity spectra show power-law behaviour. Electrical conductivity of the sample is found to be weakly dependent on the DC bias field. This is attributed to field-dependent bulk and grain-boundary conduction processes. In the present system, under the investigated bias field range, the possibility of formation of a Schottky barrier is ruled out. The concept of grain-boundary channel (pathway) modulation on the application of a bias field is proposed.

  7. Biases in the OSSOS Detection of Large Semimajor Axis Trans-Neptunian Objects

    NASA Astrophysics Data System (ADS)

    Gladman, Brett; Shankman, Cory; OSSOS Collaboration

    2017-10-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada-France-Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  8. OSSOS. VI. Striking Biases in the Detection of Large Semimajor Axis Trans-Neptunian Objects

    NASA Astrophysics Data System (ADS)

    Shankman, Cory; Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett J.; Lawler, Samantha M.; Chen, Ying-Tung; Jakubik, Marian; Kaib, Nathan; Alexandersen, Mike; Gwyn, Stephen D. J.; Petit, Jean-Marc; Volk, Kathryn

    2017-08-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada–France–Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  9. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  10. Small-Sample DIF Estimation Using Log-Linear Smoothing: A SIBTEST Application. Research Report. ETS RR-07-10

    ERIC Educational Resources Information Center

    Puhan, Gautam; Moses, Tim P.; Yu, Lei; Dorans, Neil J.

    2007-01-01

    The purpose of the current study was to examine whether log-linear smoothing of observed score distributions in small samples results in more accurate differential item functioning (DIF) estimates under the simultaneous item bias test (SIBTEST) framework. Data from a teacher certification test were analyzed using White candidates in the reference…

  11. Meta-assessment of bias in science

    PubMed Central

    Fanelli, Daniele; Costas, Rodrigo; Ioannidis, John P. A.

    2017-01-01

    Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors, in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk of small, early, and highly cited studies to overestimate effects and of studies not published in peer-reviewed journals to underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides one being routinely cautious that published small, highly-cited, and earlier studies may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis. PMID:28320937

  12. Accuracy and differential bias in copy number measurement of CCL3L1 in association studies with three auto-immune disorders.

    PubMed

    Carpenter, Danielle; Walker, Susan; Prescott, Natalie; Schalkwijk, Joost; Armour, John Al

    2011-08-18

    Copy number variation (CNV) contributes to the variation observed between individuals and can influence human disease progression, but the accurate measurement of individual copy numbers is technically challenging. In the work presented here we describe a modification to a previously described paralogue ratio test (PRT) method for genotyping the CCL3L1/CCL4L1 copy variable region, which we use to ascertain CCL3L1/CCL4L1 copy number in 1581 European samples. As the products of CCL3L1 and CCL4L1 potentially play a role in autoimmunity we performed case control association studies with Crohn's disease, rheumatoid arthritis and psoriasis clinical cohorts. We evaluate the PRT methodology used, paying particular attention to accuracy and precision, and highlight the problems of differential bias in copy number measurements. Our PRT methods for measuring copy number were of sufficient precision to detect very slight but systematic differential bias between results from case and control DNA samples in one study. We find no evidence for an association between CCL3L1 copy number and Crohn's disease, rheumatoid arthritis or psoriasis. Differential bias of this small magnitude, but applied systematically across large numbers of samples, would create a serious risk of false positive associations in copy number, if measured using methods of lower precision, or methods relying on single uncorroborated measurements. In this study the small differential bias detected by PRT in one sample set was resolved by a simple pre-treatment by restriction enzyme digestion.

  13. Accuracy and differential bias in copy number measurement of CCL3L1 in association studies with three auto-immune disorders

    PubMed Central

    2011-01-01

    Background: Copy number variation (CNV) contributes to the variation observed between individuals and can influence human disease progression, but the accurate measurement of individual copy numbers is technically challenging. In the work presented here we describe a modification to a previously described paralogue ratio test (PRT) method for genotyping the CCL3L1/CCL4L1 copy variable region, which we use to ascertain CCL3L1/CCL4L1 copy number in 1581 European samples. As the products of CCL3L1 and CCL4L1 potentially play a role in autoimmunity we performed case control association studies with Crohn's disease, rheumatoid arthritis and psoriasis clinical cohorts. Results: We evaluate the PRT methodology used, paying particular attention to accuracy and precision, and highlight the problems of differential bias in copy number measurements. Our PRT methods for measuring copy number were of sufficient precision to detect very slight but systematic differential bias between results from case and control DNA samples in one study. We find no evidence for an association between CCL3L1 copy number and Crohn's disease, rheumatoid arthritis or psoriasis. Conclusions: Differential bias of this small magnitude, but applied systematically across large numbers of samples, would create a serious risk of false positive associations in copy number, if measured using methods of lower precision, or methods relying on single uncorroborated measurements. In this study the small differential bias detected by PRT in one sample set was resolved by a simple pre-treatment by restriction enzyme digestion. PMID:21851606

  14. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

    The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size to nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t test and KC correction for the CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that with appropriate control of type I error rates under small sample sizes, we recommend the use of the GEE approach in CRTs with binary outcomes due to fewer assumptions and robustness to the misspecification of the covariance structure. PMID:25345738

  15. A Procedure to Detect Item Bias Present Simultaneously in Several Items

    DTIC Science & Technology

    1991-04-25

    exhibit a coherent and major biasing influence at the test level. In particular, this can be true even if each individual item displays only a minor...response functions (IRFs) without the use of item parameter estimation algorithms when the sample size is too small for their use. Thissen, Steinberg...convention). A random sample of examinees is drawn from each group, and a test of N items is administered to them. Typically it is suspected that a

  16. Evaluation of respondent-driven sampling.

    PubMed

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.

  17. Integrating sphere based reflectance measurements for small-area semiconductor samples

    NASA Astrophysics Data System (ADS)

    Saylan, S.; Howells, C. T.; Dahlem, M. S.

    2018-05-01

    This article describes a method that enables reflectance spectroscopy of small semiconductor samples using an integrating sphere, without the use of additional optical elements. We employed an inexpensive sample holder to measure the reflectance of different samples through 2-, 3-, and 4.5-mm-diameter apertures and applied a mathematical formulation to remove the bias from the measured spectra caused by illumination of the holder. Using the proposed method, the reflectance of samples fabricated using expensive or rare materials and/or low-throughput processes can be measured. It can also be incorporated to infer the internal quantum efficiency of small-area, research-level solar cells. Moreover, small samples that reflect light at large angles and develop scattering may also be measured reliably, by virtue of an integrating sphere insensitive to directionalities.

  18. Race-Related Cognitive Test Bias in the ACTIVE Study: A MIMIC Model Approach

    PubMed Central

    Aiken Morgan, Adrienne T.; Marsiske, Michael; Dzierzewski, Joseph; Jones, Richard N.; Whitfield, Keith E.; Johnson, Kathy E.; Cresci, Mary K.

    2010-01-01

    The present study investigated evidence for race-related test bias in cognitive measures used in the baseline assessment of the ACTIVE clinical trial. Test bias against African Americans has been documented in both cognitive aging and early lifespan studies. Despite significant mean performance differences, Multiple Indicators Multiple Causes (MIMIC) models suggested most differences were at the construct level. There was little evidence that specific measures put either group at particular advantage or disadvantage and little evidence of cognitive test bias in this sample. Small group differences in education, cognitive status, and health suggest positive selection may have attenuated possible biases. PMID:20845121

  19. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
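
    The following is a minimal, hedged sketch of the regression idea summarized above, written against a synthetic one-dimensional double-well potential rather than the authors' multivariate formulation: for window i with bias w_i(x), ln p_i(x) + beta*w_i(x) = ln rho(x) - ln Z_i, so the unbiased profile ln rho(x) and the per-window offsets ln Z_i can be recovered by a single weighted linear least-squares fit instead of iterating the WHAM equations. All parameter values below are illustrative assumptions.

```python
# Sketch: recover a 1D PMF from umbrella windows by linear least squares
# instead of the iterative WHAM equations (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                                          # 1/kT in reduced units
pmf = lambda x: 5.0 * (x**2 - 1.0) ** 2             # synthetic double-well PMF (kT)
centers = np.linspace(-1.5, 1.5, 13)                # umbrella window centers
k_umb = 50.0                                        # harmonic bias force constant

edges = np.linspace(-2.0, 2.0, 81)
mids = 0.5 * (edges[:-1] + edges[1:])
n_bins, n_win = mids.size, centers.size

rows, y_obs, wts = [], [], []
sampled = np.zeros(n_bins, bool)
grid = np.linspace(-2.0, 2.0, 4001)
for i, c in enumerate(centers):
    # draw samples from the biased density rho(x) * exp(-beta * w_i(x))
    w_grid = 0.5 * k_umb * (grid - c) ** 2
    prob = np.exp(-beta * (pmf(grid) + w_grid))
    prob /= prob.sum()
    samples = rng.choice(grid, size=20000, p=prob)
    counts, _ = np.histogram(samples, bins=edges)
    p_hat = counts / counts.sum() / np.diff(edges)
    for b in np.nonzero(counts)[0]:
        # one linear equation per sampled bin: ln p_i + beta*w_i = ln rho - ln Z_i
        row = np.zeros(n_bins + n_win)
        row[b] = 1.0                                # coefficient of ln rho(bin b)
        row[n_bins + i] = -1.0                      # coefficient of ln Z_i
        rows.append(row)
        y_obs.append(np.log(p_hat[b]) + beta * 0.5 * k_umb * (mids[b] - c) ** 2)
        wts.append(counts[b])                       # weight bins by their counts
        sampled[b] = True

A, y, w = np.asarray(rows), np.asarray(y_obs), np.sqrt(np.asarray(wts, float))
keep = np.ones(n_bins + n_win, bool)
keep[n_bins] = False                                # gauge: pin ln Z_0 = 0
coef, *_ = np.linalg.lstsq(A[:, keep] * w[:, None], y * w, rcond=None)
pmf_est = -coef[:n_bins] / beta                     # -kT ln rho(x), up to a constant

shift = (pmf_est - pmf(mids))[sampled].mean()
err = np.abs(pmf_est - pmf(mids) - shift)[sampled]
print(f"mean |PMF error| over sampled bins: {err.mean():.3f} kT")
```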

  20. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  1. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
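
    A minimal sketch of the unrestricted weighted least squares (WLS) average described above, as we understand it: the point estimate is the usual inverse-variance weighted mean, but the standard error retains the regression's multiplicative dispersion instead of fixing it at 1 (fixed effect) or adding an additive tau-squared (random effects). The effect sizes below are synthetic and purely illustrative.

```python
# Sketch: unrestricted WLS weighted average of study effects y with SEs se.
import numpy as np

def wls_meta(y, se):
    """Unrestricted WLS weighted average; returns estimate, WLS SE, FE SE, dispersion."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    w = 1.0 / se**2
    est = np.sum(w * y) / np.sum(w)                 # same point estimate as fixed effect
    k = y.size
    phi = np.sum(w * (y - est) ** 2) / (k - 1)      # multiplicative dispersion factor
    se_fixed = np.sqrt(1.0 / np.sum(w))
    se_wls = se_fixed * np.sqrt(phi)                # unrestricted: phi is not bounded at 1
    return est, se_wls, se_fixed, phi

# toy example: heterogeneous log odds ratios with a true mean effect of 0.3
rng = np.random.default_rng(1)
se = rng.uniform(0.1, 0.5, size=20)
y = rng.normal(0.3, 0.25, size=20) + rng.normal(0.0, se)
est, se_wls, se_fe, phi = wls_meta(y, se)
print(f"WLS estimate {est:.3f} +/- {1.96 * se_wls:.3f} "
      f"(fixed-effect SE {se_fe:.3f}, dispersion phi = {phi:.2f})")
```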

  2. Current-driven non-linear magnetodynamics in exchange-biased spin valves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seinige, Heidi; Wang, Cheng; Tsoi, Maxim, E-mail: tsoi@physics.utexas.edu

    2015-05-07

    This work investigates the excitation of parametric resonance in exchange-biased spin valves (EBSVs). Using a mechanical point contact, high density dc and microwave currents were injected into the EBSV sample. Observing the reflected microwave power and the small rectification voltage that develops across the contact allows detecting the current-driven magnetodynamics not only in the bulk sample but originating exclusively from the small contact region. In addition to ferromagnetic resonance (FMR), parametric resonance at twice the natural FMR frequency was observed. In contrast to FMR, this non-linear resonance was excited only in the vicinity of the point contact where current densities are high. Power-dependent measurements displayed a typical threshold-like behavior of parametric resonance and a broadening of the instability region with increasing power. Parametric resonance showed a linear shift as a function of applied dc bias which is consistent with the field-like spin-transfer torque induced by current on magnetic moments in EBSV.

  3. Onsite Gaseous Centrifuge Enrichment Plant UF6 Cylinder Destructive Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anheier, Norman C.; Cannon, Bret D.; Qiao, Hong

    2012-07-17

    The IAEA safeguards approach for gaseous centrifuge enrichment plants (GCEPs) includes measurements of gross, partial, and bias defects in a statistical sampling plan. These safeguard methods consist principally of mass and enrichment nondestructive assay (NDA) verification. Destructive assay (DA) samples are collected from a limited number of cylinders for high precision offsite mass spectrometer analysis. DA is typically used to quantify bias defects in the GCEP material balance. Under current safeguards measures, the operator collects a DA sample from a sample tap following homogenization. The sample is collected in a small UF6 sample bottle, then sealed and shipped under IAEA chain of custody to an offsite analytical laboratory. Current practice is expensive and resource intensive. We propose a new and novel approach for performing onsite gaseous UF6 DA analysis that provides rapid and accurate assessment of enrichment bias defects. DA samples are collected using a custom sampling device attached to a conventional sample tap. A few micrograms of gaseous UF6 is chemically adsorbed onto a sampling coupon in a matter of minutes. The collected DA sample is then analyzed onsite using Laser Ablation Absorption Ratio Spectrometry-Destructive Assay (LAARS-DA). DA results are determined in a matter of minutes at sufficient accuracy to support reliable bias defect conclusions, while greatly reducing DA sample volume, analysis time, and cost.

  4. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    PubMed

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
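
    A small simulation makes the distortion mechanism concrete: the usual standard error formula for a standardized mean difference contains the estimate itself, so the two are correlated even when every study is published. The sketch below uses synthetic two-arm studies (all parameter values are illustrative assumptions) and compares that correlation with one based on a purely sample-size-based precision.

```python
# Sketch: SE(d) depends on d itself, inducing artificial funnel-plot asymmetry.
import numpy as np

rng = np.random.default_rng(2)
true_d = 0.8
d_vals, se_vals, n_vals = [], [], []
for _ in range(2000):
    n = rng.integers(5, 30)                         # small primary studies, n per group
    x_ctrl = rng.normal(0.0, 1.0, n)
    x_trt = rng.normal(true_d, 1.0, n)
    sp = np.sqrt((x_ctrl.var(ddof=1) + x_trt.var(ddof=1)) / 2.0)
    d = (x_trt.mean() - x_ctrl.mean()) / sp
    se_d = np.sqrt(2.0 / n + d**2 / (4.0 * n))      # usual large-sample SE of the SMD
    d_vals.append(d); se_vals.append(se_d); n_vals.append(n)

d_vals, se_vals, n_vals = map(np.asarray, (d_vals, se_vals, n_vals))
print("corr(d, SE(d))       =", round(np.corrcoef(d_vals, se_vals)[0, 1], 3))
print("corr(d, 1/sqrt(2n))  =", round(np.corrcoef(d_vals, 1 / np.sqrt(2 * n_vals))[0, 1], 3))
```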

  5. Standardized mean differences cause funnel plot distortion in publication bias assessments

    PubMed Central

    Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris AH; Chamuleau, Steven AJ; MacLeod, Malcolm R

    2017-01-01

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results. PMID:28884685

  6. Eight Year Climatologies from Observational (AIRS) and Model (MERRA) Data

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas; Savtchenko, Andrey; Won, Young-In; Theobalk, Mike; Vollmer, Bruce; Manning, Evan; Smith, Peter; Ostrenga, Dana; Leptoukh, Greg

    2010-01-01

    We examine climatologies derived from eight years of temperature, water vapor, cloud, and trace gas observations made by the Atmospheric Infrared Sounder (AIRS) instrument flying on the Aqua satellite and compare them to similar climatologies constructed with data from a global assimilation model, the Modern Era Retrospective-Analysis for Research and Applications (MERRA). We use the AIRS climatologies to examine anomalies and trends in the AIRS data record. Since sampling can be an issue for infrared satellites in low earth orbit, we also use the MERRA data to examine the AIRS sampling biases. By sampling the MERRA data at the AIRS space-time locations both with and without the AIRS quality control we estimate the sampling bias of the AIRS climatology and the atmospheric conditions where AIRS has a lower sampling rate. While the AIRS temperature and water vapor sampling biases are small at low latitudes, they can be more than a few degrees in temperature or 10 percent in water vapor at higher latitudes. The largest sampling biases are over desert. The AIRS and MERRA data are available from the Goddard Earth Sciences Data and Information Services Center (GES DISC). The AIRS climatologies we used are available for analysis with the GIOVANNI data exploration tool. (see, http://disc.gsfc.nasa.gov).

  7. Microcoulometric measurement of water in minerals

    USGS Publications Warehouse

    Cremer, M.; Elsheimer, H.N.; Escher, E.E.

    1972-01-01

    A DuPont Moisture Analyzer is used in a microcoulometric method for determining water in minerals. Certain modifications, which include the heating of the sample outside the instrument, protect the system from acid gases and insure the conversion of all hydrogen to water vapor. Moisture analyzer data are compared to concurrent data obtained by a modified Penfield method. In general, there is a positive bias of from 0.1 to 0.2% in the moisture analyzer results and a similarity of bias in minerals of the same kind. Inhomogeneity, sample size, and moisture pick-up are invoked to explain deviations. The method is particularly applicable to small samples. © 1972.

  8. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS AICc Akaike’s Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes , capture probabilities and low capture biases. For NGS-CR, sample

  9. Evaluation of bias and logistics in a survey of adults at increased risk for oral health decrements.

    PubMed

    Gilbert, G H; Duncan, R P; Kulley, A M; Coward, R T; Heft, M W

    1997-01-01

    Designing research to include sufficient respondents in groups at highest risk for oral health decrements can present unique challenges. Our purpose was to evaluate bias and logistics in this survey of adults at increased risk for oral health decrements. We used a telephone survey methodology that employed both listed numbers and random digit dialing to identify dentate persons 45 years old or older and to oversample blacks, poor persons, and residents of nonmetropolitan counties. At a second stage, a subsample of the respondents to the initial telephone screening was selected for further study, which consisted of a baseline in-person interview and a clinical examination. We assessed bias due to: (1) limiting the sample to households with telephones, (2) using predominantly listed numbers instead of random digit dialing, and (3) nonresponse at two stages of data collection. While this approach apparently created some biases in the sample, they were small in magnitude. Specifically, limiting the sample to households with telephones biased the sample overall toward more females, larger households, and fewer functionally impaired persons. Using predominantly listed numbers led to a modest bias toward selection of persons more likely to be younger, healthier, female, have had a recent dental visit, and reside in smaller households. Blacks who were selected randomly at a second stage were more likely to participate in baseline data gathering than their white counterparts. Comparisons of the data obtained in this survey with those from recent national surveys suggest that this methodology for sampling high-risk groups did not substantively bias the sample with respect to two important dental parameters, prevalence of edentulousness and dental care use, nor were conclusions about multivariate associations with dental care recency substantively affected. This method of sampling persons at high risk for oral health decrements resulted in only modest bias with respect to the population of interest.

  10. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    PubMed Central

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate from 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
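
    The sensitivity to sample size can be illustrated with a toy power-law allometry (this is not the paper's LiDAR analysis; the coefficients and noise level below are assumptions): log-log fits on small samples give back-transformed predictions that are both more variable and, on average, biased upward relative to fits on large samples.

```python
# Toy illustration: spread and bias of a fitted power-law allometry H = a*CR^b
# as a function of the number of trees used to fit it.
import numpy as np

rng = np.random.default_rng(3)
a_true, b_true, sigma = 4.0, 0.6, 0.35              # assumed allometry + lognormal scatter

def fit_once(n):
    cr = rng.uniform(0.5, 8.0, n)                   # crown radii (m)
    h = a_true * cr**b_true * rng.lognormal(0.0, sigma, n)
    b_hat, loga_hat = np.polyfit(np.log(cr), np.log(h), 1)
    # naive back-transform (no retransformation correction) at CR = 5 m
    return np.exp(loga_hat) * 5.0**b_hat

for n in (15, 50, 10_000):
    preds = np.array([fit_once(n) for _ in range(2000)])
    truth = a_true * 5.0**b_true                    # median height at CR = 5 m
    print(f"n={n:>6}: mean predicted H at CR=5 m = {preds.mean():5.2f} "
          f"(truth {truth:.2f}), SD = {preds.std():.2f}")
```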

  11. Small-Sample DIF Estimation Using SIBTEST, Cochran's Z, and Log-Linear Smoothing

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Li, Hongli

    2013-01-01

    Minimum sample sizes of about 200 to 250 per group are often recommended for differential item functioning (DIF) analyses. However, there are times when sample sizes for one or both groups of interest are smaller than 200 due to practical constraints. This study attempts to examine the performance of Simultaneous Item Bias Test (SIBTEST),…

  12. Evaluation of Respondent-Driven Sampling

    PubMed Central

    McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience-sampling method, and caution is required when interpreting findings based on the sampling method. PMID:22157309

  13. Sampling bias in climate-conflict research

    NASA Astrophysics Data System (ADS)

    Adams, Courtland; Ide, Tobias; Barnett, Jon; Detges, Adrien

    2018-03-01

    Critics have argued that the evidence of an association between climate change and conflict is flawed because the research relies on a dependent variable sampling strategy [1-4]. Similarly, it has been hypothesized that convenience of access biases the sample of cases studied (the 'streetlight effect' [5]). This also gives rise to claims that the climate-conflict literature stigmatizes some places as being more 'naturally' violent [6-8]. Yet there has been no proof of such sampling patterns. Here we test whether climate-conflict research is based on such a biased sample through a systematic review of the literature. We demonstrate that research on climate change and violent conflict suffers from a streetlight effect. Further, studies which focus on a small number of cases in particular are strongly informed by cases where there has been conflict, do not sample on the independent variables (climate impact or risk), and hence tend to find some association between these two variables. These biases mean that research on climate change and conflict primarily focuses on a few accessible regions, overstates the links between both phenomena and cannot explain peaceful outcomes from climate change. This could result in maladaptive responses in those places that are stigmatized as being inherently more prone to climate-induced violence.

  14. Spin relaxation measurements of electrostatic bias in intermolecular exploration

    NASA Astrophysics Data System (ADS)

    Teng, Ching-Ling; Bryant, Robert G.

    2006-04-01

    We utilize the paramagnetic contribution to proton spin-lattice relaxation rate constants induced by freely diffusing charged paramagnetic centers to investigate the effect of charge on the intermolecular exploration of a protein by the small molecule. The proton NMR spectrum provided 255 resolved resonances that report how the explorer molecule local concentration varies with position on the surface. The measurements integrate over local dielectric constant variations, and, in principle, provide an experimental characterization of the surface free energy sampling biases introduced by the charge distribution on the protein. The experimental results for ribonuclease A obtained using positive, neutral, and negatively charged small nitroxide radicals are qualitatively similar to those expected from electrostatic calculations. However, while systematic electrostatic trends are apparent, the three different combinations of the data sets do not yield internally consistent values for the electrostatic contribution to the intermolecular free energy. We attribute this failure to the weakness of the electrostatic sampling bias for charged nitroxides in water and local variations in effective translational diffusion constant at the water-protein interface, which enters the nuclear spin relaxation equations for the nitroxide-proton dipolar coupling.

  15. Stability and bias of classification rates in biological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.; Hines, J.E.

    1990-01-01

    We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases
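
    A hedged sketch of the same phenomenon using scikit-learn's linear discriminant analysis on simulated overlapping groups (the group separation, dimensionality, and sample sizes below are illustrative assumptions): the apparent (resubstitution) classification rate is optimistic relative to the rate on new samples, and the gap grows as group sizes shrink.

```python
# Sketch: optimistic bias of resubstitution classification rates for LDA
# with small, overlapping groups.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
p, delta = 8, 1.0                                   # 8 variables, modest group separation
mu1 = np.zeros(p)
mu2 = np.full(p, delta / np.sqrt(p))

def one_run(n_per_group):
    X = np.vstack([rng.normal(mu1, 1.0, (n_per_group, p)),
                   rng.normal(mu2, 1.0, (n_per_group, p))])
    y = np.repeat([0, 1], n_per_group)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    apparent = lda.score(X, y)                      # resubstitution rate
    Xt = np.vstack([rng.normal(mu1, 1.0, (5000, p)),
                    rng.normal(mu2, 1.0, (5000, p))])
    yt = np.repeat([0, 1], 5000)
    return apparent, lda.score(Xt, yt)              # rate on new samples

for n in (10, 20, 80):
    res = np.array([one_run(n) for _ in range(200)])
    print(f"n/group={n:>3}: apparent rate {res[:, 0].mean():.3f}  vs  "
          f"true rate {res[:, 1].mean():.3f}")
```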

  16. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    PubMed

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
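
    A minimal simulation of the arcsine-transformation bias described above, using a beta-binomial model for overdispersed group-level probabilities (the true probability, cluster size, and number of groups are illustrative assumptions): the back-transformed estimate drifts away from the true probability roughly in proportion to the intracluster correlation rho.

```python
# Sketch: bias of the back-transformed arcsine estimate under overdispersion.
import numpy as np

rng = np.random.default_rng(5)
p_true, n, k_groups, reps = 0.2, 40, 25, 4000

def arcsine_backtransformed(rho):
    if rho == 0:
        pi = np.full((reps, k_groups), p_true)
    else:
        a = p_true * (1 - rho) / rho                # Beta(a, b) with mean p_true, ICC rho
        b = (1 - p_true) * (1 - rho) / rho
        pi = rng.beta(a, b, size=(reps, k_groups))
    x = rng.binomial(n, pi)
    t = np.arcsin(np.sqrt(x / n)).mean(axis=1)      # average on the arcsine scale
    return np.sin(t) ** 2                           # naive back-transform

for rho in (0.0, 0.05, 0.1, 0.2):
    est = arcsine_backtransformed(rho)
    print(f"rho={rho:.2f}: mean back-transformed estimate {est.mean():.4f} "
          f"(true value {p_true})")
```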

  17. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
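
    For readers unfamiliar with the procedure under discussion, the following is a hedged sketch of a bias-corrected (BC) bootstrap interval for a simple indirect effect a*b, computed on synthetic data; the sample size and path coefficients are illustrative assumptions, not the study's data.

```python
# Sketch: bias-corrected (BC) bootstrap interval for the indirect effect a*b
# in a simple X -> M -> Y mediation model, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, a, b, c_prime = 50, 0.5, 0.1, 0.0                # small sample, medium a, small b
X = rng.normal(size=n)
M = a * X + rng.normal(size=n)
Y = b * M + c_prime * X + rng.normal(size=n)

def indirect(X, M, Y):
    a_hat = np.polyfit(X, M, 1)[0]                  # slope of M on X
    Z = np.column_stack([np.ones_like(X), M, X])    # Y on M controlling for X
    beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
    return a_hat * beta[1]

ab_hat = indirect(X, M, Y)
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(X[idx], M[idx], Y[idx])

# bias-corrected percentile interval
z0 = stats.norm.ppf(np.mean(boot < ab_hat))
lo, hi = stats.norm.cdf([2 * z0 - 1.96, 2 * z0 + 1.96])
ci = np.quantile(boot, [lo, hi])
print(f"indirect effect {ab_hat:.3f}, BC 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```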

  18. Constructing a multidimensional free energy surface like a spider weaving a web.

    PubMed

    Chen, Changjun

    2017-10-15

    A complete free energy surface in the collective variable space provides important information about the reaction mechanisms of a molecule, but sufficient sampling of the collective variable space is not easy: the space expands quickly with the number of collective variables. To solve the problem, many methods utilize artificial biasing potentials to flatten out the original free energy surface of the molecule in the simulation. Their performance is sensitive to the definition of the biasing potential. A fast-growing biasing potential accelerates the sampling but decreases the accuracy of the free energy result; a slow-growing biasing potential gives an optimized result but needs more simulation time. In this article, we propose an alternative method. It adds the biasing potential to a representative point of the molecule in the collective variable space to improve the conformational sampling, and the free energy surface is calculated from the free energy gradient in the constrained simulation rather than being given by the negative of the biasing potential as in previous methods. The presented method therefore does not require the biasing potential to remove all the barriers and basins on the free energy surface exactly. Practical applications show that the method in this work is able to produce accurate free energy surfaces for different molecules in a short time period. The free energy errors are small in the cases of various biasing potentials. © 2017 Wiley Periodicals, Inc.

  19. Intrinsic scatter of caustic masses and hydrostatic bias: An observational study

    NASA Astrophysics Data System (ADS)

    Andreon, S.; Trinchieri, G.; Moretti, A.; Wang, J.

    2017-10-01

    All estimates of cluster mass have some intrinsic scatter and perhaps some bias with respect to true mass, even in the absence of measurement errors, caused for example by cluster triaxiality and large-scale structure. Knowledge of the bias and scatter values is fundamental for both cluster cosmology and astrophysics. In this paper we show that the intrinsic scatter of a mass proxy can be constrained by measurements of the gas fraction, because masses with higher values of intrinsic scatter with true mass produce more scattered gas fractions. Moreover, the relative bias of two mass estimates can be constrained by comparing the mean gas fraction at the same (nominal) cluster mass. Our observational study addresses the scatter between caustic (i.e., dynamically estimated) and true masses, and the relative bias of caustic and hydrostatic masses. For these purposes, we used the X-ray Unbiased Cluster Sample, a cluster sample selected independently from the intracluster medium content with reliable masses: 34 galaxy clusters in the nearby (0.050 < z < 0.135) Universe, mostly with 14 < log M500/M⊙ ≲ 14.5, and with caustic masses. We found a 35% scatter between caustic and true masses. Furthermore, we found that the relative bias between caustic and hydrostatic masses is small, 0.06 ± 0.05 dex, improving upon past measurements. The small scatter found confirms our previous measurements of a highly variable amount of feedback from cluster to cluster, which is the cause of the observed large variety of core-excised X-ray luminosities and gas masses.

  20. Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma

    NASA Technical Reports Server (NTRS)

    Fisher, Brad L.

    2004-01-01

    The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.

  1. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. External quality-assurance results for the National Atmospheric Deposition Program/National Trends Network, 1995-96

    USGS Publications Warehouse

    Gordon, John D.

    1999-01-01

    The U.S. Geological Survey operated four external quality-assurance programs for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) in 1995 and 1996: the intersite-comparison program, the blind-audit program, the interlaboratory- comparison program, and the collocated-sampler program. The intersite-comparison program assessed the precision and bias of pH and specific-conductance determinations made by NADP/NTN site operators. The analytical bias introduced during routine handling, processing, and shipping of wet-deposition samples and precision of analyte values was estimated using a blind-audit program. An interlaboratory-comparison program was used to evaluate differences between analytical results and to estimate the analytical precision of five North American laboratories that routinely analyzed wet deposition. A collocated-sampler program estimated the precision of the overall precipitation collection and analysis system from initial sample collection through final storage of the data. Results of two intersite-comparison studies completed in 1995 indicated 94.6 and 94.4 percent of the onsite pH determinations met the NADP/NTN accuracy goals, whereas 97.2 and 98.3 percent of the specific-conductance determinations were within the established limits. The percentages of onsite determinations that met the accuracy goals in 1996 were slightly less for both pH and specific-conductance than in 1995. In 1996, 93.2 and 87.5 percent of onsite pH determinations met the accuracy goals, whereas the percentage of onsite specific-conductance measurements that met the goals was 93.9 and 94.9 percent.The blind audit program utilizes a paired sample design to evaluate the effects of routine sample handling, processing and shipping on the chemistry of weekly precipitation samples. The portion of the blind audit sample subject to all of the normal onsite handling and processing steps of a regular weekly precipitation sample is referred to as the bucket portion, whereas the portion receiving only minimal handling is referred to as the bottle portion. Throughout the report, the term positive bias in regard to blind-audit results indicates that the bucket portion had a higher concentration than the bottle portion. The paired t-test of 1995 blind-audit data indicated that routine sample handling, processing, and shipping introduced a very small positive bias (a=0.05) for hydrogen ion and specific conductance and a slight negative bias (a =0.05) for ammonium and sodium. In 1995, the median paired differences between the bucket and bottle portions ranged from -0.02 milligram per liter for both ammonium and nitrate to +0.002 milligram per liter for calcium. Although the paired t-test indicated a very small positive bias for hydrogen ion, the median paired difference between the bucket and bottle portions was 0.00 microequivalents per liter, whereas for specific conductance, the median paired difference between the bucket and bottle portions was 0.200 microsiemens per centimeter in 1995. The paired t-test of blind-audit results in 1996 indicated statistically significant bias for 6 of the 10 analytes. Only chloride, nitrate, hydrogen ion, and specific conductance were not biased in 1996. However, the magnitude of the bias in 1996 was very small and only of limited importance from the viewpoint of an analytical chemist or data user. 
The median paired differences between the bucket and bottle portions ranged from -0.02 milligram per liter for both ammonium and chloride to +0.006 milligram per liter for calcium. For hydrogen ion, the median paired difference between the bucket and bottle portions was -0.357 microequivalent per liter; for specific conductance, the median paired difference between the bucket and bottle portions was 0.00 microsiemens per centimeter in 1996. Surface-chemistry effects due to different amounts of precipitation contacting the sample collection and shipping container surfac

  3. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
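
    A stripped-down Monte Carlo version of the same question, using a simple point process rather than a GIS framework (the densities, core areas, and clumping rule are illustrative assumptions, and plot-edge effects are ignored): it compares bias and precision of density estimates from circular cores for random versus clumped item distributions.

```python
# Sketch: bias and precision of benthic-density estimates from circular cores
# dropped at random in a 1 m x 1 m plot, for random versus clumped items.
import numpy as np

rng = np.random.default_rng(7)

def simulate(density, n_cores, core_area_cm2, clumped=False, reps=500):
    r = np.sqrt(core_area_cm2 * 1e-4 / np.pi)       # core radius in metres
    errs = []
    for _ in range(reps):
        n_items = rng.poisson(density)              # items in the 1 m^2 plot
        if clumped:
            clump_centres = rng.uniform(0, 1, (max(n_items // 20, 1), 2))
            pts = (clump_centres[rng.integers(0, len(clump_centres), n_items)]
                   + rng.normal(0, 0.02, (n_items, 2)))
        else:
            pts = rng.uniform(0, 1, (n_items, 2))
        cores = rng.uniform(0, 1, (n_cores, 2))
        d2 = ((pts[:, None, :] - cores[None, :, :]) ** 2).sum(-1)
        counts = (d2 <= r**2).sum(axis=0)           # items captured per core
        est = counts.mean() / (np.pi * r**2)        # estimated items per m^2
        errs.append(est - density)                  # edge effects ignored (small bias)
    errs = np.array(errs)
    return errs.mean(), errs.std()

for clumped in (False, True):
    bias, sd = simulate(density=2000, n_cores=10, core_area_cm2=50, clumped=clumped)
    print(f"clumped={clumped}: bias {bias:7.1f} items/m^2, SD {sd:7.1f}")
```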

  4. Enhanced conformational sampling using replica exchange with concurrent solute scaling and hamiltonian biasing realized in one dimension.

    PubMed

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.

  5. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
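
    A hedged sketch of the two estimators discussed above on synthetic data (the true abundance and capture probabilities are assumptions, not the Idaho data): when capture efficiency drops on the second pass, the two-pass removal estimator underestimates abundance, while the Chapman-modified Lincoln-Petersen mark-recapture estimator stays close to the true value.

```python
# Sketch: two-pass removal versus Lincoln-Petersen (Chapman) mark-recapture
# when capture efficiency falls on the second pass.
import numpy as np

rng = np.random.default_rng(8)
N, p1, p2 = 200, 0.5, 0.35                          # true abundance; efficiency falls on pass 2

def removal_two_pass(c1, c2):
    # Seber two-pass removal estimator; undefined if the second catch is not smaller
    return c1**2 / (c1 - c2) if c1 > c2 else np.nan

def lincoln_petersen_chapman(marked, caught2, recaptured):
    return (marked + 1) * (caught2 + 1) / (recaptured + 1) - 1

rem, mr = [], []
for _ in range(2000):
    caught1 = rng.binomial(N, p1)                   # pass 1 (removal) / marking sample (M-R)
    caught2_rem = rng.binomial(N - caught1, p2)     # removal pass 2 on the remaining fish
    rem.append(removal_two_pass(caught1, caught2_rem))
    caught2_mr = rng.binomial(N, p2)                # M-R: second sample from the full population
    recaps = rng.binomial(caught2_mr, caught1 / N)  # marked fraction among the second catch
    mr.append(lincoln_petersen_chapman(caught1, caught2_mr, recaps))

print("true N = 200")
print("two-pass removal, mean estimate:      ", round(np.nanmean(rem), 1))
print("mark-recapture (Chapman), mean estimate:", round(np.mean(mr), 1))
```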

  6. Estimating relative risks in multicenter studies with a small number of centers - which methods to use? A simulation study.

    PubMed

    Pedroza, Claudia; Truong, Van Thi Thanh

    2017-11-02

    Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
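
    A minimal statsmodels sketch of the modified Poisson GEE approach for a relative risk with center clustering, on simulated data (the number of centers, center effects, and true risk ratio are illustrative assumptions). The small-sample corrections to the robust standard errors recommended in the abstract are not applied here; statsmodels reports the ordinary sandwich covariance.

```python
# Sketch: modified Poisson GEE (log link, exchangeable working correlation,
# robust SEs) for a relative risk with few centers, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n_centers, n_per = 10, 30                           # few centers, modest center size
center = np.repeat(np.arange(n_centers), n_per)
treat = rng.integers(0, 2, n_centers * n_per)
center_eff = rng.normal(0, 0.3, n_centers)[center]  # center-level log-risk variation
p = np.exp(np.log(0.2) + np.log(1.5) * treat + center_eff)  # true RR = 1.5
y = rng.binomial(1, np.clip(p, 0, 1))

df = pd.DataFrame({"y": y, "treat": treat, "center": center})
model = sm.GEE.from_formula("y ~ treat", groups="center", data=df,
                            family=sm.families.Poisson(),
                            cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
rr = np.exp(res.params["treat"])
ci = np.exp(res.conf_int().loc["treat"])
print(f"estimated RR {rr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```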

  7. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    ERIC Educational Resources Information Center

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends a Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
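
    The Monte Carlo design described above is easy to reproduce in outline with SciPy (the number of replicates and group sizes below are arbitrary choices, and mean centering is used to match the classical Levene test): generate normal samples with equal variances and count how often the test rejects at alpha = 0.05.

```python
# Sketch: empirical Type I error of Levene's test for small normal samples
# with equal variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
reps, alpha = 10000, 0.05
for n in (5, 10, 30, 100):
    rejections = 0
    for _ in range(reps):
        g1 = rng.normal(0, 1, n)
        g2 = rng.normal(0, 1, n)                    # same variance: H0 is true
        if stats.levene(g1, g2, center='mean').pvalue < alpha:
            rejections += 1
    print(f"n per group = {n:>3}: empirical Type I error = {rejections / reps:.4f}")
```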

  8. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
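
    A small sketch of the underlying overdispersion phenomenon (not of the Tracy-Widom calibration itself, which is not computed here): for a true covariance matrix with all eigenvalues equal, the largest sample eigenvalue is biased upward and the smallest downward, and the effect grows as the sample size shrinks. The dimensions and sample sizes below are illustrative assumptions.

```python
# Sketch: sample covariance eigenvalues are overdispersed relative to the
# true spectrum (largest biased up, smallest biased down).
import numpy as np

rng = np.random.default_rng(11)
p = 8                                               # number of traits
true_cov = np.eye(p)                                # all true eigenvalues equal 1

for n in (20, 100, 1000):
    eig_max, eig_min = [], []
    for _ in range(2000):
        X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
        ev = np.linalg.eigvalsh(np.cov(X, rowvar=False))
        eig_max.append(ev[-1]); eig_min.append(ev[0])
    print(f"n={n:>5}: mean largest eigenvalue {np.mean(eig_max):.2f}, "
          f"mean smallest {np.mean(eig_min):.2f} (all true values are 1.0)")
```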

  9. A systematic review and meta-analysis of cognitive bias to food stimuli in people with disordered eating behaviour.

    PubMed

    Brooks, Samantha; Prince, Alexis; Stahl, Daniel; Campbell, Iain C; Treasure, Janet

    2011-02-01

    Maladaptive cognitions about food, weight and shape bias attention, memory and judgment and may be linked to disordered eating behaviour. This paper reviews information processing of food stimuli (words, pictures) in people with eating disorders (ED). PubMed, Ovid, ScienceDirect, PsychInfo, Web of Science, Cochrane Library and Google Scholar were searched to December 2009. 63 studies measured attention, memory and judgment bias towards food stimuli in women with ED. Stroop tasks had sufficient sample size for a meta-analysis, and effects ranged from small to medium. Other studies of attention bias had variable effects (e.g. the Dot-Probe task, distracter tasks and Startle Eyeblink Modulation). A meta-analysis of memory bias studies in ED and RE yielded a nonsignificant effect. Effect sizes for judgment bias ranged from negligible to large. People with ED have greater attentional bias to food stimuli than healthy controls (HC). Evidence for a memory and judgment bias in ED is limited. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Surveillance Bias in Child Maltreatment: A Tempest in a Teapot.

    PubMed

    Drake, Brett; Jonson-Reid, Melissa; Kim, Hyunil

    2017-08-28

    Background: Children are believed to be more likely to be reported for maltreatment while they are working with mental health or social service professionals. This "surveillance bias" has been claimed to inflate reporting by fifty percent or more, and has been used to explain why interventions such as home visiting fail to reduce official maltreatment reporting rates. Methods: We use national child abuse reporting data (n = 825,763), supplemented by more detailed regional data from a multi-agency administrative data study (n = 7185). We determine the percentage of all re-reports made uniquely by mental health and social service providers within and across generations, the report sources which could be subject to surveillance bias. Results: At three years after the initial Child protective services (CPS) report, the total percentage of national reports uniquely made by mental health or social service providers is less than 10%, making it impossible that surveillance bias could massively inflate CPS reporting in this sample. Analysis of national data finds evidence of a very small (+4.54%) initial surveillance bias "bump" among served cases which decays to +1.84% within three years. Our analysis of regional data showed similar or weaker effects. Conclusions: Surveillance bias effects appear to exist, but are very small.

  11. Intellectual Abilities in a Large Sample of Children with Velo-Cardio-Facial Syndrome: An Update

    ERIC Educational Resources Information Center

    De Smedt, Bert; Devriendt, K.; Fryns, J. -P.; Vogels, A.; Gewillig, M.; Swillen, A.

    2007-01-01

    Background: Learning disabilities are one of the most consistently reported features in Velo-Cardio-Facial Syndrome (VCFS). Earlier reports on IQ in children with VCFS were, however, limited by small sample sizes and ascertainment biases. The aim of the present study was therefore to replicate these earlier findings and to investigate intellectual…

  12. Enhanced exchange bias in MnN/CoFe bilayers after high-temperature annealing

    NASA Astrophysics Data System (ADS)

    Dunz, M.; Schmalhorst, J.; Meinert, M.

    2018-05-01

    We report an exchange bias of more than 2700 Oe at room temperature in MnN/CoFe bilayers after high-temperature annealing. We studied the dependence of exchange bias on the annealing temperature for different MnN thicknesses in detail and found that samples with tMnN > 32nm show an increase of exchange bias for annealing temperatures higher than TA = 400 °C. Maximum exchange bias values exceeding 2000 Oe with reasonably small coercive fields around 600 Oe are achieved for tMnN = 42, 48 nm. The median blocking temperature of those systems is determined to be 180 °C after initial annealing at TA = 525 °C. X-ray diffraction measurements and Auger depth profiling show that the large increase of exchange bias after high-temperature annealing is accompanied by strong nitrogen diffusion into the Ta buffer layer of the stacks.

  13. Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential

    NASA Astrophysics Data System (ADS)

    Babin, Volodymyr; Karpusenka, Vadzim; Moradi, Mahmoud; Roland, Christopher; Sagui, Celeste

    We discuss an adaptively biased molecular dynamics (ABMD) method for the computation of a free energy surface for a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and replica exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering, and free energy landscapes for polymethionine and polyproline peptides, and for a short β-turn peptide. ABMD has been implemented into the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community.

  14. A hard-to-read font reduces the framing effect in a large sample.

    PubMed

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  15. Can mindfulness-based interventions influence cognitive functioning in older adults? A review and considerations for future research.

    PubMed

    Berk, Lotte; van Boxtel, Martin; van Os, Jim

    2017-11-01

    An increased need exists to examine factors that protect against age-related cognitive decline. There is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs) such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsychINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBI on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function and processing speed. However, most reports had a high risk of bias and sample sizes were small. The only study with low risk of bias, large sample size and active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to a limited number of studies, small sample sizes, and a high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift to investigate MBI as a tool to alleviate suffering in older adults, and to prevent cognitive problems in later life already in younger target populations.

  16. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group‐level studies or in meta‐analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log‐odds and arcsine transformations of the estimated probability p^, both for single‐group studies and in combining results from several groups or studies in meta‐analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta‐analysis and result in abysmal coverage of the combined effect for large K. We also propose bias‐correction for the arcsine transformation. Our simulations demonstrate that this bias‐correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta‐analyses of prevalence. PMID:27192062
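
    The transformation bias described here is easy to reproduce in a short simulation. The sketch below (illustrative, not the authors' code) draws overdispersed binomial counts from a beta-binomial model, in which the intracluster correlation is rho = 1/(a + b + 1), and reports the bias of the arcsine-square-root transform of the estimated probability; the bias grows roughly linearly in rho, as the abstract describes.

        import numpy as np

        # Bias of arcsin(sqrt(p-hat)) under overdispersion (beta-binomial counts).
        rng = np.random.default_rng(1)
        p, n, reps = 0.2, 50, 200_000

        for rho in (0.0, 0.05, 0.1, 0.2):
            if rho == 0.0:
                x = rng.binomial(n, p, size=reps)          # ordinary binomial
            else:
                ab = 1.0 / rho - 1.0                       # a + b for Beta(a, b)
                x = rng.binomial(n, rng.beta(p * ab, (1 - p) * ab, size=reps))
            p_hat = x / n
            bias = np.mean(np.arcsin(np.sqrt(p_hat))) - np.arcsin(np.sqrt(p))
            print(f"rho = {rho:4.2f}   bias of arcsine-transformed estimate: {bias:+.4f}")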

  17. Visual search attentional bias modification reduced social phobia in adolescents.

    PubMed

    De Voogd, E L; Wiers, R W; Prins, P J M; Salemink, E

    2014-06-01

    An attentional bias for negative information plays an important role in the development and maintenance of (social) anxiety and depression, which are highly prevalent in adolescence. Attention Bias Modification (ABM) might be an interesting tool in the prevention of emotional disorders. The current study investigated whether visual search ABM might affect attentional bias and emotional functioning in adolescents. A visual search task was used as a training paradigm; participants (n = 16 adolescents, aged 13-16) had to repeatedly identify the only smiling face in a 4 × 4 matrix of negative emotional faces, while participants in the control condition (n = 16) were randomly allocated to one of three placebo training versions. An assessment version of the task was developed to directly test whether attentional bias changed due to the training. Self-reported anxiety and depressive symptoms and self-esteem were measured pre- and post-training. After two sessions of training, the ABM group showed a significant decrease in attentional bias for negative information and self-reported social phobia, while the control group did not. There were no effects of training on depressive mood or self-esteem. No correlation between attentional bias and social phobia was found, which raises questions about the validity of the attentional bias assessment task. Also, the small sample size precludes strong conclusions. Visual search ABM might be beneficial in changing attentional bias and social phobia in adolescents, but further research with larger sample sizes and longer follow-up is needed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification.

    PubMed

    Jiang, Wenyu; Simon, Richard

    2007-12-20

    This paper first provides a critical review of some existing methods for estimating the prediction error in classifying microarray data where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We introduce a repeated leave-one-out bootstrap (RLOOB) method that predicts for each specimen in the sample using bootstrap learning sets of size ln. We then propose an adjusted bootstrap (ABS) method that fits a learning curve to the RLOOB estimates calculated with different bootstrap learning set sizes. The ABS method is robust across the situations we investigate and provides a slightly conservative estimate for the prediction error. Even with small samples, it does not suffer from the large upward bias of the leave-one-out bootstrap and the 0.632+ bootstrap, nor from the large variability of the leave-one-out cross-validation in microarray applications. Copyright (c) 2007 John Wiley & Sons, Ltd.
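
    For orientation, the sketch below illustrates the leave-one-out bootstrap idea that RLOOB builds on: each specimen is predicted only by bootstrap learning sets that exclude it, and the learning-set size (a fraction of n) is the knob that RLOOB varies. The simulated data and the nearest-centroid classifier are assumptions for illustration, not the authors' setup.

        import numpy as np

        # Leave-one-out bootstrap (LOOB) estimate of prediction error.
        rng = np.random.default_rng(2)
        n, p = 30, 500                                    # few specimens, many genes
        y = np.repeat([0, 1], n // 2)
        X = rng.normal(size=(n, p)) + 0.4 * y[:, None]    # weak class signal

        def nearest_centroid_predict(Xtr, ytr, Xte):
            c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
            return (((Xte - c1) ** 2).sum(1) < ((Xte - c0) ** 2).sum(1)).astype(int)

        def loob_error(X, y, B=200, learn_frac=1.0):
            n = len(y)
            m = int(round(learn_frac * n))                # bootstrap learning-set size
            err_sum, err_cnt = np.zeros(n), np.zeros(n)
            for _ in range(B):
                idx = rng.integers(0, n, size=m)          # bootstrap learning set
                out = np.setdiff1d(np.arange(n), idx)     # specimens left out of this set
                if out.size == 0 or len(np.unique(y[idx])) < 2:
                    continue
                pred = nearest_centroid_predict(X[idx], y[idx], X[out])
                err_sum[out] += (pred != y[out])
                err_cnt[out] += 1
            keep = err_cnt > 0
            return float(np.mean(err_sum[keep] / err_cnt[keep]))

        print("LOOB prediction error:", round(loob_error(X, y), 3))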

  19. A prototype splitter apparatus for dividing large catches of small fish

    USGS Publications Warehouse

    Stapanian, Martin A.; Edwards, William H.

    2012-01-01

    Due to financial and time constraints, it is often necessary in fisheries studies to divide large samples of fish and estimate total catch from the subsample. The subsampling procedure may involve potential human biases or may be difficult to perform in rough conditions. We present a prototype gravity-fed splitter apparatus for dividing large samples of small fish (30–100 mm TL). The apparatus features a tapered hopper with a sliding and removable shutter. The apparatus provides a comparatively stable platform for objectively obtaining subsamples, and it can be modified to accommodate different sizes of fish and different sample volumes. The apparatus is easy to build, inexpensive, and convenient to use in the field. To illustrate the performance of the apparatus, we divided three samples (total N = 2,000 fish) composed of four fish species. Our results indicated no significant bias in estimating either the number or proportion of each species from the subsample. Use of this apparatus or a similar apparatus can help to standardize subsampling procedures in large surveys of fish. The apparatus could be used for other applications that require dividing a large amount of material into one or more smaller subsamples.

  20. Anticipation or ascertainment bias in schizophrenia? Penrose's familial mental illness sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, A.S.; Husted, J.

    Several studies have observed anticipation (earlier age at onset [AAO] in successive generations) in familial schizophrenia. However, whether true anticipation or ascertainment bias is the principal originating mechanism remains unclear. In 1944 L.S. Penrose collected AAO data on a large, representative sample of familial mental illness, using a broad ascertainment strategy. These data allowed examination of anticipation and ascertainment biases in five two-generation samples of affected relative pairs. The median intergenerational difference (MID) in AAO was used to assess anticipation. Results showed significant anticipation in parent-offspring pairs with schizophrenia (n = 137 pairs; MID 15 years; P = .0001) and in a positive control sample with Huntington disease (n = 11; P = .01). Broadening the diagnosis of the schizophrenia sample suggested anticipation of severity of illness. However, other analyses provided evidence for ascertainment bias, especially in later-AAO parents, in parent-offspring pairs. Aunt/uncle-niece/nephew schizophrenia pairs showed anticipation (n = 111; P = .0001), but the MID was 8 years and aunts/uncles had earlier median AAO than parents. Anticipation effects were greatest in pairs with late-AAO parents but remained significant in a subgroup of schizophrenia pairs with early parental AAO (n = 31; P = .03). A small control sample of other diseases had an MID of 5 years but no significant anticipation (n = 9; P = .38). These results suggest that, although ascertainment-bias effects were observed in parent-offspring pairs, true anticipation appears to be inherent in the transmission of familial schizophrenia. The findings support investigations of unstable mutations and other mechanisms that may contribute to true anticipation in schizophrenia. 37 refs., 2 tabs.

  1. Bias of health estimates obtained from chronic disease and risk factor surveillance systems using telephone population surveys in Australia: results from a representative face-to-face survey in Australia from 2010 to 2013.

    PubMed

    Dal Grande, Eleonora; Chittleborough, Catherine R; Campostrini, Stefano; Taylor, Anne W

    2016-04-18

    Emerging communication technologies have had an impact on population-based telephone surveys worldwide. Our objective was to examine the potential biases of health estimates in South Australia, a state of Australia, obtained via current landline telephone survey methodologies, and to report on the impact of mobile-only households on household surveys. Data from an annual multi-stage, systematic, clustered area, face-to-face population survey, the Health Omnibus Survey (approximately 3000 interviews annually), included questions about telephone ownership to assess the population that was non-contactable by current telephone sampling methods (2006 to 2013). Univariable analyses (2010 to 2013) and trend analyses were conducted for sociodemographic and health indicator variables in relation to telephone status. Assessment of the relative coverage bias (RCB) of two hypothetical telephone samples was undertaken by examining the prevalence estimates of health status and health risk behaviours (2010 to 2013): directory-listed numbers, consisting mainly of landline telephone numbers and a small proportion of mobile telephone numbers; and a random digit dialling (RDD) sample of landline telephone numbers, which excludes mobile-only households. Telephone (landline and mobile) coverage in South Australia is very high (97%). Mobile telephone ownership increased slightly (7.4%), rising from 89.7% in 2006 to 96.3% in 2013; mobile-only households increased by 431% over the eight-year period, from 5.2% in 2006 to 27.6% in 2013. Only half of the households have either a mobile or landline number listed in the telephone directory. There were small differences in the prevalence estimates for current asthma, arthritis, diabetes and obesity between the hypothetical telephone samples and the overall sample. However, the prevalence estimate for diabetes was slightly underestimated (RCB value of -0.077) in 2013. Mixed RCB results were found for having a mental health condition for both telephone samples. Current smoking prevalence was lower for both hypothetical telephone samples in absolute differences and RCB values: -0.136 to -0.191 for RDD landline samples and -0.129 to -0.313 for directory-listed samples. These findings suggest that landline-based sampling frames used in Australia, when appropriately weighted, produce reliable representative estimates for some health indicators but not for all. Researchers need to be aware of their limitations and potentially biased estimates.
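
    As a point of reference, the relative coverage bias compares the prevalence estimate available from the telephone-covered part of the sample with the estimate from the full face-to-face sample. One common form of the calculation is sketched below with made-up numbers (the prevalences are not taken from the study).

        # Relative coverage bias (RCB): relative difference between a subsample
        # estimate and the full-sample estimate. Values below are hypothetical.
        def relative_coverage_bias(p_subsample: float, p_full: float) -> float:
            return (p_subsample - p_full) / p_full

        p_full_sample = 0.103   # hypothetical smoking prevalence, full sample
        p_landline    = 0.084   # hypothetical prevalence, landline-reachable households
        print(round(relative_coverage_bias(p_landline, p_full_sample), 3))  # -0.184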

  2. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
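
    As background to the fitting problem, a minimal maximum-likelihood fit of a cumulative Gaussian psychometric function is sketched below. It uses randomly placed (non-adaptive) stimuli and omits the bias-reduction penalty discussed in the paper; the simulated data and starting values are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Maximum-likelihood fit of a cumulative Gaussian psychometric function.
        rng = np.random.default_rng(3)
        mu_true, sigma_true = 0.0, 2.0
        stim = rng.uniform(-6, 6, size=200)                  # stimulus levels
        resp = rng.random(200) < norm.cdf((stim - mu_true) / sigma_true)

        def neg_log_lik(theta):
            mu, log_sigma = theta
            prob = norm.cdf((stim - mu) / np.exp(log_sigma))
            prob = np.clip(prob, 1e-9, 1 - 1e-9)             # guard against log(0)
            return -np.sum(resp * np.log(prob) + (1 - resp) * np.log(1 - prob))

        fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
        mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
        print(f"mu = {mu_hat:.2f}, spread (sigma) = {sigma_hat:.2f}")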

  3. Comparing four methods to estimate usual intake distributions.

    PubMed

    Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P

    2011-07-01

    The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks', when sample size was small, because of the fact that the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than that for nutrients, but overall the sample mean was estimated reasonably. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or has a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over the other.

  4. Drone Transport of Chemistry and Hematology Samples Over Long Distances.

    PubMed

    Amukele, Timothy K; Hernandez, James; Snozek, Christine L H; Wyatt, Ryan G; Douglas, Matthew; Amini, Richard; Street, Jeff

    2017-11-02

    We addressed the stability of biological samples in prolonged drone flights by obtaining paired chemistry and hematology samples from 21 adult volunteers in a single phlebotomy event-84 samples total. Half of the samples were held stationary, while the other samples were flown for 3 hours (258 km) in a custom active cooling box mounted on the drone. After the flight, 19 chemistry and hematology tests were performed. Seventeen analytes had small or no bias, but glucose and potassium in flown samples showed an 8% and 6.2% bias, respectively. The flown samples (mean, 24.8°C) were a mean of 2.5°C cooler than the stationary samples (mean, 27.3°C) during transportation to the flight field as well as during the flight. The changes in glucose and potassium are consistent with the magnitude and duration of the temperature difference between the flown and stationary samples. Long drone flights of biological samples are feasible but require stringent environmental controls to ensure consistent results. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  5. Large Area Crop Inventory Experiment (LACIE). Review of LACIE methodology, a project evaluation of technical acceptability

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The author has identified the following significant results. Results indicated that the LANDSAT data and the classification technology can estimate the small grains area within a sample segment accurately and reliably enough to meet the LACIE goals. Overall, the LACIE estimates in a 9 x 11 kilometer segment agree well with ground and aircraft determined area within these segments. The estimated c.v. of the random classification error was acceptably small. These analyses confirmed that bias introduced by various factors, such as LANDSAT spatial resolution, lack of spectral resolution, classifier bias, and repeatability, was not excessive in terms of the required performance criterion. Results of these tests did indicate a difficulty in differentiating wheat from other closely related small grains. However, satisfactory wheat area estimates were obtained through the reduction of the small grain area estimates in accordance with relative amounts of these crops as determined from historic data; these procedures are being further refined.

  6. Delayed reward discounting and addictive behavior: a meta-analysis.

    PubMed

    MacKillop, James; Amlung, Michael T; Few, Lauren R; Ray, Lara A; Sweet, Lawrence H; Munafò, Marcus R

    2011-08-01

    Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). From the total comparisons identified, a small magnitude effect was evident (d= .15; p< .00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d= .58; p< .00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d= .61) compared with studies using nonclinical samples (d=.45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed.

  7. Delayed reward discounting and addictive behavior: a meta-analysis

    PubMed Central

    Amlung, Michael T.; Few, Lauren R.; Ray, Lara A.; Sweet, Lawrence H.; Munafò, Marcus R.

    2011-01-01

    Rationale Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. Objectives The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Methods Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). Results From the total comparisons identified, a small magnitude effect was evident (d=.15; p<.00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d=.58; p<.00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d=.61) compared with studies using nonclinical samples (d=.45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. Conclusions These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed. PMID:21373791

  8. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants in which they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants then received exposure data interpretation ('rule of thumb') training, which included a simple set of rules for estimating 95th percentiles of a log-normal population from small data sets. A DIT was given to each participant before and after the rule of thumb training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT percent-correct score increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy of quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even some simple statistical rules of thumb improve judgment accuracy significantly suggests that hygienists need to routinely use statistical tools while making exposure judgments using monitoring data.
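
    The statistical task the participants faced was estimating an upper percentile of a lognormal exposure distribution from a handful of measurements. One standard parametric calculation for the 95th percentile, which may differ in detail from the rule of thumb taught in the study, is sketched below with hypothetical data.

        import numpy as np

        # Parametric lognormal 95th percentile from a small exposure data set:
        # X95 = exp(mean(ln x) + 1.645 * sd(ln x)). Sample values are made up.
        samples = np.array([0.12, 0.25, 0.08, 0.40, 0.18])   # hypothetical mg/m^3
        log_x = np.log(samples)
        x95 = np.exp(log_x.mean() + 1.645 * log_x.std(ddof=1))
        print(f"Estimated 95th percentile: {x95:.2f} mg/m^3")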

  9. A novel ultra high-throughput 16S rRNA gene amplicon sequencing library preparation method for the Illumina HiSeq platform.

    PubMed

    de Muinck, Eric J; Trosvik, Pål; Gilfillan, Gregor D; Hov, Johannes R; Sundaram, Arvind Y M

    2017-07-06

    Advances in sequencing technologies and bioinformatics have made the analysis of microbial communities almost routine. Nonetheless, the need remains to improve on the techniques used for gathering such data, including increasing throughput while lowering cost and benchmarking the techniques so that potential sources of bias can be better characterized. We present a triple-index amplicon sequencing strategy to sequence large numbers of samples at significantly lower cost and in a shorter timeframe compared to existing methods. The design employs a two-stage PCR protocol, incorporating three barcodes into each sample, with the possibility to add a fourth index. It also includes heterogeneity spacers to overcome low complexity issues faced when sequencing amplicons on Illumina platforms. The library preparation method was extensively benchmarked through analysis of a mock community in order to assess biases introduced by sample indexing, number of PCR cycles, and template concentration. We further evaluated the method through re-sequencing of a standardized environmental sample. Finally, we evaluated our protocol on a set of fecal samples from a small cohort of healthy adults, demonstrating good performance in a realistic experimental setting. Between-sample variation was mainly related to batch effects, such as DNA extraction, while sample indexing was also a significant source of bias. PCR cycle number strongly influenced chimera formation and affected relative abundance estimates of species with high GC content. Libraries were sequenced using the Illumina HiSeq and MiSeq platforms to demonstrate that this protocol is highly scalable to sequence thousands of samples at a very low cost. Here, we provide the most comprehensive study of performance and bias inherent to a 16S rRNA gene amplicon sequencing method to date. Triple-indexing greatly reduces the number of long custom DNA oligos required for library preparation, while the inclusion of variable-length heterogeneity spacers minimizes the need for PhiX spike-in. This design results in a significant cost reduction for highly multiplexed amplicon sequencing. The biases we characterize highlight the need for highly standardized protocols. Reassuringly, we find that the biological signal is a far stronger structuring factor than the various sources of bias.

  10. Set size and culture influence children's attention to number.

    PubMed

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Texas Adolescent Tobacco and Marketing Surveillance System’s Design

    PubMed Central

    Pérez, Adriana; Harrell, Melissa B.; Malkani, Raja I.; Jackson, Christian D.; Delk, Joanne; Allotey, Prince A.; Matthews, Krystin J.; Martinez, Pablo; Perry, Cheryl L.

    2017-01-01

    Objectives To provide a full methodological description of the design of the wave I and II (6-month follow-up) surveys of the Texas Adolescent Tobacco and Marketing Surveillance System (TATAMS), a longitudinal surveillance study of 6th, 8th, and 10th grade students who attended schools in Bexar, Dallas, Tarrant, Harris, or Travis counties, where the five largest cities in Texas (San Antonio, Dallas, Fort Worth, Houston, and Austin, respectively) are located. Methods TATAMS used a complex probability design, yielding representative estimates of these students in these counties during the 2014–2015 academic year. The weighted prevalence of the use of tobacco products, drugs, and alcohol in wave I is estimated, along with the percent bias, relative bias, and relative bias ratio between waves I and II. Results The wave I sample included 79 schools and 3,907 students. The prevalence of current cigarette, e-cigarette and hookah use at wave I was 3.5%, 7.4%, and 2.5%, respectively. Small biases, mostly less than 3.5%, were observed for nonrespondents in wave II. Conclusions Even with adaptations to the sampling methodology, the resulting sample adequately represents the target population. Results from TATAMS will have important implications for future tobacco policy in Texas and federal regulation.

  12. Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey

    NASA Astrophysics Data System (ADS)

    Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.

    1994-08-01

    We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h-1 Mpc. The power spectrum has slope n approximately equal -2.1 on small scales (lambda less than or equal 25 h-1 Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (LCDM) model (OMEGA h = 0.24, lambda_0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma8 (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both Cosmic Background Explorer Satellite (COBE) and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum. Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have Mlim greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).

  13. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
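
    The mechanism behind the problem is easy to demonstrate: with only a few clusters, simple randomization frequently leaves a binary cluster-level covariate noticeably imbalanced between arms. A small simulation sketch follows (the cluster counts, covariate prevalence, and imbalance threshold are illustrative assumptions, not the settings of the study).

        import numpy as np

        # Chance of covariate imbalance under simple randomization of clusters.
        rng = np.random.default_rng(4)

        def imbalance_dist(n_clusters=10, p_covariate=0.5, reps=100_000):
            k = n_clusters // 2
            x = (rng.random((reps, n_clusters)) < p_covariate).astype(float)
            # randomly assign k clusters per simulated trial to the treatment arm
            arm = np.argsort(rng.random((reps, n_clusters)), axis=1) < k
            p_treat = (x * arm).sum(axis=1) / k
            p_ctrl = (x * ~arm).sum(axis=1) / (n_clusters - k)
            return np.abs(p_treat - p_ctrl)

        for k in (6, 10, 20, 50):
            d = imbalance_dist(n_clusters=k)
            print(f"{k:2d} clusters: P(|imbalance| >= 0.2) = {np.mean(d >= 0.2):.2f}")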

  14. Little Evidence That Time in Child Care Causes Externalizing Problems During Early Childhood in Norway

    PubMed Central

    Zachrisson, Henrik Daae; Dearing, Eric; Lekhal, Ratib; Toppelberg, Claudio O.

    2012-01-01

    Associations between maternal reports of hours in child care and children’s externalizing problems at 18 and 36 months of age were examined in a population-based Norwegian sample (n = 75,271). Within a sociopolitical context of homogenously high-quality child care, there was little evidence that high quantity of care causes externalizing problems. Using conventional approaches to handling selection bias and listwise deletion for substantial attrition in this sample, more hours in care predicted higher problem levels, yet with small effect sizes. The finding, however, was not robust to using multiple imputation for missing values. Moreover, when sibling and individual fixed-effects models for handling selection bias were used, no relation between hours and problems was evident. PMID:23311645

  15. Health indicators: eliminating bias from convenience sampling estimators.

    PubMed

    Hedt, Bethany L; Pagano, Marcello

    2011-02-28

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information are data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This allows us to not only take advantage of powerful inferential tools, but also provides more accurate information than that available from just using data from the random sample alone. Copyright © 2011 John Wiley & Sons, Ltd.

  16. Length bias correction in one-day cross-sectional assessments - The nutritionDay study.

    PubMed

    Frantal, Sophie; Pernicka, Elisabeth; Hiesmayr, Michael; Schindler, Karin; Bauer, Peter

    2016-04-01

    A major problem occurring in cross-sectional studies is sampling bias. Length of hospital stay (LOS) differs strongly between patients and causes a length bias as patients with longer LOS are more likely to be included and are therefore overrepresented in this type of study. To adjust for the length bias higher weights are allocated to patients with shorter LOS. We determined the effect of length-bias adjustment in two independent populations. Length-bias correction is applied to the data of the nutritionDay project, a one-day multinational cross-sectional audit capturing data on disease and nutrition of patients admitted to hospital wards with right-censoring after 30 days follow-up. We applied the weighting method for estimating the distribution function of patient baseline variables based on the method of non-parametric maximum likelihood. Results are validated using data from all patients admitted to the General Hospital of Vienna between 2005 and 2009, where the distribution of LOS can be assumed to be known. Additionally, a simplified calculation scheme for estimating the adjusted distribution function of LOS is demonstrated on a small patient example. The crude median (lower quartile; upper quartile) LOS in the cross-sectional sample was 14 (8; 24) and decreased to 7 (4; 12) when adjusted. Hence, adjustment for length bias in cross-sectional studies is essential to get appropriate estimates. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
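
    The weighting idea can be sketched directly: patients sampled on the cross-sectional day are weighted in inverse proportion to their length of stay, and weighted quantiles are computed from the re-weighted sample. The sketch below uses made-up LOS values and a simple interpolation-based weighted quantile, not the non-parametric maximum likelihood estimator used in the paper.

        import numpy as np

        # Length-bias correction: weight each sampled patient by 1 / LOS.
        def weighted_quantile(values, weights, q):
            order = np.argsort(values)
            v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
            cum = np.cumsum(w) / np.sum(w)
            return float(np.interp(q, cum, v))

        los = np.array([3, 5, 7, 8, 12, 14, 14, 20, 24, 30])   # hypothetical days
        w = 1.0 / los                                          # shorter stays up-weighted
        print("crude median:   ", np.median(los))
        print("adjusted median:", round(weighted_quantile(los, w, 0.5), 1))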

  17. Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution.

    PubMed Central

    Jennions, Michael D; Møller, Anders P

    2002-01-01

    Both significant positive and negative relationships between the magnitude of research findings (their 'effect size') and their year of publication have been reported in a few areas of biology. These trends have been attributed to Kuhnian paradigm shifts, scientific fads and bias in the choice of study systems. Here we test whether or not these isolated cases reflect a more general trend. We examined the relationship using effect sizes extracted from 44 peer-reviewed meta-analyses covering a wide range of topics in ecological and evolutionary biology. On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort. Although these results have several possible explanations, it is suggested that a publication bias against non-significant or weaker findings offers the most parsimonious explanation. As in the medical sciences, non-significant results may take longer to publish and studies with both small sample sizes and non-significant results may be less likely to be published. PMID:11788035

  18. Seven common mistakes in population genetics and how to avoid them.

    PubMed

    Meirmans, Patrick G

    2015-07-01

    As the data resulting from modern genotyping tools are astoundingly complex, genotyping studies require great care in the sampling design, genotyping, data analysis and interpretation. Such care is necessary because, with data sets containing thousands of loci, small biases can easily become strongly significant patterns. Such biases may already arise in routine tasks that are part of almost every genotyping study. Here, I discuss seven common mistakes that are frequently encountered in the genotyping literature: (i) giving more attention to genotyping than to sampling, (ii) failing to perform or report experimental randomization in the laboratory, (iii) equating geopolitical borders with biological borders, (iv) testing significance of clustering output, (v) misinterpreting Mantel's r statistic, (vi) only interpreting a single value of k and (vii) forgetting that only a small portion of the genome will be associated with climate. For each of these issues, I give suggestions on how to avoid the mistake. Overall, I argue that genotyping studies would benefit from establishing a more rigorous experimental design, involving proper sampling design, randomization and a better distinction between a priori hypotheses and exploratory analyses. © 2015 John Wiley & Sons Ltd.

  19. Temporal performance of amorphous selenium mammography detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Bo; Zhao Wei

    2005-01-01

    We investigated the temporal performance of amorphous selenium (a-Se) detectors specifically designed for mammographic imaging. Our goal is to quantify the inherent lag and ghosting of the a-Se photoconductor as a function of imaging conditions. Two small-area electroded a-Se samples, one positively and the other negatively biased on the entrance side of the x rays, were used in the experiments. The study of lag and ghosting was performed by delivering a number of raw exposures, as experienced in screening mammography, to the samples at different electric field strengths E_Se while measuring the current through the a-Se sample. Ghosting at different operational conditions was quantified as the percentage reduction in x-ray sensitivity (x-ray generated photocurrent measured from the sample) compared to before irradiation. Lag was determined by measuring the residual current of a-Se at a given time after the end of each x-ray exposure. Both lag and ghosting were measured as a function of E_Se and cumulative exposure. The values of E_Se used in our experiments ranged from 1 to 20 V/µm. It was found that ghosting increases with exposure and decreases with E_Se for both samples because of the dominant effect of recombination between trapped electrons and x-ray generated holes. Lag, on the other hand, has a different dependence on E_Se and cumulative exposure. At E_Se ≤ 10 V/µm, the first-frame lag for both samples changed slowly with cumulative exposure, with a range of 0.2%-1.7% for the positively biased sample and 0.5%-8% for the negatively biased sample. Overall the positively biased sample has better temporal performance than the negatively biased sample due to the lower density of trapped electrons. The impact of the time interval between exposures on the temporal performance was also investigated. Recovery of ghosting with longer time intervals was observed, which was attributed to the neutralization of trapped electrons by injected holes through dark current.

  20. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots, and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods applied to extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
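
    For orientation, plain empirical quantile mapping (the QMα baseline that the new variants refine) maps each model value to the observed value at the same empirical quantile of a calibration period; the instability at the highest quantiles arises exactly where this mapping must extrapolate. A hedged sketch with synthetic gamma-distributed "precipitation" follows (distributions and parameters are assumptions, not RCM output).

        import numpy as np

        # Empirical quantile mapping of a biased "model" series onto "observations".
        rng = np.random.default_rng(5)
        obs_cal = rng.gamma(shape=2.0, scale=3.0, size=3000)   # observed, calibration
        mod_cal = rng.gamma(shape=2.0, scale=4.5, size=3000)   # biased model, calibration

        q = np.linspace(0.0, 1.0, 101)
        obs_q, mod_q = np.quantile(obs_cal, q), np.quantile(mod_cal, q)

        def quantile_map(x):
            # Values beyond the calibrated range are clipped to the outermost
            # quantiles, which is where extrapolation for new extremes breaks down.
            return np.interp(x, mod_q, obs_q)

        mod_new = rng.gamma(shape=2.0, scale=4.5, size=1000)   # "future" model output
        corrected = quantile_map(mod_new)
        print("raw model 99th pct:       ", round(float(np.quantile(mod_new, 0.99)), 1))
        print("corrected 99th pct:       ", round(float(np.quantile(corrected, 0.99)), 1))
        print("observed-climate 99th pct:", round(float(np.quantile(obs_cal, 0.99)), 1))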

  1. On the nature and correction of the spurious S-wise spiral galaxy winding bias in Galaxy Zoo 1

    NASA Astrophysics Data System (ADS)

    Hayes, Wayne B.; Davis, Darren; Silva, Pedro

    2017-04-01

    The Galaxy Zoo 1 catalogue displays a bias towards the S-wise winding direction in spiral galaxies, which has yet to be explained. The lack of an explanation confounds our attempts to verify the Cosmological Principle, and has spurred some debate as to whether a bias exists in the real Universe. The bias manifests not only in the obvious case of trying to decide if the universe as a whole has a winding bias, but also in the more insidious case of selecting which galaxies to include in a winding direction survey. While the former bias has been accounted for in a previous image-mirroring study, the latter has not. Furthermore, the bias has never been corrected in the GZ1 catalogue, as only a small sample of the GZ1 catalogue was reexamined during the mirror study. We show that the existing bias is a human selection effect rather than a human chirality bias. In effect, the excess S-wise votes are spuriously 'stolen' from the elliptical and edge-on-disc categories, not the Z-wise category. Thus, when selecting a set of spiral galaxies by imposing a threshold T so that max(PS, PZ) > T or PS + PZ > T, we spuriously select more S-wise than Z-wise galaxies. We show that when a provably unbiased machine selects which galaxies are spirals independent of their chirality, the S-wise surplus vanishes, even if humans still determine the chirality. Thus, when viewed across the entire GZ1 sample (and by implication, the Sloan catalogue), the winding direction of arms in spiral galaxies as viewed from Earth is consistent with the flip of a fair coin.

  2. Response Rates and Response Bias for 50 Surveys of Pediatricians

    PubMed Central

    Cull, William L; O'Connor, Karen G; Sharp, Sanford; Tang, Suk-fong S

    2005-01-01

    Research Objective To track response rates across time for surveys of pediatricians, to explore whether response bias is present for these surveys, and to examine whether response bias increases with lower response rates. Data Source/Study Setting A total of 63,473 cases were gathered from 50 different surveys of pediatricians conducted by the American Academy of Pediatrics (AAP) since 1994. Thirty-one surveys targeted active U.S. members of the AAP, six targeted pediatric residents, and the remaining 13 targeted AAP-member and nonmember pediatric subspecialists. Information for the full target samples, including nonrespondents, was collected using administrative databases of the AAP and the American Board of Pediatrics. Study Design To assess bias for each survey, age, gender, location, and AAP membership type were compared for respondents and the full target sample. Correlational analyses were conducted to examine whether surveys with lower response rates had increasing levels of response bias. Principal Findings Response rates to the 50 surveys examined declined significantly across survey years (1994–2002). Response rates ranged from 52 to 81 percent with an average of 68 percent. Comparisons between respondents and the full target samples showed the respondent group to be younger, to have more females, and to have fewer specialty-fellow members. Response bias was not apparent for pediatricians' geographical location. The average response bias, however, was fairly small for all factors: age (0.45 years younger), gender (1.4 percentage points more females), and membership type (1.1 percentage points fewer specialty-fellow members). Gender response bias was found to be inversely associated with survey response rates (r=−0.38). Even for the surveys with the lowest response rates, the amount of response bias never exceeded 5 percentage points for gender, 3 years for age, or 3 percent for membership type. Conclusions While response biases favoring women, young physicians, and nonspecialty-fellow members were found across the 52–81 percent response rates examined in this study, the amount of bias was minimal for the factors that could be tested. At least for surveys of pediatricians, more attention should be devoted by investigators to assessments of response bias rather than relying on response rates as a proxy of response bias. PMID:15663710

  3. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

    Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions and, by means of a sensitivity analysis, we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during the analysis of small samples, power and biased parameters, can be solved by including prior information in a Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534

  4. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

    Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and, by means of a sensitivity analysis, we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during the analysis of small samples, power and biased parameters, can be solved by including prior information in a Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
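
    The core of the argument can be illustrated with the simplest conjugate case: for a normal mean with known data variance, the posterior mean is a precision-weighted compromise between an informative prior and the sample mean, which stabilizes the estimate when n is very small. The toy below uses assumed prior values and a known variance purely for illustration; the study itself fitted latent-variable models in dedicated software.

        import numpy as np

        # Normal-normal conjugate update: informative prior vs. the ML estimate.
        rng = np.random.default_rng(6)
        true_mean, data_sd, n = 2.0, 1.5, 8
        y = rng.normal(true_mean, data_sd, size=n)

        prior_mean, prior_sd = 1.8, 0.5            # informative prior (assumed)
        like_prec = n / data_sd**2                 # precision of the sample mean
        prior_prec = 1.0 / prior_sd**2

        post_mean = (prior_prec * prior_mean + like_prec * y.mean()) / (prior_prec + like_prec)
        post_sd = (prior_prec + like_prec) ** -0.5

        print(f"ML estimate (sample mean):          {y.mean():.2f}")
        print(f"posterior mean (informative prior): {post_mean:.2f} +/- {post_sd:.2f}")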

  5. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than scenarios with few sample units of large area. However, scenarios with few sample units of large area provided more precise abundance estimates than scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, such consideration is often an afterthought that occurs during the data analysis process.
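
    A stripped-down version of this kind of design comparison is sketched below. It assumes perfect detection and simple count scaling on a unit square with toroidally wrapped quadrats, so it only illustrates the bookkeeping of "many small" versus "few large" sample units, not the N-mixture modelling or the specific accuracy-precision trade-off reported in the paper.

        import numpy as np

        # Compare bias and spread of scaled-up quadrat counts for two designs
        # with equal total sampled area: many small units vs. few large units.
        rng = np.random.default_rng(7)
        N_true = 500                                  # individuals in a unit square

        def estimate_abundance(n_units, unit_side, reps=2000):
            unit_area = unit_side ** 2
            ests = []
            for _ in range(reps):
                pts = rng.random((N_true, 2))
                counts = []
                for _ in range(n_units):
                    x0, y0 = rng.random(2)
                    inside = ((((pts[:, 0] - x0) % 1.0) < unit_side) &
                              (((pts[:, 1] - y0) % 1.0) < unit_side))
                    counts.append(inside.sum())
                ests.append(np.mean(counts) / unit_area)   # density x total area (= 1)
            ests = np.asarray(ests)
            return ests.mean() - N_true, ests.std()

        designs = {"many small": (25, 0.05), "few large": (4, 0.125)}
        for label, (n_units, side) in designs.items():
            bias, sd = estimate_abundance(n_units, side)
            print(f"{label:10s} units: bias = {bias:+6.1f}, SD = {sd:5.1f}")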

  6. Influences of sampling effort on detected patterns and structuring processes of a Neotropical plant-hummingbird network.

    PubMed

    Vizentin-Bugoni, Jeferson; Maruyama, Pietro K; Debastiani, Vanderlei J; Duarte, L da S; Dalsgaard, Bo; Sazima, Marlies

    2016-01-01

    Virtually all empirical ecological interaction networks to some extent suffer from undersampling. However, how limitations imposed by sampling incompleteness affect our understanding of ecological networks is still poorly explored, which may hinder further advances in the field. Here, we use a plant-hummingbird network with unprecedented sampling effort (2716 h of focal observations) from the Atlantic Rainforest in Brazil, to investigate how sampling effort affects the description of network structure (i.e. widely used network metrics) and the relative importance of distinct processes (i.e. species abundances vs. traits) in determining the frequency of pairwise interactions. By dividing the network into time slices representing a gradient of sampling effort, we show that quantitative metrics, such as interaction evenness, specialization (H2 '), weighted nestedness (wNODF) and modularity (Q; QuanBiMo algorithm) were less biased by sampling incompleteness than binary metrics. Furthermore, the significance of some network metrics changed along the sampling effort gradient. Nevertheless, the higher importance of traits in structuring the network was apparent even with small sampling effort. Our results (i) warn against using very poorly sampled networks as this may bias our understanding of networks, both their patterns and structuring processes, (ii) encourage the use of quantitative metrics little influenced by sampling when performing spatio-temporal comparisons and (iii) indicate that in networks strongly constrained by species traits, such as plant-hummingbird networks, even small sampling is sufficient to detect their relative importance for the frequencies of interactions. Finally, we argue that similar effects of sampling are expected for other highly specialized subnetworks. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.

  7. A Note on Structural Equation Modeling Estimates of Reliability

    ERIC Educational Resources Information Center

    Yang, Yanyun; Green, Samuel B.

    2010-01-01

    Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…

  8. Problems and Limitations in Studies on Screening for Language Delay

    ERIC Educational Resources Information Center

    Eriksson, Marten; Westerlund, Monica; Miniscalco, Carmela

    2010-01-01

    This study discusses six common methodological limitations in screening for language delay (LD) as illustrated in 11 recent studies. The limitations are (1) whether the studies define a target population, (2) whether the recruitment procedure is unbiased, (3) attrition, (4) verification bias, (5) small sample size and (6) inconsistencies in choice…

  9. How Broad Liberal Arts Training Produces Phd Economists: Carleton's Story

    ERIC Educational Resources Information Center

    Bourne, Jenny; Grawe, Nathan D.

    2015-01-01

    Several recent studies point to strong performance in economics PhD programs of graduates from liberal arts colleges. While every undergraduate program is unique and the likelihood of selection bias combines with small sample sizes to caution against drawing strong conclusions, the authors reflect on their experience at Carleton College to…

  10. Reducing bias in survival under non-random temporary emigration

    USGS Publications Warehouse

    Peñaloza, Claudia L.; Kendall, William L.; Langtimm, Catherine Ann

    2014-01-01

    Despite intensive monitoring, temporary emigration from the sampling area can induce bias severe enough for managers to discard life-history parameter estimates toward the terminus of the time series (terminal bias). Under random temporary emigration, unbiased parameters can be estimated with CJS models. However, unmodeled Markovian temporary emigration causes bias in parameter estimates and an unobservable state is required to model this type of emigration. The robust design is most flexible when modeling temporary emigration, and partial solutions to mitigate bias have been identified; nonetheless, there are conditions where terminal bias prevails. Long-lived species with high adult survival and highly variable non-random temporary emigration present terminal bias in survival estimates, despite being modeled with the robust design and suggested constraints. Because this bias is due to uncertainty about the fate of individuals that are undetected toward the end of the time series, solutions should involve using additional information on survival status or location of these individuals at that time. Using simulation, we evaluated the performance of models that jointly analyze robust design data and an additional source of ancillary data (predictive covariate on temporary emigration, telemetry, dead recovery, or auxiliary resightings) in reducing terminal bias in survival estimates. The auxiliary resighting and predictive covariate models reduced terminal bias the most. Additional telemetry data were effective at reducing terminal bias only when individuals were tracked for a minimum of two years. High adult survival of long-lived species made the joint model with recovery data ineffective at reducing terminal bias because of small-sample bias. The naïve constraint model (last and penultimate temporary emigration parameters made equal) was the least efficient, though still able to reduce terminal bias when compared to an unconstrained model. Joint analysis of several sources of data improved parameter estimates and reduced terminal bias. Efforts to incorporate or acquire such data should be considered by researchers and wildlife managers, especially in the years leading up to status assessments of species of interest. Simulation modeling is a very cost-effective method to explore the potential impacts of using different sources of data to produce high quality demographic data to inform management.

  11. Quantitation of active pharmaceutical ingredients and excipients in powder blends using designed multivariate calibration models by near-infrared spectroscopy.

    PubMed

    Li, Weiyong; Worosila, Gregory D

    2005-05-13

    This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used in generating a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amounts of powders into separate 20-mL scintillation vials and were mixed manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing of the models demonstrated that the 2-level models were as effective as the 5-level ones, which reduced the calibration sample number to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of the bias is discussed.
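
    A hedged sketch of the general calibration workflow is given below, using synthetic "spectra" and scikit-learn's PLSRegression as a stand-in for the chemometrics software used in the study; the component names, concentration levels, and noise model are invented for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 125, 200
# Hypothetical pure-component "spectra" and perturbed nominal w/w concentrations.
pure = rng.random((4, n_wavelengths))
conc = rng.choice(np.linspace(0.8, 1.2, 5), size=(n_samples, 4)) * [30.0, 64.5, 5.0, 0.5]
X = conc @ pure + rng.normal(0, 0.05, (n_samples, n_wavelengths))   # mixture spectra + noise

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, conc, cv=10)                       # cross-validated predictions
for j, name in enumerate(["API", "filler", "disintegrant", "lubricant"]):
    bias = (pred[:, j] - conc[:, j]).mean()
    rmsecv = np.sqrt(((pred[:, j] - conc[:, j]) ** 2).mean())
    print(f"{name}: bias={bias:+.3f} %w/w, RMSECV={rmsecv:.3f} %w/w")
```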

  12. Bumpus in the snake den: effects of sex, size, and body condition on mortality of red-sided garter snakes.

    PubMed

    Shine, R; LeMaster, M P; Moore, I T; Olsson, M M; Mason, R T

    2001-03-01

    Huge breeding aggregations of red-sided garter snakes (Thamnophis sirtalis parietalis) at overwintering dens in Manitoba provide a unique opportunity to identify sources of mortality and to clarify factors that influence a snake's vulnerability to these factors. Comparisons of sexes, body sizes, and body condition of more than 1000 dead snakes versus live animals sampled at the same time reveal significant biases. Three primary sources of mortality were identified. Predation by crows, Corvus brachyrhynchos (590 snakes killed), was focussed mostly on small snakes of both sexes. Crows generally removed the snake's liver and left the carcass, but very small snakes were sometimes brought back to the nest. Suffocation beneath massive piles of other snakes within the den (301 dead animals) involved mostly small males and (to a lesser extent) large females; snakes in poor body condition were particularly vulnerable. Many emaciated snakes (n = 142, mostly females) also died without overt injuries, probably due to depleted energy reserves. These biases in vulnerability are readily interpretable from information on behavioral ecology of the snakes. For example, sex biases in mortality reflect differences in postemergence behavior and locomotor capacity, the greater attractiveness of larger females to males, and the high energy costs of reproduction for females.

  13. Digital droplet multiple displacement amplification (ddMDA) for whole genome sequencing of limited DNA samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhee, Minsoung; Light, Yooli K.; Meagher, Robert J.

    Multiple displacement amplification (MDA) is a widely used technique for amplification of DNA from samples containing limited amounts of DNA (e.g., uncultivable microbes or clinical samples) before whole genome sequencing. Despite its advantages of high yield and fidelity, it suffers from high amplification bias and non-specific amplification when amplifying sub-nanogram amounts of template DNA. Here, we present a microfluidic digital droplet MDA (ddMDA) technique where partitioning of the template DNA into thousands of sub-nanoliter droplets, each containing a small number of DNA fragments, greatly reduces the competition among DNA fragments for primers and polymerase, thereby greatly reducing amplification bias. Consequently, the ddMDA approach enabled a more uniform coverage of amplification over the entire length of the genome, with significantly lower bias and non-specific amplification than conventional MDA. For a sample containing 0.1 pg/μL of E. coli DNA (equivalent of ~3/1000 of an E. coli genome per droplet), ddMDA achieves a 65-fold increase in coverage in de novo assembly, and a more than 20-fold increase in specificity (percentage of reads mapping to E. coli) compared to the conventional tube MDA. ddMDA offers a powerful method useful for many applications including medical diagnostics, forensics, and environmental microbiology.

  14. Digital droplet multiple displacement amplification (ddMDA) for whole genome sequencing of limited DNA samples

    DOE PAGES

    Rhee, Minsoung; Light, Yooli K.; Meagher, Robert J.; ...

    2016-05-04

    Multiple displacement amplification (MDA) is a widely used technique for amplification of DNA from samples containing limited amounts of DNA (e.g., uncultivable microbes or clinical samples) before whole genome sequencing. Despite its advantages of high yield and fidelity, it suffers from high amplification bias and non-specific amplification when amplifying sub-nanogram amounts of template DNA. Here, we present a microfluidic digital droplet MDA (ddMDA) technique where partitioning of the template DNA into thousands of sub-nanoliter droplets, each containing a small number of DNA fragments, greatly reduces the competition among DNA fragments for primers and polymerase, thereby greatly reducing amplification bias. Consequently, the ddMDA approach enabled a more uniform coverage of amplification over the entire length of the genome, with significantly lower bias and non-specific amplification than conventional MDA. For a sample containing 0.1 pg/μL of E. coli DNA (equivalent of ~3/1000 of an E. coli genome per droplet), ddMDA achieves a 65-fold increase in coverage in de novo assembly, and a more than 20-fold increase in specificity (percentage of reads mapping to E. coli) compared to the conventional tube MDA. ddMDA offers a powerful method useful for many applications including medical diagnostics, forensics, and environmental microbiology.

  15. Influence of various water quality sampling strategies on load estimates for small streams

    USGS Publications Warehouse

    Robertson, Dale M.; Roerish, Eric D.

    1999-01-01

    Extensive streamflow and water-quality data from eight small streams were systematically subsampled to represent various water-quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression with daily average streamflow was imprecise regardless of the sampling strategy used; even for the most effective strategy, median absolute errors were ~30% relative to loads estimated with an integration method and all available data. The most effective sampling strategy depends on the length of the study. For 1-year studies, fixed-period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed-period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high-flow samples, typically collected to help define the relation between high streamflow and high loads, result in imprecise, overestimated annual loads if these samples are consistently collected early in high-flow events.
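
    The sketch below illustrates, on synthetic data, the regression (rating-curve) approach referred to above: concentration is regressed on streamflow in log space for a subsample of days, Duan's smearing factor corrects the retransformation bias, and the resulting annual load is compared with an integration-style estimate using all days. The flow and concentration models and the semimonthly sampling index are invented for this example and are not the USGS data or code.

```python
import numpy as np

rng = np.random.default_rng(42)
q_daily = np.exp(rng.normal(1.0, 0.8, 365))                       # daily mean streamflow (toy units)
c_true = 2.0 * q_daily ** 0.7 * np.exp(rng.normal(0, 0.3, 365))   # "true" daily concentration
true_load = (c_true * q_daily).sum()                              # integration using all data

sample_idx = np.arange(0, 365, 15)                                # roughly semimonthly fixed-period sampling
logq, logc = np.log(q_daily[sample_idx]), np.log(c_true[sample_idx])
b1, b0 = np.polyfit(logq, logc, 1)                                # ln(C) = b0 + b1 ln(Q)
resid = logc - (b0 + b1 * logq)
smear = np.exp(resid).mean()                                      # Duan's smearing estimator
c_hat = smear * np.exp(b0) * q_daily ** b1                        # predicted daily concentration
reg_load = (c_hat * q_daily).sum()

print(f"regression / integration load ratio: {reg_load / true_load:.2f}")
```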

  16. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    PubMed

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  17. Temperature Dependence Of Current-Voltage Characteristics Of Au/p-GaAsN Schottky Barrier Diodes, With Small N Content

    NASA Astrophysics Data System (ADS)

    Rangel-Kuoppa, Victor-Tapio; Reentilä, Outi; Sopanen, Markku; Lipsanen, Harri

    2011-12-01

    Temperature-dependent current-voltage (IVT) measurements on Au Schottky barrier diodes made on intrinsically p-type GaAs1-xNx were carried out. Three samples with small N content (x = 0.5%, 0.7% and 1%) were studied. The temperature range was 10-320 K. All contacts were found to be of Schottky type. The ideality factor and the apparent barrier height calculated by using thermionic emission (TE) theory show a strong temperature dependence. The current-voltage (IV) curves are fitted based on the TE theory, yielding a zero-bias barrier height (ΦB0) and an ideality factor (n) that decrease and increase with decreasing temperature, respectively. The linear fitting of ΦB0 vs n and its subsequent evaluation for n = 1 give a zero-bias ΦB0 on the order of 0.35-0.4 eV. From the reverse-bias IV study, it is found that the experimental carrier density (NA) values increase with increasing temperature and are in agreement with the intrinsic carrier concentration for GaAs.

  18. Improving power and robustness for detecting genetic association with extreme-value sampling design.

    PubMed

    Chen, Hua Yun; Li, Mingyao

    2011-12-01

    Extreme-value sampling design that samples subjects with extremely large or small quantitative trait values is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and is more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C), which included study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative traits and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs. © 2011 Wiley Periodicals, Inc.
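
    A toy simulation (not the authors' proposed estimator) of the bias issue mentioned above is sketched here: regressing the trait on genotype within an extreme-value sample inflates the slope relative to the full-cohort fit; the genotype frequency, effect size, and tail cutoffs are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
n = 20000
g = rng.binomial(2, 0.3, n)                    # additive genotype (0/1/2)
y = 0.2 * g + rng.normal(0, 1, n)              # quantitative trait, true beta = 0.2

full_beta = linregress(g, y).slope
cut_lo, cut_hi = np.quantile(y, [0.1, 0.9])
extreme = (y < cut_lo) | (y > cut_hi)          # keep only subjects in the trait tails
tail_beta = linregress(g[extreme], y[extreme]).slope

print(f"true beta 0.20, full-cohort estimate {full_beta:.2f}, "
      f"extreme-sample estimate {tail_beta:.2f} (biased upward)")
```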

  19. Efficacy of attention bias modification using threat and appetitive stimuli: a meta-analytic review.

    PubMed

    Beard, Courtney; Sawyer, Alice T; Hofmann, Stefan G

    2012-12-01

    Attention bias modification (ABM) protocols aim to modify attentional biases underlying many forms of pathology. Our objective was to conduct an effect size analysis of ABM across a wide range of samples and psychological problems. We conducted a literature search using PubMed, PsycInfo, and author searches to identify randomized studies that examined the effects of ABM on attention and subjective experiences. We identified 37 studies (41 experiments) totaling 2,135 participants who were randomized to training toward neutral, positive, threat, or appetitive stimuli or to a control condition. The effect size estimate for changes in attentional bias was large for the neutral versus threat comparisons (g=1.06), neutral versus appetitive (g=1.41), and neutral versus control comparisons (g=0.80), and small for positive versus control (g=0.24). The effects of ABM on attention bias were moderated by stimulus type (words vs. pictures) and sample characteristics (healthy vs. high symptomatology). Effect sizes of ABM on subjective experiences ranged from 0.03 to 0.60 for postchallenge outcomes, -0.31 to 0.51 for posttreatment, and were moderated by number of training sessions, stimulus type, and stimulus orientation (top/bottom vs. left/right). Fail-safe N calculations suggested that the effect size estimates were robust for the training effects on attentional biases, but not for the effect on subjective experiences. ABM studies using threat stimuli produced significant effects on attention bias across comparison conditions, whereas appetitive stimuli produced changes in attention only when comparing appetitive versus neutral conditions. ABM has a moderate and robust effect on attention bias when using threat stimuli. Further studies are needed to determine whether these effects are also robust when using appetitive stimuli and for affecting subjective experiences. Copyright © 2012. Published by Elsevier Ltd.

  20. Did hydrographic sampling capture global and regional deep ocean heat content trends accurately between 1990-2010?

    NASA Astrophysics Data System (ADS)

    Garry, Freya; McDonagh, Elaine; Blaker, Adam; Roberts, Chris; Desbruyères, Damien; King, Brian

    2017-04-01

    Estimates of heat content change in the deep oceans (below 2000 m) over the last thirty years are obtained from temperature measurements made by hydrographic survey ships. Cruises occupy the same tracks across an ocean basin approximately every 5+ years. Measurements may not be sufficiently frequent in time or space to allow accurate evaluation of total ocean heat content (OHC) and its rate of change. It is widely thought that additional deep ocean sampling will also aid understanding of the mechanisms for OHC change on annual to decadal timescales, including how OHC varies regionally under natural and anthropogenically forced climate change. Here a 0.25˚ ocean model is used to investigate the magnitude of uncertainties and biases that exist in estimates of deep ocean temperature change from hydrographic sections due to their infrequent timing and sparse spatial distribution during 1990 - 2010. Biases in the observational data may be due to lack of spatial coverage (not enough sections covering the basin), lack of data between occupations (typically 5-10 years apart) and due to occupations not closely spanning the time period of interest. Between 1990 - 2010, the modelled biases globally are comparatively small in the abyssal ocean below 3500 m although regionally certain biases in heat flux into the 4000 - 6000 m layer can be up to 0.05 Wm-2. Biases in the heat flux into the deep 2000 - 4000 m layer due to either temporal or spatial sampling uncertainties are typically much larger and can be over 0.1 Wm-2 across an ocean. Overall, 82% of the warming trend below 2000 m is captured by observational-style sampling in the model. However, at 2500 m (too deep for additional temperature information to be inferred from upper ocean Argo) less than two thirds of the magnitude of the global warming trend is obtained, and regionally large biases exist in the Atlantic, Southern and Indian Oceans, highlighting the need for widespread improved deep ocean temperature sampling. In addition to bias due to infrequent sampling, moving the timings of occupations by a few months generates relatively large uncertainty due to intra-annual variability in deep ocean model temperature, further strengthening the case for high temporal frequency observations in the deep ocean (as could be achieved using deep ocean autonomous float technologies). Biases due to different uncertainties can have opposing signs and differ in relative importance both regionally and with depth revealing the importance of reducing all uncertainties (both spatial and temporal) simultaneously in future deep ocean observing design.

  1. Effects of biasing on the galaxy power spectrum at large scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran Jimenez, Jose; Departamento de Fisica Teorica, Universidad Complutense de Madrid, 28040, Madrid; Durrer, Ruth

    2011-05-15

    In this paper we study the effect of biasing on the power spectrum at large scales. We show that even though nonlinear biasing does introduce a white noise contribution on large scales, the P(k) ∝ k^n behavior of the matter power spectrum on large scales may still be visible and above the white noise for about one decade. We show that the Kaiser biasing scheme, which leads to a linear bias of the correlation function on large scales, also generates a linear bias of the power spectrum on rather small scales. This is a consequence of the divergence on small scales of the pure Harrison-Zeldovich spectrum. However, biasing becomes k dependent if we damp the underlying power spectrum on small scales. We also discuss the effect of biasing on the baryon acoustic oscillations.
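
    The following toy calculation (illustrative only; the normalization, bias value, and white-noise level are arbitrary and not taken from the paper) shows how a linearly biased tracer with an added constant white-noise term can still display the large-scale P(k) ∝ k^n shape over roughly a decade in k.

```python
import numpy as np

k = np.logspace(-4, np.log10(0.02), 200)   # large-scale wavenumbers (toy units, up to ~k_eq)
P_m = 1e4 * k                              # toy Harrison-Zeldovich-like spectrum, P ∝ k^n with n = 1
b, P_white = 1.5, 45.0                     # assumed linear bias and constant white-noise term
P_g = b ** 2 * P_m + P_white               # biased tracer power spectrum

visible = b ** 2 * P_m > P_white           # where the k^n shape lies above the white noise
k_cross = k[visible].min()
print(f"k^n term exceeds the white noise for k > {k_cross:.1e}, i.e. over "
      f"~{np.log10(k.max() / k_cross):.1f} decade(s) of the large-scale range")
```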

  2. Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey

    NASA Technical Reports Server (NTRS)

    Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.

    1994-01-01

    We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc^-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h^-1 Mpc. The power spectrum has slope n approximately equal to -2.1 on small scales (lambda less than or equal to 25 h^-1 Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h^-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (LambdaCDM) model (OMEGA h = 0.24, lambda_0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h^-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma_8 (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both the Cosmic Background Explorer (COBE) anisotropy and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h^-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum. Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have M_lim greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).

  3. Toward a Comprehensive Understanding of Executive Cognitive Function in Implicit Racial Bias

    PubMed Central

    Ito, Tiffany A.; Friedman, Naomi P.; Bartholow, Bruce D.; Correll, Joshua; Loersch, Chris; Altamirano, Lee J.; Miyake, Akira

    2014-01-01

    Although performance on laboratory-based implicit bias tasks often is interpreted strictly in terms of the strength of automatic associations, recent evidence suggests that such tasks are influenced by higher-order cognitive control processes, so-called executive functions (EFs). However, extant work in this area has been limited by failure to account for the unity and diversity of EFs, focus on only a single measure of bias and/or EF, and relatively small sample sizes. The current study sought to comprehensively model the relation between individual differences in EFs and the expression of racial bias in three commonly used laboratory measures. Participants (N=485) completed a battery of EF tasks (session 1) and three racial bias tasks (session 2), along with numerous individual difference questionnaires. The main findings were as follows: (1) measures of implicit bias were only weakly intercorrelated; (2) EF and estimates of automatic processes both predicted implicit bias and also interacted, such that the relation between automatic processes and bias expression was reduced at higher levels of EF; (3) specific facets of EF were differentially associated with overall task performance and controlled processing estimates across different bias tasks; (4) EF did not moderate associations between implicit and explicit measures of bias; and (5) external, but not internal, motivation to control prejudice depended on EF to reduce bias expression. Findings are discussed in terms of the importance of global and specific EF abilities in determining expression of implicit racial bias. PMID:25603372

  4. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
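
    For readers wanting a concrete picture, here is a minimal numpy sketch of one ingredient of such an approach: a GEE-type fit under working independence with the Mancl-DeRouen bias-corrected empirical (sandwich) covariance. It is not the authors' method (their correction also accounts for correlation-parameter estimation and includes correlation-structure selection), and all data-generating values below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subj, n_time = 8, 4                                   # very small sample of subjects
subj = np.repeat(np.arange(n_subj), n_time)
x = np.tile(np.arange(n_time), n_subj)                  # time covariate
b_subj = rng.normal(0, 1, n_subj)                       # random intercepts induce within-subject correlation
y = 1.0 + 0.5 * x + b_subj[subj] + rng.normal(0, 1, len(x))

X = np.column_stack([np.ones_like(x), x]).astype(float)
beta = np.linalg.lstsq(X, y, rcond=None)[0]             # GEE estimate under working independence
B_inv = np.linalg.inv(X.T @ X)

M_raw = np.zeros((2, 2))
M_bc = np.zeros((2, 2))
for i in range(n_subj):
    Xi = X[subj == i]
    ei = (y - X @ beta)[subj == i]
    M_raw += Xi.T @ np.outer(ei, ei) @ Xi               # usual empirical "meat"
    Hi = Xi @ B_inv @ Xi.T                              # cluster leverage
    ei_adj = np.linalg.solve(np.eye(n_time) - Hi, ei)   # Mancl-DeRouen residual inflation
    M_bc += Xi.T @ np.outer(ei_adj, ei_adj) @ Xi

se_raw = np.sqrt(np.diag(B_inv @ M_raw @ B_inv))
se_bc = np.sqrt(np.diag(B_inv @ M_bc @ B_inv))
print(f"slope {beta[1]:.3f}, SE uncorrected {se_raw[1]:.3f}, SE bias-corrected {se_bc[1]:.3f}")
```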

  5. A minimalist approach to bias estimation for passive sensor measurements with targets of opportunity

    NASA Astrophysics Data System (ADS)

    Belfadel, Djedjiga; Osborne, Richard W.; Bar-Shalom, Yaakov

    2013-09-01

    In order to carry out data fusion, registration error correction is crucial in multisensor systems. This requires estimation of the sensor measurement biases. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. This paper provides a solution for bias estimation for the minimum number of passive sensors (two), when only targets of opportunity are available. The sensor measurements are assumed time-coincident (synchronous) and perfectly associated. Since these sensors provide only line of sight (LOS) measurements, the formation of a single composite Cartesian measurement obtained from fusing the LOS measurements from different sensors is needed to avoid the need for nonlinear filtering. We evaluate the Cramer-Rao Lower Bound (CRLB) on the covariance of the bias estimate, i.e., the quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with the target position estimation using the original biased measurements.

  6. Neon produced by solar cosmic rays in ordinary chondrites

    NASA Astrophysics Data System (ADS)

    Roth, Antoine S. G.; Trappitsch, Reto; Metzler, Knut; Hofmann, Beda A.; Leya, Ingo

    2017-06-01

    Solar-cosmic-ray-produced Ne (SCR-Ne), in the form of low cosmogenic 21Ne/22Ne ratios (21Ne/22Necos <0.8), is more likely to be found in rare meteorite classes, like Martian meteorites, than in ordinary chondrites. This may be the result of a sampling bias: SCR-Ne is better preserved in meteorites with small preatmospheric radii and these specimens are often only studied if they belong to unusual or rare classes. We measured He and Ne isotopic concentrations and nuclear tracks in 25 small unpaired ordinary chondrites from Oman. Most chondrites have been intensively heated during atmospheric entry as evidenced by the disturbed track records, the low 3He/21Ne ratios, the low 4He concentrations, and the high peak release temperatures. Concentration depth profiles indicate significant degassing; however, the Ne isotopes are mainly undisturbed. Remarkably, six chondrites have low 21Ne/22Necos in the range 0.711-0.805. Using a new physical model for the calculation of SCR production rates, we show that four of the chondrites contain up to 20% of SCR-Ne; they are analyzed in terms of preatmospheric sizes, cosmic ray exposure ages, mass ablation losses, and orbits. We conclude that SCR-Ne is preserved, regardless of the meteorite class, in specimens with small preatmospheric radii. Sampling bias explains the predominance of SCR-Ne in rare meteorites, although we cannot exclude that SCR-Ne is more common in Martian meteorites than it is in small ordinary chondrites.

  7. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.

  8. Unit bias. A new heuristic that helps explain the effect of portion size on food intake.

    PubMed

    Geier, Andrew B; Rozin, Paul; Doros, Gheorghe

    2006-06-01

    People seem to think that a unit of some entity (with certain constraints) is the appropriate and optimal amount. We refer to this heuristic as unit bias. We illustrate unit bias by demonstrating large effects of unit segmentation, a form of portion control, on food intake. Thus, people choose, and presumably eat, much greater weights of Tootsie Rolls and pretzels when offered a large as opposed to a small unit size (and given the option of taking as many units as they choose at no monetary cost). Additionally, they consume substantially more M&M's when the candies are offered with a large as opposed to a small spoon (again with no limits as to the number of spoonfuls to be taken). We propose that unit bias explains why small portion sizes are effective in controlling consumption; in some cases, people served small portions would simply eat additional portions if it were not for unit bias. We argue that unit bias is a general feature in human choice and discuss possible origins of this bias, including consumption norms.

  9. The effect of small-wave modulation on the electromagnetic bias

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto; Kim, Yunjin; Martin, Jan M.

    1992-01-01

    The effect of the modulation of small ocean waves by large waves on the physical mechanism of the EM bias is examined by conducting a numerical scattering experiment which does not assume the applicability of geometric optics. The modulation effect of the large waves on the small waves is modeled using the principle of conservation of wave action and includes the modulation of gravity-capillary waves. The frequency dependence and magnitude of the EM bias is examined for a simplified ocean spectral model as a function of wind speed. These calculations make it possible to assess the validity of previous assumptions made in the theory of the EM bias, with respect to both scattering and hydrodynamic effects. It is found that the geometric optics approximation is inadequate for predictions of the EM bias at typical radar altimeter frequencies, while the improved scattering calculations provide a frequency dependence of the EM bias which is in qualitative agreement with observation. For typical wind speeds, the EM bias contribution due to small-wave modulation is of the same order as that due to modulation by the nonlinearities of the large-scale waves.

  10. Potential sources of analytical bias and error in selected trace element data-quality analyses

    USGS Publications Warehouse

    Paul, Angela P.; Garbarino, John R.; Olsen, Lisa D.; Rosen, Michael R.; Mebane, Christopher A.; Struzeski, Tedmund M.

    2016-09-28

    Potential sources of analytical bias and error associated with laboratory analyses for selected trace elements where concentrations were greater in filtered samples than in paired unfiltered samples were evaluated by U.S. Geological Survey (USGS) Water Quality Specialists in collaboration with the USGS National Water Quality Laboratory (NWQL) and the Branch of Quality Systems (BQS). Causes for trace-element concentrations in filtered samples to exceed those in associated unfiltered samples have been attributed to variability in analytical measurements, analytical bias, sample contamination either in the field or laboratory, and (or) sample-matrix chemistry. These issues have not only been attributed to data generated by the USGS NWQL but have also been observed in data generated by other laboratories. This study continues the evaluation of potential analytical bias and error resulting from matrix chemistry and instrument variability by evaluating the performance of seven selected trace elements in paired filtered and unfiltered surface-water and groundwater samples collected from 23 sampling sites of varying chemistries from six States, matrix spike recoveries, and standard reference materials. Filtered and unfiltered samples have been routinely analyzed on separate inductively coupled plasma-mass spectrometry instruments. Unfiltered samples are treated with hydrochloric acid (HCl) during an in-bottle digestion procedure; filtered samples are not routinely treated with HCl as part of the laboratory analytical procedure. To evaluate the influence of HCl on different sample matrices, an aliquot of the filtered samples was treated with HCl. The addition of HCl did little to differentiate the analytical results of filtered samples treated with HCl from those left untreated; however, there was a small, but noticeable, decrease in the number of instances where a particular trace-element concentration was greater in a filtered sample than in the associated unfiltered sample for all trace elements except selenium. Accounting for the small dilution effect (2 percent) from the addition of HCl, as required for the in-bottle digestion procedure for unfiltered samples, may be one step toward decreasing the number of instances where trace-element concentrations are greater in filtered samples than in paired unfiltered samples. The laboratory analyses of arsenic, cadmium, lead, and zinc did not appear to be influenced by instrument biases. These trace elements showed similar results on both instruments used to analyze filtered and unfiltered samples. The results for aluminum and molybdenum tended to be higher on the instrument designated to analyze unfiltered samples; the results for selenium tended to be lower. The matrices used to prepare calibration standards were different for the two instruments. The instrument designated for the analysis of unfiltered samples was calibrated using standards prepared in a nitric:hydrochloric acid (HNO3:HCl) matrix. The instrument designated for the analysis of filtered samples was calibrated using standards prepared in a matrix acidified only with HNO3. Matrix chemistry may have influenced the responses of aluminum, molybdenum, and selenium on the two instruments.
The best analytical practice is to calibrate instruments using calibration standards prepared in matrices that reasonably match those of the samples being analyzed. Filtered and unfiltered samples were spiked over a range of trace-element concentrations from less than 1 to 58 times ambient concentrations. The greater the magnitude of the trace-element spike concentration relative to the ambient concentration, the greater the likelihood spike recoveries will be within data control guidelines (80–120 percent). Greater variability in spike recoveries occurred when trace elements were spiked at concentrations less than 10 times the ambient concentration. Spike recoveries that were considerably lower than 90 percent often were associated with spiked concentrations substantially lower than what was present in the ambient sample. Because the main purpose of spiking natural water samples with known quantities of a particular analyte is to assess possible matrix effects on analytical results, the results of this study stress the importance of spiking samples at concentrations that are reasonably close to what is expected but sufficiently high to exceed analytical variability. Generally, differences in spike recovery results between paired filtered and unfiltered samples were minimal when samples were analyzed on the same instrument. Analytical results for trace-element concentrations in ambient filtered and unfiltered samples greater than 10 and 40 μg/L, respectively, were within the data-quality objective for precision of ±25 percent. Ambient trace-element concentrations in filtered samples greater than the long-term method detection limits but less than 10 μg/L failed to meet the data-quality objective for precision for at least one trace element in about 54 percent of the samples. Similarly, trace-element concentrations in unfiltered samples greater than the long-term method detection limits but less than 40 μg/L failed to meet this data-quality objective for at least one trace-element analysis in about 58 percent of the samples. Although aluminum and zinc were particularly problematic, limited re-analyses of filtered and unfiltered samples appeared to improve otherwise failed analytical precision. The evaluation of analytical bias using standard reference materials indicates a slight low bias in results for arsenic, cadmium, selenium, and zinc. Aluminum and molybdenum show signs of high bias. There was no observed bias, as determined using the standard reference materials, during the analysis of lead.

  11. Rational Learning and Information Sampling: On the "Naivety" Assumption in Sampling Explanations of Judgment Biases

    ERIC Educational Resources Information Center

    Le Mens, Gael; Denrell, Jerker

    2011-01-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them.…

  12. An investigation of bias in a study of nuclear shipyard workers.

    PubMed

    Greenberg, E R; Rosner, B; Hennekens, C; Rinsky, R; Colton, T

    1985-02-01

    The authors examined discrepant findings between a 1978 proportional mortality study and a 1981 cohort study of workers at the Portsmouth, New Hampshire, Naval Shipyard to determine whether the healthy worker effect, selection bias, or measurement bias could explain why only the proportional mortality study found excess cancer deaths among nuclear workers. Lower mortality from noncancer causes in nuclear workers (the healthy worker effect) partly accounted for the observed elevated cancer proportional mortality. More important, however, was measurement bias which occurred in the proportional mortality study when nuclear workers who had not died of cancer were misclassified as not being nuclear workers based on information from their next of kin, thereby creating a spurious association. Although the proportional mortality study was based on a small sample of all deaths occurring in the cohort, selection bias did not contribute materially to the discrepant results for total cancer deaths. With regard to leukemia, misclassification of occupation in the proportional mortality study and disagreement about cause of death accounted for some of the reported excess deaths.

  13. Free energy landscapes of small peptides in an implicit solvent model determined by force-biased multicanonical molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Watanabe, Yukihisa S.; Kim, Jae Gil; Fukunishi, Yoshifumi; Nakamura, Haruki

    2004-12-01

    In order to investigate whether the implicit solvent (GB/SA) model could reproduce the free energy landscapes of peptides, the potentials of mean force (PMFs) of eight tripeptides were examined and compared with the PMFs of the explicit water model. The force-biased multicanonical molecular dynamics method was used for the enhanced conformational sampling. Consequently, the GB/SA model reproduced almost all the global and local minima in the PMFs observed with the explicit water model. However, the GB/SA model overestimated the frequencies of structures stabilized by intra-peptide hydrogen bonds.

  14. Exposure reduces negative bias in self-rated performance in public speaking fearful participants.

    PubMed

    Cheng, Joyce; Niles, Andrea N; Craske, Michelle G

    2017-03-01

    Individuals with public speaking anxiety (PSA) under-rate their performance compared to objective observers. The present study examined whether exposure reduces the discrepancy between self and observer performance ratings and improves observer-rated performance in individuals with PSA. PSA participants gave a speech in front of a small audience and rated their performance using a questionnaire before and after completing repeated exposures to public speaking. Non-anxious control participants gave a speech and completed the questionnaire one time only. Objective observers watched videos of the speeches and rated performance using the same questionnaire. PSA participants underrated their performance to a greater degree than did controls prior to exposure, but also performed significantly more poorly than did controls when rated objectively. Bias significantly decreased and objective-rated performance significantly increased following completion of exposure in PSA participants, and on one performance measure, anxious participants no longer showed a greater discrepancy between self and observer performance ratings compared to controls. The study employed a non-clinical student sample, so the results should be replicated in clinical anxiety samples. These findings indicate that exposure alone significantly reduces negative performance bias among PSA individuals, but additional exposure or additional interventions may be necessary to fully correct bias and performance deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Fine-scale population genetic structure and sex-biased dispersal in the smooth snake (Coronella austriaca) in southern England.

    PubMed

    Pernetta, A P; Allen, J A; Beebee, T J C; Reading, C J

    2011-09-01

    Human-induced alteration of natural habitats has the potential to impact on the genetic structuring of remnant populations at multiple spatial scales. Species from higher trophic levels, such as snakes, are expected to be particularly susceptible to land-use changes. We examined fine-scale population structure and looked for evidence of sex-biased dispersal in smooth snakes (Coronella austriaca), sampled from 10 heathland localities situated within a managed coniferous forest in Dorset, United Kingdom. Despite the limited distances between heathland areas (maximum <6 km), there was a small but significant structuring of populations based on eight microsatellite loci. This followed an isolation-by-distance model using both straight line and 'biological' distances between sampling sites, suggesting C. austriaca's low vagility as the causal factor, rather than closed canopy conifer forest exerting an effect as a barrier to dispersal. Within population comparisons of male and female snakes showed evidence for sex-biased dispersal, with three of four analyses finding significantly higher dispersal in males than in females. We suggest that the fine-scale spatial genetic structuring and sex-biased dispersal have important implications for the conservation of C. austriaca, and highlight the value of heathland areas within commercial conifer plantations with regards to their future management.

  16. Blinded and unblinded internal pilot study designs for clinical trials with count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-07-01

    Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, only very recently blinded sample size reestimation procedures for trials with count data were proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is in many practically relevant situations considerably smaller than the one of the unblinded procedures. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
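
    A hedged sketch of the blinded-reestimation idea for count data follows: the internal pilot supplies only a pooled (blinded) event rate, which is combined with the assumed rate ratio and plugged into a standard Poisson sample-size formula. The formula, rates, and pilot size are illustrative assumptions, not the authors' procedure or the trial data.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm_poisson(lam0, theta, t, alpha=0.05, power=0.9):
    """Patients per arm to detect rate ratio theta with follow-up t (log-rate-ratio test)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    lam1 = theta * lam0
    return int(np.ceil(z ** 2 * (1 / (lam0 * t) + 1 / (lam1 * t)) / np.log(theta) ** 2))

# Planning stage: assumed control rate 1.2 events/year, target rate ratio 0.7, t = 1 year.
theta_assumed, t = 0.7, 1.0
print("initial n/arm:", n_per_arm_poisson(1.2, theta_assumed, t))

# Blinded internal pilot: pooled counts observed without knowledge of treatment assignment.
rng = np.random.default_rng(11)
pilot_counts = np.concatenate([rng.poisson(0.9 * t, 60),
                               rng.poisson(0.9 * theta_assumed * t, 60)])
lam_pooled = pilot_counts.mean() / t                      # blinded overall rate estimate
lam0_hat = 2 * lam_pooled / (1 + theta_assumed)           # back out the control rate under the assumed effect
print("reestimated n/arm:", n_per_arm_poisson(lam0_hat, theta_assumed, t))
```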

  17. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    PubMed

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
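
    The best-performing correction identified above, systematic sampling of records, is essentially spatial filtering: retain at most one occurrence per grid cell before fitting the model. A minimal sketch under assumed coordinates and grid resolution (all values invented) is shown below; the MAXENT fitting step itself is omitted.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
# Hypothetical occurrences, heavily oversampled near "roads" at x ~ 0.2 and x ~ 0.8.
x = np.concatenate([rng.normal(0.2, 0.03, 400), rng.normal(0.8, 0.03, 400), rng.random(200)])
y = rng.random(len(x))
occ = pd.DataFrame({"x": np.clip(x, 0, 1), "y": y})

cell = 0.05                                               # grid resolution in map units
occ["cell_id"] = (occ["x"] // cell).astype(int).astype(str) + "_" + \
                 (occ["y"] // cell).astype(int).astype(str)
thinned = occ.groupby("cell_id", as_index=False).first()  # keep one record per cell

print(f"{len(occ)} raw records -> {len(thinned)} after systematic filtering")
```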

  18. Mapping Species Distributions with MAXENT Using a Geographically Biased Sample of Presence Data: A Performance Assessment of Methods for Correcting Sampling Bias

    PubMed Central

    Fourcade, Yoan; Engler, Jan O.; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases. PMID:24818607

  19. Assessing total nitrogen in surface-water samples--precision and bias of analytical and computational methods

    USGS Publications Warehouse

    Rus, David L.; Patton, Charles J.; Mueller, David K.; Crawford, Charles G.

    2013-01-01

    The characterization of total-nitrogen (TN) concentrations is an important component of many surface-water-quality programs. However, three widely used methods for the determination of total nitrogen—(1) derived from the alkaline-persulfate digestion of whole-water samples (TN-A); (2) calculated as the sum of total Kjeldahl nitrogen and dissolved nitrate plus nitrite (TN-K); and (3) calculated as the sum of dissolved nitrogen and particulate nitrogen (TN-C)—all include inherent limitations. A digestion process is intended to convert multiple species of nitrogen that are present in the sample into one measureable species, but this process may introduce bias. TN-A results can be negatively biased in the presence of suspended sediment, and TN-K data can be positively biased in the presence of elevated nitrate because some nitrate is reduced to ammonia and is therefore counted twice in the computation of total nitrogen. Furthermore, TN-C may not be subject to bias but is comparatively imprecise. In this study, the effects of suspended-sediment and nitrate concentrations on the performance of these TN methods were assessed using synthetic samples developed in a laboratory as well as a series of stream samples. A 2007 laboratory experiment measured TN-A and TN-K in nutrient-fortified solutions that had been mixed with varying amounts of sediment-reference materials. This experiment identified a connection between suspended sediment and negative bias in TN-A and detected positive bias in TN-K in the presence of elevated nitrate. A 2009–10 synoptic-field study used samples from 77 stream-sampling sites to confirm that these biases were present in the field samples and evaluated the precision and bias of TN methods. The precision of TN-C and TN-K depended on the precision and relative amounts of the TN-component species used in their respective TN computations. Particulate nitrogen had an average variability (as determined by the relative standard deviation) of 13 percent. However, because particulate nitrogen constituted only 14 percent, on average, of TN-C, the precision of the TN-C method approached that of the method for dissolved nitrogen (2.3 percent). On the other hand, total Kjeldahl nitrogen (having a variability of 7.6 percent) constituted an average of 40 percent of TN-K, suggesting that the reduced precision of the Kjeldahl digestion may affect precision of the TN-K estimates. For most samples, the precision of TN computed as TN-C would be better (lower variability) than the precision of TN-K. In general, TN-A precision (having a variability of 2.1 percent) was superior to TN-C and TN-K methods. The laboratory experiment indicated that negative bias in TN-A was present across the entire range of sediment concentration and increased as sediment concentration increased. This suggested that reagent limitation was not the predominant cause of observed bias in TN-A. Furthermore, analyses of particulate nitrogen present in digest residues provided an almost complete accounting for the nitrogen that was underestimated by alkaline-persulfate digestion. This experiment established that, for the reference materials at least, negative bias in TN-A was caused primarily by the sequestration of some particulate nitrogen that was refractory to the digestion process. TN-K biases varied between positive and negative values in the laboratory experiment. 
Positive bias in TN-K is likely the result of the unintended reduction of a small and variable amount of nitrate to ammonia during the Kjeldahl digestion process. Negative TN-K bias may be the result of the sequestration of a portion of particulate nitrogen during the digestion process. Negative bias in TN-A was present across the entire range of suspended-sediment concentration (1 to 14,700 milligrams per liter [mg/L]) in the synoptic-field study, with relative bias being nearly as great at sediment concentrations below 10 mg/L (median of -3.5 percent) as that observed at sediment concentrations up to 750 mg/L (median of -4.4 percent). This lent support to the laboratory-experiment finding that some particulate nitrogen is sequestered during the digestion process, and demonstrated that negative TN-A bias was present in samples with very low suspended-sediment concentrations. At sediment concentrations above 750 mg/L, the negative TN-A bias became more likely and larger (median of -13.2 percent), suggesting a secondary mechanism of bias, such as reagent limitation. From a geospatial perspective, trends in TN-A bias were not explained by selected basin characteristics. Though variable, TN-K bias generally was positive in the synoptic-field study (median of 3.1 percent), probably as a result of the reduction of nitrate. Three alternative approaches for assessing TN in surface water were evaluated for their impacts on existing and future sampling programs. Replacing TN-A with TN-C would remove the bias from subsequent data, but this approach also would introduce discontinuity in historical records. Replacing TN-K with TN-C would lead to the removal of positive bias in TN-K in the presence of elevated nitrate. However, in addition to the issues that may arise from a discontinuity in the data record, this approach may not be applicable to regulatory programs that require the use of total Kjeldahl nitrogen for stream assessment. By adding TN-C to existing TN-A or TN-K analyses, historical-data continuity would be preserved and the transitional period could be used to minimize the impact of bias on data analyses. This approach, however, imposes the greatest burdens on field operations and in terms of analytical costs. The variation in these impacts on different sampling programs will challenge U.S. Geological Survey scientists attempting to establish uniform standards for TN sample collection and analytical determinations.
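
    The computational TN methods above are simple sums of measured species, so the precision of each sum follows from the variability and relative share of its components. A minimal sketch of that error propagation, using hypothetical concentrations together with the component RSDs quoted above (illustrative values, not the study's data):

    ```python
    import numpy as np

    # Hypothetical component concentrations (mg/L as N) and relative standard
    # deviations (RSD, as fractions); values are illustrative only.
    dn,   rsd_dn   = 1.20, 0.023   # dissolved nitrogen
    pn,   rsd_pn   = 0.20, 0.13    # particulate nitrogen
    tkn,  rsd_tkn  = 0.60, 0.076   # total Kjeldahl nitrogen
    no23, rsd_no23 = 0.80, 0.023   # dissolved nitrate plus nitrite

    # Computational total-nitrogen estimates
    tn_c = dn + pn      # TN-C = dissolved N + particulate N
    tn_k = tkn + no23   # TN-K = total Kjeldahl N + nitrate plus nitrite

    def rsd_of_sum(parts):
        """RSD of a sum of independent components given (value, RSD) pairs."""
        total = sum(c for c, _ in parts)
        var = sum((c * r) ** 2 for c, r in parts)
        return np.sqrt(var) / total

    print(f"TN-C = {tn_c:.2f} mg/L, RSD about {100 * rsd_of_sum([(dn, rsd_dn), (pn, rsd_pn)]):.1f}%")
    print(f"TN-K = {tn_k:.2f} mg/L, RSD about {100 * rsd_of_sum([(tkn, rsd_tkn), (no23, rsd_no23)]):.1f}%")
    ```

    With these illustrative shares, the computed TN-C comes out more precise than TN-K, mirroring the pattern reported above.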

  20. Weighing galaxy clusters with gas. II. On the origin of hydrostatic mass bias in ΛCDM galaxy clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Kaylea; Nagai, Daisuke; Yu, Liang

    2014-02-20

    The use of galaxy clusters as cosmological probes hinges on our ability to measure their masses accurately and with high precision. Hydrostatic mass is one of the most common methods for estimating the masses of individual galaxy clusters, which suffer from biases due to departures from hydrostatic equilibrium. Using a large, mass-limited sample of massive galaxy clusters from a high-resolution hydrodynamical cosmological simulation, in this work we show that in addition to turbulent and bulk gas velocities, acceleration of gas introduces biases in the hydrostatic mass estimate of galaxy clusters. In unrelaxed clusters, the acceleration bias is comparable to the bias due to non-thermal pressure associated with merger-induced turbulent and bulk gas motions. In relaxed clusters, the mean mass bias due to acceleration is small (≲ 3%), but the scatter in the mass bias can be reduced by accounting for gas acceleration. Additionally, this acceleration bias is greater in the outskirts of higher redshift clusters where mergers are more frequent and clusters are accreting more rapidly. Since gas acceleration cannot be observed directly, it introduces an irreducible bias for hydrostatic mass estimates. This acceleration bias places limits on how well we can recover cluster masses from future X-ray and microwave observations. We discuss implications for cluster mass estimates based on X-ray, Sunyaev-Zel'dovich effect, and gravitational lensing observations and their impact on cluster cosmology.

  1. Weighing Galaxy Clusters with Gas. II. On the Origin of Hydrostatic Mass Bias in ΛCDM Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Nelson, Kaylea; Lau, Erwin T.; Nagai, Daisuke; Rudd, Douglas H.; Yu, Liang

    2014-02-01

    The use of galaxy clusters as cosmological probes hinges on our ability to measure their masses accurately and with high precision. Hydrostatic mass is one of the most common methods for estimating the masses of individual galaxy clusters, which suffer from biases due to departures from hydrostatic equilibrium. Using a large, mass-limited sample of massive galaxy clusters from a high-resolution hydrodynamical cosmological simulation, in this work we show that in addition to turbulent and bulk gas velocities, acceleration of gas introduces biases in the hydrostatic mass estimate of galaxy clusters. In unrelaxed clusters, the acceleration bias is comparable to the bias due to non-thermal pressure associated with merger-induced turbulent and bulk gas motions. In relaxed clusters, the mean mass bias due to acceleration is small (≲ 3%), but the scatter in the mass bias can be reduced by accounting for gas acceleration. Additionally, this acceleration bias is greater in the outskirts of higher redshift clusters where mergers are more frequent and clusters are accreting more rapidly. Since gas acceleration cannot be observed directly, it introduces an irreducible bias for hydrostatic mass estimates. This acceleration bias places limits on how well we can recover cluster masses from future X-ray and microwave observations. We discuss implications for cluster mass estimates based on X-ray, Sunyaev-Zel'dovich effect, and gravitational lensing observations and their impact on cluster cosmology.
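
    For context, the standard hydrostatic estimator assumes that the thermal pressure gradient alone balances gravity; the biases discussed above enter because the gas momentum equation contains additional terms. A schematic form, in assumed notation (not taken from the paper):

    ```latex
    % Standard hydrostatic mass estimate from the thermal pressure profile:
    M_{\mathrm{HSE}}(<r) \;=\; -\,\frac{r^{2}}{G\,\rho_{\mathrm{gas}}(r)}\,\frac{dP_{\mathrm{th}}}{dr}

    % Schematically, the true enclosed mass also carries contributions from
    % non-thermal pressure (turbulent and bulk motions) and from the radial
    % gas acceleration a_r:
    M_{\mathrm{true}}(<r) \;\simeq\; M_{\mathrm{HSE}}(<r)
      \;-\; \frac{r^{2}}{G\,\rho_{\mathrm{gas}}(r)}\,\frac{dP_{\mathrm{nt}}}{dr}
      \;-\; \frac{r^{2}}{G}\,a_{r}(r)
    ```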

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okumura, Teppei; Seljak, Uroš; Desjacques, Vincent, E-mail: teppei@ewha.ac.kr, E-mail: useljak@berkeley.edu, E-mail: dvince@physik.uzh.ch

    It was recently shown that the power spectrum in redshift space can be written as a sum of cross-power spectra between number weighted velocity moments, of which the lowest are density and momentum density. We investigate numerically the properties of these power spectra for simulated galaxies and dark matter halos and compare them to the dark matter power spectra, generalizing the concept of the bias in density-density power spectra. Because all of the quantities are number weighted this approach is well defined even for sparse systems such as massive halos. This contrasts with the previous approaches to RSD where velocity correlations have been explored, but the velocity field is a poorly defined concept for sparse systems. We find that the number density weighting leads to a strong scale dependence of the bias terms for momentum density auto-correlation and cross-correlation with density. This trend becomes more significant for the more biased halos and leads to an enhancement of RSD power relative to the linear theory. Fingers-of-god effects, which in this formalism come from the correlations of the higher order moments beyond the momentum density, lead to smoothing of the power spectrum and can reduce this enhancement of power from the scale dependent bias, but are relatively small for halos with no small scale velocity dispersion. In comparison, for a more realistic galaxy sample with satellites the small scale velocity dispersion generated by satellite motions inside the halos leads to a larger power suppression on small scales, but this depends on the satellite fraction and on the details of how the satellites are distributed inside the halo. We investigate several statistics such as the two-dimensional power spectrum P(k,μ), where μ is the cosine of the angle between the Fourier mode and the line of sight, its multipole moments, its powers of μ², and configuration space statistics. Overall we find that the nonlinear effects in realistic galaxy samples such as luminous red galaxies affect the redshift space clustering on very large scales: for example, the quadrupole moment is affected by 10% for k < 0.1 h Mpc⁻¹, which means that these effects need to be understood if we want to extract cosmological information from the redshift space distortions.

  3. Toward a comprehensive understanding of executive cognitive function in implicit racial bias.

    PubMed

    Ito, Tiffany A; Friedman, Naomi P; Bartholow, Bruce D; Correll, Joshua; Loersch, Chris; Altamirano, Lee J; Miyake, Akira

    2015-02-01

    Although performance on laboratory-based implicit bias tasks often is interpreted strictly in terms of the strength of automatic associations, recent evidence suggests that such tasks are influenced by higher-order cognitive control processes, so-called executive functions (EFs). However, extant work in this area has been limited by failure to account for the unity and diversity of EFs, focus on only a single measure of bias and/or EF, and relatively small sample sizes. The current study sought to comprehensively model the relation between individual differences in EFs and the expression of racial bias in 3 commonly used laboratory measures. Participants (N = 485) completed a battery of EF tasks (Session 1) and 3 racial bias tasks (Session 2), along with numerous individual difference questionnaires. The main findings were as follows: (a) measures of implicit bias were only weakly intercorrelated; (b) EF and estimates of automatic processes both predicted implicit bias and also interacted, such that the relation between automatic processes and bias expression was reduced at higher levels of EF; (c) specific facets of EF were differentially associated with overall task performance and controlled processing estimates across different bias tasks; (d) EF did not moderate associations between implicit and explicit measures of bias; and (e) external, but not internal, motivation to control prejudice depended on EF to reduce bias expression. Findings are discussed in terms of the importance of global and specific EF abilities in determining expression of implicit racial bias. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  4. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    PubMed

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
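
    A minimal sketch of the kinds of estimates compared above, applied to a hypothetical right-skewed sample of 27 values (the data and the use of SciPy's Box-Cox utilities are assumptions made for illustration; the study's robust method is not reproduced here):

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(1)
    # Hypothetical small reference sample (n = 27) of plasma creatinine (umol/L),
    # drawn from a right-skewed distribution purely for illustration.
    x = rng.lognormal(mean=4.4, sigma=0.25, size=27)

    # 1) Nonparametric limits: 2.5th and 97.5th sample percentiles.
    np_lo, np_hi = np.percentile(x, [2.5, 97.5])

    # 2) Parametric limits on native values: mean +/- 2 SD (assumes normality).
    m, s = x.mean(), x.std(ddof=1)
    par_lo, par_hi = m - 2 * s, m + 2 * s

    # 3) Parametric limits after Box-Cox transformation, back-transformed.
    y, lam = stats.boxcox(x)
    my, sy = y.mean(), y.std(ddof=1)
    bc_lo, bc_hi = inv_boxcox(np.array([my - 2 * sy, my + 2 * sy]), lam)

    for name, (lo, hi) in {"nonparametric percentiles": (np_lo, np_hi),
                           "mean +/- 2 SD (native)":    (par_lo, par_hi),
                           "mean +/- 2 SD (Box-Cox)":   (bc_lo, bc_hi)}.items():
        print(f"{name:>26s}: {lo:6.1f} to {hi:6.1f}")
    ```

    Re-running the sketch with different random seeds illustrates the study's point directly: the estimated limits vary widely from one small sample to the next.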

  5. Evaluation of quality-control data collected by the U.S. Geological Survey for routine water-quality activities at the Idaho National Laboratory and vicinity, southeastern Idaho, 2002-08

    USGS Publications Warehouse

    Rattray, Gordon W.

    2014-01-01

    Quality-control (QC) samples were collected from 2002 through 2008 by the U.S. Geological Survey, in cooperation with the U.S. Department of Energy, to ensure data robustness by documenting the variability and bias of water-quality data collected at surface-water and groundwater sites at and near the Idaho National Laboratory. QC samples consisted of 139 replicates and 22 blanks (approximately 11 percent of the number of environmental samples collected). Measurements from replicates were used to estimate variability (from field and laboratory procedures and sample heterogeneity), as reproducibility and reliability, of water-quality measurements of radiochemical, inorganic, and organic constituents. Measurements from blanks were used to estimate the potential contamination bias of selected radiochemical and inorganic constituents in water-quality samples, with an emphasis on identifying any cross contamination of samples collected with portable sampling equipment. The reproducibility of water-quality measurements was estimated with calculations of normalized absolute difference for radiochemical constituents and relative standard deviation (RSD) for inorganic and organic constituents. The reliability of water-quality measurements was estimated with pooled RSDs for all constituents. Reproducibility was acceptable for all constituents except dissolved aluminum and total organic carbon. Pooled RSDs were equal to or less than 14 percent for all constituents except for total organic carbon, which had pooled RSDs of 70 percent for the low concentration range and 4.4 percent for the high concentration range. Source-solution and equipment blanks were measured for concentrations of tritium, strontium-90, cesium-137, sodium, chloride, sulfate, and dissolved chromium. Field blanks were measured for the concentration of iodide. No detectable concentrations were measured from the blanks except for strontium-90 in one source solution and one equipment blank collected in September and October 2004, respectively. The detectable concentrations of strontium-90 in the blanks probably were from a small source of strontium-90 contamination or large measurement variability, or both. Order statistics and the binomial probability distribution were used to estimate the magnitude and extent of any potential contamination bias of tritium, strontium-90, cesium-137, sodium, chloride, sulfate, dissolved chromium, and iodide in water-quality samples. These statistical methods indicated that, with (1) 87 percent confidence, contamination bias of cesium-137 and sodium in 60 percent of water-quality samples was less than the minimum detectable concentration or reporting level; (2) 92‒94 percent confidence, contamination bias of tritium, strontium-90, chloride, sulfate, and dissolved chromium in 70 percent of water-quality samples was less than the minimum detectable concentration or reporting level; and (3) 75 percent confidence, contamination bias of iodide in 50 percent of water-quality samples was less than the reporting level for iodide. These results support the conclusion that contamination bias of water-quality samples from sample processing, storage, shipping, and analysis was insignificant and that cross-contamination of perched groundwater samples collected with bailers during 2002–08 was insignificant.
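
    A minimal sketch of the two precision statistics used here for inorganic and organic constituents, the per-pair relative standard deviation and the pooled RSD (the replicate pairs are hypothetical, and the root-mean-square pooling shown is one common convention among several):

    ```python
    import numpy as np

    # Hypothetical replicate pairs (concentrations in ug/L) for one constituent;
    # each row is (environmental sample, sequential replicate).
    pairs = np.array([
        [10.2, 10.5],
        [ 3.1,  3.0],
        [25.4, 24.6],
        [ 7.8,  8.1],
    ])

    means = pairs.mean(axis=1)
    # Standard deviation implied by a duplicate pair: |x1 - x2| / sqrt(2).
    sds = np.abs(pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)

    rsd = sds / means                        # per-pair relative standard deviation
    pooled_rsd = np.sqrt(np.mean(rsd ** 2))  # root-mean-square pooling of the RSDs

    print("per-pair RSD (%):", np.round(100 * rsd, 1))
    print(f"pooled RSD: {100 * pooled_rsd:.1f}%")
    ```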

  6. The Effect of Small Sample Size on Two-Level Model Estimates: A Review and Illustration

    ERIC Educational Resources Information Center

    McNeish, Daniel M.; Stapleton, Laura M.

    2016-01-01

    Multilevel models are an increasingly popular method to analyze data that originate from a clustered or hierarchical structure. To effectively utilize multilevel models, one must have an adequately large number of clusters; otherwise, some model parameters will be estimated with bias. The goals for this paper are to (1) raise awareness of the…

  7. Entropy of hydrological systems under small samples: Uncertainty and variability

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Wang, Yuankun; Wu, Jichun; Singh, Vijay P.; Zeng, Xiankui; Wang, Lachun; Chen, Yuanfang; Chen, Xi; Zhang, Liyuan; Gu, Shenghua

    2016-01-01

    Entropy theory has been increasingly applied in hydrology in both descriptive and inferential ways. However, little attention has been given to the small-sample condition widespread in hydrological practice, where hydrological measurements are limited or even nonexistent. Accordingly, entropy estimated under this condition may incur considerable bias. In this study, the small-sample condition is considered and two innovative entropy estimators, the Chao-Shen (CS) estimator and the James-Stein-type shrinkage (JSS) estimator, are introduced. Simulation tests conducted with distributions commonly used in hydrology identify the JSS estimator as the best performing. Then, multi-scale moving entropy-based hydrological analyses (MM-EHA) are applied to characterize the changing patterns of uncertainty in streamflow data collected from the Yangtze River and the Yellow River, China. For further investigation into the intrinsic properties of entropy applied in hydrological uncertainty analyses, correlations of entropy and other statistics at different time-scales are also calculated, showing the connection between the concepts of uncertainty and variability.
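
    A sketch of the shrinkage idea behind the JSS estimator, following the shrinkage-toward-uniform form described by Hausser and Strimmer; the streamflow values and the binning below are invented for illustration:

    ```python
    import numpy as np

    def js_shrinkage_entropy(counts):
        """James-Stein-type shrinkage entropy (in nats) from bin counts.

        Cell frequencies are shrunk toward the uniform target 1/p with a
        data-driven shrinkage intensity (Hausser & Strimmer, 2009)."""
        counts = np.asarray(counts, dtype=float)
        n, p = counts.sum(), counts.size
        theta_ml = counts / n                 # maximum-likelihood frequencies
        target = np.full(p, 1.0 / p)          # shrinkage target: uniform
        lam = (1.0 - np.sum(theta_ml ** 2)) / (
            (n - 1.0) * np.sum((target - theta_ml) ** 2))
        lam = np.clip(lam, 0.0, 1.0)
        theta = lam * target + (1.0 - lam) * theta_ml
        theta = theta[theta > 0]
        return -np.sum(theta * np.log(theta))

    # Example: entropy of a short (3-year) hypothetical monthly streamflow record.
    rng = np.random.default_rng(0)
    flow = rng.gamma(shape=2.0, scale=50.0, size=36)
    counts, _ = np.histogram(flow, bins=10)
    print(f"JSS entropy estimate: {js_shrinkage_entropy(counts):.3f} nats")
    ```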

  8. Testing the effect of the rock record on diversity: a multidisciplinary approach to elucidating the generic richness of sauropodomorph dinosaurs through time.

    PubMed

    Mannion, Philip D; Upchurch, Paul; Carrano, Matthew T; Barrett, Paul M

    2011-02-01

    The accurate reconstruction of palaeobiodiversity patterns is central to a detailed understanding of the macroevolutionary history of a group of organisms. However, there is increasing evidence that diversity patterns observed directly from the fossil record are strongly influenced by fluctuations in the quality of our sampling of the rock record; thus, any patterns we see may reflect sampling biases, rather than genuine biological signals. Previous dinosaur diversity studies have suggested that fluctuations in sauropodomorph palaeobiodiversity reflect genuine biological signals, in comparison to theropods and ornithischians whose diversity seems to be largely controlled by the rock record. Most previous diversity analyses that have attempted to take into account the effects of sampling biases have used only a single method or proxy: here we use a number of techniques in order to elucidate diversity. A global database of all known sauropodomorph body fossil occurrences (2024) was constructed. A taxic diversity curve for all valid sauropodomorph genera was extracted from this database and compared statistically with several sampling proxies (rock outcrop area and dinosaur-bearing formations and collections), each of which captures a different aspect of fossil record sampling. Phylogenetic diversity estimates, residuals and sample-based rarefaction (including the first attempt to capture 'cryptic' diversity in dinosaurs) were implemented to investigate further the effects of sampling. After 'removal' of biases, sauropodomorph diversity appears to be genuinely high in the Norian, Pliensbachian-Toarcian, Bathonian-Callovian and Kimmeridgian-Tithonian (with a small peak in the Aptian), whereas low diversity levels are recorded for the Oxfordian and Berriasian-Barremian, with the Jurassic/Cretaceous boundary seemingly representing a real diversity trough. Observed diversity in the remaining Triassic-Jurassic stages appears to be largely driven by sampling effort. Late Cretaceous diversity is difficult to elucidate and it is possible that this interval remains relatively under-sampled. Despite its distortion by sampling biases, much of sauropodomorph palaeobiodiversity can be interpreted as a reflection of genuine biological signals, and fluctuations in sea level may account for some of these diversity patterns. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.

  9. Dynamic Time Expansion and Compression Using Nonlinear Waveguides

    DOEpatents

    Findikoglu, Alp T.; Hahn, Sangkoo F.; Jia, Quanxi

    2004-06-22

    Dynamic time expansion or compression of a small-amplitude input signal generated with an initial scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source having a bias signal amplitude that is large relative to the input signal to vary the refractive index and concomitant speed of propagation of the nonlinear waveguide and an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal with the input signal alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide to expand or contract the initial time scale of the small-amplitude input signal.

  10. Dynamic time expansion and compression using nonlinear waveguides

    DOEpatents

    Findikoglu, Alp T [Los Alamos, NM; Hahn, Sangkoo F [Los Alamos, NM; Jia, Quanxi [Los Alamos, NM

    2004-06-22

    Dynamic time expansion or compression of a small-amplitude input signal generated with an initial scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source having a bias signal amplitude that is large relative to the input signal to vary the refractive index and concomitant speed of propagation of the nonlinear waveguide and an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal with the input signal alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide to expand or contract the initial time scale of the small-amplitude input signal.

  11. Inadequacy of internal covariance estimation for super-sample covariance

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Kunz, Martin

    2017-08-01

    We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been noted previously in the literature: it is effectively a rescaling of the covariance of the subsample area. However, we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts, which allows the bias of each covariance estimator to be predicted exactly. We find significant effects for a small survey area or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected; biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on the data itself or a single simulation, leaving analytical prediction and simulation suites as possible SSC predictors.
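
    As a concrete picture of the internal estimators being criticized, a delete-one jackknife covariance of cluster counts over survey patches can be sketched as follows (independent Poisson patches are an assumption made for illustration). By construction such an estimator only sees patch-to-patch scatter inside the survey, which is why it misses the super-sample term:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical cluster counts in 3 redshift bins for 32 equal-area subregions
    # (patches) of a survey; independent Poisson draws, for illustration only.
    n_patch, mean_counts = 32, np.array([40.0, 25.0, 10.0])
    counts = rng.poisson(mean_counts, size=(n_patch, len(mean_counts)))

    # Delete-one jackknife: each replicate is the per-patch mean with patch i removed.
    replicates = np.array([
        counts[np.arange(n_patch) != i].mean(axis=0) for i in range(n_patch)
    ])
    r_mean = replicates.mean(axis=0)
    cov_jk = (n_patch - 1) / n_patch * (replicates - r_mean).T @ (replicates - r_mean)

    # Covariance of the full-survey counts follows by rescaling the per-patch mean
    # (the variance of the total scales as n_patch**2).
    cov_total = cov_jk * n_patch ** 2
    print(np.round(cov_total, 1))
    ```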

  12. An examination of the RCMAS-2 scores across gender, ethnic background, and age in a large Asian school sample.

    PubMed

    Ang, Rebecca P; Lowe, Patricia A; Yusof, Noradlin

    2011-12-01

    The present study investigated the factor structure, reliability, convergent and discriminant validity, and U.S. norms of the Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2; C. R. Reynolds & B. O. Richmond, 2008a) scores in a Singapore sample of 1,618 school-age children and adolescents. Although there were small statistically significant differences in the average RCMAS-2 T scores found across various demographic groupings, on the whole, the U.S. norms appear adequate for use in the Asian Singapore sample. Results from item bias analyses suggested that biased items detected had small effects and were counterbalanced across gender and ethnicity, and hence, their relative impact on test score variation appears to be minimal. Results of factor analyses on the RCMAS-2 scores supported the presence of a large general anxiety factor, the Total Anxiety factor, and the 5-factor structure found in U.S. samples was replicated. Both the large general anxiety factor and the 5-factor solution were invariant across gender and ethnic background. Internal consistency estimates ranged from adequate to good, and 2-week test-retest reliability estimates were comparable to previous studies. Evidence providing support for convergent and discriminant validity of the RCMAS-2 scores was also found. Taken together, findings provide additional cross-cultural evidence of the appropriateness and usefulness of the RCMAS-2 as a measure of anxiety in Asian Singaporean school-age children and adolescents.

  13. The Discovery of Single-Nucleotide Polymorphisms—and Inferences about Human Demographic History

    PubMed Central

    Wakeley, John; Nielsen, Rasmus; Liu-Cordero, Shau Neen; Ardlie, Kristin

    2001-01-01

    A method of historical inference that accounts for ascertainment bias is developed and applied to single-nucleotide polymorphism (SNP) data in humans. The data consist of 84 short fragments of the genome that were selected, from three recent SNP surveys, to contain at least two polymorphisms in their respective ascertainment samples and that were then fully resequenced in 47 globally distributed individuals. Ascertainment bias is the deviation, from what would be observed in a random sample, caused either by discovery of polymorphisms in small samples or by locus selection based on levels or patterns of polymorphism. The three SNP surveys from which the present data were derived differ both in their protocols for ascertainment and in the size of the samples used for discovery. We implemented a Monte Carlo maximum-likelihood method to fit a subdivided-population model that includes a possible change in effective size at some time in the past. Incorrectly assuming that ascertainment bias does not exist causes errors in inference, affecting both estimates of migration rates and historical changes in size. Migration rates are overestimated when ascertainment bias is ignored. However, the direction of error in inferences about changes in effective population size (whether the population is inferred to be shrinking or growing) depends on whether either the numbers of SNPs per fragment or the SNP-allele frequencies are analyzed. We use the abbreviation “SDL,” for “SNP-discovered locus,” in recognition of the genomic-discovery context of SNPs. When ascertainment bias is modeled fully, both the number of SNPs per SDL and their allele frequencies support a scenario of growth in effective size in the context of a subdivided population. If subdivision is ignored, however, the hypothesis of constant effective population size cannot be rejected. An important conclusion of this work is that, in demographic or other studies, SNP data are useful only to the extent that their ascertainment can be modeled. PMID:11704929

  14. The topology of large-scale structure. V - Two-dimensional topology of sky maps

    NASA Astrophysics Data System (ADS)

    Gott, J. R., III; Mao, Shude; Park, Changbom; Lahav, Ofer

    1992-01-01

    A 2D algorithm is applied to observed sky maps and numerical simulations. It is found that when topology is studied on smoothing scales larger than the correlation length, the topology is approximately in agreement with the random-phase formula for the 2D genus-threshold density relation, G2(ν) ∝ ν exp(−ν²/2). Some samples show small 'meatball shifts' similar to those seen in corresponding 3D observational samples and similar to those produced by biasing in cold dark matter simulations. The observational results are thus consistent with the standard model in which the structure in the universe today has grown from small fluctuations caused by random quantum noise in the early universe.

  15. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    NASA Astrophysics Data System (ADS)

    Davis, C.; Rozo, E.; Roodman, A.; Alarcon, A.; Cawthon, R.; Gatti, M.; Lin, H.; Miquel, R.; Rykoff, E. S.; Troxel, M. A.; Vielzeuf, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Drlica-Wagner, A.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jeltema, T.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.

    2018-06-01

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogues with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can, in principle, control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a programme to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.

  16. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    DOE PAGES

    Davis, C.; Rozo, E.; Roodman, A.; ...

    2018-03-26

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ∼ ±0.01. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Here, our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.

  17. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, C.; Rozo, E.; Roodman, A.

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ∼ ±0.01. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Here, our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.

  18. Distributed force probe bending model of critical dimension atomic force microscopy bias

    NASA Astrophysics Data System (ADS)

    Ukraintsev, Vladimir A.; Orji, Ndubuisi G.; Vorburger, Theodore V.; Dixson, Ronald G.; Fu, Joseph; Silver, Rick M.

    2013-04-01

    Critical dimension atomic force microscopy (CD-AFM) is a widely used reference metrology technique. To characterize modern semiconductor devices, small and flexible probes, often 15 to 20 nm in diameter, are used. Recent studies have reported uncontrolled and significant probe-to-probe bias variation during linewidth and sidewall angle measurements. To understand the source of these variations, tip-sample interactions between high aspect ratio features and small flexible probes, and their influence on measurement bias, should be carefully studied. Using theoretical and experimental procedures, one-dimensional (1-D) and two-dimensional (2-D) models of cylindrical probe bending relevant to carbon nanotube (CNT) AFM probes were developed and tested. An earlier 1-D bending model was refined, and a new 2-D distributed force (DF) model was developed. Contributions from several factors were considered, including: probe misalignment, CNT tip apex diameter variation, probe bending before snapping, and distributed van der Waals-London force. A method for extracting Hamaker probe-surface interaction energy from experimental probe-bending data was developed. Comparison of the new 2-D model with 1-D single point force (SPF) model revealed a difference of about 28% in probe bending. A simple linear relation between biases predicted by the 1-D SPF and 2-D DF models was found. The results suggest that probe bending can be on the order of several nanometers and can partially explain the observed CD-AFM probe-to-probe variation. New 2-D and three-dimensional CD-AFM data analysis software is needed to take full advantage of the new bias correction modeling capabilities.

  19. The small-x gluon distribution in centrality biased pA and pp collisions

    NASA Astrophysics Data System (ADS)

    Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir

    2018-06-01

    The nuclear modification factor RpA(pT) provides information on the small-x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as a reweighting of configurations of small-x gluons. We find that the biased nuclear modification factor QpA(pT) for central collisions is above RpA(pT) for minimum bias events, and that it may redevelop a "Cronin peak" even at small x. The magnitude of the peak is predicted to increase approximately like 1/A⊥^ν, with ν ∼ 0.6 ± 0.1, if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A⊥. We predict an enhanced Qpp(pT) − 1 ∼ 1/(pT²)^ν and a Cronin peak even for central pp collisions.

  20. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
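
    As background for the truncated abstract above, the size-biased density of order a simply weights the underlying density by x^a, and for the two-parameter Weibull this stays in closed form, which is what makes moment and maximum likelihood estimation tractable (standard results, not taken from the paper):

    ```latex
    % Size-biased density of order a (a = 1: length-biased, a = 2: area-biased):
    f_a(x) \;=\; \frac{x^{a} f(x)}{\mu'_a}, \qquad
    \mu'_a \;=\; \int_0^{\infty} x^{a} f(x)\,dx .

    % For a Weibull density with shape k and scale \lambda,
    %   f(x) = (k/\lambda)(x/\lambda)^{k-1} e^{-(x/\lambda)^{k}},
    % the raw moments are \mu'_a = \lambda^{a}\,\Gamma(1 + a/k), so
    f_a(x) \;=\; \frac{k\, x^{\,a+k-1}}{\lambda^{\,a+k}\,\Gamma(1 + a/k)}\; e^{-(x/\lambda)^{k}} .
    ```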

  1. Biased Brownian dynamics for rate constant calculation.

    PubMed

    Zou, G; Skeel, R D; Subramaniam, S

    2000-08-01

    An enhanced sampling method, biased Brownian dynamics, is developed for the calculation of diffusion-limited biomolecular association reaction rates with high energy or entropy barriers. Biased Brownian dynamics introduces a biasing force in addition to the electrostatic force between the reactants, and it associates a probability weight with each trajectory. A simulation loses weight when movement is along the biasing force and gains weight when movement is against the biasing force. The sampling of trajectories is then biased, but the sampling is unbiased when the trajectory outcomes are multiplied by their weights. With a suitable choice of the biasing force, more reacted trajectories are sampled. As a consequence, the variance of the estimate is reduced. In our test case, biased Brownian dynamics gives a sevenfold improvement in central processing unit (CPU) time with the choice of a simple centripetal biasing force.
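
    A toy one-dimensional sketch of the reweighting idea (a constant, invented biasing force and a simple first-passage event; this is not the authors' algorithm): each trajectory accumulates the ratio of unbiased to biased step densities, so that weighted averages of trajectory outcomes remain unbiased while far more "reactive" trajectories are generated.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Estimate the probability that a freely diffusing particle starting at x = 0
    # reaches x >= L within time n_steps * dt. A constant biasing force pushes
    # trajectories toward L; per-step weights keep the estimate unbiased.
    D, dt, kT = 1.0, 1e-3, 1.0
    L, n_steps, n_traj = 6.0, 2000, 5000
    f_bias = 3.0                         # invented biasing force (illustrative)

    drift_b = (D / kT) * f_bias * dt     # biased drift per step (unbiased drift = 0)
    sigma2 = 2.0 * D * dt                # variance of the Brownian step

    x = np.zeros(n_traj)
    weights = np.ones(n_traj)
    hits = np.zeros(n_traj, dtype=bool)

    for _ in range(n_steps):
        active = ~hits
        if not active.any():
            break
        step = drift_b + rng.normal(0.0, np.sqrt(sigma2), active.sum())
        # Weight update: ratio of unbiased to biased Gaussian step densities.
        weights[active] *= np.exp((-step**2 + (step - drift_b)**2) / (2.0 * sigma2))
        x[active] += step
        hits[active] = x[active] >= L

    # For free diffusion the reflection-principle value here is roughly 0.0027.
    print("weighted estimate of the first-passage probability:",
          np.sum(weights * hits) / n_traj)
    ```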

  2. Accounting for imperfect detection and survey bias in statistical analysis of presence-only data

    USGS Publications Warehouse

    Dorazio, Robert M.

    2014-01-01

    Using mathematical proof and simulation-based comparisons, I demonstrate that biases induced by errors in detection or biased selection of survey locations can be reduced or eliminated by using the hierarchical model to analyse presence-only data in conjunction with counts observed in planned surveys. I show that a relatively small number of high-quality data (from planned surveys) can be used to leverage the information in presence-only observations, which usually have broad spatial coverage but may not be informative of both occurrence and detectability of individuals. Because a variety of sampling protocols can be used in planned surveys, this approach to the analysis of presence-only data is widely applicable. In addition, since the point-process model is formulated at the level of an individual, it can be extended to account for biological interactions between individuals and temporal changes in their spatial distributions.

  3. Adaptively biased molecular dynamics for free energy calculations

    NASA Astrophysics Data System (ADS)

    Babin, Volodymyr; Roland, Christopher; Sagui, Celeste

    2008-04-01

    We present an adaptively biased molecular dynamics (ABMD) method for the computation of the free energy surface of a reaction coordinate using nonequilibrium dynamics. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential and is inspired by the metadynamics method. The ABMD method has several useful features, including a small number of control parameters and an O(t ) numerical cost with molecular dynamics time t. The ABMD method naturally allows for extensions based on multiple walkers and replica exchange, where different replicas can have different temperatures and/or collective variables. This is beneficial not only in terms of the speed and accuracy of a calculation, but also in terms of the amount of useful information that may be obtained from a given simulation. The workings of the ABMD method are illustrated via a study of the folding of the Ace-GGPGGG-Nme peptide in a gaseous and solvated environment.

  4. LD Score Regression Distinguishes Confounding from Polygenicity in Genome-Wide Association Studies

    PubMed Central

    Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary; Ripke, Stephan; Yang, Jian; Patterson, Nick; Daly, Mark J.; Price, Alkes L.; Neale, Benjamin M.

    2015-01-01

    Both polygenicity (i.e., many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size. PMID:25642630
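
    The relationship the method exploits can be stated compactly: under a polygenic model the expected GWAS test statistic of variant j increases linearly with its LD Score, while confounding inflates the intercept (notation as in the LD Score regression literature, with N the sample size, M the number of variants, h² the SNP heritability, and a the per-sample confounding contribution):

    ```latex
    E\!\left[\chi^{2}_{j}\right] \;=\; \frac{N h^{2}}{M}\,\ell_{j} \;+\; N a \;+\; 1,
    \qquad
    \ell_{j} \;=\; \sum_{k} r^{2}_{jk} .
    ```

    An intercept above 1 therefore signals confounding, while the slope reflects true polygenic signal.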

  5. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles.

    PubMed

    Kitchen, Robert R; Sabine, Vicky S; Sims, Andrew H; Macaskill, E Jane; Renshaw, Lorna; Thomas, Jeremy S; van Hemert, Jano I; Dixon, J Michael; Bartlett, John M S

    2010-02-24

    Microarray technology is a popular means of producing whole genome transcriptional profiles; however, high cost and scarcity of mRNA have led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and the consistency of the gene-lists from the duplicate clinical samples increased from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.
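
    A minimal sketch of the simpler of the two corrections evaluated, per-probe, per-batch mean-centering of log-expression values (the matrix and batch labels below are invented; ComBat itself additionally shrinks batch location and scale parameters by empirical Bayes and is available, for example, in Bioconductor's sva package):

    ```python
    import numpy as np

    # Invented log2 expression matrix: rows = probes, columns = samples,
    # with samples assigned to processing batches (BeadChips).
    rng = np.random.default_rng(3)
    n_probes, batches = 1000, np.array([0, 0, 0, 1, 1, 1, 2, 2])
    expr = rng.normal(8.0, 1.0, size=(n_probes, batches.size))
    expr += np.array([0.0, 0.4, -0.3])[batches]   # additive batch effect

    def mean_center_batches(x, batch_labels):
        """Remove per-probe batch means, then restore the overall probe mean."""
        corrected = x.copy()
        grand_mean = x.mean(axis=1, keepdims=True)
        for b in np.unique(batch_labels):
            cols = batch_labels == b
            corrected[:, cols] -= x[:, cols].mean(axis=1, keepdims=True)
        return corrected + grand_mean

    expr_mc = mean_center_batches(expr, batches)
    # After centering, the per-batch means coincide across batches for each probe.
    print(np.round([expr_mc[:, batches == b].mean() for b in range(3)], 3))
    ```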

  6. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles

    PubMed Central

    2010-01-01

    Background Microarray technology is a popular means of producing whole genome transcriptional profiles; however, high cost and scarcity of mRNA have led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. Results A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and the consistency of the gene-lists from the duplicate clinical samples increased from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. Conclusion In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data. PMID:20181233

  7. Using GEO Optical Observations to Infer Orbit Populations

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Africano, John

    2002-01-01

    NASA's Orbital Debris measurements program has a goal to characterize the small debris environment in the geosynchronous Earth-orbit (GEO) region using optical telescopes ("small" refers to objects too small to catalog and track with current systems). Traditionally, observations of GEO and near-GEO objects involve following the object with the telescope long enough to obtain an orbit. When observing very dim objects with small field-of-view telescopes, though, the observations are generally too short to obtain accurate orbital elements. However, it is possible to use such observations to statistically characterize the small object environment. A telescope pointed at a particular spot could potentially see objects in a number of different orbits. Inevitably, when looking at one region for certain types of orbits, there are objects in other types of orbits that cannot be seen. Observation campaigns are designed with these limitations in mind and are set up to span a number of regions of the sky, making it possible to sample all potential orbits under consideration. Each orbit is not seen with the same probability, however, so there are observation biases intrinsic to any observation campaign. Fortunately, it is possible to remove such biases and reconstruct a meaningful estimate of the statistical orbit populations of small objects in GEO. This information, in turn, can be used to investigate the nature of debris sources and to characterize the risk to GEO spacecraft. This paper describes these statistical tools and presents estimates of small object GEO populations.

  8. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    PubMed

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mainzer, A.; Masiero, J.; Hand, E.

    The NEOWISE data set offers the opportunity to study the variations in albedo for asteroid classification schemes based on visible and near-infrared observations for a large sample of minor planets. We have determined the albedos for nearly 1900 asteroids classified by the Tholen, Bus, and Bus-DeMeo taxonomic classification schemes. We find that the S-complex spans a broad range of bright albedos, partially overlapping the low albedo C-complex at small sizes. As expected, the X-complex covers a wide range of albedos. The multiwavelength infrared coverage provided by NEOWISE allows determination of the reflectivity at 3.4 and 4.6 μm relative to the visible albedo. The direct computation of the reflectivity at 3.4 and 4.6 μm enables a new means of comparing the various taxonomic classes. Although C, B, D, and T asteroids all have similarly low visible albedos, the D and T types can be distinguished from the C and B types by examining their relative reflectance at 3.4 and 4.6 μm. All of the albedo distributions are strongly affected by selection biases against small, low albedo objects, as all objects selected for taxonomic classification were chosen according to their visible light brightness. Due to these strong selection biases, we are unable to determine whether or not there are correlations between size, albedo, and space weathering. We argue that the current set of classified asteroids makes any such correlations difficult to verify. A sample of taxonomically classified asteroids drawn without significant albedo bias is needed in order to perform such an analysis.

  10. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2013-01-01

    Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have recently been proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis of the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown to be effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.
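
    The tree-growth strategy is in the spirit of sampling-based planners such as RRT; a toy two-dimensional sketch (invented parameters, Euclidean space rather than molecular fragment moves and energies) shows how a bias toward the goal region steers the growing tree:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy goal-biased tree growth in a 2-D "conformational" space.
    start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
    goal_radius, step, goal_bias = 0.5, 0.3, 0.1   # invented parameters

    nodes, parents = [start], [-1]
    for it in range(5000):
        # With probability goal_bias, sample the goal; otherwise sample uniformly.
        target = goal if rng.random() < goal_bias else rng.uniform(0.0, 10.0, 2)
        # Extend the node nearest to the target by a fixed step toward it.
        arr = np.array(nodes)
        i_near = int(np.argmin(np.linalg.norm(arr - target, axis=1)))
        direction = target - nodes[i_near]
        new = nodes[i_near] + step * direction / (np.linalg.norm(direction) + 1e-12)
        nodes.append(new)
        parents.append(i_near)
        if np.linalg.norm(new - goal) < goal_radius:
            print(f"reached the goal region after {it + 1} expansions, "
                  f"tree size {len(nodes)}")
            break
    ```

    Raising the goal bias speeds progress toward the goal at the cost of coverage, which is the trade-off the abstract describes for its progress-coordinate bias schemes.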

  11. Gini estimation under infinite variance

    NASA Astrophysics Data System (ADS)

    Fontanari, Andrea; Taleb, Nassim Nicholas; Cirillo, Pasquale

    2018-07-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α ∈ (1, 2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of α. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
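
    The downward small-sample bias described here is easy to reproduce numerically. The sketch below (an illustration under assumed settings, not the authors' correction mechanism) compares the standard nonparametric Gini estimator on Pareto samples with tail index α = 1.5 against the closed-form value G = 1/(2α - 1); the sample sizes and replication count are arbitrary choices.

        import numpy as np

        def gini_nonparametric(x):
            """Standard nonparametric (sample) Gini coefficient."""
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            i = np.arange(1, n + 1)
            return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1.0) / n

        rng = np.random.default_rng(0)
        alpha = 1.5                              # tail index in (1, 2): finite mean, infinite variance
        true_gini = 1.0 / (2.0 * alpha - 1.0)    # exact Gini of a Pareto(alpha) distribution

        for n in (100, 1000, 10000):
            est = [gini_nonparametric(rng.pareto(alpha, n) + 1.0) for _ in range(500)]
            print(n, round(float(np.mean(est)), 3), "true:", round(true_gini, 3))

    The simulated mean typically falls below the true value, which is the downward bias the abstract refers to.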

  12. Rational learning and information sampling: on the "naivety" assumption in sampling explanations of judgment biases.

    PubMed

    Le Mens, Gaël; Denrell, Jerker

    2011-04-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them. Here, we show that this "naivety" assumption is not necessary. Systematically biased judgments can emerge even when decision makers process available information perfectly and are also aware of how the information sample has been generated. Specifically, we develop a rational analysis of Denrell's (2005) experience sampling model, and we prove that when information search is interested rather than disinterested, even rational information sampling and processing can give rise to systematic patterns of errors in judgments. Our results illustrate that a tendency to favor alternatives for which outcome information is more accessible can be consistent with rational behavior. The model offers a rational explanation for behaviors that had previously been attributed to cognitive and motivational biases, such as the in-group bias or the tendency to prefer popular alternatives. 2011 APA, all rights reserved

  13. An experimental verification of laser-velocimeter sampling bias and its correction

    NASA Technical Reports Server (NTRS)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
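
    One way to make the "proper interpretation of the sampling statistics" concrete: because faster fluid carries more seed particles through the probe volume per unit time, individual realizations can be weighted by the inverse of the instantaneous speed. The sketch below illustrates that generic inverse-speed weighting on simulated data; it is a commonly cited correction of this type, not necessarily the exact procedure used in the paper, and all numbers are invented.

        import numpy as np

        def velocity_weighted_mean(u):
            """Bias-corrected mean of velocity samples using 1/|u| weights."""
            u = np.asarray(u, dtype=float)
            w = 1.0 / np.clip(np.abs(u), 1e-12, None)   # inverse-speed weights
            return np.sum(w * u) / np.sum(w)

        # Toy experiment: arrival rate proportional to speed over-samples fast fluid.
        rng = np.random.default_rng(1)
        u_true = rng.normal(10.0, 2.0, 200_000)            # underlying velocity population
        p = np.abs(u_true) / np.sum(np.abs(u_true))        # sampling probability ~ speed
        u_sampled = rng.choice(u_true, size=5000, p=p)     # biased individual realizations
        print("naive mean:    ", u_sampled.mean())
        print("corrected mean:", velocity_weighted_mean(u_sampled))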

  14. Commensurate Priors for Incorporating Historical Information in Clinical Trials Using General and Generalized Linear Models

    PubMed Central

    Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.

    2014-01-01

    Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs to those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786

  15. The small-x gluon distribution in centrality biased pA and pp collisions

    DOE PAGES

    Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir

    2018-04-04

    Here, the nuclear modification factor R_pA(p_T) provides information on the small-x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as reweighting of configurations of small-x gluons. We find that the biased nuclear modification factor Q_pA(p_T) for central collisions is above R_pA(p_T) for minimum bias events, and that it may redevelop a “Cronin peak” even at small x. The magnitude of the peak is predicted to increase approximately like 1/A_⊥^ν, ν ~ 0.6 ± 0.1, if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A_⊥. We predict an enhanced Q_pp(p_T) − 1 ~ 1/(p_T^2)^ν and a Cronin peak even for central pp collisions.

  16. The small-x gluon distribution in centrality biased pA and pp collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir

    Here, the nuclear modification factor R_pA(p_T) provides information on the small-x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as reweighting of configurations of small-x gluons. We find that the biased nuclear modification factor Q_pA(p_T) for central collisions is above R_pA(p_T) for minimum bias events, and that it may redevelop a “Cronin peak” even at small x. The magnitude of the peak is predicted to increase approximately like 1/A_⊥^ν, ν ~ 0.6 ± 0.1, if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A_⊥. We predict an enhanced Q_pp(p_T) − 1 ~ 1/(p_T^2)^ν and a Cronin peak even for central pp collisions.

  17. Mutation Bias Favors Protein Folding Stability in the Evolution of Small Populations

    PubMed Central

    Porto, Markus; Bastolla, Ugo

    2010-01-01

    Mutation bias in prokaryotes varies from extreme adenine and thymine (AT) in obligatory endosymbiotic or parasitic bacteria to extreme guanine and cytosine (GC), for instance in actinobacteria. GC mutation bias deeply influences the folding stability of proteins, making proteins on the average less hydrophobic and therefore less stable with respect to unfolding but also less susceptible to misfolding and aggregation. We study a model where proteins evolve subject to selection for folding stability under given mutation bias, population size, and neutrality. We find a non-neutral regime where, for any given population size, there is an optimal mutation bias that maximizes fitness. Interestingly, this optimal GC usage is small for small populations, large for intermediate populations and around 50% for large populations. This result is robust with respect to the definition of the fitness function and to the protein structures studied. Our model suggests that small populations evolving with small GC usage eventually accumulate a significant selective advantage over populations evolving without this bias. This provides a possible explanation for the observation that most species adopting obligatory intracellular lifestyles with a consequent reduction of effective population size shifted their mutation spectrum towards AT. The model also predicts that large GC usage is optimal for intermediate population size. To test these predictions we estimated the effective population sizes of bacterial species using the optimal codon usage coefficients computed by dos Reis et al. and the synonymous to non-synonymous substitution ratio computed by Daubin and Moran. We found that the population sizes estimated in these ways are significantly smaller for species with small and large GC usage compared to species with no bias, which supports our prediction. PMID:20463869

  18. Reliability of the Pictorial Scale of Perceived Movement Skill Competence in 2 Diverse Samples of Young Children.

    PubMed

    Barnett, Lisa M; Robinson, Leah E; Webster, E Kipling; Ridgers, Nicola D

    2015-08-01

    The purpose was to determine the reliability of an instrument designed to assess young children's perceived movement skill competence in 2 diverse samples. A pictorial instrument assessed 12 perceived Fundamental Movement Skills (FMS) based on the Test of Gross Motor Development 2nd edition. Intra-Class Correlations (ICC) and internal consistency analyses were conducted. Paired sample t tests assessed change in mean perceived skill scores. Bivariate correlations between the intertrial difference and the mean of the trials explored proportional bias. Sample 1 (S1) comprised culturally diverse Australian children (n = 111; 52% boys) aged 5 to 8 years (mean = 6.4, SD = 1.0) with educated parents. Sample 2 (S2) comprised racially diverse and socioeconomically disadvantaged American children (n = 110; 57% boys) aged 5 to 10 years (mean = 6.8, SD = 1.1). For all children, the internal consistency for the 12 FMS was acceptable (S1 = 0.72, 0.75; S2 = 0.66, 0.67). ICCs were higher in S1 (0.73) than S2 (0.50). Mean changes between trials were small. There was little evidence of proportional bias. Lower values in S2 may be due to differences in study demographics and execution. While the instrument demonstrated reliability/internal consistency, further work is recommended in diverse samples.
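
    For readers who want to reproduce this style of reliability analysis, the internal-consistency part (Cronbach's alpha over the 12 perceived-skill items) reduces to a few lines; the scores below are randomly generated placeholders, not the study data, and the 1-5 scoring range is an assumption.

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_children, k_items) array of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var_sum = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1.0) * (1.0 - item_var_sum / total_var)

        rng = np.random.default_rng(2)
        scores = rng.integers(1, 6, size=(110, 12))   # 110 children, 12 perceived-skill items
        print("alpha:", round(float(cronbach_alpha(scores)), 2))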

  19. Electroacupuncture for Tinnitus: A Systematic Review

    PubMed Central

    Liu, Yang; Zhong, Juan; Jiang, Luyun; Liu, Ying; Chen, Qing; Xie, Yan; Zhang, Qinxiu

    2016-01-01

    Background Treatment effects of electroacupuncture for patients with subjective tinnitus have yet to be clarified. Objectives To assess the effect of electroacupuncture for alleviating the symptoms of subjective tinnitus. Methods Extensive literature searches were carried out in three English and four Chinese databases (PubMed, EMBASE, Cochrane Library, CNKI, Wanfang Chinese Digital Periodical and Conference Database, VIP, and ChiCTR). The date of the most recent search was 1 June 2014. Randomized controlled trials (RCTs) or quasi-RCTs were included. The titles, abstracts, and keywords of all records were reviewed by two authors independently. The data were collected and extracted by three authors. The risk of bias in the trials was assessed in accordance with the Cochrane Handbook, version 5.1.0 (http://www.handbook.cochrane.org). Eighty-nine studies were retrieved. After discarding 84 articles, five studies with 322 participants were identified. Assessment of the methodological quality of the studies identified weaknesses in all five studies. All studies were judged as having a high risk of selection and performance bias. The attrition bias was high in four studies. Incompleteness bias was low in all studies. Reporting bias was unclear in all studies. Because of the limited number of trials included and the various types of interventions and outcomes, we were unable to conduct pooled analyses. Conclusions Due to the poor methodological quality of the primary studies and the small sample sizes, no convincing evidence that electroacupuncture is beneficial for treating tinnitus could be found. There is an urgent need for more high-quality trials with large sample sizes for the investigation of electroacupuncture treatment for tinnitus. PMID:26938213

  20. Linear and non-linear bias: predictions versus measurements

    NASA Astrophysics Data System (ADS)

    Hoffmann, K.; Bel, J.; Gaztañaga, E.

    2017-02-01

    We study the linear and non-linear bias parameters which determine the mapping between the distributions of galaxies and the full matter density fields, comparing different measurements and predictions. Associating galaxies with dark matter haloes in the Marenostrum Institut de Ciències de l'Espai (MICE) Grand Challenge N-body simulation, we directly measure the bias parameters by comparing the smoothed density fluctuations of haloes and matter in the same region at different positions as a function of smoothing scale. Alternatively, we measure the bias parameters by matching the probability distributions of halo and matter density fluctuations, which can be applied to observations. These direct bias measurements are compared to corresponding measurements from two-point and different third-order correlations, as well as predictions from the peak-background model, which we presented in previous papers using the same data. We find an overall variation of the linear bias measurements and predictions of ~5 per cent with respect to results from two-point correlations for different halo samples with masses between ~10^12 and 10^15 h^-1 M⊙ at the redshifts z = 0.0 and 0.5. The second- and third-order bias parameters from the different methods show larger variations, but with consistent trends in mass and redshift. The various bias measurements reveal a tight relation between the linear and the quadratic bias parameters, which is consistent with results from the literature based on simulations with different cosmologies. Such a universal relation might improve constraints on cosmological models, derived from second-order clustering statistics at small scales or higher order clustering statistics.
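
    The "direct" measurement described here can be illustrated with a minimal least-squares fit: given matter and halo density contrasts smoothed in the same cells, a linear bias follows from the ratio of the cross moment to the matter variance, and a quadratic term can be fitted alongside it. The field below is synthetic with known input bias parameters; in the actual analysis the density contrasts come from the MICE simulation.

        import numpy as np

        def bias_parameters(delta_h, delta_m):
            """Fit delta_h = b1*delta_m + (b2/2)*(delta_m**2 - <delta_m**2>) by least squares."""
            dm2 = delta_m**2 - np.mean(delta_m**2)
            X = np.column_stack([delta_m, 0.5 * dm2])
            (b1, b2), *_ = np.linalg.lstsq(X, delta_h, rcond=None)
            b1_linear = np.mean(delta_h * delta_m) / np.mean(delta_m**2)   # pure linear estimate
            return b1, b2, b1_linear

        # Synthetic smoothed fields with input b1 = 1.5, b2 = 0.4
        rng = np.random.default_rng(3)
        delta_m = rng.normal(0.0, 0.3, 100_000)
        delta_h = (1.5 * delta_m + 0.2 * (delta_m**2 - delta_m.var())
                   + rng.normal(0.0, 0.05, delta_m.size))
        print(bias_parameters(delta_h, delta_m))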

  1. Quality of nutrient data from streams and ground water sampled during water years 1992-2001

    USGS Publications Warehouse

    Mueller, David K.; Titus, Cindy J.

    2005-01-01

    Proper interpretation of water-quality data requires consideration of the effects that bias and variability might have on measured constituent concentrations. In this report, methods are described to estimate the bias due to contamination of samples in the field or laboratory and the variability due to sample collection, processing, shipment, and analysis. Contamination can adversely affect interpretation of measured concentrations in comparison to standards or criteria. Variability can affect interpretation of small differences between individual measurements or mean concentrations. Contamination and variability are determined for nutrient data from quality-control samples (field blanks and replicates) collected as part of the National Water-Quality Assessment (NAWQA) Program during water years 1992-2001. Statistical methods are used to estimate the likelihood of contamination and variability in all samples. Results are presented for five nutrient analytes from stream samples and four nutrient analytes from ground-water samples. Ammonia contamination can add at least 0.04 milligram per liter in up to 5 percent of all samples. This could account for more than 22 percent of measured concentrations at the low range of aquatic-life criteria (0.18 milligram per liter). Orthophosphate contamination, at least 0.019 milligram per liter in up to 5 percent of all samples, could account for more than 38 percent of measured concentrations at the limit to avoid eutrophication (0.05 milligram per liter). Nitrite-plus-nitrate and Kjeldahl nitrogen contamination is less than 0.4 milligram per liter in 99 percent of all samples; thus there is no significant effect on measured concentrations of environmental significance. Sampling variability has little or no effect on reported concentrations of ammonia, nitrite-plus-nitrate, orthophosphate, or total phosphorus sampled after 1998. The potential errors due to sampling variability are greater for the Kjeldahl nitrogen analytes and for total phosphorus sampled before 1999. The uncertainty in a mean of 10 concentrations caused by sampling variability is within a small range (1 to 7 percent) for all nutrients. These results can be applied to interpretation of environmental data collected during water years 1992-2001 in 52 NAWQA study units.

  2. A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image

    PubMed Central

    Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor

    2015-01-01

    Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470
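
    As a reproducible sketch of the pooling step behind effect sizes like d+ (not the authors' exact procedure, and with invented numbers), a random-effects meta-analysis can be run with the DerSimonian-Laird estimator of the between-study variance:

        import numpy as np

        def dersimonian_laird(d, v):
            """Random-effects pooled effect for study effects d with within-study variances v."""
            d, v = np.asarray(d, float), np.asarray(v, float)
            w = 1.0 / v
            d_fixed = np.sum(w * d) / np.sum(w)
            Q = np.sum(w * (d - d_fixed) ** 2)                 # heterogeneity statistic
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (Q - (d.size - 1)) / c)            # between-study variance
            w_star = 1.0 / (v + tau2)
            d_pooled = np.sum(w_star * d) / np.sum(w_star)
            return d_pooled, np.sqrt(1.0 / np.sum(w_star)), tau2

        d = [0.45, 0.30, 0.55, 0.10, 0.40]    # hypothetical study-level standardized mean differences
        v = [0.04, 0.02, 0.06, 0.03, 0.05]    # hypothetical sampling variances
        print(dersimonian_laird(d, v))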

  3. Towards fast, rigorous and efficient conformational sampling of biomolecules: Advances in accelerated molecular dynamics.

    PubMed

    Doshi, Urmi; Hamelberg, Donald

    2015-05-01

    Accelerated molecular dynamics (aMD) has been proven to be a powerful biasing method for enhanced sampling of biomolecular conformations on general-purpose computational platforms. Biologically important long timescale events that are beyond the reach of standard molecular dynamics can be accessed without losing the detailed atomistic description of the system in aMD. Over other biasing methods, aMD offers the advantages of tuning the level of acceleration to access the desired timescale without any advance knowledge of the reaction coordinate. Recent advances in the implementation of aMD and its applications to small peptides and biological macromolecules are reviewed here along with a brief account of all the aMD variants introduced in the last decade. In comparison to the original implementation of aMD, the recent variant in which all the rotatable dihedral angles are accelerated (RaMD) exhibits faster convergence rates and significant improvement in statistical accuracy of retrieved thermodynamic properties. RaMD in conjunction with accelerating diffusive degrees of freedom, i.e. dual boosting, has been rigorously tested for the most difficult conformational sampling problem, protein folding. It has been shown that RaMD with dual boosting is capable of efficiently sampling multiple folding and unfolding events in small fast folding proteins. RaMD with the dual boost approach opens exciting possibilities for sampling multiple timescales in biomolecules. While equilibrium properties can be recovered satisfactorily from aMD-based methods, directly obtaining dynamics and kinetic rates for larger systems presents a future challenge. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
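
    For orientation, the boost that gives aMD its name is usually written as ΔV(r) = (E - V(r))^2 / (α + E - V(r)) whenever V(r) falls below a threshold E, with equilibrium observables recovered by exp(ΔV/kT) reweighting. The sketch below only illustrates this bookkeeping on toy numbers; E, α, and the sampled potential values are arbitrary assumptions, not parameters from the reviewed studies.

        import numpy as np

        def amd_boost(V, E, alpha):
            """aMD boost potential, applied only where the potential V lies below the threshold E."""
            V = np.asarray(V, dtype=float)
            return np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)

        def reweight(observable, dV, kT):
            """Boltzmann reweighting of an observable sampled on the boosted surface."""
            w = np.exp(dV / kT)
            return np.sum(w * observable) / np.sum(w)

        V = np.array([-5.0, -3.2, -1.0, 0.5, -4.1])   # toy potential energies (kcal/mol)
        dV = amd_boost(V, E=0.0, alpha=2.0)
        print(dV)
        print(reweight(np.cos(V), dV, kT=0.596))      # kT ~ 0.596 kcal/mol near 300 K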

  4. Attrition Bias in Panel Data: A Sheep in Wolf's Clothing? A Case Study Based on the Mabel Survey.

    PubMed

    Cheng, Terence C; Trivedi, Pravin K

    2015-09-01

    This paper investigates the nature and consequences of sample attrition in a unique longitudinal survey of medical doctors. We describe the patterns of non-response and examine if attrition affects the econometric analysis of medical labour market outcomes using the estimation of physician earnings equations as a case study. We compare the econometric estimates obtained from a number of different modelling strategies, which are as follows: balanced versus unbalanced samples; an attrition model for panel data based on the classic sample selection model; and a recently developed copula-based selection model. Descriptive evidence shows that doctors who work longer hours, have fewer years of experience, are overseas trained and have changed their work location are more likely to drop out. Our analysis suggests that the impact of attrition on inference about the earnings of general practitioners is small. For specialists, there appears to be some evidence for an economically significant bias. Finally, we discuss how the top-up samples in the Medicine in Australia: Balancing Employment and Life survey can be used to address the problem of panel attrition. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Can Unmanned Aerial Systems (Drones) Be Used for the Routine Transport of Chemistry, Hematology, and Coagulation Laboratory Specimens?

    PubMed

    Amukele, Timothy K; Sokoll, Lori J; Pepper, Daniel; Howard, Dana P; Street, Jeff

    2015-01-01

    Unmanned Aerial Systems (UAS or drones) could potentially be used for the routine transport of small goods such as diagnostic clinical laboratory specimens. To the best of our knowledge, there is no published study of the impact of UAS transportation on laboratory tests. Three paired samples were obtained from each of 56 adult volunteers in a single phlebotomy event (336 samples total): two tubes each for chemistry, hematology, and coagulation testing respectively. 168 samples were driven to the flight field and held stationary. The other 168 samples were flown in the UAS for a range of times, from 6 to 38 minutes. After the flight, 33 of the most common chemistry, hematology, and coagulation tests were performed. Statistical methods as well as performance criteria from four distinct clinical, academic, and regulatory bodies were used to evaluate the results. Results from flown and stationary sample pairs were similar for all 33 analytes. Bias and intercepts were <10% and <13% respectively for all analytes. Bland-Altman comparisons showed a mean difference of 3.2% for glucose and <1% for other analytes. Only bicarbonate did not meet the strictest (Royal College of Pathologists of Australasia Quality Assurance Program) performance criteria. This was due to poor precision rather than bias. There were no systematic differences between laboratory-derived (analytic) CVs and the CVs of our flown versus terrestrial sample pairs; however, CVs from the sample pairs tended to be slightly higher than analytic CVs. The overall concordance, based on clinical stratification (normal versus abnormal), was 97%. Length of flight had no impact on the results. Transportation of laboratory specimens via small UASs does not affect the accuracy of routine chemistry, hematology, and coagulation test results from selfsame samples. However, it results in slightly poorer precision for some analytes.
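
    The agreement statistics quoted here (mean percentage difference and the limits that bracket it) are standard Bland-Altman quantities and are easy to recompute; the sketch below uses made-up paired glucose results rather than the study data.

        import numpy as np

        def bland_altman_percent(flown, stationary):
            """Mean percentage difference and 95% limits of agreement for paired results."""
            flown, stationary = np.asarray(flown, float), np.asarray(stationary, float)
            pair_mean = (flown + stationary) / 2.0
            diff_pct = 100.0 * (flown - stationary) / pair_mean
            bias = diff_pct.mean()
            half_width = 1.96 * diff_pct.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        glucose_flown      = [5.2, 4.9, 6.1, 5.5, 4.7]   # hypothetical mmol/L results
        glucose_stationary = [5.0, 4.8, 6.0, 5.3, 4.6]
        print(bland_altman_percent(glucose_flown, glucose_stationary))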

  6. Human immunophenotyping via low-variance, low-bias, interpretive regression modeling of small, wide data sets: Application to aging and immune response to influenza vaccination.

    PubMed

    Holmes, Tyson H; He, Xiao-Song

    2016-10-01

    Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n, 1 < n < 50, of human participants for the purpose of estimating many parameters p, such that n < p < 1,000.

  7. Human Immunophenotyping via Low-Variance, Low-Bias, Interpretive Regression Modeling of Small, Wide Data Sets: Application to Aging and Immune Response to Influenza Vaccination

    PubMed Central

    Holmes, Tyson H.; He, Xiao-Song

    2016-01-01

    Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n, 1 < n < 50, of human participants for the purpose of estimating many parameters p, such that n < p < 1,000. We offer a set of prescriptions that are designed to facilitate low-variance (i.e. stable), low-bias, interpretive regression modeling of small, wide data sets. These prescriptions are distinctive in their especially heavy emphasis on minimizing use of out-of-sample information for conducting statistical inference. That allows the working immunologist to proceed without being encumbered by imposed and often untestable statistical assumptions. Problems of unmeasured confounders, confidence-interval coverage, feature selection, and shrinkage/denoising are defined clearly and treated in detail. We propose an extension of an existing nonparametric technique for improved small-sample confidence-interval tail coverage from the univariate case (single immune feature) to the multivariate (many, possibly correlated immune features). An important role for derived features in the immunological interpretation of regression analyses is stressed. Areas of further research are discussed. Presented principles and methods are illustrated through application to a small, wide data set of adults spanning a wide range in ages and multiple immunophenotypes that were assayed before and after immunization with inactivated influenza vaccine (IIV). Our regression modeling prescriptions identify some potentially important topics for future immunological research. 1) Immunologists may wish to distinguish age-related differences in immune features from changes in immune features caused by aging. 2) A form of the bootstrap that employs linear extrapolation may prove to be an invaluable analytic tool because it allows the working immunologist to obtain accurate estimates of the stability of immune parameter estimates with a bare minimum of imposed assumptions. 3) Liberal inclusion of immune features in phenotyping panels can facilitate accurate separation of biological signal of interest from noise. In addition, through a combination of denoising and potentially improved confidence interval coverage, we identify some candidate immune correlates (frequency of cell subset and concentration of cytokine) with B cell response as measured by quantity of IIV-specific IgA antibody-secreting cells and quantity of IIV-specific IgG antibody-secreting cells. PMID:27196789

  8. PRIMUS: Galaxy clustering as a function of luminosity and color at 0.2 < z < 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skibba, Ramin A.; Smith, M. Stephen M.; Coil, Alison L.

    2014-04-01

    We present measurements of the luminosity and color-dependence of galaxy clustering at 0.2 < z < 1.0 in the Prism Multi-object Survey. We quantify the clustering with the redshift-space and projected two-point correlation functions, ξ(r_p, π) and w_p(r_p), using volume-limited samples constructed from a parent sample of over ~130,000 galaxies with robust redshifts in seven independent fields covering 9 deg^2 of sky. We quantify how the scale-dependent clustering amplitude increases with increasing luminosity and redder color, with relatively small errors over large volumes. We find that red galaxies have stronger small-scale (0.1 Mpc h^-1 < r_p < 1 Mpc h^-1) clustering and steeper correlation functions compared to blue galaxies, as well as a strong color dependent clustering within the red sequence alone. We interpret our measured clustering trends in terms of galaxy bias and obtain values of b_gal ≈ 0.9-2.5, quantifying how galaxies are biased tracers of dark matter depending on their luminosity and color. We also interpret the color dependence with mock catalogs, and find that the clustering of blue galaxies is nearly constant with color, while redder galaxies have stronger clustering in the one-halo term due to a higher satellite galaxy fraction. In addition, we measure the evolution of the clustering strength and bias, and we do not detect statistically significant departures from passive evolution. We argue that the luminosity- and color-environment (or halo mass) relations of galaxies have not significantly evolved since z ~ 1. Finally, using jackknife subsampling methods, we find that sampling fluctuations are important and that the COSMOS field is generally an outlier, due to having more overdense structures than other fields; we find that 'cosmic variance' can be a significant source of uncertainty for high-redshift clustering measurements.

  9. PRIMUS: Galaxy Clustering as a Function of Luminosity and Color at 0.2 < z < 1

    NASA Astrophysics Data System (ADS)

    Skibba, Ramin A.; Smith, M. Stephen M.; Coil, Alison L.; Moustakas, John; Aird, James; Blanton, Michael R.; Bray, Aaron D.; Cool, Richard J.; Eisenstein, Daniel J.; Mendez, Alexander J.; Wong, Kenneth C.; Zhu, Guangtun

    2014-04-01

    We present measurements of the luminosity and color-dependence of galaxy clustering at 0.2 < z < 1.0 in the Prism Multi-object Survey. We quantify the clustering with the redshift-space and projected two-point correlation functions, ξ(r_p, π) and w_p(r_p), using volume-limited samples constructed from a parent sample of over ~130,000 galaxies with robust redshifts in seven independent fields covering 9 deg^2 of sky. We quantify how the scale-dependent clustering amplitude increases with increasing luminosity and redder color, with relatively small errors over large volumes. We find that red galaxies have stronger small-scale (0.1 Mpc h^-1 < r_p < 1 Mpc h^-1) clustering and steeper correlation functions compared to blue galaxies, as well as a strong color dependent clustering within the red sequence alone. We interpret our measured clustering trends in terms of galaxy bias and obtain values of b_gal ≈ 0.9-2.5, quantifying how galaxies are biased tracers of dark matter depending on their luminosity and color. We also interpret the color dependence with mock catalogs, and find that the clustering of blue galaxies is nearly constant with color, while redder galaxies have stronger clustering in the one-halo term due to a higher satellite galaxy fraction. In addition, we measure the evolution of the clustering strength and bias, and we do not detect statistically significant departures from passive evolution. We argue that the luminosity- and color-environment (or halo mass) relations of galaxies have not significantly evolved since z ~ 1. Finally, using jackknife subsampling methods, we find that sampling fluctuations are important and that the COSMOS field is generally an outlier, due to having more overdense structures than other fields; we find that "cosmic variance" can be a significant source of uncertainty for high-redshift clustering measurements.

  10. Sampling for Global Epidemic Models and the Topology of an International Airport Network

    PubMed Central

    Bobashev, Georgiy; Morris, Robert J.; Goedecke, D. Michael

    2008-01-01

    Mathematical models that describe the global spread of infectious diseases such as influenza, severe acute respiratory syndrome (SARS), and tuberculosis (TB) often consider a sample of international airports as a network supporting disease spread. However, there is no consensus on how many cities should be selected or on how to select those cities. Using airport flight data that commercial airlines reported to the Official Airline Guide (OAG) in 2000, we have examined the network characteristics of network samples obtained under different selection rules. In addition, we have examined different size samples based on largest flight volume and largest metropolitan populations. We have shown that although the bias in network characteristics increases with the reduction of the sample size, a relatively small number of areas that includes the largest airports, the largest cities, the most-connected cities, and the most central cities is enough to describe the dynamics of the global spread of influenza. The analysis suggests that a relatively small number of cities (around 200 or 300 out of almost 3000) can capture enough network information to adequately describe the global spread of a disease such as influenza. Weak traffic flows between small airports can contribute to noise and mask other means of spread such as the ground transportation. PMID:18776932
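
    The selection rules compared in this study can be prototyped with a graph library: build the full flight network, keep the k airports with the largest traffic, and compare summary statistics of the subnetwork against the full network. The sketch below uses networkx on a tiny invented edge list; the airport codes and volumes are placeholders, not OAG figures.

        import networkx as nx

        def top_k_subnetwork(G, k, weight="volume"):
            """Keep the k airports with the largest total flight volume."""
            strength = dict(G.degree(weight=weight))
            keep = sorted(strength, key=strength.get, reverse=True)[:k]
            return G.subgraph(keep).copy()

        edges = [("ATL", "ORD", 900), ("ATL", "LHR", 700), ("ORD", "LHR", 650),
                 ("LHR", "NRT", 500), ("NRT", "SYD", 300), ("SYD", "AKL", 120),
                 ("AKL", "NAN", 30)]
        G = nx.Graph()
        G.add_weighted_edges_from(edges, weight="volume")

        H = top_k_subnetwork(G, k=4)
        print("full network:   ", G.number_of_nodes(), "nodes, density", round(nx.density(G), 2))
        print("sampled network:", H.number_of_nodes(), "nodes, density", round(nx.density(H), 2))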

  11. Weak Lensing by Large-Scale Structure: A Dark Matter Halo Approach.

    PubMed

    Cooray; Hu; Miralda-Escudé

    2000-05-20

    Weak gravitational lensing observations probe the spectrum and evolution of density fluctuations and the cosmological parameters that govern them, but they are currently limited to small fields and subject to selection biases. We show how the expected signal from large-scale structure arises from the contributions from and correlations between individual halos. We determine the convergence power spectrum as a function of the maximum halo mass and so provide the means to interpret results from surveys that lack high-mass halos either through selection criteria or small fields. Since shot noise from rare massive halos is mainly responsible for the sample variance below 10 arcmin, our method should aid our ability to extract cosmological information from small fields.

  12. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
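
    The inverse power law idea itself is compact: fit err(n) = a * n^(-b) + c to cross-validation error rates observed at several training-set sizes, then read off the extrapolated error at a larger n. The sketch below illustrates that learning-curve fit with scipy; the error rates are invented, and this is not the MLbias package itself.

        import numpy as np
        from scipy.optimize import curve_fit

        def ipl(n, a, b, c):
            """Inverse power law learning curve: expected error as a function of sample size n."""
            return a * np.power(n, -b) + c

        n_train = np.array([20, 30, 40, 50, 60])               # training-set sizes actually tried
        cv_err = np.array([0.38, 0.33, 0.30, 0.285, 0.27])     # hypothetical CV error rates

        params, _ = curve_fit(ipl, n_train, cv_err, p0=[1.0, 0.5, 0.2], maxfev=10000)
        for n in (100, 200):
            print(n, round(float(ipl(n, *params)), 3))         # projected error if more samples were recruited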

  13. Electrostatic force spectroscopy revealing the degree of reduction of individual graphene oxide sheets.

    PubMed

    Shen, Yue; Wang, Ying; Zhou, Yuan; Hai, Chunxi; Hu, Jun; Zhang, Yi

    2018-01-01

    Electrostatic force spectroscopy (EFS) is a method for monitoring the electrostatic force microscopy (EFM) phase with high resolution as a function of the electrical direct current bias applied either to the probe or sample. Based on the dielectric constant difference of graphene oxide (GO) sheets (reduced using various methods), EFS can be used to characterize the degree of reduction of uniformly reduced one-atom-thick GO sheets at the nanoscale. In this paper, using thermally or chemically reduced individual GO sheets on mica substrates as examples, we characterize their degree of reduction at the nanoscale using EFS. For the reduced graphene oxide (rGO) sheets with a given degree of reduction (sample n), the EFS curve is very close to a parabola within a restricted area. We found that the change in parabola opening direction (or the sign of the parabola opening value) indicates the onset of reduction on GO sheets. Moreover, the parabola opening value, the peak bias value (the tip bias that leads to the peak or valley EFM phase) and the EFM phase contrast at a certain tip bias less than the peak value can all indicate the degree of reduction of rGO samples, which is positively correlated with the dielectric constant. In addition, we ranked the degree of reduction of the thermally or chemically reduced GO sheets and evaluated the effects of the reducing conditions. The identification of the degree of reduction of GO sheets using EFS is important for reduction strategy optimization and mass application of GO, which is highly desired owing to its mechanical, thermal, optical and electronic applications. Furthermore, as a general and quantitative technique for evaluating the small differences in the dielectric properties of nanomaterials, the EFS technique will extend and facilitate applications in nanoscale electronic devices in the future.
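
    The quantities read off an EFS curve in this kind of analysis (opening value, peak bias, phase at a fixed bias) follow from a simple quadratic fit of EFM phase against tip bias. The sketch below fits such a parabola with numpy on a synthetic curve; the bias sweep and phase values are placeholders, not measured rGO data.

        import numpy as np

        def efs_parabola(bias_V, phase_deg):
            """Fit phase = a*V**2 + b*V + c; return opening value a, peak bias, and peak phase."""
            a, b, c = np.polyfit(bias_V, phase_deg, 2)
            peak_bias = -b / (2.0 * a)                      # tip bias at the phase extremum
            peak_phase = np.polyval([a, b, c], peak_bias)
            return a, peak_bias, peak_phase                 # sign of a = parabola opening direction

        bias = np.linspace(-3.0, 3.0, 13)                   # applied DC bias sweep (V)
        phase = -0.8 * (bias - 0.3) ** 2 + 5.0              # synthetic EFS curve
        print(efs_parabola(bias, phase))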

  14. Nestling sex ratio in the Southwestern Willow Flycatcher

    USGS Publications Warehouse

    Paxton, E.H.; Sogge, M.K.; McCarthey, T.D.; Keim, P.

    2002-01-01

    Using molecular-genetic techniques, we determined the gender of 202 Southwestern Willow Flycatcher (Empidonax traillii extimus) nestlings from 95 nests sampled over a five-year period. Overall nestling sex ratio did not vary significantly from 50:50 among years, by clutch order, or by mating strategy (monogamous vs. polygamous pairings). However, we did observe significant differences among the four sites sampled, with sex ratios biased either toward males or females at the different sites. Given the small population sizes and geographic isolation of many of the endangered subspecies' breeding populations, sex-ratio differences may have localized negative impacts. © The Cooper Ornithological Society 2002.

  15. Nestling sex ratios in the southwestern willow flycatcher

    USGS Publications Warehouse

    Paxton, E.H.; Sogge, M.K.; McCarthey, Tracy; Keim, Paul

    2002-01-01

    Using molecular-genetic techniques, we determined the gender of 202 Southwestern Willow Flycatcher (Empidonax traillii extimus) nestlings from 95 nests sampled over a five-year period. Overall nestling sex ratio did not vary significantly from 50:50 among years, by clutch order, or by mating strategy (monogamous vs. polygamous pairings). However, we did observe significant differences among the four sites sampled, with sex ratios biased either toward males or females at the different sites. Given the small population sizes and geographic isolation of many of the endangered subspecies' breeding populations, sex-ratio differences may have localized negative impacts.

  16. Social Desirability Bias in Self-Reporting of Hearing Protector Use among Farm Operators

    PubMed Central

    McCullagh, Marjorie C.; Rosemberg, Marie-Anne

    2015-01-01

    Objective: The purposes of this study were (i) to examine the relationship between reported hearing protector use and social desirability bias, and (ii) to compare results of the Marlowe-Crowne social desirability instrument when administered using two different methods (i.e. online and by telephone). Methods: A shortened version of the Marlowe-Crowne social desirability instrument, as well as a self-administered instrument measuring use of hearing protectors, was administered to 497 participants in a study of hearing protector use. The relationship between hearing protector use and social desirability bias was examined using regression analysis. The results of two methods of administration of the Marlowe-Crowne social desirability instrument were compared using t-tests and regression analysis. Results: Reliability (using Cronbach’s alpha) for the shortened seven-item scale for this sample was 0.58. There was no evidence of a relationship between reported hearing protector use and social desirability reporting bias, as measured by the shortened Marlowe-Crowne. The difference in results by method of administration (i.e. online, telephone) was very small. Conclusions: This is the first published study to measure social desirability bias in reporting of hearing protector use among farmers. Findings of this study do not support the presence of social desirability bias in farmers’ reporting of hearing protector use, lending support for the validity of self-report in hearing protector use in this population. PMID:26209595

  17. Evaluation of AUC(0-4) predictive methods for cyclosporine in kidney transplant patients.

    PubMed

    Aoyama, Takahiko; Matsumoto, Yoshiaki; Shimizu, Makiko; Fukuoka, Masamichi; Kimura, Toshimi; Kokubun, Hideya; Yoshida, Kazunari; Yago, Kazuo

    2005-05-01

    Cyclosporine (CyA) is the most commonly used immunosuppressive agent in patients who undergo kidney transplantation. Dosage adjustment of CyA is usually based on trough levels. Recently, the area under the concentration-time curve during the first 4 h after CyA administration (AUC(0-4)) has been replacing trough levels. The aim of this study was to compare the predictive values obtained using three different methods of AUC(0-4) monitoring. AUC(0-4) was calculated from 0 to 4 h in early and stable renal transplant patients using the trapezoidal rule. The predicted AUC(0-4) was calculated using three different methods: the multiple regression equation reported by Uchida et al.; Bayesian estimation for modified population pharmacokinetic parameters reported by Yoshida et al.; and modified population pharmacokinetic parameters reported by Cremers et al. The predicted AUC(0-4) was assessed on the basis of predictive bias, precision, and correlation coefficient. The predicted AUC(0-4) values obtained using the three methods through measurement of three blood samples showed small differences in predictive bias, precision, and correlation coefficient. When AUC(0-4) was predicted from a single blood sample in stable renal transplant patients, the performance of the regression equation reported by Uchida depended on sampling time. On the other hand, the performance of Bayesian estimation with modified pharmacokinetic parameters reported by Yoshida through measurement of one blood sample, which is not dependent on sampling time, showed a small difference in the correlation coefficient. The prediction of AUC(0-4) using a regression equation required accurate sampling time. In this study, the prediction of AUC(0-4) using Bayesian estimation did not require accurate sampling time in the AUC(0-4) monitoring of CyA. Thus Bayesian estimation is assumed to be clinically useful in the dosage adjustment of CyA.
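
    For orientation, the reference AUC(0-4) in such comparisons is a trapezoidal integral of the measured concentration-time points, and the limited-sampling approaches then predict it from one or a few timed concentrations. The sketch below shows the trapezoidal calculation plus a generic one-sample regression of the form AUC ≈ a*C2 + b; the concentrations and the coefficients a and b are placeholders, not the published Uchida, Yoshida, or Cremers models.

        import numpy as np

        def auc_0_4(times_h, conc):
            """Trapezoidal AUC over 0-4 h from measured CyA concentrations."""
            return np.trapz(conc, times_h)

        times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # sampling times (h)
        conc = np.array([150.0, 900.0, 1100.0, 800.0, 500.0])  # whole-blood CyA (ng/mL), illustrative
        print("observed AUC(0-4):", auc_0_4(times, conc), "ng*h/mL")

        # Generic limited-sampling estimate from the 2-h concentration (C2); a and b are made up.
        a, b = 3.2, 250.0
        print("predicted AUC(0-4):", a * conc[2] + b, "ng*h/mL")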

  18. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    NASA Astrophysics Data System (ADS)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes give water-content evolutions that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.

  19. Attention bias in older women with remitted depression is associated with enhanced amygdala activity and functional connectivity.

    PubMed

    Albert, Kimberly; Gau, Violet; Taylor, Warren D; Newhouse, Paul A

    2017-03-01

    Cognitive bias is a common characteristic of major depressive disorder (MDD) and is posited to remain during remission and contribute to recurrence risk. Attention bias may be related to enhanced amygdala activity or altered amygdala functional connectivity in depression. The current study examined attention bias, brain activity for emotional images, and functional connectivity in post-menopausal women with and without a history of major depression. Attention bias for emotionally valenced images was examined in 33 postmenopausal women with (n=12) and without (n=21) a history of major depression using an emotion dot probe task during fMRI. Group differences in amygdala activity and functional connectivity were assessed using fMRI and examined for correlations to attention performance. Women with a history of MDD showed greater attentional bias for negative images and greater activity in brain areas including the amygdala for both positive and negative images (p_corr < 0.001) than women without a history of MDD. In all participants, amygdala activity for negative images was correlated with attention facilitation for emotional images. Women with a history of MDD had significantly greater functional connectivity between the amygdala and hippocampal complex. In all participants, amygdala-hippocampal connectivity was positively correlated with attention facilitation for negative images. A limitation is the small sample with unbalanced groups. These findings provide evidence for negative attentional bias in euthymic, remitted depressed individuals. Activity and functional connectivity in limbic and attention networks may provide a neurobiological basis for continued cognitive bias in remitted depression. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated for by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes when there are about three measurement occasions, and for large sample sizes with up to about nine measurement occasions.
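
    For readers who want to check the sphericity corrections themselves, the Greenhouse-Geisser epsilon can be estimated from the sample covariance matrix of the m measurement occasions. The sketch below mirrors the simulation logic (data with no true effects and a deliberately non-spherical covariance) but is not the authors' code; the covariance structure and sample size are assumptions.

        import numpy as np

        def greenhouse_geisser_epsilon(data):
            """data: (n_subjects, m_occasions). Returns the GG sphericity correction factor."""
            m = data.shape[1]
            S = np.cov(data, rowvar=False)
            # Orthonormal contrasts orthogonal to the constant vector, built via QR decomposition.
            A = np.hstack([np.ones((m, 1)), np.eye(m)[:, : m - 1]])
            Q, _ = np.linalg.qr(A)
            C = Q[:, 1:].T
            M = C @ S @ C.T
            return np.trace(M) ** 2 / ((m - 1) * np.trace(M @ M))

        rng = np.random.default_rng(4)
        n, m = 20, 6
        # Violation: three highly inter-correlated occasions mixed with three uncorrelated ones.
        v = np.array([1, 1, 1, 0, 0, 0], dtype=float)
        cov = 0.5 * np.eye(m) + 0.5 * np.outer(v, v)
        data = rng.multivariate_normal(np.zeros(m), cov, size=n)
        print("epsilon:", round(float(greenhouse_geisser_epsilon(data)), 2))   # 1.0 would mean sphericity holds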

  1. Neither fixed nor random: weighted least squares meta-regression.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2017-03-01

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
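
    The estimator being advocated is, mechanically, ordinary weighted least squares with inverse-variance weights and a freely estimated multiplicative dispersion (no additive between-study variance). A minimal sketch with invented effect sizes and standard errors follows; regressing the effects on their standard errors is one typical publication-bias specification, assumed here purely for illustration.

        import numpy as np

        def wls_mra(effect, se, moderator):
            """Unrestricted WLS meta-regression: weights 1/se**2, free multiplicative dispersion."""
            effect, se = np.asarray(effect, float), np.asarray(se, float)
            X = np.column_stack([np.ones_like(effect), np.asarray(moderator, float)])
            w = 1.0 / se ** 2
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * effect, rcond=None)
            resid = sw * (effect - X @ beta)
            phi = np.sum(resid ** 2) / (effect.size - X.shape[1])   # multiplicative dispersion
            cov = phi * np.linalg.inv(X.T @ (w[:, None] * X))
            return beta, np.sqrt(np.diag(cov))

        effect = [0.20, 0.35, 0.15, 0.50, 0.10, 0.42]   # hypothetical study estimates
        se     = [0.10, 0.15, 0.08, 0.20, 0.05, 0.18]   # hypothetical standard errors
        print(wls_mra(effect, se, se))                  # intercept ~ small-sample-bias-adjusted effect under this specification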

  2. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    PubMed

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.

  3. Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator

    ERIC Educational Resources Information Center

    Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard

    2011-01-01

    The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…
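
    For context, with a single binary instrument the 2SLS estimate is the intention-to-treat effect scaled by the first-stage compliance rate, and the bias the paper studies arises when compliance co-varies with the size of the treatment effect. The two stages are written out generically below on simulated data with a constant effect; nothing in the example is taken from the paper.

        import numpy as np

        def two_stage_least_squares(y, d, z):
            """2SLS with a single binary instrument z for treatment receipt d."""
            Z = np.column_stack([np.ones_like(z), z])
            g, *_ = np.linalg.lstsq(Z, d, rcond=None)     # stage 1: predict receipt from assignment
            d_hat = Z @ g
            X = np.column_stack([np.ones_like(d_hat), d_hat])
            b, *_ = np.linalg.lstsq(X, y, rcond=None)     # stage 2: outcome on predicted receipt
            return b[1]

        rng = np.random.default_rng(5)
        n = 5000
        z = rng.integers(0, 2, n).astype(float)           # random assignment
        compliers = rng.random(n) < 0.7                   # 70% compliance, independent of the effect
        d = ((z == 1) & compliers).astype(float)          # treatment actually received
        y = 1.0 + 2.0 * d + rng.normal(0.0, 1.0, n)       # constant treatment effect of 2
        print("2SLS estimate:", round(float(two_stage_least_squares(y, d, z)), 2))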

  4. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy

    PubMed Central

    Monaghan, Kieran A.

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body-size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices applying the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes are a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh-size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an error variance around an average deviation. Following standardized protocols and assigning precise reference conditions, the error variance of their comparative ratio (test-site:reference) can be measured and used to estimate the accuracy of the resultant assessment. PMID:27392036
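
    To make the weighted-average family concrete: a BMWP-type score is the sum of the tolerance scores of the indicator families present in a sample, and ASPT (average score per taxon) divides that sum by the number of scoring families, which removes part of the richness effect discussed above. The family scores below are a small illustrative subset, not the full official table, and the sample list is invented.

        # Illustrative subset of family tolerance scores (check against the official BMWP table).
        BMWP_SCORES = {
            "Heptageniidae": 10, "Gammaridae": 6, "Baetidae": 4,
            "Asellidae": 3, "Chironomidae": 2, "Oligochaeta": 1,
        }

        def bmwp_and_aspt(families_present):
            """BMWP = sum of scores of families present; ASPT = BMWP / number of scoring families."""
            scoring = {f for f in families_present if f in BMWP_SCORES}
            bmwp = sum(BMWP_SCORES[f] for f in scoring)
            aspt = bmwp / len(scoring) if scoring else 0.0
            return bmwp, aspt

        sample = ["Baetidae", "Chironomidae", "Gammaridae", "Oligochaeta"]
        print(bmwp_and_aspt(sample))   # presence-only, so the treatment of abundance data does not enter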

  5. The VIMOS Public Extragalactic Redshift Survey (VIPERS). Measuring non-linear galaxy bias at z ~ 0.8

    NASA Astrophysics Data System (ADS)

    Di Porto, C.; Branchini, E.; Bel, J.; Marulli, F.; Bolzonella, M.; Cucciati, O.; de la Torre, S.; Granett, B. R.; Guzzo, L.; Marinoni, C.; Moscardini, L.; Abbas, U.; Adami, C.; Arnouts, S.; Bottini, D.; Cappi, A.; Coupon, J.; Davidzon, I.; De Lucia, G.; Fritz, A.; Franzetti, P.; Fumana, M.; Garilli, B.; Ilbert, O.; Iovino, A.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; McCracken, H. J.; Paioro, L.; Polletta, M.; Pollo, A.; Scodeggio, M.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Burden, A.; Marchetti, A.; Martizzi, D.; Mellier, Y.; Nichol, R. C.; Peacock, J. A.; Percival, W. J.; Viel, M.; Wolk, M.; Zamorani, G.

    2016-10-01

    Aims: We use the first release of the VImos Public Extragalactic Redshift Survey of galaxies (VIPERS) of ~50 000 objects to measure the biasing relation between galaxies and mass in the redshift range z = [0.5, 1.1]. Methods: We estimate the one-point distribution function (PDF) of VIPERS galaxies from counts in cells and, assuming a model for the mass PDF, we infer their mean bias relation. The reconstruction of the bias relation is performed through a novel method that accounts for Poisson noise, redshift distortions, inhomogeneous sky coverage, and other selection effects. With this procedure we constrain galaxy bias and its deviations from linearity down to scales as small as 4 h-1 Mpc and out to z = 1.1. Results: We detect small (up to 2%) but statistically significant (up to 3σ) deviations from linear bias. The mean biasing function is close to linear in regions above the mean density. The mean slope of the biasing relation is a proxy for the linear bias parameter. This slope increases with luminosity, which is in agreement with results of previous analyses. We detect a strong bias evolution only for z > 0.9, which is in agreement with some, but not all, previous studies. We also detect a significant increase of the bias with scale, from 4 to 8 h-1 Mpc, now seen for the first time out to z = 1. The amplitude of non-linearity depends on redshift, luminosity, and scale, but no clear trend is detected. Owing to the large cosmic volume probed by VIPERS, we find that the mismatch between the previous estimates of bias at z ~ 1 from zCOSMOS and VVDS-Deep galaxy samples is fully accounted for by cosmic variance. Conclusions: The results of our work confirm the importance of going beyond the over-simplistic linear bias hypothesis, showing that non-linearities can be accurately measured through the application of appropriate statistical tools to existing datasets like VIPERS. Based on observations collected at the European Southern Observatory, Paranal, Chile, under programmes 182.A-0886 (LP) at the Very Large Telescope, and also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://vipers.inaf.it/

  6. Influence of nano-oxide layer on the giant magnetoresistance and exchange bias of NiMn/Co/Cu/Co spin valve sensors

    NASA Astrophysics Data System (ADS)

    Gupta, Anoop; Mohanan, Senthilnathan; Kinyanjui, Michael; Chuvilin, Andrey; Kaiser, Ute; Herr, Ulrich

    2010-05-01

    NiMn is an interesting material for achieving a high exchange bias in spin valve systems. We investigated the influence of a nano-oxide layer (NOL) inserted in the pinned Co layer on the magnetotransport properties of NiMn/Co/Cu/Co spin valve sensors. The samples were annealed at 350 °C for 10 min to achieve the antiferromagnetic L10 ordered structure of NiMn. The NOL has been characterized by small-angle x-ray reflectivity, transmission electron microscopy (TEM), and energy-filtered TEM. The inclusion of the NOL leads to an increase in the giant magnetoresistance (GMR) by 20%, indicating a high degree of specular reflection at the NOL. For NOL positions close to the NiMn/Co interface, a decrease in the exchange bias field (Hex) is observed. The best combination of high GMR value and large Hex was found when the NOL was inserted in the center of the pinned Co layer.

  7. Flat-Sky Pseudo-Cls Analysis for Weak Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Asgari, Marika; Taylor, Andy; Joachimi, Benjamin; Kitching, Thomas D.

    2018-05-01

    We investigate the use of estimators of weak lensing power spectra based on a flat-sky implementation of the 'Pseudo-Cl' (PCl) technique, where the masked shear field is transformed without regard for masked regions of sky. This masking mixes power and couples 'E'-mode convergence and 'B'-modes. To study the accuracy of forward-modelling and full-sky power spectrum recovery we consider both large-area survey geometries and small-scale masking due to stars and a checkerboard model for field-of-view gaps. The power spectrum for the large-area survey geometry is sparsely sampled and highly oscillatory, which makes modelling problematic. Instead, we derive an overall calibration for large-area mask bias using simulated fields. The effects of small-area star masks can be accurately corrected for, while the checkerboard mask has oscillatory and spiky behaviour which leads to percent-level biases. Apodisation of the masked fields leads to increased biases and a loss of information. We find that we can construct an unbiased forward-model of the raw PCls, and recover the full-sky convergence power to within a few percent accuracy for both Gaussian and lognormal-distributed shear fields. Propagating this through to cosmological parameters using a Fisher-matrix formalism, we find we can make unbiased estimates of parameters for surveys up to 1,200 deg2 with 30 galaxies per arcmin2, beyond which the percent-level biases become larger than the statistical accuracy. This implies a flat-sky PCl analysis is accurate for current surveys, but a Euclid-like survey will require higher accuracy.

  8. Causal role for inverse reasoning on obsessive-compulsive symptoms: Preliminary evidence from a cognitive bias modification for interpretation bias study.

    PubMed

    Wong, Shiu F; Grisham, Jessica R

    2017-12-01

    The inference-based approach (IBA) is a cognitive account of the genesis and maintenance of obsessive-compulsive disorder (OCD). According to the IBA, individuals with OCD are prone to using inverse reasoning, in which hypothetical causes form the basis of conclusions about reality. Several studies have provided preliminary support for an association between features of the IBA and OCD symptoms. However, there are currently no studies that have investigated the proposed causal relationship of inverse reasoning in OCD. In a non-clinical sample (N = 187), we used an interpretive cognitive bias procedure to train a bias towards using inverse reasoning (n = 64), healthy sensory-based reasoning (n = 65), or a control condition (n = 58). Participants were randomly allocated to these training conditions. This manipulation allowed us to assess whether, consistent with the IBA, inverse reasoning training increased compulsive-like behaviours and self-reported OCD symptoms. Results indicated that compared to a control condition, participants trained in inverse reasoning reported more OCD symptoms and were more avoidant of potentially contaminated objects. Moreover, change in inverse reasoning bias was a small but significant mediator of the relationship between training condition and behavioural avoidance. Conversely, training in a healthy (non-inverse) reasoning style did not have any effect on symptoms or behaviour relative to the control condition. As this study was conducted in a non-clinical sample, we were unable to generalise our findings to a clinical population. Findings generally support the IBA model by providing preliminary evidence of a causal role for inverse reasoning in OCD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Selection bias in dynamically measured supermassive black hole samples: its consequences and the quest for the most fundamental relation

    NASA Astrophysics Data System (ADS)

    Shankar, Francesco; Bernardi, Mariangela; Sheth, Ravi K.; Ferrarese, Laura; Graham, Alister W.; Savorgnan, Giulia; Allevato, Viola; Marconi, Alessandro; Läsker, Ronald; Lapi, Andrea

    2016-08-01

    We compare the set of local galaxies having dynamically measured black holes with a large, unbiased sample of galaxies extracted from the Sloan Digital Sky Survey. We confirm earlier work showing that the majority of black hole hosts have significantly higher velocity dispersions σ than local galaxies of similar stellar mass. We use Monte Carlo simulations to illustrate the effect on black hole scaling relations if this bias arises from the requirement that the black hole sphere of influence must be resolved to measure black hole masses with spatially resolved kinematics. We find that this selection effect artificially increases the normalization of the Mbh-σ relation by a factor of at least ˜3; the bias for the Mbh-Mstar relation is even larger. Our Monte Carlo simulations and analysis of the residuals from scaling relations both indicate that σ is more fundamental than Mstar or effective radius. In particular, the Mbh-Mstar relation is mostly a consequence of the Mbh-σ and σ-Mstar relations, and is heavily biased by up to a factor of 50 at small masses. This helps resolve the discrepancy between dynamically based black hole-galaxy scaling relations versus those of active galaxies. Our simulations also disfavour broad distributions of black hole masses at fixed σ. Correcting for this bias suggests that the calibration factor used to estimate black hole masses in active galaxies should be reduced to values of fvir ˜ 1. Black hole mass densities should also be proportionally smaller, perhaps implying significantly higher radiative efficiencies/black hole spins. Reducing black hole masses also reduces the gravitational wave signal expected from black hole mergers.
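    The resolution-driven selection effect described above can be reproduced qualitatively with a short Monte Carlo sketch; the scaling-relation coefficients, scatter, distance range, and angular-resolution cut below are illustrative assumptions rather than the values used by the authors.

```python
import numpy as np

G = 4.301e-6                                   # kpc (km/s)^2 / Msun
rng = np.random.default_rng(2)
n = 200_000

sigma = 10 ** rng.uniform(1.9, 2.5, n)         # velocity dispersion, km/s
log_mbh = 8.3 + 4.4 * np.log10(sigma / 200.0) + rng.normal(0.0, 0.5, n)
mbh = 10 ** log_mbh                            # intrinsic Mbh-sigma relation plus scatter
dist_kpc = rng.uniform(5.0, 100.0, n) * 1e3    # distances of 5-100 Mpc

r_infl = G * mbh / sigma**2                    # black hole sphere of influence, kpc
theta = np.degrees(r_infl / dist_kpc) * 3600.0 # angular size, arcsec
resolved = theta > 0.1                         # crude "sphere of influence resolved" cut

in_bin = (sigma > 150.0) & (sigma < 170.0)     # fixed-sigma bin
bias = np.median(mbh[in_bin & resolved]) / np.median(mbh[in_bin])
print("normalization bias at fixed sigma:", round(bias, 2))
```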

  10. Selection bias in dynamically-measured super-massive black hole samples: its consequences and the quest for the most fundamental relation

    NASA Astrophysics Data System (ADS)

    Shankar, Francesco; Bernardi, M.; Sheth, R. K.; Weinberg, D. H.; Miralda-Escudé, J.; Ferrarese, L.; Graham, A.; Sesana, A.; Lapi, A.; Marconi, A.; Allevato, V.; Savorgnan, G.; Laesker, R.

    2016-08-01

    We compare the set of local galaxies having dynamically measured black holes with a large, unbiased sample of galaxies extracted from the Sloan Digital Sky Survey. We confirm earlier work showing that the majority of black hole hosts have significantly higher velocity dispersions sigma than local galaxies of similar stellar mass. We use Monte-Carlo simulations to illustrate the effect on black hole scaling relations if this bias arises from the requirement that the black hole sphere of influence must be resolved to measure black hole masses with spatially resolved kinematics. We find that this selection effect artificially increases the normalization of the Mbh-sigma relation by a factor of at least ~3; the bias for the Mbh-Mstar relation is even larger. Our Monte Carlo simulations and analysis of the residuals from scaling relations both indicate that sigma is more fundamental than Mstar or effective radius. In particular, the Mbh-Mstar relation is mostly a consequence of the Mbh-sigma and sigma-Mstar relations, and is heavily biased by up to a factor of 50 at small masses. This helps resolve the discrepancy between dynamically-based black hole-galaxy scaling relations versus those of active galaxies. Our simulations also disfavour broad distributions of black hole masses at fixed sigma. Correcting for this bias suggests that the calibration factor used to estimate black hole masses in active galaxies should be reduced to values of fvir~1. Black hole mass densities should also be proportionally smaller, perhaps implying significantly higher radiative efficiencies/black hole spins. Reducing black hole masses also reduces the gravitational wave signal expected from black hole mergers.

  11. Reducing the number of reconstructions needed for estimating channelized observer performance

    NASA Astrophysics Data System (ADS)

    Pineda, Angel R.; Miedema, Hope; Brenner, Melissa; Altaf, Sana

    2018-03-01

    A challenge for task-based optimization is the time required for each reconstructed image in applications where reconstructions are time consuming. Our goal is to reduce the number of reconstructions needed to estimate the area under the receiver operating characteristic curve (AUC) of the infinitely-trained optimal channelized linear observer. We explore the use of classifiers which either do not invert the channel covariance matrix or do feature selection. We also study the assumption that multiple low-contrast signals in the same image of a non-linear reconstruction do not significantly change the estimate of the AUC. We compared the AUC of several classifiers (Hotelling, logistic regression, logistic regression with Firth bias reduction, and the least absolute shrinkage and selection operator (LASSO)) with a small number of observations, both for normal simulated data and images from a total variation reconstruction in magnetic resonance imaging (MRI). We used 10 Laguerre-Gauss channels and the Mann-Whitney estimator for AUC. For this data, our results show that at small sample sizes, feature selection using the LASSO technique can decrease the bias of the AUC estimate at the cost of increased variance, and that for large sample sizes the difference between these classifiers is small. We also compared the use of multiple signals in a single reconstructed image to reduce the number of reconstructions in a total variation reconstruction for accelerated imaging in MRI. We found that AUC estimation using multiple low-contrast signals in the same image resulted in AUC estimates similar to those from a single reconstruction per signal, leading to a 13x reduction in the number of reconstructions needed.
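    The Mann-Whitney estimator of the AUC used above takes only a few lines; the sketch below applies it to simulated Gaussian observer scores, which are purely illustrative.

```python
import numpy as np

def mann_whitney_auc(signal_scores, noise_scores):
    """Nonparametric (Mann-Whitney) estimate of the area under the ROC curve."""
    s = np.asarray(signal_scores)[:, None]
    n = np.asarray(noise_scores)[None, :]
    return np.mean((s > n) + 0.5 * (s == n))

rng = np.random.default_rng(0)
print(mann_whitney_auc(rng.normal(1.0, 1.0, 40),   # signal-present trial scores
                       rng.normal(0.0, 1.0, 40)))  # signal-absent trial scores
```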

  12. The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?

    PubMed

    Ahern, James C M; Lee, Sang-Hee; Hawks, John D

    2002-09-01

    The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. Phys. Anthrop. 93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population. Unless we assume that the Vindija sample is female/young biased and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.
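    The sampling tests described above amount to asking how often a random sample of Vindija's size drawn from a Krapina-like population would have a mean at least as small as Vindija's. A minimal Monte Carlo version is sketched below; the measurement values and sample sizes are hypothetical.

```python
import numpy as np

def prob_mean_at_most(population_like, n_draw, observed_mean, n_sim=100_000, seed=0):
    """Monte Carlo probability that a random sample of size n_draw drawn from a
    population resembling `population_like` has a mean <= observed_mean."""
    rng = np.random.default_rng(seed)
    sims = rng.choice(population_like, size=(n_sim, n_draw), replace=True).mean(axis=1)
    return (sims <= observed_mean).mean()

# hypothetical supraorbital torus thickness values (mm), for illustration only
krapina_like = np.array([11.2, 12.5, 13.0, 10.8, 12.1, 13.4, 11.9, 12.8])
vindija_mean, vindija_n = 10.9, 5
print(prob_mean_at_most(krapina_like, vindija_n, vindija_mean))
```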

  13. FIP BIAS EVOLUTION IN A DECAYING ACTIVE REGION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, D.; Yardley, S. L.; Driel-Gesztelyi, L. van

    Solar coronal plasma composition is typically characterized by first ionization potential (FIP) bias. Using spectra obtained by Hinode's EUV Imaging Spectrometer instrument, we present a series of large-scale, spatially resolved composition maps of active region (AR) 11389. The composition maps show how FIP bias evolves within the decaying AR during the period 2012 January 4–6. Globally, FIP bias decreases throughout the AR. We analyzed areas of significant plasma composition changes within the decaying AR and found that small-scale evolution in the photospheric magnetic field is closely linked to the FIP bias evolution observed in the corona. During the AR's decay phase, small bipoles emerging within supergranular cells reconnect with the pre-existing AR field, creating a pathway along which photospheric and coronal plasmas can mix. The mixing timescales are shorter than those of plasma enrichment processes. Eruptive activity also results in shifting the FIP bias closer to photospheric in the affected areas. Finally, the FIP bias still remains dominantly coronal only in a part of the AR's high-flux density core. We conclude that in the decay phase of an AR's lifetime, the FIP bias is becoming increasingly modulated by episodes of small-scale flux emergence, i.e., decreasing the AR's overall FIP bias. Our results show that magnetic field evolution plays an important role in compositional changes during AR development, revealing a more complex relationship than expected from previous well-known Skylab results showing that FIP bias increases almost linearly with age in young ARs.

  14. Automated Ecological Assessment of Physical Activity: Advancing Direct Observation.

    PubMed

    Carlson, Jordan A; Liu, Bo; Sallis, James F; Kerr, Jacqueline; Hipp, J Aaron; Staggs, Vincent S; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M

    2017-12-01

    Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82-0.98). Total MET-minutes were slightly underestimated by 9.3-17.1% and the ICCs were good (0.68-0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings.

  15. Automated Ecological Assessment of Physical Activity: Advancing Direct Observation

    PubMed Central

    Carlson, Jordan A.; Liu, Bo; Sallis, James F.; Kerr, Jacqueline; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M.

    2017-01-01

    Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82–0.98). Total MET-minutes were slightly underestimated by 9.3–17.1% and the ICCs were good (0.68–0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings. PMID:29194358

  16. A theory of single-electron non-adiabatic tunneling through a small metal nanoparticle with due account of the strong interaction of valence electrons with phonons of the condensed matter environment.

    PubMed

    Medvedev, Igor G

    2011-11-07

    A theory of electrochemical behavior of small metal nanoparticles (NPs), which is governed both by the charging effect and by the effect of solvent reorganization on the dynamics of the electron transfer (ET), is considered under ambient conditions. The exact expression for the rate constant of ET from an electrode to the NP, which is valid for all values of the reorganization free energy E(r), bias voltage, and overpotential, is obtained in the non-adiabatic limit. The tunnel current/overpotential relations are studied and calculated for different values of the bias voltage and E(r). The effect of E(r) on the full width at half maximum of the charging peaks is investigated at different values of the bias voltage. The differential conductance/bias voltage and the tunnel current/bias voltage dependencies are also studied and calculated. It is shown that, at room temperature, the pronounced Coulomb blockade oscillations in the differential conductance/bias voltage curves and the noticeable Coulomb staircase in the tunnel current/bias voltage relations are observed only at rather small values of E(r) in the case of strongly asymmetric tunneling contacts.

  17. Real-time image annotation by manifold-based biased Fisher discriminant analysis

    NASA Astrophysics Data System (ADS)

    Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming

    2008-01-01

    Automatic Linguistic Annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online use due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, Co-Training based Manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from all existing annotation methods, MBFDA views image annotation from a novel Eigen semantic feature (which corresponds to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.

  18. Library Construction from Subnanogram DNA for Pelagic Sea Water and Deep-Sea Sediments

    PubMed Central

    Hirai, Miho; Nishi, Shinro; Tsuda, Miwako; Sunamura, Michinari; Takaki, Yoshihiro; Nunoura, Takuro

    2017-01-01

    Shotgun metagenomics is a low-bias technology for assessing environmental microbial diversity and function. However, the requirement for a sufficient amount of DNA and the contamination of environmental DNA with inhibitors lead to difficulties in constructing a shotgun metagenomic library. We herein examined metagenomic library construction from subnanogram amounts of input environmental DNA from subarctic surface water and deep-sea sediments using two library construction kits: the KAPA Hyper Prep Kit and Nextera XT DNA Library Preparation Kit, with several modifications. The influence of chemical contaminants associated with these environmental DNA samples on library construction was also investigated. Overall, shotgun metagenomic libraries were constructed from 1 pg to 1 ng of input DNA using both kits without severe microbial contamination of the libraries. However, the libraries constructed from 1 pg of input DNA exhibited larger biases in GC content, k-mers, or small subunit (SSU) rRNA gene compositions than those constructed from 10 pg to 1 ng DNA. The lower limit of input DNA for low-bias library construction in this study was 10 pg. Moreover, we revealed that technology-dependent biases (physical fragmentation and linker ligation vs. tagmentation) were larger than those due to the amount of input DNA. PMID:29187708

  19. Exchange bias effect in Au-Fe3O4 dumbbell nanoparticles induced by the charge transfer from gold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feygenson, Mikhail; Bauer, John C; Gai, Zheng

    2015-08-10

    We have studied the origin of the exchange bias effect in the Au-Fe3O4 dumbbell nanoparticles in two samples with different sizes of the Au seed nanoparticles (4.1 and 2.7 nm) and same size of Fe3O4 nanoparticles (9.8 nm). The magnetization, small-angle neutron scattering, synchrotron x-ray diffraction and scanning transmission electron microscope measurements determined the antiferromagnetic FeO wüstite phase within Fe3O4 nanoparticles, originating at the interface with the Au nanoparticles. The interface between antiferromagnetic FeO and ferrimagnetic Fe3O4 is giving rise to the exchange bias effect. The strength of the exchange bias fields depends on the interfacial area and lattice mismatch between both phases. We propose that the charge transfer from the Au nanoparticles is responsible for a partial reduction of the Fe3O4 into FeO phase at the interface with Au nanoparticles. The Au-O bonds are formed across the interface to accommodate an excess of oxygen released during the reduction of magnetite.

  20. Exchange bias effect in Au-Fe3O4 dumbbell nanoparticles induced by the charge transfer from gold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feygenson, Mikhail; Bauer, John C.; Gai, Zheng

    2015-08-10

    We have studied the origin of the exchange bias effect in the Au-Fe3O4 dumbbell nanoparticles in two samples with different sizes of the Au seed nanoparticles (4.1 and 2.7 nm) and same size of Fe3O4 nanoparticles (9.8 nm). The magnetization, small-angle neutron-scattering, synchrotron x-ray diffraction, and scanning transmission electron microscope measurements determined the antiferromagnetic FeO wüstite phase within Fe3O4 nanoparticles, originating at the interface with the Au nanoparticles. The interface between antiferromagnetic FeO and ferrimagnetic Fe3O4 is giving rise to the exchange bias effect. The strength of the exchange bias fields depends on the interfacial area and lattice mismatch between both phases. We propose that the charge transfer from the Au nanoparticles is responsible for a partial reduction of the Fe3O4 into the FeO phase at the interface with Au nanoparticles. The Au-O bonds are formed, presumably across the interface to accommodate an excess of oxygen released during the reduction of magnetite.

  1. R. A. Fisher and his advocacy of randomization.

    PubMed

    Hall, Nancy S

    2007-01-01

    The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.

  2. Extent of genome-wide linkage disequilibrium in Australian Holstein-Friesian cattle based on a high-density SNP panel.

    PubMed

    Khatkar, Mehar S; Nicholas, Frank W; Collins, Andrew R; Zenger, Kyall R; Cavanagh, Julie A L; Barris, Wes; Schnabel, Robert D; Taylor, Jeremy F; Raadsma, Herman W

    2008-04-24

    The extent of linkage disequilibrium (LD) within a population determines the number of markers that will be required for successful association mapping and marker-assisted selection. Most studies on LD in cattle reported to date are based on microsatellite markers or small numbers of single nucleotide polymorphisms (SNPs) covering one or only a few chromosomes. This is the first comprehensive study of the extent of LD in cattle, analyzing data on 1,546 Holstein-Friesian bulls genotyped for 15,036 SNP markers covering all regions of all autosomes. Furthermore, most studies in cattle have used relatively small sample sizes and, consequently, may have had biased estimates of measures commonly used to describe LD. We examine minimum sample sizes required to estimate LD without bias and loss in accuracy. Finally, relatively little information is available on comparative LD structures including other mammalian species such as human and mouse, and we compare LD structure in cattle with public-domain data from both human and mouse. We computed three LD estimates, D', Dvol and r2, for 1,566,890 syntenic SNP pairs and a sample of 365,400 non-syntenic pairs. Mean D' is 0.189 among syntenic SNPs, and 0.105 among non-syntenic SNPs; mean r2 is 0.024 among syntenic SNPs and 0.0032 among non-syntenic SNPs. All three measures of LD for syntenic pairs decline with distance; the decline is much steeper for r2 than for D' and Dvol. The values of D' and Dvol are quite similar. Significant LD in cattle extends to 40 kb (when estimated as r2) and 8.2 Mb (when estimated as D'). The mean values for LD at large physical distances are close to those for non-syntenic SNPs. The minor allele frequency threshold affects the distribution and extent of LD. For unbiased and accurate estimates of LD across marker intervals spanning < 1 kb to > 50 Mb, minimum sample sizes of 400 (for D') and 75 (for r2) are required. The bias due to small sample sizes increases with the inter-marker interval. LD in cattle is much less extensive than in a mouse population created from crossing inbred lines, and more extensive than in humans. For association mapping in Holstein-Friesian cattle, for a given design, at least one SNP is required for each 40 kb, giving a total requirement of at least 75,000 SNPs for a low power whole-genome scan (median r2 > 0.19) and up to 300,000 markers at 10 kb intervals for a high power genome scan (median r2 > 0.62). For estimation of LD by D' and Dvol with sufficient precision, a sample size of at least 400 is required, whereas for r2 a minimum sample of 75 is adequate.
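    For reference, the two LD statistics reported above can be computed from haplotype counts at a pair of biallelic loci as in the Python sketch below; the counts are hypothetical.

```python
import numpy as np

def ld_measures(hap_counts):
    """D' and r^2 for two biallelic loci from haplotype counts [AB, Ab, aB, ab]."""
    n_AB, n_Ab, n_aB, n_ab = hap_counts
    n = n_AB + n_Ab + n_aB + n_ab
    p_AB = n_AB / n
    p_A = (n_AB + n_Ab) / n
    p_B = (n_AB + n_aB) / n
    D = p_AB - p_A * p_B
    r2 = D**2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
    d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B) if D > 0 else \
            min(p_A * p_B, (1 - p_A) * (1 - p_B))
    d_prime = abs(D) / d_max
    return d_prime, r2

print(ld_measures([60, 20, 15, 55]))   # hypothetical haplotype counts
```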

  3. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method

    PubMed Central

    2013-01-01

    Background: Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories at great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods: We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions: Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158

  4. Auroral E-region Plasma Irregularities and their Control by the Plasma Convection in the Southern Hemisphere

    NASA Astrophysics Data System (ADS)

    Forsythe, V. V.; Makarevich, R. A.

    2016-12-01

    Small-scale ionospheric plasma irregularities in the high-latitude E region and their control by F-region plasma convection are investigated using Super Dual Auroral Network (SuperDARN) observations at high southern latitudes over a 1-year period. Significant asymmetries are found in the velocity occurrence distribution due to the clustering of the high-velocity echoes of a particular velocity polarity. Statistical analysis of convection showed that some radars observe predominantly negative bias in the convection component within their short, E-region ranges, while others have a predominantly positive bias. A hypothesis that this bias is caused by asymmetric sectoring of the high-latitude plasma convection pattern is investigated. A new algorithm is developed that samples the plasma convection map and evaluates the convection pattern asymmetry along the particular latitude that corresponds to the radar location. It is demonstrated that the convection asymmetry has a particular seasonal and diurnal pattern, which is different for the polar and auroral radars. Possible causes for the observed convection pattern asymmetry are discussed. It is further proposed that the statistical occurrence of high-velocity E-region echoes generated by the Farley-Buneman instability (FBI) is highly sensitive to small changes in the convection pattern, which is consistent with the electric field threshold for the FBI onset being perhaps sharper and lower than previously thought.

  5. Getting closer to the truth: overcoming research challenges when estimating the financial impact of worksite health promotion programs.

    PubMed

    Ozminkowski, R J; Goetzel, R Z

    2001-01-01

    The authors describe the most important methodological challenges often encountered in conducting research and evaluation on the financial impact of health promotion. These include selection bias, skewed data, small sample sizes, and the choice of metrics. They discuss when these problems can and cannot be overcome, and suggest how some of them can be overcome by creating an appropriate framework for the study and using state-of-the-art statistical methods.

  6. Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.

    PubMed

    Galvelis, Raimondas; Sugita, Yuji

    2017-06-13

    The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies a bias (i.e., importance sampling) along a set of collective variables (CVs), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations, accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating the free energy of polypeptides with bias potentials of up to eight dimensions.

  7. Evaluating gull diets: A comparison of conventional methods and stable isotope analysis

    USGS Publications Warehouse

    Weiser, Emily L.; Powell, Abby N.

    2011-01-01

    Samples such as regurgitated pellets and food remains have traditionally been used in studies of bird diets, but these can produce biased estimates depending on the digestibility of different foods. Stable isotope analysis has been developed as a method for assessing bird diets that is not biased by digestibility. These two methods may provide complementary or conflicting information on diets of birds, but are rarely compared directly. We analyzed carbon and nitrogen stable isotope ratios of feathers of Glaucous Gull (Larus hyperboreus) chicks from eight breeding colonies in northern Alaska, and used a Bayesian mixing model to generate a probability distribution for the contribution of each food group to diets. We compared these model results with probability distributions from conventional diet samples (pellets and food remains) from the same colonies and time periods. Relative to the stable isotope estimates, conventional analysis often overestimated the contributions of birds and small mammals to gull diets and often underestimated the contributions of fish and zooplankton. Both methods gave similar estimates for the contributions of scavenged caribou, miscellaneous marine foods, and garbage to diets. Pellets and food remains therefore may be useful for assessing the importance of garbage relative to certain other foods in diets of gulls and similar birds, but are clearly inappropriate for estimating the potential impact of gulls on birds, small mammals, or fish. However, conventional samples provide more species-level information than stable isotope analysis, so a combined approach would be most useful for diet analysis and assessing a predator's impact on particular prey groups.

  8. Avoiding treatment bias of REDD+ monitoring by sampling with partial replacement.

    PubMed

    Köhl, Michael; Scott, Charles T; Lister, Andrew J; Demon, Inez; Plugge, Daniel

    2015-12-01

    Implementing REDD+ renders the development of a measurement, reporting and verification (MRV) system necessary to monitor carbon stock changes. MRV systems generally apply a combination of remote sensing techniques and in-situ field assessments. In-situ assessments can be based on 1) permanent plots, which are assessed on all successive occasions, 2) temporary plots, which are assessed only once, and 3) a combination of both. The current study focuses on in-situ assessments and addresses the effect of treatment bias, which is introduced by managing permanent sampling plots differently than the surrounding forests. Temporary plots are not subject to treatment bias, but are associated with large sampling errors and low cost-efficiency. Sampling with partial replacement (SPR) utilizes both permanent and temporary plots. We apply a scenario analysis with different intensities of deforestation and forest degradation to show that SPR combines cost-efficiency with the handling of treatment bias. Without treatment bias, permanent plots generally provide lower sampling errors for change estimates than SPR and temporary plots, but they do not provide reliable estimates if treatment bias occurs. SPR allows for change estimates that are comparable to those provided by permanent plots, offers the flexibility to adjust sample sizes over time, and allows data from permanent and temporary plots to be compared for detecting treatment bias. Equivalence of biomass or carbon stock estimates between permanent and temporary plots serves as an indication of the absence of treatment bias, while differences suggest that there is evidence for treatment bias. SPR is a flexible tool for estimating emission factors from successive measurements. It does not entirely depend on sample plots that are installed at the first occasion but allows for the adjustment of sample sizes and placement of new plots at any occasion. This ensures that in-situ samples provide representative estimates over time. SPR offers the possibility to increase sampling intensity in areas with high degradation intensities or to establish new plots in areas where permanent plots are lost due to deforestation. SPR is also an ideal approach to mitigate concerns about treatment bias.

  9. Characterizing sampling and quality screening biases in infrared and microwave limb sounding

    NASA Astrophysics Data System (ADS)

    Millán, Luis F.; Livesey, Nathaniel J.; Santee, Michelle L.; von Clarmann, Thomas

    2018-03-01

    This study investigates orbital sampling biases and evaluates the additional impact caused by data quality screening for the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) and the Aura Microwave Limb Sounder (MLS). MIPAS acts as a proxy for typical infrared limb emission sounders, while MLS acts as a proxy for microwave limb sounders. These biases were calculated for temperature and several trace gases by interpolating model fields to real sampling patterns and, additionally, screening those locations as directed by their corresponding quality criteria. Both instruments have dense uniform sampling patterns typical of limb emission sounders, producing almost identical sampling biases. However, there is a substantial difference between the number of locations discarded. MIPAS, as a mid-infrared instrument, is very sensitive to clouds, and measurements affected by them are thus rejected from the analysis. For example, in the tropics, the MIPAS yield is strongly affected by clouds, while MLS is mostly unaffected. The results show that upper-tropospheric sampling biases in zonally averaged data, for both instruments, can be up to 10 to 30 %, depending on the species, and up to 3 K for temperature. For MIPAS, the sampling reduction due to quality screening worsens the biases, leading to values as large as 30 to 100 % for the trace gases and expanding the 3 K bias region for temperature. This type of sampling bias is largely induced by the geophysical origins of the screening (e.g. clouds). Further, analysis of long-term time series reveals that these additional quality screening biases may affect the ability to accurately detect upper-tropospheric long-term changes using such data. In contrast, MLS data quality screening removes sufficiently few points that no additional bias is introduced, although its penetration is limited to the upper troposphere, while MIPAS may cover well into the mid-troposphere in cloud-free scenarios. We emphasize that the results of this study refer only to the representativeness of the respective data, not to their intrinsic quality.

  10. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
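    The issue can be illustrated with simulated diurnal tower counts and two generic variance estimators for a total estimated from a single (nonreplicated) systematic sample: one that treats the sample as simple random and one based on successive differences. The sketch below is illustrative only and does not reproduce the five estimators compared by the authors.

```python
import numpy as np

def systematic_total_variance(y, N):
    """Two common variance estimators for a total estimated from one
    systematic sample of n counts drawn from N sampling units."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    f = n / N                                    # sampling fraction
    total = N * y.mean()
    v_srs = N**2 * (1 - f) * y.var(ddof=1) / n   # treats the sample as SRS
    v_sd = N**2 * (1 - f) * np.sum(np.diff(y)**2) / (2 * n * (n - 1))  # successive differences
    return total, v_srs, v_sd

rng = np.random.default_rng(7)
hours = np.arange(240)
counts = 50 + 40 * np.sin(2 * np.pi * hours / 24) + rng.poisson(5, 240)  # diurnal pattern
sample = counts[::6]                             # systematic 1-in-6 sample
print(systematic_total_variance(sample, N=240))
```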

  11. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision while reducing potential estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
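    The data-generating step of such a calculator can be sketched as below: a reverse catalytic model in which the SCR drops at a change point tau years before the survey, used to simulate one cross-sectional serological sample. The parameter values are illustrative; the paper's calculator additionally fits logistic approximations to the resulting power curves.

```python
import numpy as np

def seroprev(age, scr_old, scr_new, srr, tau):
    """Reverse catalytic model: seroprevalence by age when the seroconversion
    rate dropped from scr_old to scr_new tau years before the survey
    (srr = seroreversion rate). Solves dP/dt = scr*(1 - P) - srr*P piecewise."""
    age = np.asarray(age, dtype=float)
    eq_new = scr_new / (scr_new + srr)
    eq_old = scr_old / (scr_old + srr)
    p_young = eq_new * (1 - np.exp(-(scr_new + srr) * age))           # lived only under the new SCR
    p_at_change = eq_old * (1 - np.exp(-(scr_old + srr) * (age - tau)))
    p_old = eq_new + (p_at_change - eq_new) * np.exp(-(scr_new + srr) * tau)
    return np.where(age <= tau, p_young, p_old)

rng = np.random.default_rng(3)
ages = rng.integers(1, 60, size=250)                                  # one cross-sectional survey
p = seroprev(ages, scr_old=0.10, scr_new=0.02, srr=0.01, tau=10)
seropositive = rng.binomial(1, p)
print("observed seroprevalence:", seropositive.mean())
```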

  12. Context Matters: Volunteer Bias, Small Sample Size, and the Value of Comparison Groups in the Assessment of Research-Based Undergraduate Introductory Biology Lab Courses

    PubMed Central

    Brownell, Sara E.; Kloser, Matthew J.; Fukami, Tadashi; Shavelson, Richard J.

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course. PMID:24358380

  13. Context matters: volunteer bias, small sample size, and the value of comparison groups in the assessment of research-based undergraduate introductory biology lab courses.

    PubMed

    Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.

  14. Bias estimation for moving optical sensor measurements with targets of opportunity

    NASA Astrophysics Data System (ADS)

    Belfadel, Djedjiga; Osborne, Richard W.; Bar-Shalom, Yaakov

    2014-06-01

    Integration of space based sensors into a Ballistic Missile Defense System (BMDS) allows for detection and tracking of threats over a larger area than ground based sensors [1]. This paper examines the effect of sensor bias error on the tracking quality of a Space Tracking and Surveillance System (STSS) for the highly non-linear problem of tracking a ballistic missile. The STSS constellation consists of two or more satellites (on known trajectories) for tracking ballistic targets. Each satellite is equipped with an IR sensor that provides azimuth and elevation to the target. The tracking problem is made more difficult due to a constant or slowly varying bias error present in each sensor's line of sight measurements. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. The measurements provided by these sensors are assumed time-coincident (synchronous) and perfectly associated. The line of sight (LOS) measurements from the sensors can be fused into measurements which are the Cartesian target position, i.e., linear in the target state. We evaluate the Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimates, which serves as a quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the (unknown) trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with the target position estimation using the original biased measurements.

  15. Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample

    ERIC Educational Resources Information Center

    Kangas, Brian D.; Branch, Marc N.

    2008-01-01

    The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…

  16. Method and apparatus for differential spectroscopic atomic-imaging using scanning tunneling microscopy

    DOEpatents

    Kazmerski, Lawrence L.

    1990-01-01

    A method and apparatus for differential spectroscopic atomic imaging are disclosed that spatially resolve and display not only the individual atoms on a sample surface, but also their bonding and the specific atomic species in such bonds. The apparatus includes a scanning tunneling microscope (STM) that is modified to include photon biasing, preferably a tuneable laser, modulating electronic surface biasing for the sample, and temperature biasing, preferably a vibration-free refrigerated sample mounting stage. Computer control, data processing, and visual display components are also included. The method includes modulating the electronic bias voltage with and without selected photon wavelengths and frequency biasing under a stabilizing (usually cold) bias temperature to detect bonding and specific atomic species in the bonds as the STM rasters the sample. These data are processed along with atomic spatial topography data obtained from the STM raster scan to create a real-time visual image of the atoms on the sample surface.

  17. Bias of apparent tracer ages in heterogeneous environments.

    PubMed

    McCallum, James L; Cook, Peter G; Simmons, Craig T; Werner, Adrian D

    2014-01-01

    The interpretation of apparent ages often assumes that a water sample is composed of a single age. In heterogeneous aquifers, apparent ages estimated with environmental tracer methods do not reflect mean water ages because of the mixing of waters from many flow paths with different ages. This is due to nonlinear variations in atmospheric concentrations of the tracer with time resulting in biases of mixed concentrations used to determine apparent ages. The bias of these methods is rarely reported and has not been systematically evaluated in heterogeneous settings. We simulate residence time distributions (RTDs) and environmental tracers CFCs, SF6, 85Kr, and 39Ar in synthetic heterogeneous confined aquifers and compare apparent ages to mean ages. Heterogeneity was simulated as both K-field variance (σ²) and structure. We demonstrate that an increase in heterogeneity (increase in σ² or structure) results in an increase in the width of the RTD. In low heterogeneity cases, widths were generally on the order of 10 years and biases generally less than 10%. In high heterogeneity cases, widths can reach 100s of years and biases can reach up to 100%. In cases where the temporal variations of atmospheric concentration of individual tracers vary, different patterns of bias are observed for the same mean age. We show that CFC-12 and CFC-113 ages may be used to correct for the mean age if analytical errors are small. © 2013, National Ground Water Association.
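    The following toy sketch shows how mixing two waters of very different ages under a nonlinear (here, quadratically rising) atmospheric input yields an apparent age that differs from the mean age; the input function, sampling year, and ages are invented for illustration.

```python
import numpy as np

def atmos(year):
    """Hypothetical atmospheric tracer input rising quadratically since 1950."""
    return (year - 1950.0) ** 2 / 50.0

def apparent_age(conc, sample_year=2010):
    """Invert the (monotonic) input function to obtain a single apparent age."""
    years = np.linspace(1950.0, sample_year, 2001)
    recharge_year = np.interp(conc, atmos(years), years)
    return sample_year - recharge_year

ages = np.array([5.0, 55.0])                  # two flow paths mixing equally
conc_mix = atmos(2010 - ages).mean()          # concentration of the mixed sample
print("mean age:", ages.mean(), "apparent age:", round(apparent_age(conc_mix), 1))
```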

  18. DNA extraction for streamlined metagenomics of diverse environmental samples.

    PubMed

    Marotz, Clarisse; Amir, Amnon; Humphrey, Greg; Gaffney, James; Gogul, Grant; Knight, Rob

    2017-06-01

    A major bottleneck for metagenomic sequencing is rapid and efficient DNA extraction. Here, we compare the extraction efficiencies of three magnetic bead-based platforms (KingFisher, epMotion, and Tecan) to a standardized column-based extraction platform across a variety of sample types, including feces, oral, skin, soil, and water. Replicate sample plates were extracted and prepared for 16S rRNA gene amplicon sequencing in parallel to assess extraction bias and DNA quality. The data demonstrate that any effect of extraction method on sequencing results was small compared with the variability across samples; however, the KingFisher platform produced the largest number of high-quality reads in the shortest amount of time. Based on these results, we have identified an extraction pipeline that dramatically reduces sample processing time without sacrificing bacterial taxonomic or abundance information.

  19. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134
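
    A weighted partial likelihood of the general kind referred to above can be sketched with the lifelines package, which accepts per-subject weights in its Cox model. The sampling scheme, weights, and data below are hypothetical and much simpler than the proposed ODS design; they only illustrate weighting a Cox fit by inverse inclusion probabilities.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
t = rng.exponential(scale=np.exp(-0.5 * x))          # true log-hazard ratio for x is 0.5
c = rng.exponential(scale=2.0, size=n)               # independent censoring times
df = pd.DataFrame({"x": x, "time": np.minimum(t, c), "event": (t <= c).astype(int)})

# Hypothetical outcome-dependent sampling: keep all failures, subsample 30% of censored subjects.
keep = (df["event"] == 1) | (rng.uniform(size=n) < 0.3)
sample = df[keep].copy()
# Weight each sampled subject by the inverse of its inclusion probability.
sample["w"] = np.where(sample["event"] == 1, 1.0, 1.0 / 0.3)

cph = CoxPHFitter()
cph.fit(sample, duration_col="time", event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```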

  20. Apparatus for measuring surface particulate contamination

    DOEpatents

    Woodmansee, Donald E.

    2002-01-01

    An apparatus for measuring surface particulate contamination includes a tool for collecting a contamination sample from a target surface, a mask having an opening of known area formed therein for defining the target surface, and a flexible connector connecting the tool to the mask. The tool includes a body portion having a large diameter section defining a surface and a small diameter section extending from the large diameter section. A particulate collector is removably mounted on the surface of the large diameter section for collecting the contaminants. The tool further includes a spindle extending from the small diameter section and a spool slidingly mounted on the spindle. A spring is disposed between the small diameter section and the spool for biasing the spool away from the small diameter section. An indicator is provided on the spindle so as to be revealed when the spool is pressed downward to compress the spring.

  1. Dried blood spot analysis for therapeutic drug monitoring of pazopanib.

    PubMed

    de Wit, Djoeke; den Hartigh, Jan; Gelderblom, Hans; Qian, Yanwen; den Hollander, Margret; Verheul, Henk; Guchelaar, Henk-Jan; van Erp, Nielka P

    2015-12-01

    Dried blood spot (DBS) sampling is potentially a more patient-friendly and flexible alternative to venous sampling of pazopanib. This study determined the agreement between pazopanib DBS and plasma concentrations to facilitate implementation of pazopanib DBS sampling into clinical practice. Paired DBS and plasma samples were collected in 12 patients. Pazopanib plasma concentrations were calculated from DBS concentrations using the formula: plasma concentration = DBS concentration/(1 - hematocrit). Passing-Bablok and Bland-Altman analyses were used to determine the agreement between calculated and measured plasma concentrations. We predefined a clinical acceptance limit of 25% for the Bland-Altman analysis. Passing-Bablok analysis showed a small constant bias (intercept estimate, -8.53 [95% CI, -12.22 to -4.41]) and a slight proportional bias (slope estimate, 1.15 [95% CI, 1.04-1.24]) between calculated and measured concentrations. This bias was clinically nonrelevant, as shown by Bland-Altman analysis; the mean ratio of calculated to measured concentrations was 0.94 (95% CI, 0.65-1.23). The clinical acceptance limits were well within these 95% limits of agreement. More specifically, 92.6% of the data points were within the predefined acceptance limits. Pazopanib plasma concentrations can be accurately calculated from DBS concentrations. Although validation of DBS cards prepared by patients themselves is required, these results show that DBS sampling can be used to monitor pazopanib therapy in clinical practice. © 2015, The American College of Clinical Pharmacology.
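
    The conversion formula and the ratio-scale Bland-Altman summary reported above are straightforward to reproduce; the paired values below are hypothetical and only meant to show the arithmetic, not the study data.

```python
import numpy as np

# Hypothetical paired data: DBS concentrations (mg/L), hematocrit fractions, measured plasma levels (mg/L).
dbs = np.array([18.2, 22.5, 30.1, 25.4, 27.8])
hct = np.array([0.41, 0.38, 0.44, 0.40, 0.36])
plasma_measured = np.array([31.0, 35.9, 55.0, 41.8, 43.1])

# Conversion used in the study: plasma = DBS / (1 - hematocrit)
plasma_calc = dbs / (1.0 - hct)

# Bland-Altman on the ratio scale: mean ratio and 95% limits of agreement.
ratio = plasma_calc / plasma_measured
mean_ratio = ratio.mean()
loa = mean_ratio + np.array([-1.96, 1.96]) * ratio.std(ddof=1)
print(f"mean ratio (calculated/measured): {mean_ratio:.2f}")
print(f"95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f}")
```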

  2. Velocity segregation and systematic biases in velocity dispersion estimates with the SPT-GMOS spectroscopic survey

    DOE PAGES

    Bayliss, Matthew. B.; Zengo, Kyle; Ruel, Jonathan; ...

    2017-03-07

    The velocity distribution of galaxies in clusters is not universal; rather, galaxies are segregated according to their spectral type and relative luminosity. We examine the velocity distributions of different populations of galaxies within 89 Sunyaev Zel'dovich (SZ) selected galaxy clusters spanning 0.28 < z < 1.08. Our sample is primarily drawn from the SPT-GMOS spectroscopic survey, supplemented by additional published spectroscopy, resulting in a final spectroscopic sample of 4148 galaxy spectra, of which 2868 are cluster members. The velocity dispersion of star-forming cluster galaxies is 17 ± 4% greater than that of passive cluster galaxies, and the velocity dispersion of bright (m < m* - 0.5) cluster galaxies is 11 ± 4% lower than the velocity dispersion of our total member population. We find good agreement with simulations regarding the shape of the relationship between the measured velocity dispersion and the fraction of passive versus star-forming galaxies used to measure it, but we find a small offset between this relationship as measured in data and simulations, which suggests that our dispersions are systematically low by as much as 3% relative to simulations. We argue that this offset could be interpreted as a measurement of the effective velocity bias that describes the ratio of our observed velocity dispersions and the intrinsic velocity dispersion of dark matter particles in a published simulation result. Measuring velocity bias in this way suggests that large spectroscopic surveys can improve dispersion-based mass-observable scaling relations for cosmology even in the face of velocity biases, by quantifying and ultimately calibrating them out.

  3. Velocity segregation and systematic biases in velocity dispersion estimates with the SPT-GMOS spectroscopic survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayliss, Matthew. B.; Zengo, Kyle; Ruel, Jonathan

    The velocity distribution of galaxies in clusters is not universal; rather, galaxies are segregated according to their spectral type and relative luminosity. We examine the velocity distributions of different populations of galaxies within 89 Sunyaev Zel'dovich (SZ) selected galaxy clusters spanning 0.28 < z < 1.08. Our sample is primarily drawn from the SPT-GMOS spectroscopic survey, supplemented by additional published spectroscopy, resulting in a final spectroscopic sample of 4148 galaxy spectra, of which 2868 are cluster members. The velocity dispersion of star-forming cluster galaxies is 17 ± 4% greater than that of passive cluster galaxies, and the velocity dispersion of bright (m < m* - 0.5) cluster galaxies is 11 ± 4% lower than the velocity dispersion of our total member population. We find good agreement with simulations regarding the shape of the relationship between the measured velocity dispersion and the fraction of passive versus star-forming galaxies used to measure it, but we find a small offset between this relationship as measured in data and simulations, which suggests that our dispersions are systematically low by as much as 3% relative to simulations. We argue that this offset could be interpreted as a measurement of the effective velocity bias that describes the ratio of our observed velocity dispersions and the intrinsic velocity dispersion of dark matter particles in a published simulation result. Measuring velocity bias in this way suggests that large spectroscopic surveys can improve dispersion-based mass-observable scaling relations for cosmology even in the face of velocity biases, by quantifying and ultimately calibrating them out.

  4. Velocity Segregation and Systematic Biases In Velocity Dispersion Estimates with the SPT-GMOS Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Bayliss, Matthew. B.; Zengo, Kyle; Ruel, Jonathan; Benson, Bradford A.; Bleem, Lindsey E.; Bocquet, Sebastian; Bulbul, Esra; Brodwin, Mark; Capasso, Raffaella; Chiu, I.-non; McDonald, Michael; Rapetti, David; Saro, Alex; Stalder, Brian; Stark, Antony A.; Strazzullo, Veronica; Stubbs, Christopher W.; Zenteno, Alfredo

    2017-03-01

    The velocity distribution of galaxies in clusters is not universal; rather, galaxies are segregated according to their spectral type and relative luminosity. We examine the velocity distributions of different populations of galaxies within 89 Sunyaev Zel’dovich (SZ) selected galaxy clusters spanning 0.28 < z < 1.08. Our sample is primarily drawn from the SPT-GMOS spectroscopic survey, supplemented by additional published spectroscopy, resulting in a final spectroscopic sample of 4148 galaxy spectra, of which 2868 are cluster members. The velocity dispersion of star-forming cluster galaxies is 17 ± 4% greater than that of passive cluster galaxies, and the velocity dispersion of bright (m < m* - 0.5) cluster galaxies is 11 ± 4% lower than the velocity dispersion of our total member population. We find good agreement with simulations regarding the shape of the relationship between the measured velocity dispersion and the fraction of passive versus star-forming galaxies used to measure it, but we find a small offset between this relationship as measured in data and simulations, which suggests that our dispersions are systematically low by as much as 3% relative to simulations. We argue that this offset could be interpreted as a measurement of the effective velocity bias that describes the ratio of our observed velocity dispersions and the intrinsic velocity dispersion of dark matter particles in a published simulation result. Measuring velocity bias in this way suggests that large spectroscopic surveys can improve dispersion-based mass-observable scaling relations for cosmology even in the face of velocity biases, by quantifying and ultimately calibrating them out.

  5. Systematic lymphadenectomy versus sampling of ipsilateral mediastinal lymph-nodes during lobectomy for non-small-cell lung cancer: a systematic review of randomized trials and a meta-analysis.

    PubMed

    Mokhles, Sahar; Macbeth, Fergus; Treasure, Tom; Younes, Riad N; Rintoul, Robert C; Fiorentino, Francesca; Bogers, Ad J J C; Takkenberg, Johanna J M

    2017-06-01

    To re-examine the evidence for recommendations for complete dissection versus sampling of ipsilateral mediastinal lymph nodes during lobectomy for cancer. We searched for randomized trials of systematic mediastinal lymphadenectomy versus mediastinal sampling. We performed a textual analysis of the authors' own starting assumptions and conclusions. We analysed the trial designs and risk of bias. We extracted data on early mortality, perioperative complications, overall survival, local recurrence and distant recurrence for meta-analysis. We found five randomized controlled trials recruiting 1980 patients spanning 1989-2007. The expressed starting position in 3/5 studies was a conviction that systematic dissection was effective. Long-term survival was better with lymphadenectomy than with sampling (hazard ratio 0.78; 95% CI 0.69-0.89), as was perioperative survival, although not significantly (odds ratio 0.59; 95% CI 0.25-1.36). However, there was an overall high risk of bias and a lack of intention-to-treat analysis. There were higher (non-significant) rates of perioperative complications, including bleeding, chylothorax and recurrent nerve palsy, with lymphadenectomy. The high risk of bias in these trials makes the overall conclusion insecure. The finding of clinically important surgically related morbidities but lower perioperative mortality with lymphadenectomy seems inconsistent. The multiple variables in patients, cancers and available treatments suggest that large pragmatic multicentre trials, testing currently available strategies, are the best way to find out which are more effective. The number of patients affected with lung cancer makes trials feasible. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
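
    The pooled hazard ratio quoted above comes from an inverse-variance meta-analysis. A minimal fixed-effect version of that calculation is sketched below with hypothetical per-trial hazard ratios and confidence intervals (not the five trials in the review).

```python
import numpy as np

# Hypothetical per-trial hazard ratios and 95% CIs (not the trials in the review).
hr = np.array([0.72, 0.85, 0.80, 0.69, 0.83])
ci_low = np.array([0.55, 0.66, 0.60, 0.50, 0.63])
ci_high = np.array([0.94, 1.09, 1.07, 0.95, 1.09])

log_hr = np.log(hr)
# Standard error recovered from the CI width on the log scale.
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

w = 1.0 / se**2                       # inverse-variance weights (fixed-effect model)
pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled HR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```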

  6. Average of delta: a new quality control tool for clinical laboratories.

    PubMed

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternative approach, average of delta, which combines these concepts, using the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of the average of delta and average of normals functions, and the effect of assay bias, for different values of analytical imprecision, within- and between-subject biological variation, and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings, and average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
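
    The core claim, that averaging within-patient deltas cancels between-subject variation and therefore detects an added assay bias with a tighter scatter than averaging raw results, can be checked with a small simulation. The variance components, bias size, and number of averaged samples below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n_patients = 5000
between_sd, within_sd, analytical_sd = 8.0, 2.0, 1.5   # hypothetical variance components

true_level = 100 + rng.normal(0, between_sd, n_patients)   # each patient's homeostatic set point

def result(bias=0.0):
    """One measured result per patient: set point + biological + analytical variation + assay bias."""
    return (true_level
            + rng.normal(0, within_sd, n_patients)
            + rng.normal(0, analytical_sd, n_patients)
            + bias)

first = result()
second = result(bias=3.0)        # an assay bias appears before the repeat samples are run

delta = second - first           # within-patient differences: between-subject variation cancels
k = 20                           # number of sequential values averaged

print(f"average of delta over {k} samples:   {delta[:k].mean():6.2f} (target 0, shifted by the bias)")
print(f"average of normals over {k} samples: {second[:k].mean():6.2f} (target 100, shifted by the bias)")
print(f"SD of a {k}-sample average of delta:   {delta.std() / np.sqrt(k):.2f}")
print(f"SD of a {k}-sample average of normals: {second.std() / np.sqrt(k):.2f}")
```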

  7. Identification and Correction of Additive and Multiplicative Spatial Biases in Experimental High-Throughput Screening.

    PubMed

    Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir

    2018-06-01

    Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
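
    The paper proposes new additive and multiplicative interaction models; as a baseline for what removing an additive row/column bias means, the sketch below applies classical Tukey median polish to a simulated plate. This is not the authors' procedure, only a minimal illustration of an additive spatial bias correction.

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Fit the additive model plate ~ overall + row + column by Tukey's median polish.
    Returns residuals (bias-corrected values) plus the estimated row and column effects."""
    residual = plate.astype(float).copy()
    row_eff = np.zeros(plate.shape[0])
    col_eff = np.zeros(plate.shape[1])
    for _ in range(n_iter):
        row_med = np.median(residual, axis=1)
        residual -= row_med[:, None]
        row_eff += row_med
        col_med = np.median(residual, axis=0)
        residual -= col_med[None, :]
        col_eff += col_med
    return residual, row_eff, col_eff

# Hypothetical 8 x 12 plate: flat true signal plus an additive row bias and column bias.
rng = np.random.default_rng(3)
noise = rng.normal(0.0, 0.05, (8, 12))
row_bias = np.linspace(0.0, 0.5, 8)[:, None]
col_bias = np.linspace(-0.3, 0.3, 12)[None, :]
plate = 1.0 + noise + row_bias + col_bias

residual, row_eff, col_eff = median_polish(plate)
# Centered estimates should track the injected biases.
print("estimated row bias:", np.round(row_eff - row_eff.mean(), 2))
print("injected row bias: ", np.round((row_bias - row_bias.mean()).ravel(), 2))
```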

  8. Effects of small-scale vertical variations in well-screen inflow rates and concentrations of organic compounds on the collection of representative ground-water-quality samples

    USGS Publications Warehouse

    Gibs, Jacob; Brown, G. Allan; Turner, Kenneth S.; MacLeod, Cecilia L.; Jelinski, James; Koehnlein, Susan A.

    1993-01-01

    Because a water sample collected from a well is an integration of water from different depths along the well screen, measured concentrations can be biased if analyte concentrations are not uniform along the length of the well screen. The resulting concentration in the sample, therefore, is a function of variations in well-screen inflow rate and analyte concentration with depth. A multiport sampler with seven short screened intervals was designed and used to investigate small-scale vertical variations in water chemistry and aquifer hydraulic conductivity in ground water contaminated by leaded gasoline at Galloway Township, Atlantic County, New Jersey. The multiport samplers were used to collect independent samples from seven intervals within the screened zone that were flow-rate weighted and integrated to simulate a 5-foot-long, 2.375-inch-outside-diameter conventional wire-wound screen. The integration of the results of analyses of samples collected from two multiport samplers showed that a conventional 5-foot-long well screen would integrate contaminant concentrations over its length, resulting in an apparent contaminant concentration that was as little as 28 percent of the maximum concentration observed in the multiport sampler.
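
    The flow-weighted integration used to simulate a long well screen is a weighted average of the port concentrations by inflow rate. The port values below are hypothetical and simply illustrate how a concentration peak is diluted to a small fraction of its maximum.

```python
import numpy as np

# Hypothetical multiport data: inflow rate (L/min) and analyte concentration (µg/L) at seven ports.
inflow = np.array([0.8, 1.5, 2.4, 3.0, 2.1, 0.9, 0.3])
conc   = np.array([5.0, 12.0, 40.0, 150.0, 35.0, 8.0, 2.0])

# Concentration a long screen would report: the flow-weighted average over the screened interval.
integrated = np.sum(inflow * conc) / np.sum(inflow)
print(f"flow-weighted concentration: {integrated:.1f} µg/L")
print(f"fraction of the peak port:   {integrated / conc.max():.0%}")
```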

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Christopher A.; Martinez, Alonzo; McNamara, Bruce K.

    International Atomic Energy Agency (IAEA) safeguard verification measures in gaseous centrifuge enrichment plants (GCEPs) rely on environmental sampling, non-destructive assay (NDA), and destructive assay (DA) sampling and analysis to determine uranium enrichment. UF6 bias defect measurements are made by DA sampling and analysis to assure that enrichment is consistent with declarations. DA samples are collected from a limited number of cylinders for high precision, offsite mass spectrometer analysis. Samples are typically drawn from a sampling tap into a UF6 sample bottle, then packaged, sealed, and shipped under IAEA chain of custody to an offsite analytical laboratory. Future DA safeguard measures may require improvements in efficiency and effectiveness as GCEP capacities increase and UF6 shipping regulations become increasingly more restrictive. The Pacific Northwest National Laboratory (PNNL) DA sampler concept and Laser Ablation Absorption Ratio Spectrometry (LAARS) assay method are under development to potentially provide DA safeguard tools that increase inspection effectiveness and reduce sample shipping constraints. The PNNL DA sampler concept uses a handheld sampler to collect DA samples for either onsite LAARS assay or offsite laboratory analysis. The DA sampler design will use a small sampling planchet that is coated with an adsorptive film to collect controlled quantities of UF6 gas directly from a cylinder or process sampling tap. Development efforts are currently underway at PNNL to enhance LAARS assay performance to allow high-precision onsite bias defect measurements. In this paper, we report on the experimental investigation to develop adsorptive films for the PNNL DA sampler concept. These films are intended to efficiently capture UF6 and then stabilize the collected DA sample prior to onsite LAARS or offsite laboratory analysis. Several porous material composite films were investigated, including a film designed to maximize the chemical adsorption and binding of gaseous UF6 onto the sampling planchet.

  10. Testing the Large-scale Environments of Cool-core and Non-cool-core Clusters with Clustering Bias

    NASA Astrophysics Data System (ADS)

    Medezinski, Elinor; Battaglia, Nicholas; Coupon, Jean; Cen, Renyue; Gaspari, Massimo; Strauss, Michael A.; Spergel, David N.

    2017-02-01

    There are well-observed differences between cool-core (CC) and non-cool-core (NCC) clusters, but the origin of this distinction is still largely unknown. Competing theories can be divided into internal (inside-out), in which internal physical processes transform or maintain the NCC phase, and external (outside-in), in which the cluster type is determined by its initial conditions, which in turn leads to different formation histories (i.e., assembly bias). We propose a new method that uses the relative assembly bias of CC to NCC clusters, as determined via the two-point cluster-galaxy cross-correlation function (CCF), to test whether formation history plays a role in determining their nature. We apply our method to 48 ACCEPT clusters, which have well resolved central entropies, and cross-correlate with the SDSS-III/BOSS LOWZ galaxy catalog. We find that the relative bias of NCC over CC clusters is b = 1.42 ± 0.35 (1.6σ different from unity). Our measurement is limited by the small number of clusters with core entropy information within the BOSS footprint, 14 CC and 34 NCC clusters. Future compilations of X-ray cluster samples, combined with deep all-sky redshift surveys, will be able to better constrain the relative assembly bias of CC and NCC clusters and determine the origin of the bimodality.

  11. Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias

    PubMed Central

    Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro

    2011-01-01

    In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
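
    A deliberately simplified sketch of the weighting idea is given below: censoring is modeled at baseline with a logistic regression on a measured common predictor, and uncensored subjects are reweighted by the inverse of their probability of remaining uncensored. This is not the time-varying IPCW used in the cohort analysis; the data and model are hypothetical, and the extreme weights it produces are the kind of warning sign the paper discusses under strong selection bias.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)                              # a common predictor of censoring and outcome
p_censored = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))    # strong selection: censoring depends on x
censored = rng.uniform(size=n) < p_censored
event_time = rng.exponential(scale=np.exp(-0.7 * x))

df = pd.DataFrame({"x": x, "censored": censored, "time": event_time})

# Model the probability of remaining uncensored given the measured predictor.
model = LogisticRegression().fit(df[["x"]], ~df["censored"])
p_uncensored = model.predict_proba(df[["x"]])[:, 1]

uncensored = df[~df["censored"]].copy()
uncensored["ipcw"] = 1.0 / p_uncensored[~df["censored"].to_numpy()]

# Weighted vs unweighted mean survival time among the uncensored subjects.
print("unweighted mean time:", uncensored["time"].mean().round(3))
print("IPCW-weighted mean:  ", np.average(uncensored["time"], weights=uncensored["ipcw"]).round(3))
print("largest weight:      ", uncensored["ipcw"].max().round(1), "(extreme weights signal strong selection)")
```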

  12. Conditional estimation using prior information in 2-stage group sequential designs assuming asymptotic normality when the trial terminated early.

    PubMed

    Shimura, Masashi; Maruo, Kazushi; Gosho, Masahiko

    2018-04-23

    Two-stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this method is known to display conditional bias. To reduce such bias, a conditional mean-adjusted estimator (CMAE) has been proposed, although the remaining bias may be nonnegligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator for adjusting the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean-squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility. Copyright © 2018 John Wiley & Sons, Ltd.
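
    The abstract describes weighting the interim maximum likelihood estimate against the effect size prespecified in the sample size calculation. A minimal stand-in for such an estimator, using the information fraction as the weight, is sketched below; the published weighting may differ, and all numbers are hypothetical.

```python
import numpy as np

def weighted_interim_estimate(mle_interim, prespecified_effect, info_fraction):
    """Shrink the interim MLE toward the design effect size, weighting by the information fraction.
    (A simplified stand-in for the paper's estimator; the published weights may differ.)"""
    return info_fraction * mle_interim + (1.0 - info_fraction) * prespecified_effect

# Hypothetical 2-stage trial stopped for efficacy at the interim analysis.
mle_interim = 0.62          # observed treatment difference at the interim look
design_effect = 0.40        # effect size assumed in the sample size calculation
info_fraction = 0.5         # half of the planned information observed so far

print("naive MLE:        ", mle_interim)
print("weighted estimate:", weighted_interim_estimate(mle_interim, design_effect, info_fraction))
```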

  13. Testing the Large-scale Environments of Cool-core and Non-cool-core Clusters with Clustering Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medezinski, Elinor; Battaglia, Nicholas; Cen, Renyue

    2017-02-10

    There are well-observed differences between cool-core (CC) and non-cool-core (NCC) clusters, but the origin of this distinction is still largely unknown. Competing theories can be divided into internal (inside-out), in which internal physical processes transform or maintain the NCC phase, and external (outside-in), in which the cluster type is determined by its initial conditions, which in turn leads to different formation histories (i.e., assembly bias). We propose a new method that uses the relative assembly bias of CC to NCC clusters, as determined via the two-point cluster-galaxy cross-correlation function (CCF), to test whether formation history plays a role in determiningmore » their nature. We apply our method to 48 ACCEPT clusters, which have well resolved central entropies, and cross-correlate with the SDSS-III/BOSS LOWZ galaxy catalog. We find that the relative bias of NCC over CC clusters is b = 1.42 ± 0.35 (1.6 σ different from unity). Our measurement is limited by the small number of clusters with core entropy information within the BOSS footprint, 14 CC and 34 NCC clusters. Future compilations of X-ray cluster samples, combined with deep all-sky redshift surveys, will be able to better constrain the relative assembly bias of CC and NCC clusters and determine the origin of the bimodality.« less

  14. A new u-statistic with superior design sensitivity in matched observational studies.

    PubMed

    Rosenbaum, Paul R

    2011-09-01

    In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.
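
    The power of a sensitivity analysis for a sign-score statistic such as Wilcoxon's signed rank can be simulated directly: generate the pair differences, compute the statistic, and compare it with the worst-case null distribution allowed by a bias of magnitude Γ, under which each rank enters with probability Γ/(1+Γ). The sketch below uses the abstract's data-generating process (250 pair differences, Normal with expectation 1/2 and variance 1) and an illustrative Γ = 3; the new family of u-statistics itself is not implemented here.

```python
import numpy as np
from scipy.stats import rankdata, norm

rng = np.random.default_rng(11)
n_pairs, n_sims, alpha, gamma = 250, 2000, 0.05, 3.0
p_plus = gamma / (1.0 + gamma)          # worst-case P(positive difference) allowed by the bias model

rejections = 0
for _ in range(n_sims):
    d = rng.normal(loc=0.5, scale=1.0, size=n_pairs)   # pair differences, expectation 1/2, variance 1
    q = rankdata(np.abs(d))                            # ranks of |d|, as in Wilcoxon's signed-rank statistic
    t_obs = np.sum(q[d > 0])
    # Worst-case null distribution of the statistic under bias gamma (normal approximation).
    mean0 = p_plus * q.sum()
    sd0 = np.sqrt(p_plus * (1 - p_plus) * np.sum(q**2))
    rejections += (t_obs - mean0) / sd0 >= norm.ppf(1 - alpha)

print(f"power of the sensitivity analysis with Wilcoxon's statistic at gamma={gamma}: {rejections / n_sims:.2f}")
```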

  15. Pharmacologic Treatment of Repetitive Behaviors in Autism Spectrum Disorders: Evidence of Publication Bias

    PubMed Central

    Volkmar, Fred R.; Bloch, Michael H.

    2012-01-01

    OBJECTIVE: The goal of this study was to examine the efficacy of serotonin receptor inhibitors (SRIs) for the treatment of repetitive behaviors in autism spectrum disorders (ASD). METHODS: Two reviewers searched PubMed and Clinicaltrials.gov for randomized, double-blind, placebo-controlled trials evaluating the efficacy of SRIs for repetitive behaviors in ASD. Our primary outcome was mean improvement in ratings scales of repetitive behavior. Publication bias was assessed by using a funnel plot, the Egger’s test, and a meta-regression of sample size and effect size. RESULTS: Our search identified 5 published and 5 unpublished but completed trials eligible for meta-analysis. Meta-analysis of 5 published and 1 unpublished trial (which provided data) demonstrated a small but significant effect of SRI for the treatment of repetitive behaviors in ASD (standardized mean difference: 0.22 [95% confidence interval: 0.07–0.37], z score = 2.87, P < .005). There was significant evidence of publication bias in all analyses. When Duval and Tweedie's trim and fill method was used to adjust for the effect of publication bias, there was no longer a significant benefit of SRI for the treatment of repetitive behaviors in ASD (standardized mean difference: 0.12 [95% confidence interval: –0.02 to 0.27]). Secondary analyses demonstrated no significant effect of type of medication, patient age, method of analysis, trial design, or trial duration on reported SRI efficacy. CONCLUSIONS: Meta-analysis of the published literature suggests a small but significant effect of SRI in the treatment of repetitive behaviors in ASD. This effect may be attributable to selective publication of trial results. Without timely, transparent, and complete disclosure of trial results, it remains difficult to determine the efficacy of available medications. PMID:22529279
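
    Funnel-plot asymmetry tests of the kind used above can be illustrated with Egger's regression: regress the standardized effect (effect/SE) on precision (1/SE) and test whether the intercept differs from zero. The per-trial effects below are hypothetical, not the trials in this meta-analysis; the sketch assumes SciPy 1.6+ for the intercept standard error.

```python
import numpy as np
from scipy.stats import linregress, t as t_dist

# Hypothetical per-trial standardized mean differences and standard errors (not the meta-analysis data).
effect = np.array([0.45, 0.38, 0.30, 0.25, 0.20, 0.05])
se     = np.array([0.22, 0.18, 0.15, 0.12, 0.10, 0.06])

# Egger's test: regress the standardized effect (effect/SE) on precision (1/SE);
# an intercept far from zero indicates funnel-plot asymmetry (possible publication bias).
res = linregress(1.0 / se, effect / se)
t_stat = res.intercept / res.intercept_stderr
p_val = 2 * t_dist.sf(abs(t_stat), df=len(effect) - 2)
print(f"Egger intercept = {res.intercept:.2f}, t = {t_stat:.2f}, p = {p_val:.3f}")
```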

  16. Bias-field equalizer for bubble memories

    NASA Technical Reports Server (NTRS)

    Keefe, G. E.

    1977-01-01

    Magnetoresistive Permalloy sensor monitors bias field required to maintain bubble memory. Sensor provides error signal that, in turn, corrects magnitude of bias field. Error signal from sensor can be used to control magnitude of bias field in either auxiliary set of bias-field coils around permanent magnet, or current in small coils used to remagnetize permanent magnet by infrequent, short, high-current pulse or short sequence of pulses.

  17. Direct analysis of samples under ambient condition by high-voltage-assisted laser desorption ionization mass spectrometry in both positive and negative ion mode.

    PubMed

    Ren, Xinxin; Liu, Jia; Zhang, Chengsen; Luo, Hai

    2013-03-15

    With the rapid development of ambient mass spectrometry, the hybrid laser-based ambient ionization methods which can generate multiply charged ions of large biomolecules and also characterize small molecules with good signal-to-noise in both positive and negative ion modes are of particular interest. An ambient ionization method termed high-voltage-assisted laser desorption ionization (HALDI) is developed, in which a 1064 nm laser is used to desorb various liquid samples from the sample target biased at a high potential without the need for an organic matrix. The pre-charged liquid samples are desorbed by the laser to form small charged droplets which may undergo an electrospray-like ionization process to produce multiply charged ions of large biomolecules. Various samples including proteins, oligonucleotides (ODNs), drugs, whole milk and chicken eggs have been analyzed by HALDI-MS in both positive and negative ion mode with little or no sample preparation. In addition, HALDI can generate intense signals with better signal-to-noise in negative ion mode than laser desorption spray post-ionization (LDSPI) from the same samples, such as ODNs and some carboxylic-group-containing small drug molecules. HALDI-MS can directly analyze a variety of liquid samples including proteins, ODNs, pharmaceuticals and biological fluids in both positive and negative ion mode without the use of an organic matrix. This technique may be further developed into a useful tool for rapid analysis in many different fields such as pharmaceutical, food, and biological sciences. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
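
    The notion of a "survey availability event" (a survey mean outside 75-125% of the true abundance) and its dependence on sample number can be reproduced with a simple resampling experiment on a synthetic patchy population. The population model and thresholds below are hypothetical, but the qualitative behavior, a biased-low median estimate and a high event rate at low sample numbers, mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical patchy benthic population: most cells nearly empty, about 5% dense patches.
n_cells = 10_000
is_patch = rng.uniform(size=n_cells) < 0.05
density = np.where(is_patch, rng.gamma(2.0, 50.0, size=n_cells), rng.gamma(1.0, 1.0, size=n_cells))
true_mean = density.mean()

def survey_summary(n_samples, n_surveys=5000):
    """Simulate many surveys of n_samples random cells; report the availability-event rate
    (survey mean outside 75-125% of truth) and the median survey mean relative to truth."""
    idx = rng.integers(0, n_cells, size=(n_surveys, n_samples))
    means = density[idx].mean(axis=1)
    event_rate = np.mean((means < 0.75 * true_mean) | (means > 1.25 * true_mean))
    return event_rate, np.median(means) / true_mean

for n in (2, 5, 10, 20, 40):
    rate, rel_median = survey_summary(n)
    print(f"samples per survey = {n:2d}: availability-event rate = {rate:.2f}, "
          f"median survey mean / truth = {rel_median:.2f}")
```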

  19. A Wide Area Bipolar Cascade Resonant Cavity Light Emitting Diode for a Hybrid Range-Intensity Sensor

    DTIC Science & Technology

    2008-06-19

    [Extraction fragment from the report's figure captions: Figure 3.10 shows the energy bands of a tunnel junction at various bias levels, including (a) the junction under reverse bias, forward bias with a small current flowing due to a forward-biased tunnel current, and (d) a drop in the forward-bias current as the forbidden band widens.]

  20. Linking Well-Tempered Metadynamics Simulations with Experiments

    PubMed Central

    Barducci, Alessandro; Bonomi, Massimiliano; Parrinello, Michele

    2010-01-01

    Linking experiments with the atomistic resolution provided by molecular dynamics simulations can shed light on the structure and dynamics of protein-disordered states. The sampling limitations of classical molecular dynamics can be overcome using metadynamics, which is based on the introduction of a history-dependent bias on a small number of suitably chosen collective variables. Even if such bias distorts the probability distribution of the other degrees of freedom, the equilibrium Boltzmann distribution can be reconstructed using a recently developed reweighting algorithm. Quantitative comparison with experimental data is thus possible. Here we show the potential of this combined approach by characterizing the conformational ensemble explored by a 13-residue helix-forming peptide by means of a well-tempered metadynamics/parallel tempering approach and comparing the reconstructed nuclear magnetic resonance scalar couplings with experimental data. PMID:20441734

  1. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  2. An Experimental Investigation of Possible Memory Biases Affecting Support for Racial Health Care Policy

    PubMed Central

    Brunner, Ryan P.

    2012-01-01

    Objectives. We aimed to test the theory that estimates of racial disparities may be based on small recalled samples of specific individuals (Black vs White), a strategy likely to lead to underestimates of true racial disparities and a corresponding opposition to race-focused health care policies. Methods. We asked a sample of White adults to list the first 5 Black and White individuals who came to mind, and then measured support for various race-focused health care policies. Results. Analyses indicated that the Black individuals recalled by participants tended to be more famous and wealthy than their White counterparts. Furthermore, the tendency to list wealthier Black individuals predicted opposition to progressive racial health care programs. A follow-up study demonstrated that support for certain race-focused health care policies could be increased by informing Whites of potential memory biases. Conclusions. The survival and success of minority health care policies depend partially on public acceptance. Education regarding continuing racial disparities may help to increase support for race-focused health care policies. PMID:22420789

  3. Assessment of ecologic regression in the study of lung cancer and indoor radon.

    PubMed

    Stidley, C A; Samet, J M

    1994-02-01

    Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
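
    The first simulation result, attenuation of the radon effect when county exposure is estimated from a small sample of homes, can be reproduced in a few lines. The county-level model, within-county variability, and sample sizes below are hypothetical; the point is only that classical measurement error shrinks the fitted slope more as the per-county sample gets smaller.

```python
import numpy as np

rng = np.random.default_rng(2)
n_counties, true_slope = 200, 1.0

true_exposure = rng.lognormal(mean=0.0, sigma=0.5, size=n_counties)      # county mean radon (arbitrary units)
outcome = true_slope * true_exposure + rng.normal(0.0, 0.5, n_counties)  # ecologic outcome for each county

def fitted_slope(n_homes):
    """Ecologic regression slope when each county's exposure is the mean of a small home sample."""
    estimated = np.array([
        rng.gamma(shape=1.0, scale=m, size=n_homes).mean() for m in true_exposure
    ])  # within-county variability (mean m) adds sampling error to the county estimate
    return np.polyfit(estimated, outcome, 1)[0]

for n_homes in (5, 25, 100):
    print(f"homes sampled per county = {n_homes:3d} -> fitted slope = {fitted_slope(n_homes):.2f} (true = 1.0)")
```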

  4. Gender differences in leadership amongst first-year medical students in the small-group setting.

    PubMed

    Wayne, Nancy L; Vermillion, Michelle; Uijtdehaage, Sebastian

    2010-08-01

    To investigate the extent of gender bias in the volunteerism of small-group leaders amongst first-year medical students, and whether bias could be eliminated with special instructions to the students. The gender of leaders in small-group sessions in a real academic setting was monitored under two conditions: control conditions, in which basic instructions were provided to participants, and intervention conditions, in which the same basic instructions were provided plus a brief "pep talk" on the importance of experiencing a leadership role in a safe environment. During the small-group sessions, an observer noted the gender and names of group leaders for later analysis. After a class debriefing, a subset of leaders and nonleaders from both the control and intervention groups were invited to be interviewed about their perceptions of the small-group experience. Interviews were tape recorded and transcribed for analysis. In 2007-2008 and 2008-2009, disproportionately fewer women than men volunteered to become small-group leaders under control conditions. This gender bias was eliminated under intervention conditions. The interviews illustrated how a subtle change in instructions helped some female students take on a leadership role. Gender bias in leadership in the small-group setting amongst medical students-even when women make up half of the class-may persist without targeted intervention. The authors suggest that frequent and consistent intervention during medical school could be an important factor in encouraging women to identify themselves as leaders, promoting confidence to consider leadership roles in medicine.

  5. An assessment of the accuracy of stable Fe isotope ratio measurements on samples with organic and inorganic matrices by high-resolution multicollector ICP-MS

    NASA Astrophysics Data System (ADS)

    Schoenberg, Ronny; von Blanckenburg, Friedhelm

    2005-04-01

    Multicollector ICP-MS-based stable isotope procedures provide the capability to determine small variations in the metal isotope composition of materials, but they are prone to substantial bias introduced by inadequate sample preparation. Such a "cryptic" bias is not necessarily identifiable from the measured isotope ratios. The analytical protocol for Fe isotope analyses of organic and inorganic materials described here identifies and avoids such pitfalls. In the medium-mass-resolution mode of the ThermoFinnigan Neptune MC-ICP-MS, a 1-ppm Fe solution with an uptake rate of 50-70 µL min−1 yielded 3 × 10−11 A on 56Fe for the ThermoFinnigan stable introduction system and 1.2-1.8 × 10−10 A for the ESI Apex-Q uptake system. Sensitivity was increased a further 3-5-fold when using Finnigan X-cones instead of the standard H-cones. The combination of the ESI Apex-Q apparatus and X-cones allowed the determination of the isotope composition on as little as 50 ng of Fe. Fe isotope compositions were corrected for mass bias with both the standard-sample bracketing (SSB) method and by using the 65Cu/63Cu ratio of added synthetic copper (Cu-doping) as an internal monitor of mass discrimination. Both methods provide identical results on high-purity Fe solutions of either synthetic or natural samples. We prefer the SSB method because of its shorter analysis time and more straightforward correction of instrumental mass bias compared to Cu-doping. Strong error correlations of the data are observed in three-isotope diagrams. Thus, we suggest that the quality assessment in such diagrams should be performed with error ellipses rather than error bars. Reproducibility of δ56Fe, δ57Fe and δ58Fe values of natural samples alone is not a sufficient criterion for accuracy. A set of tests is laid out that identifies cryptic matrix effects and ensures a reproducible level of quality control. Using these criteria and the SSB correction method, we determined the external reproducibilities for δ56Fe, δ57Fe and δ58Fe at the 95% confidence interval from 318 measurements of 95 natural samples to be 0.049, 0.071 and 0.28‰, respectively.
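
    The standard-sample bracketing (SSB) correction reduces, in its simplest form, to referencing each sample ratio to the mean of the bracketing standard runs and expressing the result in per mil. The ratios below are hypothetical and ignore the blank, interference, and drift handling of the full protocol.

```python
def delta_ssb(sample_ratio, std_before, std_after):
    """Delta value (in per mil) by standard-sample bracketing: the sample's isotope ratio
    is referenced to the mean of the two bracketing standard measurements."""
    bracket = 0.5 * (std_before + std_after)
    return (sample_ratio / bracket - 1.0) * 1000.0

# Hypothetical measured 56Fe/54Fe ratios for a standard-sample-standard sequence.
std1, sample, std2 = 15.6943, 15.7021, 15.6957
print(f"delta56Fe = {delta_ssb(sample, std1, std2):+.3f} per mil")
```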

  6. Implications of weight-based stigma and self-bias on quality of life among individuals with Schizophrenia

    PubMed Central

    Barber, Jessica; Palmese, Laura; Reutenauer, Erin L.; Grilo, Carlos; Tek, Cenk

    2011-01-01

    Obesity has been associated with significant stigma and weight-related self-bias in community and clinical studies, but these issues have not been studied among individuals with schizophrenia. A consecutive series of 70 obese individuals with schizophrenia or schizoaffective disorder underwent assessment for perceptions of weight-based stigmatization, self-directed weight-bias, negative affect, medication compliance, and quality of life. Levels of weight-based stigmatization and self-bias were compared to levels reported for non-psychiatric overweight/obese samples. Weight measures were unrelated to stigma, self-bias, affect, and quality of life. Weight-based stigmatization was lower than published levels for non-psychiatric samples, whereas levels of weight-based self-bias did not differ. After controlling for negative affect, weight-based self-bias predicted an additional 11% of the variance in the quality of life measure. Individuals with schizophrenia and schizoaffective disorder reported weight-based self-bias to the same extent as non-psychiatric samples despite reporting less weight stigma. Weight-based self-bias was associated with poorer quality of life after controlling for negative affect. PMID:21716053

  7. Electric shielding films for biased TEM samples and their application to in situ electron holography.

    PubMed

    Nomura, Yuki; Yamamoto, Kazuo; Hirayama, Tsukasa; Saitoh, Koh

    2018-06-01

    We developed a novel sample preparation method for transmission electron microscopy (TEM) to suppress superfluous electric fields leaked from biased TEM samples. In this method, a thin TEM sample is first coated with an insulating amorphous aluminum oxide (AlOx) film with a thickness of about 20 nm. Then, the sample is coated with a conductive amorphous carbon film with a thickness of about 10 nm, and the film is grounded. This technique was applied to a model sample of a metal electrode/Li-ion-conductive-solid-electrolyte/metal electrode for biasing electron holography. We found that AlOx film with a thickness of 10 nm has a large withstand voltage of about 8 V and that double layers of AlOx and carbon act as a 'nano-shield' to suppress 99% of the electric fields outside of the sample. We also found an asymmetric potential distribution between the high and low potential electrodes in the biased solid electrolyte, indicating different accumulation behaviors of lithium ions (Li+) and lithium-ion vacancies (VLi-) in the biased solid electrolyte.

  8. Performance appraisal of VAS radiometry for GOES-4, -5 and -6

    NASA Technical Reports Server (NTRS)

    Chesters, D.; Robinson, W. D.

    1983-01-01

    The first three VISSR Atmospheric Sounders (VAS) were launched on GOES-4, -5, and -6 in 1980, 1981 and 1983. Postlaunch radiometric performance is assessed for noise, biases, registration and reliability, with special attention to calibration and problems in the data processing chain. The postlaunch performance of the VAS radiometer meets its prelaunch design specifications, particularly those related to image formation and noise reduction. The best instrument is carried on GOES-5, currently operational as GOES-EAST. Single sample noise is lower than expected, especially for the small longwave and large shortwave detectors. Detector-to-detector offsets are correctable to within the resolution limits of the instrument. Truncation, zero point and droop errors are insignificant. Absolute calibration errors, estimated from HIRS and from radiation transfer calculations, indicate moderate, but stable biases. Relative calibration errors from scanline to scanline are noticeable, but meet sounding requirements for temporally and spatially averaged sounding fields of view. The VAS instrument is a potentially useful radiometer for mesoscale sounding operations. Image quality is very good. Soundings derived from quality controlled data meet prelaunch requirements when calculated with noise and bias resistant algorithms.

  9. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model

    PubMed Central

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-01-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543–2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic–Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. PMID:26977060
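
    TRiPS couples species-level occurrence counts with a Poisson sampling model; a stripped-down, single-rate version of that idea is sketched below: fit a zero-truncated Poisson to the occurrence counts of observed species, convert the rate to a detection probability, and inflate the observed richness. The simulated counts are hypothetical, and the published method is richer (stage-specific rates and confidence bounds on richness) than this one-parameter shortcut.

```python
import numpy as np
from scipy.optimize import brentq

def trips_estimate(occurrence_counts):
    """Estimate a common per-species sampling rate and true richness from occurrence counts of
    observed species, assuming Poisson sampling (zero-truncated, since unobserved species have no counts)."""
    k = np.asarray(occurrence_counts, dtype=float)
    kbar = k.mean()
    # Zero-truncated Poisson MLE: solve lambda / (1 - exp(-lambda)) = mean(k).
    lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - kbar, 1e-6, 100.0)
    p_detect = 1.0 - np.exp(-lam)          # probability a species is sampled at least once
    return lam, len(k) / p_detect

# Hypothetical data: simulate 60 species with Poisson occurrence counts, keep only those observed.
rng = np.random.default_rng(4)
counts = rng.poisson(0.9, size=60)
counts = counts[counts > 0]                # unobserved species are, by definition, not in the data

lam, richness = trips_estimate(counts)
print(f"observed species: {len(counts)}, estimated sampling rate: {lam:.2f}")
print(f"estimated true richness: {richness:.0f} (simulated truth: 60)")
```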

  10. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    PubMed

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-05

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. © 2016 The Authors.

  11. Sampling Biases in MODIS and SeaWiFS Ocean Chlorophyll Data

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.

    2007-01-01

    Although modern ocean color sensors, such as MODIS and SeaWiFS, are often considered global missions, in reality it takes many days, even months, to sample the ocean surface enough to provide complete global coverage. The irregular temporal sampling of ocean color sensors can produce biases in monthly and annual mean chlorophyll estimates. We quantified the biases due to sampling using data assimilation to create a "truth field", which we then sub-sampled using the observational patterns of MODIS and SeaWiFS. Monthly and annual mean chlorophyll estimates from these sub-sampled, incomplete daily fields were constructed and compared to monthly and annual means from the complete daily fields of the assimilation model, at a spatial resolution of 1.25° longitude by 0.67° latitude. The results showed that global annual mean biases were positive, reaching nearly 8% (MODIS) and >5% (SeaWiFS). For perspective, the maximum interannual variability in the SeaWiFS chlorophyll record was about 3%. Annual mean sampling biases were low (<3%) in the midlatitudes (between -40° and 40°). Low interannual variability in the global annual mean sampling biases suggested that global scale trend analyses were valid. High latitude biases were much higher than the global annual means, up to 20% as a basin annual mean, and over 80% in some months. This was the result of the high solar zenith angle exclusion in the processing algorithms. Only data where the solar angle is <75° are permitted, in contrast to the assimilation, which samples regularly over the entire area and month. High solar zenith angles do not facilitate phytoplankton photosynthesis, and consequently the low chlorophyll concentrations occurring there are missed by the data sets. Ocean color sensors selectively sample in locations and times of favorable phytoplankton growth, producing overestimates of chlorophyll. The biases derived from lack of sampling in the high latitudes varied monthly, leading to artifacts in the apparent seasonal cycle from ocean color sensors. A false secondary peak in chlorophyll occurred in May-August, which resulted from lack of sampling in the Antarctic.

  12. Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide

    NASA Astrophysics Data System (ADS)

    Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.

    2011-04-01

    Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.

  13. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254

  14. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
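
    For readers unfamiliar with the algorithm family, the sketch below runs plain MLEM (OSEM with a single subset) on a toy 1-D system and reports the apparent error in a low- and a high-activity region for one low-count noise realization. The forward model, count level and ROI definitions are invented for illustration, and a real bias study would average over many replicates, as the work above does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: A maps a 1-D "image" of n_vox voxels to n_bins detector bins
n_vox, n_bins = 50, 400
A = rng.random((n_bins, n_vox)) * (rng.random((n_bins, n_vox)) < 0.1)

# Ground-truth activity with distinct low- and high-activity regions
x_true = np.where(np.arange(n_vox) < 25, 0.2, 2.0)

# Low-count Poisson data; rescale A to set the total expected counts
A *= 2000.0 / (A @ x_true).sum()
y = rng.poisson(A @ x_true)

# MLEM update: x <- x / (A^T 1) * A^T ( y / (A x) )
x = np.ones(n_vox)
sens = A.T @ np.ones(n_bins)
for _ in range(200):
    proj = A @ x
    ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
    x *= (A.T @ ratio) / np.maximum(sens, 1e-12)

for name, roi in [("low-activity ROI", slice(0, 25)), ("high-activity ROI", slice(25, 50))]:
    err = 100.0 * (x[roi].mean() - x_true[roi].mean()) / x_true[roi].mean()
    print(f"{name}: relative error {err:+.1f}% (single noise realization)")
```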

  15. Lessons learnt on biases and uncertainties in personal exposure measurement surveys of radiofrequency electromagnetic fields with exposimeters.

    PubMed

    Bolte, John F B

    2016-09-01

    Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition a good protocol on how to wear the exposimeter, a sufficiently small sampling interval and sufficiently long measurement duration will minimize biases. Corrections to biases are possible for: non-detects through detection limit, erroneous manufacturer calibration and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty cycle sensitivity; out of band response aka cross talk; temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple signals detection in one band; flatness of response within a frequency band; anisotropy to waves of different elevation angle. An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors, overestimated exposure to signals with bursts, such as in uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main findings are that if the biases are not corrected for, the actual exposure will on average be underestimated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Insights into Stability and Folding of GNRA and UNCG Tetraloops Revealed by Microsecond Molecular Dynamics and Well-Tempered Metadynamics.

    PubMed

    Haldar, Susanta; Kührová, Petra; Banáš, Pavel; Spiwok, Vojtěch; Šponer, Jiří; Hobza, Pavel; Otyepka, Michal

    2015-08-11

    RNA hairpins capped by 5'-GNRA-3' or 5'-UNCG-3' tetraloops (TLs) are prominent RNA structural motifs. Despite their small size, a wealth of experimental data, and recent progress in theoretical simulations of their structural dynamics and folding, our understanding of the folding and unfolding processes of these small RNA elements is still limited. Theoretical description of the folding and unfolding processes requires robust sampling, which can be achieved by either an exhaustive time scale in standard molecular dynamics simulations or sophisticated enhanced sampling methods, using temperature acceleration or biasing potentials. Here, we study structural dynamics of 5'-GNRA-3' and 5'-UNCG-3' TLs by 15-μs-long standard simulations and a series of well-tempered metadynamics, attempting to accelerate sampling by bias in a few chosen collective variables (CVs). Both methods provide useful insights. The unfolding and refolding mechanisms of the GNRA TL observed by well-tempered metadynamics agree with the (reverse) folding mechanism suggested by recent replica exchange molecular dynamics simulations. The orientation of the glycosidic bond of the GL4 nucleobase is critical for the UUCG TL folding pathway, and our data strongly support the hypothesis that GL4-anti forms a kinetic trap along the folding pathway. Along with giving useful insight, our study also demonstrates that using only a few CVs apparently does not capture the full folding landscape of the RNA TLs. Despite using several sophisticated selections of the CVs, formation of the loop appears to remain a hidden variable, preventing a full convergence of the metadynamics. Finally, our data suggest that the unfolded state might be overstabilized by the force fields used.

  17. Transgenerational effects of maternal depression on affect recognition in children.

    PubMed

    Kluczniok, Dorothea; Hindi Attar, Catherine; Fydrich, Thomas; Fuehrer, Daniel; Jaite, Charlotte; Domes, Gregor; Winter, Sibylle; Herpertz, Sabine C; Brunner, Romuald; Boedeker, Katja; Bermpohl, Felix

    2016-01-01

    The association between maternal depression and adverse emotional and behavioral outcomes in children is well established. One associated factor might be altered affect recognition which may be transmitted transgenerationally. Individuals with history of depression show biased recognition of sadness. Our aim was to investigate parallels in maternal and children's affect recognition with remitted depressed mothers. 60 Mother-child dyads completed an affect recognition morphing task. We examined two groups of remitted depressed mothers, with and without history of physical or sexual abuse, and a group of healthy mothers without history of physical or sexual abuse. Children were between 5 and 12 years old. Across groups, mothers identified happy faces fastest. Mothers with remitted depression showed a higher accuracy and response bias for sadness. We found corresponding results in their children. Maternal and children's bias and accuracy for sadness were positively correlated. Effects of remitted depression were found independent of maternal history of physical or sexual abuse. Our sample size was relatively small and further longitudinal research is needed to investigate how maternal and children's affect recognition are associated with behavioral and emotional outcomes in the long term. Our data suggest a negative processing bias in mothers with remitted depression which might represent both the perpetuation of and vulnerability to depression. Children of remitted depressed mothers appear to be exposed to this processing bias outside acute depressive episodes. This may promote the development of a corresponding processing bias in the children and could make children of depressed mothers more vulnerable to depressive disorders themselves. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Analysis of near-surface biases in ERA-Interim over the Canadian Prairies

    NASA Astrophysics Data System (ADS)

    Betts, Alan K.; Beljaars, Anton C. M.

    2017-09-01

    We quantify the biases in the diurnal cycle of temperature in ERA-Interim for both warm and cold season using hourly climate station data for four stations in Saskatchewan from 1979 to 2006. The warm season biases increase as opaque cloud cover decreases, and change substantially from April to October. The bias in mean temperature increases almost monotonically from small negative values in April to small positive values in the fall. Under clear skies, the bias in maximum temperature is of the order of -1°C in June and July, and -2°C in spring and fall; while the bias in minimum temperature increases almost monotonically from +1°C in spring to +2.5°C in October. The bias in the diurnal temperature range falls under clear skies from -2.5°C in spring to -5°C in fall. The cold season biases with surface snow have a different structure. The biases in maximum, mean and minimum temperature with a stable BL reach +1°C, +2.6°C and +3°C respectively in January under clear skies. The cold season bias in diurnal range increases from about -1.8°C in the fall to positive values in March. These diurnal biases in 2 m temperature and their seasonal trends are consistent with a high bias in both the diurnal and seasonal amplitude of the model ground heat flux, and a warm season daytime bias resulting from the model fixed leaf area index. Our results can be used as bias corrections in agricultural modeling that use these reanalysis data, and also as a framework for understanding model biases.

  19. Sampling bias in blending validation and a different approach to homogeneity assessment.

    PubMed

    Kraemer, J; Svensson, J R; Melgaard, H

    1999-02-01

    Sampling of batches studied for validation is reported. A thief particularly suited for granules, rather than cohesive powders, was used in the study. It is shown, as has been demonstrated in the past, that traditional 1x to 3x thief sampling of a blend is biased, and that the bias decreases as the sample size increases. It is shown that taking 50 samples of tablets after blending and testing this subpopulation for normality is a discriminating manner of testing for homogeneity. As a criterion, it is better than sampling at mixer or drum stage would be even if an unbiased sampling device were available.

  20. Exploring and accounting for publication bias in mental health: a brief overview of methods.

    PubMed

    Mavridis, Dimitris; Salanti, Georgia

    2014-02-01

    OBJECTIVE Publication bias undermines the integrity of published research. The aim of this paper is to present a synopsis of methods for exploring and accounting for publication bias. METHODS We discussed the main features of the following methods to assess publication bias: funnel plot analysis; trim-and-fill methods; regression techniques and selection models. We applied these methods to a well-known example of antidepressant trials that compared trials submitted to the Food and Drug Administration (FDA) for regulatory approval. RESULTS The funnel plot-related methods (visual inspection, trim-and-fill, regression models) revealed an association between effect size and SE. Contours of statistical significance showed that asymmetry in the funnel plot is probably due to publication bias. The selection model found a significant correlation between effect size and propensity for publication. CONCLUSIONS Researchers should always consider the possible impact of publication bias. Funnel plot-related methods should be seen as a means of examining for small-study effects and not be directly equated with publication bias. Possible causes for funnel plot asymmetry should be explored. Contours of statistical significance may help disentangle whether asymmetry in a funnel plot is caused by publication bias or not. Selection models, although underused, could be a useful resource when publication bias and heterogeneity are suspected, because they directly address the problem of publication bias and not that of small-study effects.
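
    As a concrete illustration of one of these tools, the sketch below runs an Egger-type regression (standardized effect against precision) on a simulated meta-analysis in which small null studies are suppressed. The data and the selection rule are invented and are not the FDA antidepressant example analysed in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated meta-analysis: true effect 0.3, with small studies showing weak
# results less likely to be "published" (illustrative selection rule only)
n = 60
se = rng.uniform(0.05, 0.5, n)
effect = rng.normal(0.3, se)
published = (effect / se > 1.0) | (se < 0.15) | (rng.random(n) < 0.3)
effect, se = effect[published], se[published]

# Egger-type regression: standardized effect vs. precision;
# an intercept far from zero signals funnel-plot asymmetry
z, precision = effect / se, 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()
print(f"Egger intercept = {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```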

  1. Easy and accurate reconstruction of whole HIV genomes from short-read sequence data with shiver.

    PubMed

    Wymant, Chris; Blanquart, François; Golubchik, Tanya; Gall, Astrid; Bakker, Margreet; Bezemer, Daniela; Croucher, Nicholas J; Hall, Matthew; Hillebregt, Mariska; Ong, Swee Hoe; Ratmann, Oliver; Albert, Jan; Bannert, Norbert; Fellay, Jacques; Fransen, Katrien; Gourlay, Annabelle; Grabowski, M Kate; Gunsenheimer-Bartmeyer, Barbara; Günthard, Huldrych F; Kivelä, Pia; Kouyos, Roger; Laeyendecker, Oliver; Liitsola, Kirsi; Meyer, Laurence; Porter, Kholoud; Ristola, Matti; van Sighem, Ard; Berkhout, Ben; Cornelissen, Marion; Kellam, Paul; Reiss, Peter; Fraser, Christophe

    2018-01-01

    Studying the evolution of viruses and their molecular epidemiology relies on accurate viral sequence data, so that small differences between similar viruses can be meaningfully interpreted. Despite its higher throughput and more detailed minority variant data, next-generation sequencing has yet to be widely adopted for HIV. The difficulty of accurately reconstructing the consensus sequence of a quasispecies from reads (short fragments of DNA) in the presence of large between- and within-host diversity, including frequent indels, may have presented a barrier. In particular, mapping (aligning) reads to a reference sequence leads to biased loss of information; this bias can distort epidemiological and evolutionary conclusions. De novo assembly avoids this bias by aligning the reads to themselves, producing a set of sequences called contigs. However contigs provide only a partial summary of the reads, misassembly may result in their having an incorrect structure, and no information is available at parts of the genome where contigs could not be assembled. To address these problems we developed the tool shiver to pre-process reads for quality and contamination, then map them to a reference tailored to the sample using corrected contigs supplemented with the user's choice of existing reference sequences. Run with two commands per sample, it can easily be used for large heterogeneous data sets. We used shiver to reconstruct the consensus sequence and minority variant information from paired-end short-read whole-genome data produced with the Illumina platform, for sixty-five existing publicly available samples and fifty new samples. We show the systematic superiority of mapping to shiver's constructed reference compared with mapping the same reads to the closest of 3,249 real references: median values of 13 bases called differently and more accurately, 0 bases called differently and less accurately, and 205 bases of missing sequence recovered. We also successfully applied shiver to whole-genome samples of Hepatitis C Virus and Respiratory Syncytial Virus. shiver is publicly available from https://github.com/ChrisHIV/shiver.

  2. Bias in groundwater samples caused by wellbore flow

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1989-01-01

    Proper design of physical installations and sampling procedures for groundwater monitoring networks is critical for the detection and analysis of possible contaminants. Monitoring networks associated with known contaminant sources sometimes include an array of monitoring wells with long well screens. The purpose of this paper is: (a) to report the results of a numerical experiment indicating that significant borehole flow can occur within long well screens installed in homogeneous aquifers with very small head differences in the aquifer (less than 0.01 feet between the top and bottom of the screen); (b) to demonstrate that contaminant monitoring wells with long screens may completely fail to fulfill their purpose in many groundwater environments.

  3. Design and evaluation of a nondestructive fissile assay device for HTGR fuel samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNeany, S. R.; Knoll, R. W.; Jenkins, J. D.

    1979-02-01

    Nondestructive assay of fissile material plays an important role in nuclear fuel processing facilities. Information for product quality control, plant criticality safety, and nuclear materials accountability can be obtained from assay devices. All of this is necessary for a safe, efficient, and orderly operation of a production plant. Presented here is a design description and an operational evaluation of a device developed to nondestructively assay small samples of High-Temperature Gas-Cooled Reactor (HTGR) fuel. The measurement technique employed consists of thermal-neutron irradiation of a sample followed by pneumatic transfer to a high-efficiency neutron detector where delayed neutrons are counted. In general, samples undergo several irradiation and count cycles during a measurement. The total number of delayed-neutron counts accumulated is translated into grams of fissile mass through comparison with the counts accumulated in an identical irradiation and count sequence of calibration standards. Successful operation of the device through many experiments over a one-year period indicates high operational reliability. Tests of assay precision show this to be better than 0.25% for measurements of 10 min. Assay biases may be encountered if calibration standards are not representative of unknown samples, but reasonable care in construction and control of standards should lead to no more than 0.2% bias in the measurements. Nondestructive fissile assay of HTGR fuel samples by thermal-neutron irradiation and delayed-neutron detection has been demonstrated to be a rapid and accurate analysis technique. However, careful attention and control must be given to calibration standards to see that they remain representative of unknown samples.

  4. Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks

    NASA Astrophysics Data System (ADS)

    Eom, Young-Ho; Jo, Hang-Hyun

    2015-05-01

    Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and concerns about privacy in using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. To take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
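
    The observational bias that the method builds on is easy to reproduce: neighbours of uniformly sampled nodes are reached in proportion to their degree, so they probe the tail far more often. The sketch below shows that raw effect on a synthetic scale-free network; it is not the full tail-scope estimator, which additionally corrects for this degree bias, and the network and sample sizes are arbitrary.

```python
import random
import numpy as np
import networkx as nx

random.seed(0)
rng = np.random.default_rng(0)

# Scale-free test network standing in for a large complex network
G = nx.barabasi_albert_graph(100_000, 3, seed=0)

# Uniform sample of a small fraction of nodes
sample = rng.choice(G.number_of_nodes(), size=2_000, replace=False)
uniform_degrees = np.array([G.degree(int(v)) for v in sample])

# Friendship-paradox sampling: one random neighbour of each sampled node;
# neighbours are reached with probability proportional to their degree
friend_degrees = np.array(
    [G.degree(random.choice(list(G.neighbors(int(v))))) for v in sample]
)

for k in (20, 50, 100):
    print(f"P(degree >= {k}): uniform sample {np.mean(uniform_degrees >= k):.4f}, "
          f"neighbour sample {np.mean(friend_degrees >= k):.4f}")
```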

  5. Poisson Statistics of Combinatorial Library Sampling Predict False Discovery Rates of Screening

    PubMed Central

    2017-01-01

    Microfluidic droplet-based screening of DNA-encoded one-bead-one-compound combinatorial libraries is a miniaturized, potentially widely distributable approach to small molecule discovery. In these screens, a microfluidic circuit distributes library beads into droplets of activity assay reagent, photochemically cleaves the compound from the bead, then incubates and sorts the droplets based on assay result for subsequent DNA sequencing-based hit compound structure elucidation. Pilot experimental studies revealed that Poisson statistics describe nearly all aspects of such screens, prompting the development of simulations to understand system behavior. Monte Carlo screening simulation data showed that increasing mean library sampling (ε), mean droplet occupancy, or library hit rate all increase the false discovery rate (FDR). Compounds identified as hits on k > 1 beads (the replicate k class) were much more likely to be authentic hits than singletons (k = 1), in agreement with previous findings. Here, we explain this observation by deriving an equation for authenticity, which reduces to the product of a library sampling bias term (exponential in k) and a sampling saturation term (exponential in ε) setting a threshold that the k-dependent bias must overcome. The equation thus quantitatively describes why each hit structure’s FDR is based on its k class, and further predicts the feasibility of intentionally populating droplets with multiple library beads, assaying the micromixtures for function, and identifying the active members by statistical deconvolution. PMID:28682059
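
    A stripped-down Monte Carlo version of such a screen can reproduce the qualitative k-class behaviour: simulate Poisson bead sampling, let inactive compounds score only through assay false positives, and tabulate the false discovery rate by replicate class. All rates below are hypothetical, and the model ignores droplet occupancy and sequencing noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical screen: a library of n_lib compounds, a small fraction truly active,
# each compound appearing on a Poisson(eps) number of assayed beads
n_lib, hit_rate, eps = 100_000, 0.001, 2.0
false_pos_rate = 0.002      # chance an inactive compound's droplet sorts as a "hit"

active = rng.random(n_lib) < hit_rate
n_beads = rng.poisson(eps, n_lib)

# Hit-sorted beads per compound: actives always score in this simplified model,
# inactives score only through assay false positives
k_hits = np.where(active, n_beads, rng.binomial(n_beads, false_pos_rate))

# False discovery rate by replicate class k: fraction of compounds
# identified on exactly k beads that are not authentic actives
for k in (1, 2, 3):
    in_class = k_hits == k
    if in_class.sum():
        fdr = 1.0 - active[in_class].mean()
        print(f"k = {k}: {in_class.sum()} compounds, FDR = {fdr:.3f}")
```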

  6. An approach to unbiased subsample interpolation for motion tracking.

    PubMed

    McCormick, Matthew M; Varghese, Tomy

    2013-04-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced-bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder-Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with that obtained using parabolic and cosine interpolation methods; it is found that the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique.
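
    A minimal 1-D analogue of this estimator is sketched below: a Lanczos-windowed sinc interpolant is built over integer-lag similarity values and its peak is located with scipy's Nelder-Mead optimizer, with a parabolic estimate shown for comparison. The window radius, test function and true peak location are arbitrary choices, and the published method operates on 2-D similarity surfaces.

```python
import numpy as np
from scipy.optimize import minimize

def lanczos_interp(samples, t, a=4):
    """Windowed-sinc (Lanczos, radius a) interpolation of uniformly spaced samples at position t."""
    n = np.arange(len(samples))
    u = t - n
    window = np.where(np.abs(u) < a, np.sinc(u / a), 0.0)
    return float(np.sum(samples * np.sinc(u) * window))

# Toy similarity profile sampled at integer lags, with its true peak at a non-integer lag
true_peak = 10.37
lags = np.arange(21)
similarity = np.exp(-0.05 * (lags - true_peak) ** 2) * np.cos(0.3 * (lags - true_peak))

coarse = int(np.argmax(similarity))

# Conventional 3-point parabolic subsample estimate, for contrast
s_m, s_0, s_p = similarity[coarse - 1], similarity[coarse], similarity[coarse + 1]
parabolic = coarse + 0.5 * (s_m - s_p) / (s_m - 2 * s_0 + s_p)

# Windowed-sinc interpolant maximized with Nelder-Mead
res = minimize(lambda t: -lanczos_interp(similarity, t[0]), x0=[float(coarse)], method='Nelder-Mead')

print(f"integer peak {coarse}, parabolic {parabolic:.3f}, windowed-sinc {res.x[0]:.3f}, true {true_peak}")
```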

  7. Patterns of Unit and Item Nonresponse in the CAHPS® Hospital Survey

    PubMed Central

    Elliott, Marc N; Edwards, Carol; Angeles, January; Hambarsoomians, Katrin; Hays, Ron D

    2005-01-01

    Objective To examine the predictors of unit and item nonresponse, the magnitude of nonresponse bias, and the need for nonresponse weights in the Consumer Assessment of Health Care Providers and Systems (CAHPS®) Hospital Survey. Methods A common set of 11 administrative variables (41 degrees of freedom) was used to predict unit nonresponse and the rate of item nonresponse in multivariate models. Descriptive statistics were used to examine the impact of nonresponse on CAHPS Hospital Survey ratings and reports. Results Unit nonresponse was highest for younger patients and patients other than non-Hispanic whites (p<.001); item nonresponse increased steadily with age (p<.001). Fourteen of 20 reports of ratings of care had significant (p<.05) but small negative correlations with nonresponse weights (median −0.06; maximum −0.09). Nonresponse weights do not improve overall precision below sample sizes of 300–1,000, and are unlikely to improve the precision of hospital comparisons. In some contexts, case-mix adjustment eliminates most observed nonresponse bias. Conclusions Nonresponse weights should not be used for between-hospital comparisons of the CAHPS Hospital Survey, but may make small contributions to overall estimates or demographic comparisons, especially in the absence of case-mix adjustment. PMID:16316440

  8. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
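
    A generic sketch of the reweighting idea (not the authors' exact implementation) is shown below: membership in the validation subsample is modelled from variables available in the full sample, the validation records are weighted by the inverse of that probability, and the outcome model is then fit with the detailed confounder. All variable names, effect sizes and the selection mechanism are invented, and the true exposure effect is set to zero so that residual confounding is visible in the unadjusted estimate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
expit = lambda z: 1.0 / (1.0 + np.exp(-z))

# Full administrative sample: exposure, outcome and a basic covariate (age);
# a detailed confounder ("frailty") is observed only in the validation subsample
n = 20_000
age = rng.normal(70, 8, n)
frailty = rng.normal(0.02 * (age - 70), 1, n)
exposure = (rng.random(n) < expit(-0.5 - 0.8 * frailty)).astype(int)
outcome = (rng.random(n) < expit(-2.0 + 0.0 * exposure + 1.0 * frailty)).astype(int)  # true effect = 0

# Validation membership depends on age, so the subsample is not fully representative
in_val = rng.random(n) < expit(-2.5 + 0.03 * (age - 70))

# Step 1: model P(validation | shared variables) and form inverse-probability weights
sel = LogisticRegression().fit(age.reshape(-1, 1), in_val)
w = 1.0 / sel.predict_proba(age[in_val].reshape(-1, 1))[:, 1]

# Step 2: outcome model in the validation sample, adjusted for the detailed
# confounder and reweighted to represent the full cohort
X_val = np.column_stack([exposure[in_val], frailty[in_val]])
fit = LogisticRegression().fit(X_val, outcome[in_val], sample_weight=w)
print("adjusted log-odds ratio for exposure:", round(fit.coef_[0][0], 3))

# Naive full-sample estimate ignoring the unmeasured confounder, for contrast
naive = LogisticRegression().fit(exposure.reshape(-1, 1), outcome)
print("unadjusted log-odds ratio:", round(naive.coef_[0][0], 3))
```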

  10. Tailoring magnetic properties of Co nanocluster assembled films using hydrogen

    NASA Astrophysics Data System (ADS)

    Romero, C. P.; Volodin, A.; Paddubrouskaya, H.; Van Bael, M. J.; Van Haesendonck, C.; Lievens, P.

    2018-07-01

    Tailoring magnetic properties in nanocluster assembled cobalt (Co) thin films was achieved by admitting a small percentage of H2 gas (∼2%) into the Co gas phase cluster formation chamber prior to deposition. The oxygen content in the films is considerably reduced by the presence of hydrogen during the cluster formation, leading to enhanced magnetic interactions between clusters. Two sets of Co samples were fabricated, one without hydrogen gas and one with hydrogen gas. Magnetic properties of the non-hydrogenated and the hydrogen-treated Co nanocluster assembled films are comparatively studied using magnetic force microscopy and vibrating sample magnetometry. When comparing the two sets of samples the considerably larger coercive field of the H2-treated Co nanocluster film and the extended micrometer-sized magnetic domain structure confirm the enhancement of magnetic interactions between clusters. The thickness of the antiferromagnetic CoO layer is controlled with this procedure and modifies the exchange bias effect in these films. The exchange bias shift is lower for the H2-treated Co nanocluster film, which indicates that a thinner antiferromagnetic CoO reduces the coupling with the ferromagnetic Co. The hydrogen-treatment method can be used to tailor the oxidation levels thus controlling the magnetic properties of ferromagnetic cluster-assembled films.

  11. An integrated modeling approach to estimating Gunnison Sage-Grouse population dynamics: combining index and demographic data.

    USGS Publications Warehouse

    Davis, Amy J.; Hooten, Mevin B.; Phillips, Michael L.; Doherty, Paul F.

    2014-01-01

    Evaluation of population dynamics for rare and declining species is often limited to data that are sparse and/or of poor quality. Frequently, the best data available for rare bird species are based on large-scale, population count data. These data are commonly based on sampling methods that lack consistent sampling effort, do not account for detectability, and are complicated by observer bias. For some species, short-term studies of demographic rates have been conducted as well, but the data from such studies are typically analyzed separately. To utilize the strengths and minimize the weaknesses of these two data types, we developed a novel Bayesian integrated model that links population count data and population demographic data through population growth rate (λ) for Gunnison sage-grouse (Centrocercus minimus). The long-term population index data available for Gunnison sage-grouse are annual (years 1953–2012) male lek counts. An intensive demographic study was also conducted from years 2005 to 2010. We were able to reduce the variability in expected population growth rates across time, while correcting for potential small sample size bias in the demographic data. We found the population of Gunnison sage-grouse to be variable and slightly declining over the past 16 years.

  12. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    PubMed

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

    Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which arise when a compound is absent from a sample or is present but at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation performance of a mixture model with those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, whereas estimates from the mixture model were unbiased except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
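
    The censoring-only assumption can be made concrete with a small sketch: fit a censored-normal (Tobit-style) likelihood, which stands in here for the AFT assumption on a plain normal scale, to data in which a fraction of compounds is truly absent. With the point mass present, the censoring-only fit pulls the mean estimate down, which is the kind of bias the mixture model is designed to avoid. All parameter values are invented.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulated log-abundances: some compounds are truly absent (point mass),
# the rest are normal; values below the detection limit are reported as missing
n, p_absent, mu, sigma, lod = 500, 0.2, 1.0, 1.0, 0.5
absent = rng.random(n) < p_absent
x = rng.normal(mu, sigma, n)
observed = (~absent) & (x >= lod)
x_obs = x[observed]
n_missing = n - observed.sum()

# Censored-normal likelihood: every missing value is treated as censored at the LOD
def negloglik(theta):
    m, log_s = theta
    s = np.exp(log_s)
    ll = stats.norm.logpdf(x_obs, m, s).sum()
    ll += n_missing * stats.norm.logcdf((lod - m) / s)
    return -ll

fit = optimize.minimize(negloglik, x0=[0.0, 0.0], method='Nelder-Mead')
m_hat, s_hat = fit.x[0], np.exp(fit.x[1])
print(f"censored-normal fit: mean {m_hat:.2f} (true {mu}), sd {s_hat:.2f} (true {sigma})")
# With 20% of values truly absent, the censoring-only model attributes them to the
# left tail and underestimates the mean -- the bias the mixture model avoids.
```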

  13. Common component classification: what can we learn from machine learning?

    PubMed

    Anderson, Ariana; Labus, Jennifer S; Vianna, Eduardo P; Mayer, Emeran A; Cohen, Mark S

    2011-05-15

    Machine learning methods have been applied to classifying fMRI scans by studying locations in the brain that exhibit temporal intensity variation between groups, frequently reporting classification accuracy of 90% or better. Although empirical results are quite favorable, one might doubt the ability of classification methods to withstand changes in task ordering and the reproducibility of activation patterns over runs, and question how much of the classification machines' power is due to artifactual noise versus genuine neurological signal. To examine the true strength and power of machine learning classifiers we create and then deconstruct a classifier to examine its sensitivity to physiological noise, task reordering, and across-scan classification ability. The models are trained and tested both within and across runs to assess stability and reproducibility across conditions. We demonstrate the use of independent components analysis for both feature extraction and artifact removal and show that removal of such artifacts can reduce predictive accuracy even when data has been cleaned in the preprocessing stages. We demonstrate how mistakes in the feature selection process can cause the cross-validation error seen in publication to be a biased estimate of the testing error seen in practice and measure this bias by purposefully making flawed models. We discuss other ways to introduce bias and the statistical assumptions lying behind the data and model themselves. Finally we discuss the complications in drawing inference from the smaller sample sizes typically seen in fMRI studies, the effects of small or unbalanced samples on the Type 1 and Type 2 error rates, and how publication bias can give a false confidence of the power of such methods. Collectively this work identifies challenges specific to fMRI classification and methods affecting the stability of models. Copyright © 2010 Elsevier Inc. All rights reserved.
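
    The feature-selection pitfall described above is easy to demonstrate with a toy example: on pure-noise data, selecting discriminative features using all subjects before cross-validation yields an optimistic accuracy, whereas refitting the selection inside each training fold returns chance-level performance. The sketch below uses scikit-learn and invented data dimensions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Pure-noise "voxel features": there is no real signal, so honest accuracy ~ 50%
n_subjects, n_voxels = 40, 5000
X = rng.normal(size=(n_subjects, n_voxels))
y = rng.integers(0, 2, n_subjects)

# Flawed protocol: choose the 20 most discriminative voxels using ALL subjects,
# then cross-validate only the classifier -> optimistically biased estimate
X_selected = SelectKBest(f_classif, k=20).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(max_iter=1000), X_selected, y, cv=5).mean()

# Correct protocol: the selection step is refit inside every training fold
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"selection outside CV: {biased:.2f}  |  selection inside CV: {honest:.2f}")
```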

  14. Evaluation of the agreement among three handheld blood glucose meters and a laboratory blood analyzer for measurement of blood glucose concentration in Hispaniolan Amazon parrots (Amazona ventralis).

    PubMed

    Acierno, Mark J; Mitchell, Mark A; Schuster, Patricia J; Freeman, Diana; Sanchez-Migallon Guzman, David; Tully, Thomas N

    2009-02-01

    To determine the degree of agreement between 3 commercially available point-of-care blood glucose meters and a laboratory analyzer for measurement of blood glucose concentrations in Hispaniolan Amazon parrots (Amazona ventralis). 20 healthy adult Hispaniolan Amazon parrots. A 26-gauge needle and 3-mL syringe were used to obtain a blood sample (approx 0.5 mL) from a jugular vein of each parrot. Small volumes of blood (0.6 to 1.5 microL) were used to operate each of the blood glucose meters, and the remainder was placed into lithium heparin microtubes and centrifuged. Plasma was harvested and frozen at -30 degrees C. Within 5 days after collection, plasma samples were thawed and plasma glucose concentrations were measured by means of the laboratory analyzer. Agreement between pairs of blood glucose meters and between each blood glucose meter and the laboratory analyzer was evaluated by means of the Bland-Altman method, and limits of agreement (LOA) were calculated. None of the results of the 3 blood glucose meters agreed with results of the laboratory analyzer. Each point-of-care blood glucose meter underestimated the blood glucose concentration, and the degree of negative bias was not consistent (meter A bias, -94.9 mg/dL [LOA, -148.0 to -41.7 mg/dL]; meter B bias, -52 mg/dL [LOA, -107.5 to 3.5 mg/dL]; and meter C bias, -78.9 mg/dL [LOA, -137.2 to -20.6 mg/dL]). On the basis of these results, use of handheld blood glucose meters in the diagnosis or treatment of Hispaniolan Amazon parrots and other psittacines cannot be recommended.
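
    For reference, the Bland-Altman quantities reported above (bias and 95% limits of agreement) reduce to the mean and standard deviation of the paired differences. The sketch below computes them for made-up paired readings, not the parrot data.

```python
import numpy as np

# Hypothetical paired measurements (mg/dL): laboratory analyzer vs. a point-of-care meter
lab   = np.array([250, 262, 238, 271, 255, 243, 266, 259, 249, 270], float)
meter = np.array([160, 175, 150, 190, 168, 155, 181, 172, 158, 185], float)

diff = meter - lab
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.1f} mg/dL, 95% limits of agreement: {loa_low:.1f} to {loa_high:.1f} mg/dL")
```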

  15. Attrition analysed in five waves of a longitudinal yearly survey of smokers: findings from the ITC Netherlands survey.

    PubMed

    Zethof, Dennis; Nagelhout, Gera E; de Rooij, Mark; Driezen, Pete; Fong, Geoffrey T; van den Putte, Bas; Hummel, Karin; de Vries, Hein; Thompson, Mary E; Willemsen, Marc C

    2016-08-01

    Attrition bias can affect the external validity of findings. This article analyses attrition bias and assesses the effectiveness of replenishment samples on demographic and smoking-related characteristics for the International Tobacco Control Netherlands Survey, a longitudinal survey among smokers. Attrition analyses were conducted for the first five survey waves (2008-12). We assessed, including and excluding replenishment samples, whether the demographic composition of the samples changed between the first and fifth waves. Replenishment samples were tailored to ensure the sample remained representative of the smoking population. We also constructed a multivariable survival model of attrition that included all five waves with replenishment samples. Of the original 1820 respondents recruited in 2008, 46% participated again in 2012. Demographic differences between waves due to attrition were generally small and replenishment samples tended to minimize them further. The multivariable survival analysis revealed that only two of the 10 variables analysed were significant predictors of attrition: a weak effect for gender (men dropped out more often) and weak to moderate effects for age (respondents aged 15-24 years dropped out more than aged 25-39 years, who dropped out more than those aged 40+ years). Weak to moderate attrition effects were found for men and younger age groups. This information could be used to minimize respondent attrition. Our findings suggest that sampling weights and tailored replenishment samples can effectively compensate for attrition effects. This is already being done for the International Tobacco Control Netherlands Survey, including the categories that significantly predicted attrition in this study. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  16. The impact of sampling, PCR, and sequencing replication on discerning changes in drinking water bacterial community over diurnal time-scales.

    PubMed

    Bautista-de Los Santos, Quyen Melina; Schroeder, Joanna L; Blakemore, Oliver; Moses, Jonathan; Haffey, Mark; Sloan, William; Pinto, Ameet J

    2016-03-01

    High-throughput and deep DNA sequencing, particularly amplicon sequencing, is being increasingly utilized to reveal spatial and temporal dynamics of bacterial communities in drinking water systems. Whilst the sampling and methodological biases associated with PCR and sequencing have been studied in other environments, they have not been quantified for drinking water. These biases are likely to have the greatest effect on the ability to characterize subtle spatio-temporal patterns influenced by process/environmental conditions. In such cases, intra-sample variability may swamp any underlying small, systematic variation. To evaluate this, we undertook a study with replication at multiple levels including sampling sites, sample collection, PCR amplification, and high throughput sequencing of 16S rRNA amplicons. The variability inherent to the PCR amplification and sequencing steps is significant enough to mask differences between bacterial communities from replicate samples. This was largely driven by greater variability in detection of rare bacteria (relative abundance <0.01%) across PCR/sequencing replicates as compared to replicate samples. Despite this, we captured significant changes in bacterial community over diurnal time-scales and find that the extent and pattern of diurnal changes is specific to each sampling location. Further, we find diurnal changes in bacterial community arise due to differences in the presence/absence of the low abundance bacteria and changes in the relative abundance of dominant bacteria. Finally, we show that bacterial community composition is significantly different across sampling sites for time-periods during which there are typically rapid changes in water use. This suggests hydraulic changes (driven by changes in water demand) contribute to shaping the bacterial community in bulk drinking water over diurnal time-scales. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Estimating Dungeness crab (Cancer magister) abundance: Crab pots and dive transects compared

    USGS Publications Warehouse

    Taggart, S. James; O'Clair, Charles E.; Shirley, Thomas C.; Mondragon, Jennifer

    2004-01-01

    Dungeness crabs (Cancer magister) were sampled with commercial pots and counted by scuba divers on benthic transects at eight sites near Glacier Bay, Alaska. Catch per unit of effort (CPUE) from pots was compared to the density estimates from dives to evaluate the bias and power of the two techniques. Yearly sampling was conducted in two seasons: April and September, from 1992 to 2000. Male CPUE estimates from pots were significantly lower in April than in the following September; a step-wise regression demonstrated that season accounted for more of the variation in male CPUE than did temperature. In both April and September, pot sampling was significantly biased against females. When females were categorized as ovigerous and nonovigerous, it was clear that ovigerous females accounted for the majority of the bias because pots were not biased against nonovigerous females. We compared the power of pots and dive transects in detecting trends in populations and found that pots had much higher power than dive transects. Despite their low power, the dive transects were very useful for detecting bias in our pot sampling and in identifying the optimal times of year to sample so that pot bias could be avoided.

  18. Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field

    NASA Astrophysics Data System (ADS)

    Cameron, E.; Driver, S. P.

    2009-01-01

    Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes, namely “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. However, in the interval 0.75 < z < 1.0 where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible) we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec-2 and 1.65 ± 0.22 mag arcsec-2 for our “blue, diffuse” and “red, compact” classes, respectively, between redshift unity and the present day.

  19. Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo

    NASA Astrophysics Data System (ADS)

    Arya, Gaurav; Schlick, Tamar

    2007-01-01

    We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (oligonucleosome) at different salt conditions compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior to the other methods in sampling every degree of freedom of the oligonucleosomes at high salt concentrations (weak electrostatics), but worse than pivot rotations in terms of internal and rotational sampling at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosomes for the end-transfer method while it scales exponentially for the traditional configurational-bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions, and may be combined with local enhancements to improve local sampling.
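
    A toy version of the move conveys the mechanics: on a 2-D lattice with hard-core beads (so Rosenbluth weights are simply 1 for free sites and 0 for occupied ones), one end bead is deleted and regrown next to the opposite end, the new site is chosen in proportion to its weight, and the move is accepted with the ratio of new to old Rosenbluth weights. This is a self-contained sketch of configurational-bias end transfer for a lattice homopolymer, not the mesoscale oligonucleosome model studied in the paper; with soft interactions the 0/1 weights would become Boltzmann factors.

```python
import random

random.seed(0)
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def site_weights(anchor, occupied):
    """Hard-core Rosenbluth weights (0 or 1) of the lattice sites adjacent to 'anchor'."""
    sites = [(anchor[0] + dx, anchor[1] + dy) for dx, dy in DIRECTIONS]
    return sites, [0.0 if s in occupied else 1.0 for s in sites]

def end_transfer_move(chain):
    """Delete one end bead and regrow it next to the opposite end with configurational bias."""
    if random.random() < 0.5:            # pick which end is transferred (keeps detailed balance)
        chain = chain[::-1]
    occupied = set(chain) - {chain[0]}   # environment seen by the bead being moved

    # Old Rosenbluth weight: candidate sites around the old attachment point;
    # the removed bead's former site is among them and contributes weight 1
    _, w_old = site_weights(chain[1], occupied)

    # New Rosenbluth weight: candidate sites around the growth end
    new_sites, w_new = site_weights(chain[-1], occupied)
    if sum(w_new) == 0:
        return chain                     # nowhere to regrow: reject

    new_site = random.choices(new_sites, weights=w_new)[0]
    if random.random() < min(1.0, sum(w_new) / sum(w_old)):
        return chain[1:] + [new_site]    # accepted: shift the chain and append the new end
    return chain

chain = [(i, 0) for i in range(20)]      # straight initial self-avoiding chain
for _ in range(20_000):
    chain = end_transfer_move(chain)
r2 = (chain[0][0] - chain[-1][0]) ** 2 + (chain[0][1] - chain[-1][1]) ** 2
print("squared end-to-end distance after sampling:", r2)
```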

  20. Strong orientation dependence of surface mass density profiles of dark haloes at large scales

    NASA Astrophysics Data System (ADS)

    Osato, Ken; Nishimichi, Takahiro; Oguri, Masamune; Takada, Masahiro; Okumura, Teppei

    2018-06-01

    We study the dependence of surface mass density profiles, which can be directly measured by weak gravitational lensing, on the orientation of haloes with respect to the line-of-sight direction, using a suite of N-body simulations. We find that, when major axes of haloes are aligned with the line-of-sight direction, surface mass density profiles have higher amplitudes than those averaged over all halo orientations, over all scales from 0.1 to 100 Mpc h-1 we studied. While the orientation dependence at small scales is ascribed to the halo triaxiality, our results indicate even stronger orientation dependence in the so-called two-halo regime, up to 100 Mpc h-1. The orientation dependence for the two-halo term is well approximated by a multiplicative shift of the amplitude and therefore a shift in the halo bias parameter value. The halo bias from the two-halo term can be overestimated or underestimated by up to ˜ 30 per cent depending on the viewing angle, which translates into the bias in estimated halo masses by up to a factor of 2 from halo bias measurements. The orientation dependence at large scales originates from the anisotropic halo-matter correlation function, which has an elliptical shape with the axis ratio of ˜0.55 up to 100 Mpc h-1. We discuss potential impacts of halo orientation bias on other observables such as optically selected cluster samples and a clustering analysis of large-scale structure tracers such as quasars.

  1. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
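
    The core idea, re-solving FBA with an extra penalty term that repels the flux vector from previously found solutions, can be sketched on a toy network with alternative optima. The stoichiometry, penalty shape and parameter values below are invented for illustration and differ from the authors' formulation, and a general-purpose NLP solver (SLSQP) stands in for whatever optimiser was actually used.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stoichiometric matrix (3 metabolites x 6 reactions) with alternative optima.
# Purely illustrative -- not the A. succinogenes reconstruction used in the paper.
S = np.array([
    [ 1, -1, -1,  0,  0,  0],   # A: uptake, A->B, A->C
    [ 0,  1,  0, -1, -1,  0],   # B: A->B, B->C, B export
    [ 0,  0,  1,  1,  0, -1],   # C: A->C, B->C, C export (the "product")
])
bounds = [(0.0, 10.0)] * 6
c = np.array([0, 0, 0, 0, 0, 1.0])      # maximize product export

def poled_fba(previous, strength=20.0, width=4.0):
    """FBA objective plus a poling penalty that repels the new flux vector from earlier ones."""
    def objective(v):
        penalty = sum(np.exp(-np.sum((v - p) ** 2) / width ** 2) for p in previous)
        return -(c @ v) + strength * penalty
    res = minimize(objective, x0=rng.uniform(0, 10, 6), bounds=bounds,
                   constraints=[{'type': 'eq', 'fun': lambda v: S @ v}], method='SLSQP')
    return res.x

solutions = []
for _ in range(5):
    v = poled_fba(solutions)
    solutions.append(v)
    print("fluxes:", np.round(v, 2), " product flux:", round(v[5], 2))
```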

  2. Nonlinear vs. linear biasing in Trp-cage folding simulations

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-01

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with linear biasing and non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  3. Nonlinear vs. linear biasing in Trp-cage folding simulations.

    PubMed

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with linear biasing and non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
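
    The contrast between the two kinds of embedding can be reproduced with scikit-learn on stand-in data: PCA gives the linear low-dimensional coordinates and Isomap the nonlinear ones; in the study such coordinates were then used as metadynamics collective variables (the biasing step is not shown here). The "trajectory" below is a synthetic curved manifold, not Trp-cage configurations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Toy stand-in for aligned trajectory coordinates: noisy points on a rolled-up
# 2-D manifold, mimicking nonlinear collective motion in a high-dimensional space
t = 1.5 * np.pi * (1 + 2 * rng.random(1000))
X = np.column_stack([t * np.cos(t), 10 * rng.random(1000), t * np.sin(t)])
X += 0.2 * rng.normal(size=X.shape)

# Linear embedding (principal component analysis)
cv_linear = PCA(n_components=2).fit_transform(X)

# Nonlinear embedding (Isomap), as used for the nonlinear collective variables
cv_nonlinear = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

print("PCA CVs:", cv_linear.shape, " Isomap CVs:", cv_nonlinear.shape)
```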

  4. Information Repetition in Evaluative Judgments: Easy to Monitor, Hard to Control

    ERIC Educational Resources Information Center

    Unkelbach, Christian; Fiedler, Klaus; Freytag, Peter

    2007-01-01

    The sampling approach [Fiedler, K. (2000a). "Beware of samples! A cognitive-ecological sampling approach to judgment biases." "Psychological Review, 107"(4), 659-676.] attributes judgment biases to the information given in a sample. Because people usually do not monitor the constraints of samples and do not control their judgments accordingly,…

  5. Linking well-tempered metadynamics simulations with experiments.

    PubMed

    Barducci, Alessandro; Bonomi, Massimiliano; Parrinello, Michele

    2010-05-19

    Linking experiments with the atomistic resolution provided by molecular dynamics simulations can shed light on the structure and dynamics of protein-disordered states. The sampling limitations of classical molecular dynamics can be overcome using metadynamics, which is based on the introduction of a history-dependent bias on a small number of suitably chosen collective variables. Even if such bias distorts the probability distribution of the other degrees of freedom, the equilibrium Boltzmann distribution can be reconstructed using a recently developed reweighting algorithm. Quantitative comparison with experimental data is thus possible. Here we show the potential of this combined approach by characterizing the conformational ensemble explored by a 13-residue helix-forming peptide by means of a well-tempered metadynamics/parallel tempering approach and comparing the reconstructed nuclear magnetic resonance scalar couplings with experimental data. Copyright (c) 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
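
    The history-dependent bias at the heart of this approach can be illustrated on a one-dimensional toy collective variable: Gaussian hills are deposited along an overdamped Langevin trajectory on a double-well potential, with the well-tempered rule shrinking the hill height as bias accumulates. All parameters are arbitrary, and the parallel tempering and reweighting steps used in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
WIDTH = 0.1                                   # Gaussian hill width on the collective variable

def bias_energy(x, centers, heights):
    if not centers:
        return 0.0
    d = x - np.asarray(centers)
    return float(np.sum(np.asarray(heights) * np.exp(-d ** 2 / (2 * WIDTH ** 2))))

def bias_force(x, centers, heights):
    if not centers:
        return 0.0
    d = x - np.asarray(centers)
    return float(np.sum(np.asarray(heights) * d / WIDTH ** 2 * np.exp(-d ** 2 / (2 * WIDTH ** 2))))

kT, dt, gamma = 0.3, 1e-3, 1.0
h0, dT_in_kT = 0.05, 3.0                      # initial hill height; kB*DeltaT = 3 kB*T
x, centers, heights = -1.0, [], []

for step in range(100_000):
    # Overdamped Langevin step on U(x) = x^4 - 2 x^2 plus the accumulated bias
    f = (-4.0 * x ** 3 + 4.0 * x) + bias_force(x, centers, heights)
    x += dt / gamma * f + np.sqrt(2.0 * kT * dt / gamma) * rng.normal()
    if step % 250 == 0:
        # Well-tempered rule: the new hill height decays with the bias already present at x
        h = h0 * np.exp(-bias_energy(x, centers, heights) / (kT * dT_in_kT))
        centers.append(x)
        heights.append(h)

print(f"{len(centers)} hills deposited; last hill height {heights[-1]:.4f} vs initial {h0}")
```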

  6. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
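
    The shrinkage idea can be illustrated with a toy normal-normal empirical-Bayes adjustment that pulls per-variant maximum likelihood estimates toward a regional mean; this simple form, and every number in it, is an illustrative assumption rather than the MEA model itself.

```python
# Toy sketch of shrinking individual effect estimates toward a regional mean,
# in the spirit of a hierarchical-Bayes "multiple estimation adjustment".
# The normal-normal empirical-Bayes form below is an illustrative assumption,
# not the exact model of the cited paper.
import numpy as np

def shrink_to_region(estimates, std_errors):
    """Return shrunken estimates for one genomic region."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(std_errors, float) ** 2
    regional_mean = np.average(estimates, weights=1.0 / variances)
    # Method-of-moments estimate of the between-variant variance (tau^2 >= 0).
    tau2 = max(np.var(estimates, ddof=1) - variances.mean(), 0.0)
    weights = tau2 / (tau2 + variances)         # shrinkage factors in [0, 1]
    return regional_mean + weights * (estimates - regional_mean)

mle = [0.9, -0.1, 0.2, 1.4, 0.0]                # per-variant MLEs (e.g. log odds ratios)
se = [0.5, 0.4, 0.45, 0.6, 0.5]
print(shrink_to_region(mle, se))
```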

  7. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional "saturation" trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this "blind" test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches.
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
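
    The grid-based estimator described above reduces to the arithmetic D̂ = N̂/Â, with the effective area Â obtained by buffering the trapping grid with a boundary strip based on the mean maximum distance moved (MMDM). A minimal sketch of that calculation, with made-up numbers, follows.

```python
# Minimal arithmetic sketch of a grid-based density estimate D = N / A,
# where the effective area A buffers the trapping grid by the (full) mean
# maximum distance moved (MMDM). Numbers are made up for illustration.
import math

N_hat = 42.0            # estimated abundance from a capture-recapture model
grid_side_m = 90.0      # side length of a square trapping grid (metres)
mmdm_m = 25.0           # species-specific full MMDM boundary strip (metres)

# Square grid buffered by a strip of width mmdm_m with rounded corners:
# area = s^2 + 4*s*w + pi*w^2
A_hat_m2 = grid_side_m ** 2 + 4 * grid_side_m * mmdm_m + math.pi * mmdm_m ** 2
A_hat_ha = A_hat_m2 / 10_000.0

D_hat = N_hat / A_hat_ha
print(f"effective area = {A_hat_ha:.2f} ha, density = {D_hat:.1f} animals/ha")
```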

  8. Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Fan, Xitao

    This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…
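
    The generic bootstrap bias estimate underlying such studies is the mean of the bootstrap replicates minus the original sample statistic. A small sketch, applied here to a sample correlation rather than to the SEM fit indices examined in the study, follows.

```python
# Generic bootstrap bias estimate: bias_hat = mean(theta*_b) - theta_hat.
# Illustrated with a sample correlation; the cited study applies the same idea
# to SEM fit indices and parameter estimates.
import numpy as np

def bootstrap_bias(x, y, statistic, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta_hat = statistic(x, y)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample cases with replacement
        reps[b] = statistic(x[idx], y[idx])
    return reps.mean() - theta_hat, theta_hat

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)
corr = lambda a, b: np.corrcoef(a, b)[0, 1]
bias_hat, theta_hat = bootstrap_bias(x, y, corr)
print(f"r = {theta_hat:.3f}, estimated bias = {bias_hat:+.4f}")
```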

  9. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  10. Collision rates and impact velocities in the Main Asteroid Belt

    NASA Technical Reports Server (NTRS)

    Farinella, Paolo; Davis, Donald R.

    1992-01-01

    Wetherill's (1967) algorithm is presently used to compute the mutual collision probabilities and impact velocities of a set of 682 asteroids with larger-than-50-km radius, representative of a bias-free sample of asteroid orbits. While collision probabilities are nearly independent of eccentricities, a significant decrease is associated with larger inclinations. Collisional velocities grow steeply with orbital eccentricity and inclination, but with curiously small variation across the asteroid belt. Family asteroids are noted to undergo collisions with other family members 2-3 times more often than with nonmembers.

  11. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
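
    A compact sketch of the two-level idea, using the shared-Poisson-variate coupling of Anderson and Higham on a single decay reaction S → ∅ with propensity a(x) = kx; the reaction system, step sizes, and path counts are illustrative assumptions, and the adaptive time-stepping proposed in the paper is not reproduced.

```python
# Sketch of a two-level multilevel tau-leap estimator for a decay reaction
# S -> 0 with propensity a(x) = k*x. A cheap, biased base estimate at a coarse
# tau is refined by a correction estimated from paired (fine, coarse) paths
# that share Poisson variates, which keeps the correction's variance small.
import numpy as np

rng = np.random.default_rng(2)
k, x0, T = 0.1, 1000, 10.0

def tau_leap(tau, n_paths):
    """Plain tau-leap estimate of X(T) at one resolution."""
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(int(round(T / tau))):
        x -= rng.poisson(k * x * tau)
        x = np.clip(x, 0, None)
    return x

def coupled_correction(tau_c, n_paths):
    """Paired fine/coarse paths sharing Poisson variates; fine step is tau_c/2."""
    tau_f = tau_c / 2.0
    xc = np.full(n_paths, x0, dtype=float)
    xf = np.full(n_paths, x0, dtype=float)
    for _ in range(int(round(T / tau_c))):
        ac = k * xc                      # coarse propensity, frozen over the coarse step
        for _ in range(2):               # two fine sub-steps per coarse step
            af = k * xf
            m = np.minimum(ac, af)
            shared = rng.poisson(m * tau_f)
            xc -= shared + rng.poisson((ac - m) * tau_f)
            xf -= shared + rng.poisson((af - m) * tau_f)
            xc = np.clip(xc, 0, None)
            xf = np.clip(xf, 0, None)
    return xf - xc

base = tau_leap(tau=1.0, n_paths=20000).mean()             # cheap, biased base level
corr = coupled_correction(tau_c=1.0, n_paths=2000).mean()  # variance-reduced correction
print("two-level estimate:", base + corr, " exact mean:", x0 * np.exp(-k * T))
```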

  12. Researchers' Intuitions About Power in Psychological Research

    PubMed Central

    Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.

    2016-01-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203

  13. Researchers' Intuitions About Power in Psychological Research.

    PubMed

    Bakker, Marjan; Hartgerink, Chris H J; Wicherts, Jelte M; van der Maas, Han L J

    2016-08-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. © The Author(s) 2016.
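
    The scale of the problem can be seen from the standard normal-approximation sample-size formula for a two-group comparison, n per group ≈ 2 * ((z_{1-alpha/2} + z_{power}) / d)^2. A small sketch for a "small" effect of d = 0.2 follows; the values are illustrative and the exact t-test answer is slightly larger.

```python
# Normal-approximation sample size per group for a two-sample comparison.
# With d = 0.2 (a "small" effect), alpha = .05 two-sided and power = .80 this
# gives roughly 390-400 per group, far larger than typical reported cell sizes.
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

print(n_per_group(0.2))   # ~393 per group for a small effect
print(n_per_group(0.5))   # ~63 per group for a medium effect, for comparison
```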

  14. Spectra from the IRS of Bright Oxygen-Rich Evolved Stars in the SMC

    NASA Astrophysics Data System (ADS)

    Kraemer, Kathleen E.; Sloan, Greg; Wood, Peter

    2016-06-01

    We have used Spitzer's Infrared Spectrograph (IRS) to obtain spectra of stars in the Small Magellanic Cloud (SMC). The targets were chosen from the Point Source Catalog of the Mid-Course Space Experiment (MSX), which detected the 243 brightest infrared sources in the SMC. Our SMC sample of oxygen-rich evolved stars shows more dust than found in previous samples, and the dust tends to be dominated by silicates, with little contribution from alumina. Both results may arise from the selection bias in the MSX sample and our sample toward more massive stars. Additionally, several sources show peculiar spectral features such as PAHs, crystalline silicates, or both carbon-rich and silicate features. The spectrum of one source, MSX SMC 145, is a combination of an ordinary AGB star and a background galaxy at z~0.16, rather than an OH/IR star as previously suggested.

  15. Bias-induced conformational switching of supramolecular networks of trimesic acid at the solid-liquid interface

    NASA Astrophysics Data System (ADS)

    Ubink, J.; Enache, M.; Stöhr, M.

    2018-05-01

    Using the tip of a scanning tunneling microscope, an electric field-induced reversible phase transition between two planar porous structures ("chickenwire" and "flower") of trimesic acid was accomplished at the nonanoic acid/highly oriented pyrolytic graphite interface. The chickenwire structure was exclusively observed for negative sample bias, while for positive sample bias only the more densely packed flower structure was found. We suggest that the slightly negatively charged carboxyl groups of the trimesic acid molecule are the determining factor for this observation: their adsorption behavior varies with the sample bias and is thus responsible for the switching behavior.

  16. Sources of Sampling Bias in Long-Screened Well

    EPA Science Inventory

    Results obtained from ground-water sampling in long-screened wells are often influenced by physical factors such as geologic heterogeneity and vertical hydraulic gradients. These factors often serve to bias results and increase uncertainty in the representativeness of the sample...

  17. Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage.

    PubMed

    Walsh, Matthew C; Trentham-Dietz, Amy; Gangnon, Ronald E; Nieto, F Javier; Newcomb, Polly A; Palta, Mari

    2012-06-01

    Increasing numbers of individuals are choosing to opt out of population-based sampling frames due to privacy concerns. This is especially a problem in the selection of controls for case-control studies, as the cases often arise from relatively complete population-based registries, whereas control selection requires a sampling frame. If opting out is also related to risk factors, bias can arise. We linked breast cancer cases who reported having a valid driver's license from the 2004-2008 Wisconsin women's health study (N = 2,988) with a master list of licensed drivers from the Wisconsin Department of Transportation (WDOT). This master list excludes Wisconsin drivers that requested their information not be sold by the state. Multivariate-adjusted selection probability ratios (SPR) were calculated to estimate potential bias when using this driver's license sampling frame to select controls. A total of 962 cases (32%) had opted out of the WDOT sampling frame. Cases aged <40 (SPR = 0.90), with income either unreported (SPR = 0.89) or greater than $50,000 (SPR = 0.94), with lower parity (SPR = 0.96 per one-child decrease), or with hormone use (SPR = 0.93) were significantly less likely to be covered by the WDOT sampling frame (α = 0.05 level). Our results indicate the potential for selection bias due to differential opt-out between various demographic and behavioral subgroups of controls. As selection bias may differ by exposure and study base, the assessment of potential bias needs to be ongoing. SPRs can be used to predict the direction of bias when cases and controls stem from different sampling frames in population-based case-control studies.
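
    A crude, unadjusted selection probability ratio is simply the coverage proportion in a subgroup divided by that in a reference group; the sketch below uses an entirely synthetic table and omits the multivariate adjustment used in the study.

```python
# Crude (unadjusted) selection probability ratio: coverage proportion in a
# subgroup divided by the coverage proportion in the reference group. The data
# frame below is synthetic and the adjustment used in the study is omitted.
import pandas as pd

cases = pd.DataFrame({
    "age_lt_40": [1, 1, 0, 0, 0, 0, 1, 0, 0, 0],
    "in_frame":  [0, 1, 1, 1, 0, 1, 1, 1, 1, 0],  # 1 = found on the licensed-driver list
})

p_young = cases.loc[cases.age_lt_40 == 1, "in_frame"].mean()
p_older = cases.loc[cases.age_lt_40 == 0, "in_frame"].mean()
print(f"SPR (age < 40 vs 40+): {p_young / p_older:.2f}")
```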

  18. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  19. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  20. Nutrient and pesticide contamination bias estimated from field blanks collected at surface-water sites in U.S. Geological Survey Water-Quality Networks, 2002–12

    USGS Publications Warehouse

    Medalie, Laura; Martin, Jeffrey D.

    2017-08-14

    Potential contamination bias was estimated for 8 nutrient analytes and 40 pesticides in stream water collected by the U.S. Geological Survey at 147 stream sites from across the United States, and representing a variety of hydrologic conditions and site types, for water years 2002–12. This study updates previous U.S. Geological Survey evaluations of potential contamination bias for nutrients and pesticides. Contamination is potentially introduced to water samples by exposure to airborne gases and particulates, from inadequate cleaning of sampling or analytic equipment, and from inadvertent sources during sample collection, field processing, shipment, and laboratory analysis. Potential contamination bias, based on frequency and magnitude of detections in field blanks, is used to determine whether or under what conditions environmental data might need to be qualified for the interpretation of results in the context of comparisons with background levels, drinking-water standards, aquatic-life criteria or benchmarks, or human-health benchmarks. Environmental samples for which contamination bias as determined in this report applies are those from historical U.S. Geological Survey water-quality networks or programs that were collected during the same time frame and according to the same protocols and that were analyzed in the same laboratory as field blanks described in this report. Results from field blanks for ammonia, nitrite, nitrite plus nitrate, orthophosphate, and total phosphorus were partitioned by analytical method; results from the most commonly used analytical method for total phosphorus were further partitioned by date. Depending on the analytical method, 3.8, 9.2, or 26.9 percent of environmental samples, the last of these percentages pertaining to all results from 2007 through 2012, were potentially affected by ammonia contamination. Nitrite contamination potentially affected up to 2.6 percent of environmental samples collected between 2002 and 2006 and affected about 3.3 percent of samples collected between 2007 and 2012. The percentages of environmental samples collected between 2002 and 2011 that were potentially affected by nitrite plus nitrate contamination were 7.3 for samples analyzed with the low-level method and 0.4 for samples analyzed with the standard-level method. These percentages increased to 14.8 and 2.2 for samples collected in 2012 and analyzed using replacement low- and standard-level methods, respectively. The maximum potentially affected concentrations for nitrite and for nitrite plus nitrate were much less than their respective maximum contamination levels for drinking-water standards. Although contamination from particulate nitrogen can potentially affect up to 21.2 percent and that from total Kjeldahl nitrogen can affect up to 16.5 percent of environmental samples, there are no critical or background levels for these substances. For total nitrogen, orthophosphate, and total phosphorus, contamination in a small percentage of environmental samples might be consequential for comparisons relative to impairment risks or background levels.
At the low ends of the respective ranges of impairment risk for these nutrients, contamination in up to 5 percent of stream samples could account for at least 23 percent of measured concentrations of total nitrogen, for at least 40 or 90 percent of concentrations of orthophosphate, depending on the analytical method, and for 31 to 76 percent of concentrations of total phosphorus, depending on the time period. Twenty-six pesticides had no detections in field blanks. Atrazine with 12 and metolachlor with 11 had the highest number of detections, mostly occurring in spring or early summer. At a 99-percent level of confidence, contamination was estimated to be no greater than the detection limit in at least 98 percent of all samples for 38 of 40 pesticides. For metolachlor and atrazine, potential contamination was no greater than 0.0053 and 0.0093 micrograms per liter in 98 percent of samples. For 11 of 14 pesticides with at least one detection, the maximum potentially affected concentration of the environmental sample was less than their respective human-health or aquatic-life benchmarks. Small percentages of environmental samples had concentrations high enough that atrazine contamination potentially could account for the entire aquatic-life benchmark for acute effects on nonvascular plants, that dieldrin contamination could account for up to 100 percent of the cancer health-based screening level, or that chlorpyrifos contamination could account for 13 or 12 percent of the concentrations in the aquatic-life benchmarks for chronic effects on invertebrates or the criterion continuous concentration for chronic effects on aquatic life.

  1. How and how much does RAD-seq bias genetic diversity estimates?

    PubMed

    Cariou, Marie; Duret, Laurent; Charlat, Sylvain

    2016-11-08

    RAD-seq is a powerful tool, increasingly used in population genomics. However, earlier studies have raised red flags regarding possible biases associated with this technique. In particular, polymorphism at restriction sites results in preferential sampling of closely related haplotypes, so that RAD data tend to underestimate genetic diversity. Here we (1) clarify the theoretical basis of this bias, highlighting the potential confounding effects of population structure and selection, (2) compare predictions with real data from in silico digestion of full genomes, and (3) provide a proof of concept toward an ABC-based correction of the RAD-seq bias. Under a neutral and panmictic model, we confirm the previously established relationship between the true polymorphism and its RAD-based estimation, showing a more pronounced bias when polymorphism is high. Using more elaborate models, we show that selection, resulting in heterogeneous levels of polymorphism along the genome, exacerbates the bias and leads to a more pronounced underestimation. On the contrary, spatial genetic structure tends to reduce the bias. We compare the neutral and panmictic model with "ideal" empirical data (in silico RAD-sequencing) using full genomes from natural populations of the fruit fly Drosophila melanogaster and the fungus Schizophyllum commune, harbouring moderate and high genetic diversity, respectively. In D. melanogaster, predictions fit the model, but the small difference between the true and RAD polymorphism makes this comparison insensitive to deviations from the model. In the highly polymorphic fungus, the model captures a large part of the bias but makes inaccurate predictions. Accordingly, ABC corrections based on this model improve the estimations, albeit with some imprecision. The RAD-seq underestimation of genetic diversity associated with polymorphism in restriction sites becomes more pronounced when polymorphism is high. In practice, this means that in many systems where polymorphism does not exceed 2%, the bias is of minor importance in the face of other sources of uncertainty, such as heterogeneous base composition or technical artefacts. The neutral panmictic model provides a practical means to correct the bias through ABC, albeit with some imprecision. More elaborate ABC methods might integrate additional parameters, such as population structure and selection, but their opposite effects could hinder accurate corrections.

  2. A Stellar Dynamical Black Hole Mass for the Reverberation Mapped AGN NGC 5273

    NASA Astrophysics Data System (ADS)

    Batiste, Merida; Bentz, Misty C.; Valluri, Monica; Onken, Christopher A.

    2018-01-01

    We present preliminary results from stellar dynamical modeling of the mass of the central super-massive black hole (MBH) in the active galaxy NGC 5273. NGC 5273 is one of the few AGN with a secure MBH measurement from reverberation-mapping that is also nearby enough to measure MBH with stellar dynamical modeling. Dynamical modeling and reverberation-mapping are the two most heavily favored methods of direct MBH determination in the literature; however, the specific limitations of each method mean that there are very few galaxies for which both can be used. To date, only two such galaxies, NGC 3227 and NGC 4151, have MBH determinations from both methods. Given this small sample size, it is not yet clear that the two methods give consistent results. Moreover, given the inherent uncertainties and potential systematic biases in each method, it is likewise unclear whether one method should be preferred over the other. This study is part of an ongoing project to increase the sample of galaxies with secure MBH measurements from both methods, so that a direct comparison may be made. NGC 5273 provides a particularly valuable comparison because it is free of kinematic substructure (e.g. the presence of a bar, as is the case for NGC 4151) which can complicate and potentially bias results from stellar dynamical modeling. I will discuss our current results as well as the advantages and limitations of each method, and the potential sources of systematic bias that may affect comparison between results.

  3. Potential roadside biases due to habitat changes along Breeding Bird Survey routes

    USGS Publications Warehouse

    Keller, C.M.E.; Scallan, J.T.

    1999-01-01

    Breeding Bird Surveys (BBS) are conducted along roadside routes to enable a large geographic area to be surveyed. Yet the potential biases of sampling populations only from roadsides have received little attention. We sampled aerial photography of BBS routes taken in the mid-1960s and late-1980s to evaluate whether habitat changes that occurred along roadsides were also occurring in the surrounding area, and whether the frequency of habitats encountered along roadsides was similar to that off-route. We examined 28 routes in Maryland and 25 routes in Ohio, and defined roadside area as within 200 m of the road, and off-route as 200-1,600 m from the road. Most habitat changes that occurred along BBS roadsides also were occurring in the off-route areas. However, the increase in urban cover was significantly greater along the road in Maryland, where urbanization of farmland was the predominant habitat change. The small increase in urban cover in Ohio was not significantly greater along the road. Construction of single-family homes was greater along BBS roadsides in both states. In Ohio, the greatest change in habitat was the conversion of farmland back to forest, which was not significantly greater along the road. Changes associated with urbanization were more biased towards roadsides than the reforestation of farmland. Within one time period, roadside areas had less forest and more agricultural and urban cover types than occurred off-route.

  4. Small-molecule-biased formyl peptide receptor agonist compound 17b protects against myocardial ischaemia-reperfusion injury in mice

    PubMed Central

    Qin, Cheng Xue; May, Lauren T.; Li, Renming; Cao, Nga; Rosli, Sarah; Deo, Minh; Alexander, Amy E.; Horlock, Duncan; Bourke, Jane E.; Yang, Yuan H.; Stewart, Alastair G.; Kaye, David M.; Du, Xiao-Jun; Sexton, Patrick M.; Christopoulos, Arthur; Gao, Xiao-Ming; Ritchie, Rebecca H.

    2017-01-01

    Effective treatment for managing myocardial infarction (MI) remains an urgent, unmet clinical need. Formyl peptide receptors (FPR) regulate inflammation, a major contributing mechanism to cardiac injury following MI. Here we demonstrate that FPR1/FPR2-biased agonism may represent a novel therapeutic strategy for the treatment of MI. The small-molecule FPR1/FPR2 agonist, Compound 17b (Cmpd17b), exhibits a distinct signalling fingerprint from the conventional FPR1/FPR2 agonist, Compound-43 (Cmpd43). In Chinese hamster ovary (CHO) cells stably transfected with human FPR1 or FPR2, Cmpd17b is biased away from potentially detrimental FPR1/2-mediated calcium mobilization, but retains the pro-survival signalling, ERK1/2 and Akt phosphorylation, relative to Cmpd43. The pathological importance of the biased agonism of Cmpd17b is demonstrable as superior cardioprotection both in vitro (cardiomyocytes and cardiofibroblasts) and in MI injury in mice in vivo. These findings reveal new insights for development of small-molecule FPR agonists with an improved cardioprotective profile for treating MI. PMID:28169296

  5. Analysing home-ownership of couples: the effect of selecting couples at the time of the survey.

    PubMed

    Mulder, C H

    1996-09-01

    "The analysis of events encountered by couple and family households may suffer from sample selection bias when data are restricted to couples existing at the moment of interview. The paper discusses the effect of sample selection bias on event history analyses of buying a home [in the Netherlands] by comparing analyses performed on a sample of existing couples with analyses of a more complete sample including past as well as current partner relationships. The results show that, although home-buying in relationships that have ended differs clearly from behaviour in existing relationships, sample selection bias is not alarmingly large." (SUMMARY IN FRE) excerpt

  6. Effects of life-state on detectability in a demographic study of the terrestrial orchid Cleistes bifaria

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2003-01-01

    1. Most plant demographic studies follow marked individuals in permanent plots. Plots tend to be small, so detectability is assumed to be one for every individual. However, detectability could be affected by factors such as plant traits, time, space, observer, previous detection, biotic interactions, and especially by life-state. 2. We used a double-observer survey and closed population capture-recapture modelling to estimate state-specific detectability of the orchid Cleistes bifaria in a long-term study plot of 41.2 m2. Based on AICc model selection, detectability was different for each life-state and for tagged vs. previously untagged plants. There were no differences in detectability between the two observers. 3. Detectability estimates (SE) for one-leaf vegetative, two-leaf vegetative, and flowering/fruiting states correlated with mean size of these states and were 0.76 (0.05), 0.92 (0.06), and 1 (0.00), respectively, for previously tagged plants, and 0.84 (0.08), 0.75 (0.22), and 0 (0.00), respectively, for previously untagged plants. (We had insufficient data to obtain a satisfactory estimate of previously untagged flowering plants). 4. Our estimates are for a medium-sized plant in a small and intensively surveyed plot. It is possible that detectability is even lower for larger plots and smaller plants or smaller life-states (e.g. seedlings) and that detectabilities < 1 are widespread in plant demographic studies. 5. State-dependent detectabilities are especially worrying since they will lead to a size- or state-biased sample from the study plot. Failure to incorporate detectability into demographic estimation methods introduces a bias into most estimates of population parameters such as fecundity, recruitment, mortality, and transition rates between life-states. We illustrate this by a simple example using a matrix model, where a hypothetical population was stable but, due to imperfect detection, wrongly projected to be declining at a rate of 8% per year. 6. Almost all plant demographic studies are based on models for discrete states. State and size are important predictors both for demographic rates and detectability. We suggest that even in studies based on small plots, state- or size-specific detectability should be estimated at least at some point to avoid biased inference about the dynamics of the population sampled.

  7. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

    The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods improved generally as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble member. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
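
    The weighting idea can be sketched as follows: member weights are derived from training-period skill against the reference series (here simply normalised inverse RMSE), and the weighted average is compared with the equal-weight average. The exact RMSE-plus-correlation and Taylor-score weightings of the study are not reproduced, and the data below are synthetic.

```python
# Sketch of weighted ensemble averaging: member weights from skill against a
# reference series over a training period (normalised 1/RMSE), compared with
# an equal-weight average over an independent projection period.
import numpy as np

rng = np.random.default_rng(3)
n_time, n_members = 120, 5
truth = np.sin(np.linspace(0, 6 * np.pi, n_time)) * 3 + 15
members = (truth
           + rng.normal(0, 1, (n_members, n_time))   # random errors
           + rng.normal(0, 2, (n_members, 1)))       # per-member systematic bias

train = slice(0, 60)                                 # training period
rmse = np.sqrt(((members[:, train] - truth[train]) ** 2).mean(axis=1))
w = (1 / rmse) / (1 / rmse).sum()                    # skill-based weights

ewa = members.mean(axis=0)                           # equal-weight average
wea = w @ members                                    # weighted ensemble average
for name, pred in [("EWA", ewa), ("WEA", wea)]:
    err = np.sqrt(((pred[60:] - truth[60:]) ** 2).mean())
    print(f"{name} projection RMSE: {err:.2f}")
```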

  8. Evaluation on the use of cerium in the NBL Titrimetric Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zebrowski, J.P.; Orlowicz, G.J.; Johnson, K.D.

    An alternative to potassium dichromate as titrant in the New Brunswick Laboratory Titrimetric Method for uranium analysis was sought since chromium in the waste makes disposal difficult. Substitution of a ceric-based titrant was statistically evaluated. Analysis of the data indicated statistically equivalent precisions for the two methods, but a significant overall bias of +0.035% for the ceric titrant procedure. The cause of the bias was investigated, alterations to the procedure were made, and a second statistical study was performed. This second study revealed no statistically significant bias, nor any analyst-to-analyst variation in the ceric titration procedure. A statistically significant day-to-day variation was detected, but this was physically small (0.015%) and was only detected because of the within-day precision of the method. The mean and standard deviation of the %RD for a single measurement was found to be 0.031%. A comparison with quality control blind dichromate titration data again indicated similar overall precision. The effects of ten elements (Co, Ti, Cu, Ni, Na, Mg, Gd, Zn, Cd, and Cr) on the ceric titration's performance were determined; in previous work at NBL these impurities did not interfere with the potassium dichromate titrant. This study indicated similar results for the ceric titrant, with the exception of Ti. All the elements (excluding Ti and Cr) caused no statistically significant bias in uranium measurements at levels of 10 mg impurity per 20-40 mg uranium. The presence of Ti was found to cause a bias of −0.05%; this is attributed to the presence of sulfate ions, resulting in precipitation of titanium sulfate and occlusion of uranium. A negative bias of 0.012% was also observed in the samples containing chromium impurities.

  9. Risk of Preterm or Small-for-Gestational-Age Birth After Influenza Vaccination During Pregnancy: Caveats When Conducting Retrospective Observational Studies

    PubMed Central

    Vazquez-Benitez, Gabriela; Kharbanda, Elyse O.; Naleway, Allison L.; Lipkind, Heather; Sukumaran, Lakshmi; McCarthy, Natalie L.; Omer, Saad B.; Qian, Lei; Xu, Stanley; Jackson, Michael L.; Vijayadev, Vinutha; Klein, Nicola P.; Nordin, James D.

    2016-01-01

    Vaccines are increasingly targeted toward women of reproductive age, and vaccines to prevent influenza and pertussis are recommended during pregnancy. Prelicensure clinical trials typically have not included pregnant women, and when they are included, trials cannot detect rare events. Thus, postmarketing vaccine safety assessments are necessary. However, analysis of observational data requires detailed assessment of potential biases. Using data from 8 Vaccine Safety Datalink sites in the United States, we analyzed the association of monovalent H1N1 influenza vaccine (MIV) during pregnancy with preterm birth (<37 weeks) and small-for-gestational-age birth (birth weight < 10th percentile). The cohort included 46,549 pregnancies during 2009–2010 (40% of participants received the MIV). We found potential biases in the vaccine–birth outcome association that might occur due to variable access to vaccines, the time-dependent nature of exposure to vaccination within pregnancy (immortal time bias), and confounding from baseline differences between vaccinated and unvaccinated women. We found a strong protective effect of vaccination on preterm birth (relative risk = 0.79, 95% confidence interval: 0.74, 0.85) when we ignored potential biases and no effect when we accounted for them (relative risk = 0.91; 95% confidence interval: 0.83, 1.0). In contrast, we found no important biases in the association of MIV with small-for-gestational-age birth. Investigators conducting studies to evaluate birth outcomes after maternal vaccination should use statistical approaches to minimize potential biases. PMID:27449414

  10. Effect of Malmquist bias on correlation studies with IRAS data base

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B) as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method introduced here, in which each galaxy observation is weighted by its sampling volume. The results of the correlations and regressions for the sample are significantly changed in the anticipated sense that the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias eliminates the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
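
    The inverse-sampling-volume weighting can be illustrated with the classical 1/Vmax construction for a flux-limited sample, in which each object is weighted by the inverse of the largest volume within which it would still be detected; the sketch below uses synthetic fluxes and is not the paper's exact implementation.

```python
# Classical 1/Vmax weighting for a flux-limited sample: each detected object is
# weighted by the inverse of the maximum (Euclidean) volume within which it
# would still exceed the survey flux limit. Values are synthetic.
import numpy as np

rng = np.random.default_rng(4)
f_lim = 1.0                                      # survey flux limit (arbitrary units)
d = rng.uniform(5, 50, 200)                      # distances (Mpc)
L = 10 ** rng.normal(3.5, 0.5, 200)              # luminosities (arbitrary units)
flux = L / (4 * np.pi * d ** 2)

detected = flux > f_lim
d_max = d[detected] * np.sqrt(flux[detected] / f_lim)   # distance at which flux = f_lim
v_max = (4.0 / 3.0) * np.pi * d_max ** 3
weights = 1.0 / v_max

raw_mean_logL = np.log10(L[detected]).mean()
corrected_mean_logL = np.average(np.log10(L[detected]), weights=weights)
print(f"raw <log L> = {raw_mean_logL:.2f}, 1/Vmax-weighted <log L> = {corrected_mean_logL:.2f}")
```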

  11. Random sampling causes the low reproducibility of rare eukaryotic OTUs in Illumina COI metabarcoding.

    PubMed

    Leray, Matthieu; Knowlton, Nancy

    2017-01-01

    DNA metabarcoding, the PCR-based profiling of natural communities, is becoming the method of choice for biodiversity monitoring because it circumvents some of the limitations inherent to traditional ecological surveys. However, potential sources of bias that can affect the reproducibility of this method remain to be quantified. The interpretation of differences in patterns of sequence abundance and the ecological relevance of rare sequences remain particularly uncertain. Here we used one artificial mock community to explore the significance of abundance patterns and disentangle the effects of two potential biases on data reproducibility: indexed PCR primers and random sampling during Illumina MiSeq sequencing. We amplified a short fragment of the mitochondrial Cytochrome c Oxidase Subunit I (COI) for a single mock sample containing equimolar amounts of total genomic DNA from 34 marine invertebrates belonging to six phyla. We used seven indexed broad-range primers and sequenced the resulting library on two consecutive Illumina MiSeq runs. The total number of Operational Taxonomic Units (OTUs) was ∼4 times higher than expected based on the composition of the mock sample. Moreover, the total number of reads for the 34 components of the mock sample differed by up to three orders of magnitude. However, 79 out of 86 of the unexpected OTUs were represented by <10 sequences that did not appear consistently across replicates. Our data suggest that random sampling of rare OTUs (e.g., small associated fauna such as parasites) accounted for most of variation in OTU presence-absence, whereas biases associated with indexed PCRs accounted for a larger amount of variation in relative abundance patterns. These results suggest that random sampling during sequencing leads to the low reproducibility of rare OTUs. We suggest that the strategy for handling rare OTUs should depend on the objectives of the study. Systematic removal of rare OTUs may avoid inflating diversity based on common β descriptors but will exclude positive records of taxa that are functionally important. Our results further reinforce the need for technical replicates (parallel PCR and sequencing from the same sample) in metabarcoding experimental designs. Data reproducibility should be determined empirically as it will depend upon the sequencing depth, the type of sample, the sequence analysis pipeline, and the number of replicates. Moreover, estimating relative biomasses or abundances based on read counts remains elusive at the OTU level.
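
    One common way to act on such findings is to flag OTUs that are both low-abundance and inconsistent across technical replicates; the thresholds and read-count table below are illustrative assumptions, not the filtering rule used in the study.

```python
# Sketch of flagging rare, irreproducible OTUs: fewer than `min_reads` total
# reads and present in fewer than `min_reps` technical replicates. Thresholds
# and the read-count table are illustrative only.
import pandas as pd

counts = pd.DataFrame(
    {"rep1": [1200, 3, 0, 480, 1],
     "rep2": [ 950, 0, 2, 510, 0],
     "rep3": [1100, 1, 0, 450, 0]},
    index=["OTU_a", "OTU_b", "OTU_c", "OTU_d", "OTU_e"],
)

min_reads, min_reps = 10, 2
keep = (counts.sum(axis=1) >= min_reads) & ((counts > 0).sum(axis=1) >= min_reps)
print("retained OTUs:", list(counts.index[keep]))
print("flagged as likely sampling noise:", list(counts.index[~keep]))
```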

  12. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.

  13. Bias correction in species distribution models: pooling survey and collection data for multiple species.

    PubMed

    Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A

    2015-04-01

    Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence-absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We proposed a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence-absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence-absence data for a given species is scarce. If we have only presence-only data and no presence-absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.

  14. Bias correction in species distribution models: pooling survey and collection data for multiple species

    PubMed Central

    Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A.

    2016-01-01

    Summary: Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence–absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We proposed a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence–absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence–absence data for a given species is scarce. If we have only presence-only data and no presence–absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range. PMID:27840673

  15. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
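
    For context, one classical small-sample adjustment for a standardized mean difference is Hedges' correction factor J(df) ≈ 1 − 3/(4·df − 1); the four approaches investigated in the cited study are not detailed in the abstract, so the sketch below is offered only as a well-known reference point, not as the study's method.

```python
# A well-known small-sample adjustment for a standardized mean difference:
# Hedges' correction J(df) ≈ 1 - 3 / (4*df - 1), which shrinks the (upwardly
# biased) estimate toward zero. Shown only as a classical reference point.
def hedges_correction(d, df):
    """Apply the approximate small-sample correction to an effect size d."""
    j = 1.0 - 3.0 / (4.0 * df - 1.0)
    return j * d

for df in (4, 10, 30, 100):
    print(f"df = {df:>3}: corrected d = {hedges_correction(1.0, df):.3f}")
```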

  16. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  17. The role of observer bias in the North American Breeding Bird Survey

    USGS Publications Warehouse

    Faanes, C.A.; Bystrak, D.

    1981-01-01

    Ornithologists sampling breeding bird populations are subject to a number of biases in bird recognition and identification. Using Breeding Bird Survey data, these biases are examined qualitatively and quantitatively, and their effects on counts are evaluated. Differences in hearing ability and degree of expertise are the major observer biases considered. Other, more subtle influences are also discussed, including unfamiliar species, resolution, imagination, similar songs, and the attitude and condition of observers. In most cases, well-trained observers are comparable in ability and their differences contribute little beyond sampling error. However, just as hearing loss can affect results, so can an unprepared observer. These biases are important because they can reduce the credibility of any bird population sampling effort. Care is advised in choosing observers and in interpreting and using results when observers of variable competence are involved.

  18. The psychology of elite cycling: a systematic review.

    PubMed

    Spindler, David J; Allen, Mark S; Vella, Stewart A; Swann, Christian

    2018-09-01

    This systematic review sought to synthesise what is currently known about the psychology of elite cycling. Nine electronic databases were searched in March 2017 for studies reporting an empirical test of any psychological construct in an elite cycling sample. Fourteen studies (total n = 427) met inclusion criteria. Eight studies were coded as having high risk of bias. Themes extracted included mood, anxiety, self-confidence, pain, and cognitive function. Few studies had similar objectives meaning that in many instances findings could not be synthesised in a meaningful way. Nevertheless, there was some cross-study evidence that elite cyclists have more positive mood states (relative to normative scores), pre-race anxiety impairs performance (among male cyclists), and associative strategies are perceived as helpful for pain management. Among single studies coded as having low risk of bias, evidence suggests that implicit beliefs affect decision making performance, elite cyclists are less susceptible to mental fatigue (than non-elite cyclists), and better leadership skills relates to greater social labouring. Limitations include non-standardisation of measures, lack of follow-up data, small sample sizes, and overall poor research quality. The findings of this systematic review might be used to inform research and theory development on the psychology of elite endurance cycling.

  19. Gift card incentives and non-response bias in a survey of vaccine providers: the role of geographic and demographic factors.

    PubMed

    Van Otterloo, Joshua; Richards, Jennifer L; Seib, Katherine; Weiss, Paul; Omer, Saad B

    2011-01-01

    This study investigates the effects of non-response bias in a 2010 postal survey assessing experiences with H1N1 influenza vaccine administration among a diverse sample of providers (N = 765) in Washington state. Though we garnered a high response rate (80.9%) by using evidence-based survey design elements, including intensive follow-up and a gift card incentive from Target, non-response bias could exist if there were differences between respondents and non-respondents. We investigated differences between the two groups for seven variables: road distance to the nearest Target store, practice type, previous administration of vaccines, region, urbanicity, size of practice, and Vaccines for Children (VFC) program enrollment. We also examined the effect of non-response bias on survey estimates. Statistically significant differences between respondents and non-respondents were found for four variables: miles to the nearest Target store, type of medical practice, whether the practice routinely administered additional vaccines besides H1N1, and urbanicity. Practices were more likely to respond if they were from a small town or rural area (OR = 7.68, 95% CI = 1.44-40.88), were a non-traditional vaccine provider type (OR = 2.08, 95% CI = 1.06-4.08) or a pediatric provider type (OR = 4.03, 95% CI = 1.36-11.96), or administered additional vaccines besides H1N1 (OR = 1.80, 95% CI = 1.03-3.15). Of particular interest, for each ten mile increase in road distance from the nearest Target store, the likelihood of provider response decreased (OR = 0.73, 95% CI = 0.60-0.89). Of those variables associated with response, only small town or rural practice location was associated with a survey estimate of interest, suggesting that non-response bias had a minimal effect on survey estimates. These findings show that gift card incentives alongside survey design elements and follow-up can achieve high response rates. However, there is evidence that practices farther from the nearest place to redeem gift cards may be less likely to respond to the survey.

  20. Implications of the Observed Ultraluminous X-Ray Source Luminosity Function

    NASA Technical Reports Server (NTRS)

    Swartz, Douglas A.; Tennant, Allyn; Soria, Roberto; Yukita, Mihoko

    2012-01-01

    We present the X-ray luminosity function (XLF) of ultraluminous X-ray (ULX) sources with 0.3-10.0 keV luminosities in excess of 10^39 erg/s in a complete sample of nearby galaxies. The XLF shows a break or cut-off at high luminosities that deviates from its pure power law distribution at lower luminosities. The cut-off is at roughly the Eddington luminosity for a 90-140 solar mass accretor. We examine the effects on the observed XLF of sample biases, of small-number statistics (at the high luminosity end) and of measurement uncertainties. We consider the physical implications of the shape and normalization of the XLF. The XLF is also compared and contrasted to results of other recent surveys.

  1. Meta-analysis of alcohol price and income elasticities – with corrections for publication bias

    PubMed Central

    2013-01-01

    Background: This paper contributes to the evidence-base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods: Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results: For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions: These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547
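
    The correction strategy above combines trimming of outliers with meta-analytic pooling. The following is a minimal sketch, under made-up numbers, of an inverse-variance weighted summary of reported elasticities before and after a symmetric trim; it is not the paper's cumulative meta-analysis procedure.

```python
# Minimal sketch (not the paper's exact procedure): inverse-variance weighted
# mean of reported price elasticities, before and after trimming extreme
# effect sizes. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
k = 60
true_elasticity = -0.45
se = rng.uniform(0.05, 0.40, size=k)          # reported standard errors
est = rng.normal(true_elasticity, se)         # reported price elasticities
est[:3] = [-2.5, -1.9, 0.9]                   # a few implausible outliers

def iv_summary(e, s):
    """Fixed-effect inverse-variance weighted mean and its standard error."""
    w = 1.0 / s**2
    return np.sum(w * e) / np.sum(w), np.sqrt(1.0 / np.sum(w))

print("untrimmed:", iv_summary(est, se))

# Symmetric trim: drop the most extreme 5% of effect sizes at each end
lo, hi = np.quantile(est, [0.05, 0.95])
keep = (est >= lo) & (est <= hi)
print("trimmed:  ", iv_summary(est[keep], se[keep]))
```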

  2. Selection within households in health surveys

    PubMed Central

    Alves, Maria Cecilia Goi Porto; Escuder, Maria Mercedes Loureiro; Claro, Rafael Moreira; da Silva, Nilza Nunes

    2014-01-01

    OBJECTIVE: To compare the efficiency and accuracy of health survey sampling designs that include or exclude the sampling of individuals within sampled households. METHODS: From a population survey conducted in the Baixada Santista Metropolitan Area, SP, Southeastern Brazil, between 2006 and 2007, 1,000 samples were drawn for each design, and estimates for people aged 18 to 59 and 18 and over were calculated for each sample. In the first design, 40 census tracts, 12 households per tract, and one person per household were sampled. In the second, no sampling within the household was performed; 40 census tracts were sampled, with 6 households per tract for the 18 to 59-year-old group and 5 or 6 for the 18-and-over group. Precision and bias of proportion estimates for 11 indicators were assessed in the two final sets of 1,000 selected samples obtained with the two designs. They were compared by means of relative measures: coefficient of variation, bias/mean ratio, bias/standard error ratio, and relative mean square error. The cost comparison contrasted basic cost per person, cost per household, and the numbers of people and households. RESULTS: Bias was found to be negligible for both designs. Precision was lower and costs were higher in the design that included sampling of individuals within households. CONCLUSIONS: The design excluding individual sampling achieved higher levels of efficiency and accuracy and, accordingly, should be the first choice for investigators. Sampling of household dwellers should be adopted when there are reasons related to the study subject that may lead to bias in individual responses if multiple dwellers answer the proposed questionnaire. PMID:24789641

  3. Sampling bias in an international internet survey of diversion programs in the criminal justice system.

    PubMed

    Hartford, Kathleen; Carey, Robert; Mendonca, James

    2007-03-01

    Despite advances in the storage and retrieval of information within health care systems, health researchers conducting surveys for evaluations still face technical barriers that may lead to sampling bias. The authors describe their experience in administering a Web-based, international survey to English-speaking countries. Identifying the sample was a multistage effort involving (a) searching for published e-mail addresses, (b) conducting Web searches for publicly funded agencies, and (c) performing literature searches, personal contacts, and extensive Internet searches for individuals. After pretesting, the survey was converted into an electronic format accessible by multiple Web browsers. Sampling bias arose from (a) system incompatibility, which did not allow potential respondents to open the survey, (b) varying institutional gate-keeping policies that "recognized" the unsolicited survey as spam, (c) culturally unique program terminology, which confused some respondents, and (d) incomplete sampling frames. Solutions are offered to the first three problems, and the authors note that sampling bias remains a crucial problem.

  4. Nonlinear vs. linear biasing in Trp-cage folding simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
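
    For readers unfamiliar with the two families of embeddings compared above, the sketch below fits a linear (PCA) and a nonlinear (Isomap) two-dimensional embedding with scikit-learn to a synthetic curved data set standing in for conformational coordinates. It does not run metadynamics, and the data are not Trp-cage trajectories.

```python
# Toy comparison of a linear (PCA) and a nonlinear (Isomap) two-dimensional
# embedding of the kind used as collective variables above. The "trajectory"
# is a synthetic noisy S-curve standing in for conformational coordinates,
# not a Trp-cage simulation.
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, t = make_s_curve(n_samples=1500, noise=0.05, random_state=0)

cv_linear = PCA(n_components=2).fit_transform(X)                        # linear collective variables
cv_nonlinear = Isomap(n_neighbors=12, n_components=2).fit_transform(X)  # nonlinear collective variables

# The nonlinear embedding unfolds the curved manifold, so its first coordinate
# typically tracks the underlying "slow" variable t more closely than PC1 does.
print("|corr(PC1, t)|     =", round(abs(np.corrcoef(cv_linear[:, 0], t)[0, 1]), 3))
print("|corr(Isomap1, t)| =", round(abs(np.corrcoef(cv_nonlinear[:, 0], t)[0, 1]), 3))
```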

  5. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
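
    The general idea of reweighting biased reaction selections can be illustrated with a much simpler scheme than ABSIS. The sketch below estimates a rare threshold-crossing probability for a birth-death process by inflating the birth propensity and correcting each path with a likelihood-ratio weight; all rates, the threshold, and the bias factor are made up, and the ABSIS look-ahead machinery is not implemented.

```python
# Generic weighted stochastic simulation sketch (not the ABSIS algorithm):
# estimate the rare probability that a birth-death process reaches a high
# threshold before extinction by inflating the birth propensity and correcting
# each path with a likelihood-ratio weight. Rates, the threshold and the bias
# factor are made up.
import numpy as np

rng = np.random.default_rng(2)
kb, kd = 0.8, 1.0          # per-capita birth and death rates (death-dominated)
x0, target = 10, 40        # initial population and rare threshold

def propensities(x, bias=1.0):
    return np.array([bias * kb * x, kd * x])   # [birth, death]

def one_path(bias):
    """Simulate to absorption; return (hit_target, likelihood-ratio weight)."""
    x, logw = x0, 0.0
    while 0 < x < target:
        a = propensities(x)           # true propensities
        b = propensities(x, bias)     # biased propensities used for simulation
        a0, b0 = a.sum(), b.sum()
        tau = rng.exponential(1.0 / b0)
        j = rng.choice(2, p=b / b0)
        # density ratio of the (tau, j) step under true vs biased dynamics
        logw += (np.log(a[j]) - a0 * tau) - (np.log(b[j]) - b0 * tau)
        x += 1 if j == 0 else -1
    return x >= target, np.exp(logw)

def estimate(bias, n_paths=5000):
    vals = np.array([w if hit else 0.0
                     for hit, w in (one_path(bias) for _ in range(n_paths))])
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_paths)

print("plain Monte Carlo  :", estimate(bias=1.0))
print("importance sampling:", estimate(bias=1.6))  # biased toward births, reweighted
```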

  6. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  7. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  8. Female-biased attraction of Oriental fruit fly, bactrocera dorsalis (Hendel), to a blend of host fruit volatiles from Terminalia catappa L.

    PubMed

    Siderhurst, Matthew S; Jang, Eric B

    2006-11-01

    Coupled gas chromatography-electroantennogram detection (GC-EAD) analysis of volatiles from tropical almond fruit, Terminalia catappa L., revealed 22 compounds that were detected by antennae of oriental fruit fly females, Bactrocera dorsalis (Hendel). Both solid-phase microextraction (SPME) and Porapak Q were used for sampling odors in fruit headspace, with SPME collections producing larger EAD responses from a greater number of compounds. Geranyl acetate and methyl eugenol elicited the largest EAD responses. A synthetic blend containing SPME collected, EAD stimulatory compounds showed female-biased attraction in laboratory wind tunnel bioassays, but heavily male-biased trap captures in a larger olfactometer arena. A nine-component subset of compounds eliciting relatively small EAD responses (EAD minor) and consisting of equal parts ethanol, ethyl acetate, ethyl hexanoate, hexyl acetate, linalyl acetate, ethyl nonanate, nonyl acetate, ethyl cinnamate, and (E)-beta-farnesene, attracted mainly females. This EAD minor blend was as attractive to females and much less attractive to males when compared to torula yeast in field cage experiments using glass McPhail traps. Similar results were obtained with outdoor rotating olfactometer tests in which the EAD minor blend was almost completely inactive for males.

  9. Quantity, topics, methods and findings of randomised controlled trials published by German university departments of general practice - systematic review.

    PubMed

    Heinmüller, Stefan; Schneider, Antonius; Linde, Klaus

    2016-04-23

    Academic infrastructures and networks for clinical research in primary care receive little funding in Germany. We aimed to provide an overview of the quantity, topics, methods and findings of randomised controlled trials published by German university departments of general practice. We searched Scopus (last search done in April 2015), publication lists of institutes and references of included articles. We included randomised trials published between January 2000 and December 2014 with a first or last author affiliated with a German university department of general practice or family medicine. Risk of bias was assessed with the Cochrane tool, and study findings were quantified using standardised mean differences (SMDs). Thirty-three trials met the inclusion criteria. Seventeen were cluster-randomised trials, with a majority investigating interventions aimed at improving processes compared with usual care. Sample sizes varied between 6 and 606 clusters and 168 and 7807 participants. The most frequent methodological problem was risk of selection bias due to recruitment of individuals after randomisation of clusters. Effects of interventions over usual care were mostly small (SMD <0.3). Sixteen trials randomising individual participants addressed a variety of treatment and educational interventions. Sample sizes varied between 20 and 1620 participants. The methodological quality of the trials was highly variable. Again, effects of experimental interventions over controls were mostly small. Despite limited funding, German university institutes of general practice or family medicine are increasingly performing randomised trials. Cluster-randomised trials on practice improvement are a focus, but problems with allocation concealment are frequent.

  10. Comparison of bias-corrected covariance estimators for MMRM analysis in longitudinal data with dropouts.

    PubMed

    Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori

    2017-10-01

    In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis using the UN covariance. However, using the MMRM analysis with the UN covariance could lead to convergence problems for numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with a small sample size. We investigated the performance of the sandwich covariance estimator and covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134) fitting simpler covariance structures through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure without adjustment of the degrees of freedom, which is frequently used in applications, resulted in substantial inflation of the type 1 error rate. We recommend the use of Mancl and DeRouen's estimator in MMRM analysis if the number of subjects completing is (n + 5) or less, where n is the number of planned visits. Otherwise, the use of Kenward and Roger's method with the UN structure is recommended.
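
    As a rough illustration of switching between a conventional robust covariance and a small-sample-corrected one, the sketch below fits a marginal model for simulated repeated measures with statsmodels' GEE and compares the ordinary Liang-Zeger standard errors with the package's bias-reduced covariance, which follows a Mancl-DeRouen-type correction. It is not the MMRM analysis with unstructured covariance, and the Kenward-Roger adjustment discussed above is not reproduced.

```python
# Rough illustration (not the paper's MMRM analysis): fit a marginal model for
# simulated repeated measures with GEE and compare the ordinary robust
# (Liang-Zeger) standard errors with statsmodels' "bias_reduced" covariance,
# a Mancl-DeRouen-type small-sample correction. The Kenward-Roger adjustment
# discussed above is not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_subj, n_visits = 20, 4                       # deliberately small sample
subj = np.repeat(np.arange(n_subj), n_visits)
visit = np.tile(np.arange(n_visits), n_subj)
group = np.repeat(rng.binomial(1, 0.5, n_subj), n_visits)   # treatment indicator

# Outcome with a subject-level random intercept and variance increasing by visit
y = (0.5 * group + 0.2 * visit
     + np.repeat(rng.normal(0.0, 1.0, n_subj), n_visits)
     + rng.normal(0.0, 1.0 + 0.3 * visit))

X = sm.add_constant(pd.DataFrame({"group": group, "visit": visit}))
model = sm.GEE(y, X, groups=subj,
               cov_struct=sm.cov_struct.Exchangeable(),
               family=sm.families.Gaussian())

for cov_type in ("robust", "bias_reduced"):
    res = model.fit(cov_type=cov_type)
    print(cov_type, "SE(group) =", round(res.bse["group"], 3))
```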

  11. Systematic bias in genomic classification due to contaminating non-neoplastic tissue in breast tumor samples.

    PubMed

    Elloumi, Fathi; Hu, Zhiyuan; Li, Yan; Parker, Joel S; Gulley, Margaret L; Amos, Keith D; Troester, Melissa A

    2011-06-30

    Genomic tests are available to predict breast cancer recurrence and to guide clinical decision making. These predictors provide recurrence risk scores along with a measure of uncertainty, usually a confidence interval. The confidence interval conveys random error and not systematic bias. Standard tumor sampling methods make this problematic, as it is common to have a substantial proportion (typically 30-50%) of a tumor sample composed of histologically benign tissue. This "normal" tissue could represent a source of non-random error or systematic bias in genomic classification. To assess the sensitivity of genomic classification to systematic error from normal contamination, we collected 55 tumor samples and paired tumor-adjacent normal tissue. Using genomic signatures from the tumor and paired normal, we evaluated how increasing normal contamination altered recurrence risk scores for various genomic predictors. Simulations of normal tissue contamination caused misclassification of tumors in all predictors evaluated, but different breast cancer predictors showed different types of vulnerability to normal tissue bias. While two predictors had unpredictable direction of bias (either higher or lower risk of relapse resulted from normal contamination), one signature showed a predictable direction of normal tissue effects. Due to this predictable direction of effect, this signature (the PAM50) was adjusted for normal tissue contamination and these corrections improved sensitivity and negative predictive value. For all three assays, quality control standards and/or appropriate bias adjustment strategies can be used to improve assay reliability. Normal tissue sampled concurrently with tumor is an important source of bias in breast genomic predictors. All genomic predictors show some sensitivity to normal tissue contamination and ideal strategies for mitigating this bias vary depending upon the particular genes and computational methods used in the predictor.
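
    The contamination experiment described above can be pictured with a toy mixture: blend a tumor expression profile with its paired normal profile at increasing fractions and track how a simple correlation-to-centroid score drifts. The score below is a stand-in for a real predictor such as PAM50, and everything is simulated.

```python
# Schematic sketch of the contamination experiment described above: mix a
# tumor expression profile with its paired normal profile at increasing
# fractions and watch a toy risk score drift. The correlation-to-centroid
# "score" is a stand-in for a real predictor such as PAM50; all values are
# simulated.
import numpy as np

rng = np.random.default_rng(4)
n_genes = 500
high_risk_centroid = rng.normal(0, 1, n_genes)                 # toy signature centroid
tumor = high_risk_centroid + rng.normal(0, 0.7, n_genes)       # tumor resembles high risk
normal = rng.normal(0, 1, n_genes)                             # adjacent benign tissue

def risk_score(profile):
    return np.corrcoef(profile, high_risk_centroid)[0, 1]

for frac_normal in (0.0, 0.3, 0.5, 0.7):
    mixed = (1 - frac_normal) * tumor + frac_normal * normal
    print(f"normal fraction {frac_normal:.0%}: score = {risk_score(mixed):.2f}")
```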

  12. A review of cognitive biases in youth depression: attention, interpretation and memory.

    PubMed

    Platt, Belinda; Waters, Allison M; Schulte-Koerne, Gerd; Engelmann, Lina; Salemink, Elske

    2017-04-01

    Depression is one of the most common mental health problems in childhood and adolescence. Although data consistently show it is associated with self-reported negative cognitive styles, less is known about the mechanisms underlying this relationship. Cognitive biases in attention, interpretation and memory represent plausible mechanisms and are known to characterise adult depression. We provide the first structured review of studies investigating the nature and causal role of cognitive biases in youth depression. Key questions are (i) do cognitive biases characterise youth depression? (ii) are cognitive biases a vulnerability factor for youth depression? and (iii) do cognitive biases play a causal role in youth depression? We find consistent evidence for positive associations between attention and interpretation biases and youth depression. Stronger biases in youth with an elevated risk of depression support cognitive-vulnerability models. Preliminary evidence from cognitive bias modification paradigms supports a causal role of attention and interpretation biases in youth depression but these paradigms require testing in clinical samples before they can be considered treatment tools. Studies of memory biases in youth samples have produced mixed findings and none have investigated the causal role of memory bias. We identify numerous areas for future research in this emerging field.

  13. Overcoming barriers to effective pain management: the use of professionally directed small group discussions.

    PubMed

    Lewis, C Preston; Corley, Donna J; Lake, Norma; Brockopp, Dorothy; Moe, Krista

    2015-04-01

    Inadequate assessment and management of pain among critical care patients can lead to ineffective care delivery and an increased length of stay. Nurses' lack of knowledge regarding appropriate assessment and treatment, as well as negative biases toward specific patient populations, can lead to poor pain control. Our aim was to evaluate the effectiveness of professionally directed small group discussions on critical care nurses' knowledge and biases related to pain management. A quasi-experiment was conducted at a 383-bed Magnet®-redesignated hospital in the southeastern United States. Critical care nurses (N = 32) participated in the study. A modified Brockopp and Warden Pain Knowledge Questionnaire was administered before and after the small group sessions. These sessions were 45 minutes in length, consisted of two to six nurses per group, and focused on effective pain management strategies. Results indicated that mean knowledge scores differed significantly and in a positive direction after intervention [preintervention mean = 18.28, standard deviation = 2.33; postintervention mean = 22.16, standard deviation = 1.70; t(31) = -8.87, p < .001]. Post-bias scores (amount of time and energy nurses would spend attending to patients' pain) were significantly higher for 6 of 15 patient populations. The strongest bias against treating patients' pain was toward unconscious and mechanically ventilated individuals. After the implementation of professionally directed small group discussions with critical care nurses, knowledge levels related to pain management increased and biases toward specific patient populations decreased. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  14. Differential sea-state bias: A case study using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.; Devalla, B.

    1994-01-01

    We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.

  15. Sol-gel synthesis of Cu-doped p-CdS nanoparticles and their analysis as p-CdS/n-ZnO thin film photodiode

    NASA Astrophysics Data System (ADS)

    Arya, Sandeep; Sharma, Asha; Singh, Bikram; Riyas, Mohammad; Bandhoria, Pankaj; Aatif, Mohammad; Gupta, Vinay

    2018-05-01

    Copper (Cu) doped p-CdS nanoparticles have been synthesized via the sol-gel method. The as-synthesized nanoparticles were characterized and used to fabricate a Glass/ITO/n-ZnO/p-CdS/Al thin film photodiode. The fabricated device was tested at small bias voltages (-1 V to +1 V). The results verified that the junction leakage current in the dark is very small. Under reverse bias, the maximum photocurrent was obtained under illumination of 100 μW/cm2. Electrical characterization confirmed that the external quantum efficiency (EQE), gain, and responsivity of the n-ZnO/p-CdS photodiode show an improved photoresponse compared with conventional p-type materials at such a small bias voltage. This indicates that Cu-doped CdS nanoparticles are an efficient p-type material for the fabrication of thin film photo-devices.

  16. Magnetic braking in Solar-type close binaries

    NASA Astrophysics Data System (ADS)

    Maceroni, C.; Rucinski, S. M.

    In tidally locked binaries the angular momentum loss by magnetic braking affects the orbital period. While this effect is too small to be detected in individual systems, its signature can be seen in the shape of the orbital period distribution of suitable samples. As a consequence, information on the braking mechanisms can be obtained - at least in principle - from the analysis of the distributions, the main problems being the selection of a large and homogeneous sample of binaries and the appropriate treatment of the observational biases. New large databases of variable stars are becoming available as by-products of microlensing projects, which have the advantage of joining, for the first time, sample richness and homogeneity. We report the main results of the analysis of the eclipsing binaries in the OGLE-I catalog, which contains several thousand variables detected in a pencil-beam search volume toward Baade's Window. By means of an automatic filtering algorithm we extracted a sample of 74 detached, equal-mass, main-sequence binary stars with short orbital periods (i.e., in the range 0.19 < P < 8 days) and derived from the presently observed period distribution, after correction for selection effects, the expected slope of the braking law. The results suggest an angular momentum loss (AML) braking law very close to the "saturated" one, with a very weak dependence on the period. However, we are still far from constraining the precise value of the slope, because of the important role played by the observational bias.

  17. A Bayesian Perspective on the Reproducibility Project: Psychology

    PubMed Central

    Etz, Alexander; Vandekerckhove, Joachim

    2016-01-01

    We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable. PMID:26919473

  18. A Bayesian Perspective on the Reproducibility Project: Psychology.

    PubMed

    Etz, Alexander; Vandekerckhove, Joachim

    2016-01-01

    We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors-a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis-for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
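
    The exact Bayes factor computations used in the paper are not reproduced here, but a commonly used rough approximation relates a Bayes factor for two nested models to their BICs, BF10 ≈ exp((BIC_null - BIC_alt)/2). The sketch below applies that approximation to a simulated small-sample two-group comparison.

```python
# Rough BIC approximation to a Bayes factor for a two-group comparison,
# BF10 ~ exp((BIC_null - BIC_alt) / 2). This is a generic approximation, not
# the Bayes factor computation used in the paper above. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_per_group = 30                                          # a "traditional" small sample
group = np.repeat([0.0, 1.0], n_per_group)
y = 0.3 * group + rng.normal(0.0, 1.0, 2 * n_per_group)   # small true effect

alt = sm.OLS(y, sm.add_constant(group)).fit()             # model with a group effect
null = sm.OLS(y, np.ones_like(y)).fit()                   # intercept-only model

bf10 = np.exp((null.bic - alt.bic) / 2.0)
print(f"approximate BF10 = {bf10:.2f} (values near 1 indicate weak evidence)")
```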

  19. An assessment of change in risk perception and optimistic bias for hurricanes among Gulf Coast residents.

    PubMed

    Trumbo, Craig; Meyer, Michelle A; Marlatt, Holly; Peek, Lori; Morrissey, Bridget

    2014-06-01

    This study focuses on levels of concern for hurricanes among individuals living along the Gulf Coast during the quiescent two-year period following the exceptionally destructive 2005 hurricane season. A small study of risk perception and optimistic bias was conducted immediately following Hurricanes Katrina and Rita. Two years later, a follow-up was done in which respondents were recontacted. This provided an opportunity to examine changes, and potential causal ordering, in risk perception and optimistic bias. The analysis uses 201 panel respondents who were matched across the two mail surveys. Measures included hurricane risk perception, optimistic bias for hurricane evacuation, past hurricane experience, and a small set of demographic variables (age, sex, income, and education). Paired t-tests were used to compare scores across time. Hurricane risk perception declined and optimistic bias increased. Cross-lagged correlations were used to test the potential causal ordering between risk perception and optimistic bias, with a weak effect suggesting the former affects the latter. Additional cross-lagged analysis using structural equation modeling was used to look more closely at the components of optimistic bias (risk to self vs. risk to others). A significant and stronger potentially causal effect from risk perception to optimistic bias was found. Analysis of the experience and demographic variables' effects on risk perception and optimistic bias, and their change, provided mixed results. The lessening of risk perception and increase in optimistic bias over the period of quiescence suggest that risk communicators and emergency managers should direct attention toward reversing these trends to increase disaster preparedness. © 2013 Society for Risk Analysis.

  20. Manually controlled human balancing using visual, vestibular and proprioceptive senses involves a common, low frequency neural process

    PubMed Central

    Lakie, Martin; Loram, Ian D

    2006-01-01

    Ten subjects balanced their own body or a mechanically equivalent unstable inverted pendulum by hand, through a compliant spring linkage. Their balancing process was always characterized by repeated small reciprocating hand movements. These bias adjustments were an observable sign of intermittent alterations in neural output. On average, the adjustments occurred at intervals of ∼400 ms. To generate appropriate stabilizing bias adjustments, sensory information about body or load movement is needed. Subjects used visual, vestibular or proprioceptive sensation alone and in combination to perform the tasks. We first ask: is the time between adjustments (bias duration) sensory specific? Vision is associated with slow responses. Other senses involved with balance are known to be faster. Our second question is: does bias duration depend on sensory abundance? An appropriate bias adjustment cannot occur until unplanned motion is unambiguously perceived (a sensory threshold). The addition of more sensory data should therefore expedite action, decreasing the mean bias adjustment duration. Statistical analysis showed that (1) the mean bias adjustment duration was remarkably independent of the sensory modality and (2) the addition of one or two sensory modalities made a small, but significant, decrease in the mean bias adjustment duration. Thus, a threshold effect can alter only a very minor part of the bias duration. The bias adjustment duration in manual balancing must reflect something more than visual sensation and perceptual thresholds; our suggestion is that it is a common central motor planning process. We predict that similar processes may be identified in the control of standing. PMID:16959857

  1. Independent Pixel and Two Dimensional Estimates of LANDSAT-Derived Cloud Field Albedo

    NASA Technical Reports Server (NTRS)

    Chambers, L. H.; Wielicki, Bruce A.; Evans, K. F.

    1996-01-01

    A theoretical study has been conducted on the effects of cloud horizontal inhomogeneity on cloud albedo bias. A two-dimensional (2D) version of the Spherical Harmonic Discrete Ordinate Method (SHDOM) is used to estimate the albedo bias of the plane parallel (PP-IPA) and independent pixel (IPA-2D) approximations for a wide range of 2D cloud fields obtained from LANDSAT. They include single layer trade cumulus, open and closed cell broken stratocumulus, and solid stratocumulus boundary layer cloud fields over ocean. Findings are presented on a variety of averaging scales and are summarized as a function of cloud fraction, mean cloud optical depth, cloud aspect ratio, standard deviation of optical depth, and the gamma distribution parameter ν (a measure of the width of the optical depth distribution). Biases are found to be small for small cloud fraction or mean optical depth, where the cloud fields under study behave linearly. They are large (up to 0.20 for PP-IPA bias, -0.12 for IPA-2D bias) for large ν. On a scene average basis PP-IPA bias can reach 0.30, while IPA-2D bias reaches its largest magnitude at -0.07. Biases due to horizontal transport (IPA-2D) are much smaller than PP-IPA biases but account for 20% RMS of the bias overall. Limitations of this work include the particular cloud field set used, assumptions of conservative scattering, constant cloud droplet size, no gas absorption or surface reflectance, and restriction to 2D radiative transport. The LANDSAT data used may also be affected by radiative smoothing.
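
    The plane-parallel bias arises because albedo is a concave function of optical depth, so the albedo of the mean optical depth exceeds the mean of the pixel albedos in an inhomogeneous field. The toy sketch below makes this concrete with a simple saturating albedo function (a crude two-stream-style stand-in, not SHDOM) and gamma-distributed optical depths of varying width.

```python
# Toy illustration of the plane-parallel (PP-IPA) albedo bias: because albedo
# is a concave function of optical depth, the albedo of the mean optical depth
# exceeds the mean of the pixel albedos in an inhomogeneous field. The albedo
# function is a crude saturating stand-in, not the SHDOM calculation above.
import numpy as np

rng = np.random.default_rng(6)

def albedo(tau, g=0.85):
    """Crude non-absorbing two-stream-style albedo, saturating with tau."""
    t = (1 - g) * tau
    return t / (1.3 + t)

for shape in (0.5, 2.0, 8.0):            # smaller shape = wider optical depth distribution
    tau = rng.gamma(shape, scale=10.0 / shape, size=200_000)   # mean optical depth = 10
    pp = albedo(tau.mean())              # plane-parallel: albedo of the mean
    ipa = albedo(tau).mean()             # independent pixel: mean of the albedos
    print(f"gamma shape {shape:>4}: PP-IPA bias = {pp - ipa:+.3f}")
```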

  2. Comparison of Methods for Analyzing Left-Censored Occupational Exposure Data

    PubMed Central

    Huynh, Tran; Ramachandran, Gurumurthy; Banerjee, Sudipto; Monteiro, Joao; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2014-01-01

    The National Institute for Environmental Health Sciences (NIEHS) is conducting an epidemiologic study (GuLF STUDY) to investigate the health of the workers and volunteers who participated from April to December of 2010 in the response and cleanup of the oil release after the Deepwater Horizon explosion in the Gulf of Mexico. The exposure assessment component of the study involves analyzing thousands of personal monitoring measurements that were collected during this effort. A substantial portion of these data has values reported by the analytic laboratories to be below the limits of detection (LOD). A simulation study was conducted to evaluate three established methods for analyzing data with censored observations to estimate the arithmetic mean (AM), geometric mean (GM), geometric standard deviation (GSD), and the 95th percentile (X0.95) of the exposure distribution: the maximum likelihood (ML) estimation, the β-substitution, and the Kaplan–Meier (K-M) methods. Each method was challenged with computer-generated exposure datasets drawn from lognormal and mixed lognormal distributions with sample sizes (N) varying from 5 to 100, GSDs ranging from 2 to 5, and censoring levels ranging from 10 to 90%, with single and multiple LODs. Using relative bias and relative root mean squared error (rMSE) as the evaluation metrics, the β-substitution method generally performed as well or better than the ML and K-M methods in most simulated lognormal and mixed lognormal distribution conditions. The ML method was suitable for large sample sizes (N ≥ 30) up to 80% censoring for lognormal distributions with small variability (GSD = 2–3). The K-M method generally provided accurate estimates of the AM when the censoring was <50% for lognormal and mixed distributions. The accuracy and precision of all methods decreased under high variability (GSD = 4 and 5) and small to moderate sample sizes (N < 20) but the β-substitution was still the best of the three methods. When using the ML method, practitioners are cautioned to be aware of different ways of estimating the AM as they could lead to biased interpretation. A limitation of the β-substitution method is the absence of a confidence interval for the estimate. More research is needed to develop methods that could improve the estimation accuracy for small sample sizes and high percent censored data and also provide uncertainty intervals. PMID:25261453
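
    The beta-substitution procedure evaluated above is not reproduced here, but the censored-lognormal maximum likelihood idea is easy to sketch: detected values contribute the log-density and non-detects contribute the log-CDF at the LOD. The example below fits that likelihood on simulated data and contrasts it with naive LOD/sqrt(2) substitution; the true GM, GSD, and LOD are made up.

```python
# Minimal sketch of maximum likelihood for left-censored lognormal exposure
# data: detected values contribute the log-density, non-detects contribute the
# log-CDF at the LOD. Compared with naive LOD/sqrt(2) substitution. The
# beta-substitution method evaluated in the study is not reproduced here; the
# true GM, GSD and LOD below are made up.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
gm, gsd, n, lod = 1.0, 3.0, 50, 0.8
x = np.exp(rng.normal(np.log(gm), np.log(gsd), n))
detected = x >= lod
logx_det = np.log(x[detected])
print(f"censoring: {100 * (1 - detected.mean()):.0f}% of values below the LOD")

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll = stats.norm.logpdf(logx_det, mu, sigma).sum()                    # detects
    ll += (~detected).sum() * stats.norm.logcdf(np.log(lod), mu, sigma)  # non-detects
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print("censored ML : GM =", round(np.exp(mu_hat), 2),
      " GSD =", round(np.exp(sigma_hat), 2),
      " AM =", round(np.exp(mu_hat + sigma_hat**2 / 2), 2))

# Naive single-value substitution, for comparison
x_sub = np.where(detected, x, lod / np.sqrt(2))
print("LOD/sqrt(2) : AM =", round(x_sub.mean(), 2))
```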

  3. Effectiveness of tranexamic acid for decreasing bleeding in prostate surgery: a systematic review and meta-analysis.

    PubMed

    Mina, Sergio Hernando; Garcia-Perdomo, Herney Andres

    2018-01-01

    The objective of this study was to determine the effectiveness of tranexamic acid in decreasing bleeding in patients undergoing prostate surgery. All clinical trials were included without language restrictions. The inclusion criteria were as follows: men over 18 years of age who underwent prostate surgery (transurethral, prostate adenectomy, and radical prostatectomy) and received tranexamic acid prior to prostate surgery as a preventive measure for perioperative hemorrhage. Prophylactic tranexamic acid was compared with no intervention or placebo. The primary outcomes were as follows: 1) intraoperative blood loss and 2) the need for red blood cell transfusion. A systematic search was performed in MEDLINE, EMBASE, CENTRAL and LILACS. Other sources were searched for published and unpublished literature. The statistical analysis was performed in Review Manager v.5.3. Four studies were included with a total of 436 patients. Three of the four studies had small sample sizes. There was a low risk of attrition bias and reporting bias. The risk of selection bias, performance bias, and detection bias was unclear. A mean difference (MD) of -174.49 [95% CI (-248.43 to -100.56)] was found for perioperative blood loss (the primary outcome). At the end of the procedure, the hemoglobin concentration had an MD of -1.19 [95% CI (-4.37 to 1.99)]. Tranexamic acid is effective at preventing perioperative blood loss compared with placebo in patients undergoing transurethral resection of the prostate (TURP). However, this treatment was effective neither at preventing the need for transfusion nor at increasing hemoglobin values at the end of the procedure.

  4. Female-biased sex ratio, polygyny, and persistence in the endangered Southwestern Willow Flycatcher (Empidonax traillii extimus)

    USGS Publications Warehouse

    Kus, Barbara E.; Howell, Scarlett; Wood, Dustin A.

    2017-01-01

    Demographic changes in populations, such as skewed sex ratios, are of concern to conservationists, especially in small populations in which stochastic and other events can produce declines leading to extirpation. We documented a decline in one of the few remaining populations of Southwestern Willow Flycatcher (Empidonax traillii extimus) in southern California, USA, which dropped from 40 to 5 adults between 2000 and 2015. Declines were unequal between sexes (94% for males, 82% for females). Adult sex ratios were female-biased in 10 of 16 yr. The proportion of paired males that were polygynous ranged from 0% to 100%, depending on the ratio of females to males in the adult population. Some males paired with up to 5 females simultaneously. We investigated the role of nestling sex ratio in the female-biased adult sex ratio by using genetic techniques to determine sex from blood samples collected from 162 nestlings in 72 nests from 2002 to 2009. Both population-level and within-brood nestling sex ratios were female-biased, and were not influenced by nest order (first or subsequent), parental mating type (monogamous or polygynous), or year. Disproportionately more females than males were recruited into the breeding population, mirroring nestling and fledgling sex ratios. It thus appears that a skewed nestling sex ratio has contributed to a female-biased adult population, which in turn has influenced mating behavior. We propose that the capacity for polygyny, which generally occurs at low levels in Southwestern Willow Flycatchers, has allowed this population to persist through a decline that might otherwise have resulted in extinction.

  5. Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.

    PubMed

    Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel

    2015-12-01

    Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).

  6. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    USGS Publications Warehouse

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
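
    A schematic version of the correction is sketched below: each observed group count is inflated by an assumed detection probability from a sightability model and by the count bias (observers recorded roughly 78% of birds present), and transects are bootstrapped for a standard error. The logistic coefficients, habitat effects, and simulated transects are illustrative, not refit from the study's data.

```python
# Schematic visibility-bias correction: inflate each observed group count by
# an assumed detection probability from a sightability model and by the count
# bias (observers recorded about 78% of birds present), then bootstrap
# transects for a standard error. Coefficients and data are simulated and
# illustrative, not refit from the study.
import numpy as np

rng = np.random.default_rng(8)
n_transects = 30
count_bias = 0.78                         # proportion of birds counted in detected groups

def detection_prob(group_size, forested):
    """Illustrative logistic sightability model (coefficients are made up)."""
    z = -0.5 + 0.03 * group_size - 1.0 * forested
    return 1.0 / (1.0 + np.exp(-z))

# Simulated observations: a list of (observed group size, forested habitat) per transect
transects = []
for _ in range(n_transects):
    n_groups = rng.poisson(3)
    sizes = rng.integers(1, 100, n_groups)
    forested = rng.binomial(1, 0.4, n_groups)
    transects.append(list(zip(sizes, forested)))

def corrected_total(sample):
    total = 0.0
    for groups in sample:
        for size, forest in groups:
            total += size / (detection_prob(size, forest) * count_bias)  # Horvitz-Thompson style
    return total

est = corrected_total(transects)
boot = [corrected_total([transects[i] for i in rng.integers(0, n_transects, n_transects)])
        for _ in range(2000)]
print(f"bias-corrected abundance = {est:.0f} (bootstrap SE = {np.std(boot, ddof=1):.0f})")
```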

  7. Respondent-Driven Sampling: An Assessment of Current Methodology.

    PubMed

    Gile, Krista J; Handcock, Mark S

    2010-08-01

    Respondent-Driven Sampling (RDS) employs a variant of a link-tracing network sampling strategy to collect data from hard-to-reach populations. By tracing the links in the underlying social network, the process exploits the social structure to expand the sample and reduce its dependence on the initial (convenience) sample. The current estimators of population averages make strong assumptions in order to treat the data as a probability sample. We evaluate three critical sensitivities of the estimators: to bias induced by the initial sample, to uncontrollable features of respondent behavior, and to the without-replacement structure of sampling. Our analysis indicates: (1) that the convenience sample of seeds can induce bias, and the number of sample waves typically used in RDS is likely insufficient for the type of nodal mixing required to obtain the reputed asymptotic unbiasedness; (2) that preferential referral behavior by respondents leads to bias; (3) that when a substantial fraction of the target population is sampled the current estimators can have substantial bias. This paper sounds a cautionary note for the users of RDS. While current RDS methodology is powerful and clever, the favorable statistical properties claimed for the current estimates are shown to be heavily dependent on often unrealistic assumptions. We recommend ways to improve the methodology.

  8. Statistical considerations in evaluating pharmacogenomics-based clinical effect for confirmatory trials.

    PubMed

    Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James

    2010-10-01

    Current practice for identifying genomically favorable patients in randomized controlled clinical trials relies on genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small. We articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance and highlight the importance of replicating subgroup findings in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is a loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may be needed in the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more difference in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% difference is of concern.
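
    The "chance of imbalance" calculation mentioned above can be approximated with a quick simulation: for a 1:1 randomized trial and a marker of given prevalence, estimate how often the observed between-arm difference in marker prevalence reaches 20%. The prevalence and sample sizes below are illustrative only.

```python
# Quick simulation of the "chance of imbalance" idea above: in a 1:1
# randomized trial with a genomic marker of given prevalence, how often does
# the observed between-arm difference in marker prevalence reach 20%?
# The prevalence and arm sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(9)
prevalence = 0.3
n_sims = 20000

for n_per_arm in (25, 50, 100, 200):
    p1 = rng.binomial(n_per_arm, prevalence, n_sims) / n_per_arm
    p2 = rng.binomial(n_per_arm, prevalence, n_sims) / n_per_arm
    prob = np.mean(np.abs(p1 - p2) >= 0.20)
    print(f"n = {n_per_arm:>3} per arm: P(imbalance >= 20%) = {prob:.3f}")
```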

  9. Modification of the Sandwich Estimator in Generalized Estimating Equations with Correlated Binary Outcomes in Rare Event and Small Sample Settings

    PubMed Central

    Rogers, Paul; Stoner, Julie

    2016-01-01

    Regression models for correlated binary outcomes are commonly fit using a Generalized Estimating Equations (GEE) methodology. GEE uses the Liang and Zeger sandwich estimator to produce unbiased standard error estimators for regression coefficients in large sample settings even when the covariance structure is misspecified. The sandwich estimator performs optimally in balanced designs when the number of participants is large, and there are few repeated measurements. The sandwich estimator is not without drawbacks; its asymptotic properties do not hold in small sample settings. In these situations, the sandwich estimator is biased downwards, underestimating the variances. In this project, a modified form for the sandwich estimator is proposed to correct this deficiency. The performance of this new sandwich estimator is compared to the traditional Liang and Zeger estimator as well as alternative forms proposed by Morel, Pan and Mancl and DeRouen. The performance of each estimator was assessed with 95% coverage probabilities for the regression coefficient estimators using simulated data under various combinations of sample sizes and outcome prevalence values with an Independence (IND), Autoregressive (AR) and Compound Symmetry (CS) correlation structure. This research is motivated by investigations involving rare-event outcomes in aviation data. PMID:26998504
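
    For the special case of a linear marginal model with an independence working correlation (where the GEE fit reduces to OLS), both the Liang-Zeger sandwich and the Mancl-DeRouen small-sample adjustment can be written in a few lines, as sketched below. This illustrates the estimators being compared above; it is not the modified estimator proposed in the paper.

```python
# Compact sketch of the Liang-Zeger sandwich covariance and the Mancl-DeRouen
# small-sample adjustment for a linear marginal model with an independence
# working correlation (so the GEE fit reduces to OLS). This illustrates the
# estimators compared above; it is not the paper's proposed modification.
import numpy as np

rng = np.random.default_rng(10)
n_clusters, m, p = 15, 6, 2                 # few clusters: a small-sample setting

# Simulated clustered data with a random intercept
X = [np.column_stack([np.ones(m), rng.normal(size=m)]) for _ in range(n_clusters)]
beta_true = np.array([1.0, 0.5])
y = [Xi @ beta_true + rng.normal() + rng.normal(size=m) for Xi in X]

bread = np.linalg.inv(sum(Xi.T @ Xi for Xi in X))
beta = bread @ sum(Xi.T @ yi for Xi, yi in zip(X, y))

meat_lz = np.zeros((p, p))
meat_md = np.zeros((p, p))
for Xi, yi in zip(X, y):
    r = yi - Xi @ beta
    H_ii = Xi @ bread @ Xi.T                      # cluster block of the hat matrix
    r_md = np.linalg.solve(np.eye(m) - H_ii, r)   # Mancl-DeRouen adjusted residuals
    meat_lz += Xi.T @ np.outer(r, r) @ Xi
    meat_md += Xi.T @ np.outer(r_md, r_md) @ Xi

se_lz = np.sqrt(np.diag(bread @ meat_lz @ bread))
se_md = np.sqrt(np.diag(bread @ meat_md @ bread))
print("Liang-Zeger SEs  :", se_lz.round(3))
print("Mancl-DeRouen SEs:", se_md.round(3))       # typically a bit larger with few clusters
```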

  10. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
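
    The classic indirect method of moments referred to above fits the distribution to the moments of the logarithms. The sketch below does this with scipy's pearson3 on a synthetic annual flood series and reads off 10- and 100-year quantiles; the generalized and adaptive mixed-moments methods developed in the paper are not implemented here.

```python
# Minimal sketch of the classic indirect method of moments for the log Pearson
# type 3 distribution: fit a Pearson type 3 to the log-flows via their sample
# mean, standard deviation and skew, then read off design quantiles. The
# generalized and adaptive mixed-moments methods studied in the paper are not
# implemented here; the synthetic flood series is made up.
import numpy as np
from scipy import stats

# Synthetic annual flood series: log10(Q) drawn from a Pearson type 3
true_skew, true_mean, true_std = 0.4, 3.0, 0.25
logq = stats.pearson3.rvs(true_skew, loc=true_mean, scale=true_std,
                          size=60, random_state=11)
flows = 10.0 ** logq

# Indirect method of moments: sample moments of the logarithms
y = np.log10(flows)
m, s, g = y.mean(), y.std(ddof=1), stats.skew(y, bias=False)

for T in (10, 100):                        # return periods in years
    q_T = 10.0 ** stats.pearson3.ppf(1.0 - 1.0 / T, g, loc=m, scale=s)
    print(f"{T}-year flood estimate: {q_T:.0f} (same units as the input flows)")
```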

  11. Active Sensor for Microwave Tissue Imaging with Bias-Switched Arrays.

    PubMed

    Foroutan, Farzad; Nikolova, Natalia K

    2018-05-06

    A prototype of a bias-switched active sensor was developed and measured to establish the achievable dynamic range in a new generation of active arrays for microwave tissue imaging. The sensor integrates a printed slot antenna, a low-noise amplifier (LNA) and an active mixer in a single unit, which is sufficiently small to enable an inter-sensor separation distance as small as 12 mm. The sensor’s input covers the bandwidth from 3 GHz to 7.5 GHz. Its output intermediate frequency (IF) is 30 MHz. The sensor is controlled by a simple bias-switching circuit, which switches the bias of the LNA and the mixer ON and OFF simultaneously. It was demonstrated experimentally that the dynamic range of the sensor, as determined by its ON and OFF states, is 109 dB and 118 dB at resolution bandwidths of 1 kHz and 100 Hz, respectively.

  12. Implicit and explicit weight bias in a national sample of 4,732 medical students: the medical student CHANGES study.

    PubMed

    Phelan, Sean M; Dovidio, John F; Puhl, Rebecca M; Burgess, Diana J; Nelson, David B; Yeazel, Mark W; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle

    2014-04-01

    To examine the magnitude of explicit and implicit weight biases compared to biases against other groups; and identify student factors predicting bias in a large national sample of medical students. A web-based survey was completed by 4,732 1st year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the implicit association test, and 2 measures of explicit bias: a feeling thermometer and the anti-fat attitudes test. A majority of students exhibited implicit (74%) and explicit (67%) weight bias. Implicit weight bias scores were comparable to reported bias against racial minorities. Explicit attitudes were more negative toward obese people than toward racial minorities, gays, lesbians, and poor people. In multivariate regression models, implicit and explicit weight bias was predicted by lower BMI, male sex, and non-Black race. Either implicit or explicit bias was also predicted by age, SES, country of birth, and specialty choice. Implicit and explicit weight bias is common among 1st year medical students, and varies across student factors. Future research should assess implications of biases and test interventions to reduce their impact. Copyright © 2013 The Obesity Society.

  13. When and Why Is Religious Attendance Associated With Antigay Bias and Gay Rights Opposition? A Justification-Suppression Model Approach.

    PubMed

    Hoffarth, Mark Romeo; Hodson, Gordon; Molnar, Danielle S

    2017-04-24

    Even in relatively tolerant countries, antigay bias remains socially divisive, despite being widely viewed as violating social norms of tolerance. From a Justification-Suppression Model (JSM) framework, social norms may generally suppress antigay bias in tolerant countries, yet be "released" by religious justifications among those who resist gay rights progress. Across large, nationally representative US samples (Study 1) and international samples (Study 2, representing a total of 97 different countries), over 215,000 participants, and various indicators of antigay bias (e.g., dislike, moral condemnation, opposing gay rights), individual differences in religious attendance were uniquely associated with greater antigay bias, over and above religious fundamentalism, political ideology, and religious denomination. Moreover, in 4 of 6 multilevel models, religious attendance was associated with antigay bias in countries with greater gay rights recognition, but was unrelated to antigay bias in countries with lower gay rights recognition (Study 2). In Study 3, Google searches for a religious justification ("love the sinner hate the sin") coincided temporally with gay-rights-relevant searches. In U.S. (Study 4) and Canadian (Study 5) samples, much of the association between religious attendance and antigay bias was explained by "sinner-sin" religious justification, with religious attendance not associated with antigay bias when respondents reported relatively low familiarity with this justification (Study 5). These findings suggest that social divisions on homosexuality in relatively tolerant social contexts may be in large part due to religious justifications for antigay bias (consistent with the JSM), with important implications for decreasing bias. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. An Approach to Unbiased Subsample Interpolation for Motion Tracking

    PubMed Central

    McCormick, Matthew M.; Varghese, Tomy

    2013-01-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced-bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder–Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with that of parabolic and cosine interpolation methods; the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique. PMID:23493609
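
    The sketch below illustrates the general idea in one dimension under assumed data: interpolate a sampled similarity function with a Lanczos-windowed sinc kernel and locate its peak with Nelder-Mead. It is a simplification, not the authors' two-dimensional implementation.

        import numpy as np
        from scipy.optimize import minimize

        def lanczos_interp(samples, t, a=4):
            """Windowed-sinc (Lanczos) interpolation of a uniformly sampled
            1-D signal at continuous position t (in sample units)."""
            n = np.arange(len(samples))
            x = t - n
            w = np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)
            return np.dot(samples, w)

        def subsample_peak(similarity, a=4):
            """Locate the peak of a sampled similarity function to subsample
            precision by running Nelder-Mead on the Lanczos interpolant."""
            k0 = float(np.argmax(similarity))          # integer-sample starting point
            res = minimize(lambda t: -lanczos_interp(similarity, t[0], a),
                           x0=[k0], method="Nelder-Mead")
            return res.x[0]

        # Example: a correlation peak whose true maximum lies between samples
        t_true = 10.3
        sim = np.exp(-0.5 * ((np.arange(21) - t_true) / 2.0) ** 2)
        print(subsample_peak(sim))   # close to 10.3, unlike the integer argmax of 10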

  15. Managing Bias in Palliative Care: Professional Hazards in Goals of Care Discussions at the End of Life.

    PubMed

    Callaghan, Katharine A; Fanning, Joseph B

    2018-02-01

    In the setting of end-of-life care, biases can interfere with patient articulation of goals and hinder provision of patient-centered care. No studies have addressed clinician bias or bias management specific to goals of care discussions at the end of life. To identify and determine the prevalence of palliative care clinician biases and bias management strategies in end-of-life goals of care discussions. A semistructured interview guide with relevant domains was developed to facilitate data collection. Participants were asked directly to identify biases and bias management strategies applicable to this setting. Two researchers developed a codebook to identify themes using a 25% transcript sample through an iterative process based on grounded theory. Inter-rater reliability, evaluated using Cohen's κ, was 0.83, indicating near-perfect agreement between coders. The data approached saturation. A purposive sample of 20 palliative care clinicians in Middle Tennessee participated in interviews. The 20 clinicians interviewed identified 16 biases and 11 bias management strategies. The most frequently mentioned bias was a bias against aggressive treatment (n = 9), described as a clinician's assumption that most interventions at the end of life are not beneficial. The most frequently mentioned bias management strategy was self-recognition of bias (n = 17), described as acknowledging that bias is present. This is the first study identifying palliative care clinicians' biases and bias management strategies in end-of-life goals of care discussions.

  16. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. Proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
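
    The simulation below is a generic illustration (with made-up coverage values) of why curtailment matters: under a semicurtailed stopping rule the naive proportion of vaccinated children among those inspected is biased. It does not reproduce the unbiased estimators derived in this record.

        import numpy as np

        def curtailed_naive_estimates(coverage, n=60, d=9, n_sim=20_000, seed=3):
            """Simulate semicurtailed LQAS: sampling stops as soon as `d`
            unvaccinated children are found (or after n children).  Returns the
            naive coverage estimates (vaccinated seen / children inspected)."""
            rng = np.random.default_rng(seed)
            est = np.empty(n_sim)
            for i in range(n_sim):
                vaccinated = rng.random(n) < coverage
                failures = np.cumsum(~vaccinated)
                stop = np.argmax(failures >= d) + 1 if failures[-1] >= d else n
                est[i] = vaccinated[:stop].sum() / stop
            return est

        for p in (0.5, 0.7, 0.9):
            est = curtailed_naive_estimates(p)
            print(f"true coverage {p:.2f}  mean naive estimate {est.mean():.3f}")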

  17. Medical School Experiences Associated with Change in Implicit Racial Bias Among 3547 Students: A Medical Student CHANGES Study Report.

    PubMed

    van Ryn, Michelle; Hardeman, Rachel; Phelan, Sean M; Burgess, Diana J; Dovidio, John F; Herrin, Jeph; Burke, Sara E; Nelson, David B; Perry, Sylvia; Yeazel, Mark; Przedworski, Julia M

    2015-12-01

    Physician implicit (unconscious, automatic) bias has been shown to contribute to racial disparities in medical care. The impact of medical education on implicit racial bias is unknown. To examine the association between change in student implicit racial bias towards African Americans and student reports on their experiences with 1) formal curricula related to disparities in health and health care, cultural competence, and/or minority health; 2) informal curricula including racial climate and role model behavior; and 3) the amount and favorability of interracial contact during school. Prospective observational study involving Web-based questionnaires administered during first (2010) and last (2014) semesters of medical school. A total of 3547 students from a stratified random sample of 49 U.S. medical schools. Change in implicit racial attitudes as assessed by the Black-White Implicit Association Test administered during the first semester and again during the last semester of medical school. In multivariable modeling, having completed the Black-White Implicit Association Test during medical school remained a statistically significant predictor of decreased implicit racial bias (-5.34, p ≤ 0.001: mixed effects regression with random intercept across schools). Students' self-assessed skills regarding providing care to African American patients had a borderline association with decreased implicit racial bias (-2.18, p = 0.056). Having heard negative comments from attending physicians or residents about African American patients (3.17, p = 0.026) and having had unfavorable vs. very favorable contact with African American physicians (18.79, p = 0.003) were statistically significant predictors of increased implicit racial bias. Medical school experiences in all three domains were independently associated with change in student implicit racial attitudes. These findings are notable given that even small differences in implicit racial attitudes have been shown to affect behavior and that implicit attitudes are developed over a long period of repeated exposure and are difficult to change.

  18. State-dependent biasing method for importance sampling in the weighted stochastic simulation algorithm.

    PubMed

    Roh, Min K; Gillespie, Dan T; Petzold, Linda R

    2010-11-07

    The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
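
    As a rough sketch of the underlying machinery, the code below implements a weighted SSA with a fixed biasing parameter gamma applied to the forward reaction of a made-up reversible isomerization, which is the fixed-parameter form the record attributes to the original wSSA; a state-dependent method would replace the constant gamma with a function of the current state. The rate constants, threshold, time horizon, and trajectory count are illustrative assumptions.

        import numpy as np

        def wssa_rare_event_prob(x1=100, x2=0, k1=0.5, k2=2.0, threshold=30,
                                 t_max=3.0, gamma=1.3, n_traj=5000, seed=7):
            """Weighted SSA estimate of P(X2 reaches `threshold` before t_max)
            for the reversible isomerization S1 <-> S2 (rates k1, k2).  Reaction
            selection is biased toward the forward reaction by `gamma`; each
            trajectory carries a likelihood-ratio weight so the probability
            estimate remains unbiased."""
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_traj):
                s1, s2, t, w = x1, x2, 0.0, 1.0
                while t < t_max and s2 < threshold:
                    a1, a2 = k1 * s1, k2 * s2          # true propensities
                    a0 = a1 + a2
                    if a0 == 0.0:
                        break
                    t += rng.exponential(1.0 / a0)     # time step uses the true a0
                    b1, b2 = gamma * a1, a2            # biased selection propensities
                    b0 = b1 + b2
                    if rng.random() < b1 / b0:         # forward reaction chosen
                        w *= (a1 / a0) / (b1 / b0)     # likelihood-ratio correction
                        s1, s2 = s1 - 1, s2 + 1
                    else:
                        w *= (a2 / a0) / (b2 / b0)
                        s1, s2 = s1 + 1, s2 - 1
                if s2 >= threshold and t <= t_max:
                    total += w
            return total / n_traj

        print(wssa_rare_event_prob())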

  19. Limitations and opportunities of whole blood bilirubin measurements by GEM premier 4000®.

    PubMed

    Wang, Li; Albert, Arianne Y K; Jung, Benjamin; Hadad, Keyvan; Lyon, Martha E; Basso, Melanie

    2017-03-29

    Neonatal hyperbilirubinemia has traditionally been screened by either total serum bilirubin or transcutaneous bilirubin. Whole blood bilirubin (TwB) by the GEM Premier 4000® blood gas analyzer (GEM) is a relatively new technology; it provides fast bilirubin results from a small sample volume and can also measure co-oximetry and other analytes. Our clinical study aimed to evaluate the reliability of TwB measured by the GEM and identify analytical and clinical factors that may contribute to possible bias. 440 consecutive healthy newborn samples that had plasma bilirubin ordered for neonatal hyperbilirubinemia screening were included. TwB was first measured using the GEM, after which the remainder of the blood was spun and plasma neonatal bilirubin was measured using the VITROS 5600® (VITROS). 62 samples (14%) were excluded from analysis due to failure in obtaining GEM results. Passing-Bablok regression suggested that the GEM results were negatively biased at low concentrations of bilirubin and positively biased at higher concentrations relative to the VITROS results (y = 1.43x − 61.13). Bland-Altman plots showed an overall negative bias of the GEM bilirubin with a wide range of differences compared to VITROS. Both hemoglobin concentration and hemolysis affected the accuracy of the GEM results. Clinically, male infants had higher mean bilirubin levels, and infants delivered by caesarean section had lower hemoglobin levels. When comparing the number of results below the 40th percentile and above the 95th percentile cut-offs in the Bhutani nomogram which would trigger discharge or treatment, GEM bilirubin exhibited poor sensitivity and poor specificity in contrast to VITROS bilirubin. An imperfect correlation was observed between whole blood bilirubin measured on the GEM4000® and plasma bilirubin on the VITROS 5600®. The contributors to the observed differences between the two instruments were specimen hemolysis and the accuracy of hemoglobin measurements, the latter of which affects the calculation of plasma-equivalent bilirubin. Additionally, the lack of standardization of total bilirubin calibration, particularly in newborn specimens, may also account for some of the disagreement in results.
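
    The short sketch below shows a generic Bland-Altman bias and limits-of-agreement calculation of the kind reported above; the paired values are placeholders, not the study's data.

        import numpy as np

        def bland_altman(method_a, method_b):
            """Bland-Altman comparison of two measurement methods: returns the
            mean difference (bias) and the 95% limits of agreement."""
            a, b = np.asarray(method_a, float), np.asarray(method_b, float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Placeholder paired results (umol/L): whole-blood vs plasma bilirubin
        gem = np.array([150, 180, 210, 250, 300, 120, 90], float)
        vitros = np.array([160, 185, 205, 240, 280, 135, 110], float)
        bias, loa = bland_altman(gem, vitros)
        print(f"bias = {bias:.1f} umol/L, limits of agreement = {loa}")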

  20. Nutrition surveillance using a small open cohort: experience from Burkina Faso.

    PubMed

    Altmann, Mathias; Fermanian, Christophe; Jiao, Boshen; Altare, Chiara; Loada, Martin; Myatt, Mark

    2016-01-01

    Nutritional surveillance remains generally weak and early warning systems are needed in areas with a high burden of acute under-nutrition. In order to enhance insight into nutritional surveillance, a community-based sentinel sites approach, known as the Listening Posts (LP) Project, was piloted in Burkina Faso by Action Contre la Faim (ACF). This paper presents ACF's experience with the LP approach and investigates potential selection and observational biases. Six primary sampling units (PSUs) were selected in each livelihood zone using the centric systematic area sampling methodology. In each PSU, 22 children aged between 6 and 24 months were selected by proximity sampling. The prevalence of GAM for each month from January 2011 to December 2013 was estimated using a Bayesian normal-normal conjugate analysis followed by PROBIT estimation. To validate the LP approach in detecting changes over time, the time trends of MUAC from LP and from five cross-sectional surveys were modelled using polynomial regression and compared by using a Wald test. The differences between prevalence estimates from the two data sources were used to assess selection and observational biases. The 95 % credible interval around GAM prevalence estimates using the LP approach ranged between +6.5 %/-6.0 % on a prevalence of 36.1 % and +3.5 %/-2.9 % on a prevalence of 10.8 %. The LP and cross-sectional survey time trend models were well correlated (p = 0.6337). Although LP showed a slight but significant trend for GAM to decrease over time at a rate of -0.26 %/visit, the prevalence estimates from the two data sources showed good agreement over a 3-year period. The LP methodology has proved to be valid in following trends of GAM prevalence for a period of 3 years without selection bias. However, a slight observational bias was observed, requiring a periodical reselection of the sentinel sites. This kind of surveillance project is suited for use in areas with a high burden of acute under-nutrition where early warning systems are strongly needed. Advocacy is necessary to develop a sustainable nutrition surveillance system and to support the use of surveillance data in guiding nutritional programs.
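
    The sketch below illustrates, with invented numbers and priors, the two-step idea named above: a normal-normal conjugate update of the mean MUAC followed by a PROBIT estimate of GAM prevalence as the normal tail area below a 125 mm cut-off. The actual priors, cut-offs, and SD handling used by ACF are not specified here and are assumptions.

        import numpy as np
        from scipy.stats import norm

        def posterior_mean_muac(muac_mm, prior_mean=135.0, prior_sd=10.0):
            """Normal-normal conjugate update of the mean MUAC, treating the
            observation SD as known and equal to the sample SD (simplification)."""
            x = np.asarray(muac_mm, float)
            n, sigma = len(x), x.std(ddof=1)
            prec = 1.0 / prior_sd**2 + n / sigma**2
            post_mean = (prior_mean / prior_sd**2 + x.sum() / sigma**2) / prec
            return post_mean, sigma

        def probit_gam_prevalence(post_mean, sigma, cutoff=125.0):
            """PROBIT estimate of GAM prevalence: the normal tail area below the
            MUAC cut-off, given the (posterior) mean and the within-sample SD."""
            return norm.cdf((cutoff - post_mean) / sigma)

        rng = np.random.default_rng(11)
        sample = rng.normal(132.0, 11.0, size=132)        # 6 sites x 22 children
        mu, sd = posterior_mean_muac(sample)
        print(f"posterior mean MUAC {mu:.1f} mm, "
              f"GAM prevalence {probit_gam_prevalence(mu, sd):.1%}")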

  1. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  2. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.
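
    Both records describe the same diagnostic: across a body of studies, compute the (rank) correlation between reported effect size and sample size, where a strong negative correlation flags selective publication. The sketch below reproduces that pattern on simulated studies passed through a crude significance filter; it is not the authors' data or exact procedure.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(5)
        n_studies = 200
        n = rng.integers(20, 500, n_studies)                 # per-study sample sizes
        true_d = 0.2                                         # common true effect
        se = np.sqrt(4.0 / n)                                # approximate SE of Cohen's d
        d = rng.normal(true_d, se)                           # observed effect sizes

        # Crude publication filter: keep only "significant" studies
        published = np.abs(d) > 1.96 * se

        rho_all, _ = spearmanr(d, n)
        rho_pub, _ = spearmanr(d[published], n[published])
        print(f"all studies: rho = {rho_all:.2f};  published only: rho = {rho_pub:.2f}")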

  3. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
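
    The sketch below is a toy version of the square-root compression idea described above (store a quantized 2*sqrt(counts) so that the quantization error stays below the Poisson noise); it is not the Bernstein et al. (2010) algorithm itself, and the pixel values are simulated.

        import numpy as np

        def sqrt_compress(counts, step=1.0):
            """Lossy square-root compression: store round(2*sqrt(N)/step) as a
            small integer, so the quantization noise is a fixed fraction of the
            Poisson error sqrt(N)."""
            return np.round(2.0 * np.sqrt(counts) / step).astype(np.int32)

        def sqrt_decompress(codes, step=1.0):
            return (codes * step / 2.0) ** 2

        rng = np.random.default_rng(2)
        pixels = rng.poisson(lam=200.0, size=100_000)        # simulated sky pixels
        codes = sqrt_compress(pixels)
        recovered = sqrt_decompress(codes)

        poisson_sigma = np.sqrt(pixels.mean())
        quant_sigma = (recovered - pixels).std()
        print(f"Poisson noise ~{poisson_sigma:.1f}, added quantization noise ~{quant_sigma:.2f}")
        print(f"code range: {codes.min()}..{codes.max()}")   # few bits before lossless coding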

  4. Comparing State SAT Scores: Problems, Biases, and Corrections.

    ERIC Educational Resources Information Center

    Gohmann, Stephen F.

    1988-01-01

    One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)
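
    The record describes a modified Heckman selection-bias correction. As a point of reference only, the sketch below runs the textbook Heckman two-step on invented data: a probit model of who takes the SAT, an inverse Mills ratio from that model, and an outcome regression with the Mills ratio added. The variables and coefficients are illustrative assumptions, not Gohmann's modification.

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        rng = np.random.default_rng(8)
        n = 5000
        ability = rng.normal(size=n)                 # observed covariate
        z = rng.normal(size=n)                       # selection-only instrument
        u = rng.normal(size=n)                       # selection error
        e = 0.6 * u + 0.8 * rng.normal(size=n)       # outcome error, correlated with u
        take_sat = (0.3 * ability + 0.7 * z + u) > 0 # who takes the test
        score = 500 + 80 * ability + 60 * e          # score, observed only if taken

        # Step 1: probit selection equation and inverse Mills ratio
        Xsel = sm.add_constant(np.column_stack([ability, z]))
        probit = sm.Probit(take_sat.astype(int), Xsel).fit(disp=0)
        xb = Xsel @ probit.params
        mills = norm.pdf(xb) / norm.cdf(xb)

        # Step 2: outcome regression on test takers, augmented with the Mills ratio
        Xout = sm.add_constant(np.column_stack([ability[take_sat], mills[take_sat]]))
        ols = sm.OLS(score[take_sat], Xout).fit()
        print(ols.params)   # the Mills-ratio coefficient absorbs the selection bias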

  5. Estimating the occupancy of spotted owl habitat areas by sampling and adjusting for bias

    Treesearch

    David L. Azuma; James A. Baldwin; Barry R. Noon

    1990-01-01

    A basic sampling scheme is proposed to estimate the proportion of sampled units (Spotted Owl Habitat Areas (SOHAs) or randomly sampled 1000-acre polygon areas (RSAs)) occupied by spotted owl pairs. A bias adjustment for the possibility of missing a pair given its presence on a SOHA or RSA is suggested. The sampling scheme is based on a fixed number of visits to a...

  6. Quality of volatile organic compound data from groundwater and surface water for the National Water-Quality Assessment Program, October 1996–December 2008

    USGS Publications Warehouse

    Bender, David A.; Zogorski, John S.; Mueller, David K.; Rose, Donna L.; Martin, Jeffrey D.; Brenner, Cassandra K.

    2011-01-01

    This report describes the quality of volatile organic compound (VOC) data collected from October 1996 to December 2008 from groundwater and surface-water sites for the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The VOC data described were collected for three NAWQA site types: (1) domestic and public-supply wells, (2) monitoring wells, and (3) surface-water sites. Contamination bias, based on the 90-percent upper confidence limit (UCL) for the 90th percentile of concentrations in field blanks, was determined for VOC samples from the three site types. A way to express this bias is that there is 90-percent confidence that this amount of contamination would be exceeded in no more than 10 percent of all samples (including environmental samples) that were collected, processed, shipped, and analyzed in the same manner as the blank samples. This report also describes how important native water rinsing may be in decreasing carryover contamination, which could be affecting field blanks. The VOCs can be classified into four contamination categories on the basis of the 90-percent upper confidence limit (90-percent UCL) concentration distribution in field blanks. Contamination category 1 includes compounds that were not detected in any field blanks. Contamination category 2 includes VOCs that have a 90-percent UCL concentration distribution in field blanks that is about an order of magnitude lower than the concentration distribution of the environmental samples. Contamination category 3 includes VOCs that have a 90-percent UCL concentration distribution in field blanks that is within an order of magnitude of the distribution in environmental samples. Contamination category 4 includes VOCs that have a 90-percent UCL concentration distribution in field blanks that is at least an order of magnitude larger than the concentration distribution of the environmental samples. Fifty-four of the 87 VOCs analyzed in samples from domestic and public-supply wells were not detected in field blanks (contamination category 1), and 33 VOC were detected in field blanks. Ten of the 33 VOCs had a 90-percent UCL concentration distribution in field blanks that was at least an order of magnitude lower than the concentration distribution in environmental samples (contamination category 2). These 10 VOCs may have had some contamination bias associated with the environmental samples, but the potential contamination bias was negligible in comparison to the environmental data; therefore, the field blanks were assumed to be representative of the sources of contamination bias affecting the environmental samples for these 10 VOCs. Seven VOCs had a 90-percent UCL concentration distribution of the field blanks that was within an order of magnitude of the concentration distribution of the environmental samples (contamination category 3). Sixteen VOCs had a 90-percent UCL concentration distribution in the field blanks that was at least an order of magnitude greater than the concentration distribution of the environmental samples (contamination category 4). Field blanks for these 16 VOCs appear to be nonrepresentative of the sources of contamination bias affecting the environmental samples because of the larger concentration distributions (and sometimes higher frequency of detection) in field blanks than in environmental samples. Forty-three of the 87 VOCs analyzed in samples from monitoring wells were not detected in field blanks (contamination category 1), and 44 VOCs were detected in field blanks. 
Eight of the 44 VOCs had a 90-percent UCL concentration distribution in field blanks that was at least an order of magnitude lower than concentrations in environmental samples (contamination category 2). These eight VOCs may have had some contamination bias associated with the environmental samples, but the potential contamination bias was negligible in comparison to the environmental data; therefore, the field blanks were assumed to be representative. Seven VOCs had a 90-percent UCL concentration distribution in field blanks that was of the same order of magnitude as the concentration distribution of the environmental samples (contamination category 3). Twenty-nine VOCs had a 90-percent UCL concentration distribution in the field blanks that was an order of magnitude greater than the distribution of the environmental samples (contamination category 4). Field blanks for these 29 VOCs appear to be nonrepresentative of the sources of contamination bias to the environmental samples. Fifty-four of the 87 VOCs analyzed in surface-water samples were not detected in field blanks (category 1), and 33 VOC were detected in field blanks. Sixteen of the 33 VOCs had a 90-percent UCL concentration distribution in field blanks that was at least an order of magnitude lower than the concentration distribution in environmental samples (contamination category 2). These 16 VOCs may have had some contamination bias associated with the environmental samples, but the potential contamination bias was negligible in comparison to the environmental data; therefore, the field blanks were assumed to be representative. Ten VOCs had a 90-percent UCL concentration distribution in field blanks that was similar to the concentration distribution of environmental samples (contamination category 3). Seven VOCs had a 90-percent UCL concentration distribution in the field blanks that was greater than the concentration distribution in environmental samples (contamination category 4). Field-blank samples for these seven VOCs appear to be nonrepresentative of the sources of contamination bias to the environmental samples. The relation between the detection of a compound in field blanks and the detection in subsequent environmental samples appears to be minimal. The median minimum percent effectiveness of native water rinsing is about 79 percent for the 19 VOCs detected in more than 5 percent of field blanks from all three site types. The minimum percent effectiveness of native water rinsing (10 percent) was for toluene in surface-water samples, likely because of the large detection frequency of toluene in surface-water samples (about 79 percent) and in the associated field-blank samples (46.5 percent). The VOCs that were not detected in field blanks (contamination category 1) from the three site types can be considered free of contamination bias, and various interpretations for environmental samples, such as VOC detection frequency at multiple assessment levels and comparisons of concentrations to benchmarks, are not limited for these VOCs. A censoring level for making comparisons at different assessment levels among environmental samples could be applied to concentrations of 9 VOCs in samples from domestic and public-supply wells, 16 VOCs in samples from monitoring wells, and 9 VOCs in surface-water samples to account for potential low-level contamination bias associated with these selected VOCs. 
Bracketing the potential contamination by comparing the detection and concentration statistics with no censoring applied to the potential for contamination bias on the basis of the 90-percent UCL for the 90th-percentile concentrations in field blanks may be useful when comparisons to benchmarks are done in a study.
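
    A standard nonparametric way to obtain a 90-percent upper confidence limit for the 90th percentile of field-blank concentrations uses binomial order statistics; the sketch below shows that calculation on assumed blank data. The NAWQA report's exact computation may differ.

        import numpy as np
        from scipy.stats import binom

        def ucl_percentile(blanks, p=0.90, conf=0.90):
            """Nonparametric upper confidence limit for the p-th percentile:
            the smallest order statistic whose rank r satisfies
            P(Binomial(n, p) < r) >= conf."""
            x = np.sort(np.asarray(blanks, float))
            n = len(x)
            for r in range(1, n + 1):
                if binom.cdf(r - 1, n, p) >= conf:
                    return x[r - 1]
            return np.inf   # too few blanks to bound the percentile at this confidence

        # Example: 60 field-blank concentrations (ug/L), mostly non-detects coded as 0
        rng = np.random.default_rng(4)
        blanks = np.where(rng.random(60) < 0.8, 0.0, rng.lognormal(-3, 1, 60))
        print(ucl_percentile(blanks))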

  7. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2 M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min, and after optimization of the UHPLC-MS/MS parameters, validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug-facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Sampling through time and phylodynamic inference with coalescent and birth–death models

    PubMed Central

    Volz, Erik M.; Frost, Simon D. W.

    2014-01-01

    Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173

  9. Characteristics of bias-based harassment incidents reported by a national sample of U.S. adolescents.

    PubMed

    Jones, Lisa M; Mitchell, Kimberly J; Turner, Heather A; Ybarra, Michele L

    2018-06-01

    Using a national sample of youth from the U.S., this paper examines incidents of bias-based harassment by peers that include language about victims' perceived sexual orientation, race/ethnicity, religion, weight or height, or intelligence. Telephone interviews were conducted with youth who were 10-20 years old (n = 791). One in six youth (17%) reported at least one experience with bias-based harassment in the past year. Bias language was a part of over half (52%) of all harassment incidents experienced by youth. Perpetrators of bias-based harassment were similar demographically to perpetrators of non-biased harassment. However, bias-based incidents were more likely to involve multiple perpetrators, longer timeframes and multiple harassment episodes. Even controlling for these related characteristics, the use of bias language in incidents of peer harassment resulted in significantly greater odds that youth felt sad as a result of the victimization, skipped school, avoided school activities, and lost friends, compared to non-biased harassment incidents. Copyright © 2018 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  10. BIASES IN CASTNET FILTER PACK RESULTS ASSOCIATED WITH SAMPLING PROTOCOL

    EPA Science Inventory

    In the current study, single filter weekly (w) results are compared with weekly results aggregated from day and night (dn) weekly samples. Comparisons of the two sampling protocols for all major constituents (SO42-, NO3-, NH4+, HNO3, and SO2) show median bias (MB) of < 5 nmol m-3...

  11. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
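
    As a minimal illustration of the two parameters discussed above, the random-walk simulation below shifts the starting point (a criterion/bias manipulation) or lowers the decision threshold (a speed-accuracy manipulation) and reports choice proportions and mean step counts. It is a generic sequential sampling toy, not the authors' fitted model of the air traffic control task; the drift, noise, and trial counts are assumptions.

        import numpy as np

        def simulate_decisions(drift, threshold, criterion, n_trials=5000,
                               noise=1.0, dt=0.01, seed=6):
            """Accumulate noisy evidence from starting point `criterion` (response
            bias) until an upper or lower bound at +/- `threshold` is crossed
            (speed-accuracy setting).  Returns the proportion of 'conflict'
            responses (upper bound) and the mean number of steps (an RT proxy)."""
            rng = np.random.default_rng(seed)
            choices = np.empty(n_trials)
            steps = np.empty(n_trials)
            for i in range(n_trials):
                x, t = criterion, 0
                while abs(x) < threshold:
                    x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                    t += 1
                choices[i] = float(x >= threshold)
                steps[i] = t
            return choices.mean(), steps.mean()

        # Conflict present (positive drift): shifting the starting point mimics a
        # response-bias instruction; lowering the threshold mimics a speed emphasis.
        for crit, thr in [(0.0, 1.0), (0.3, 1.0), (0.0, 0.5)]:
            p, rt = simulate_decisions(drift=0.5, threshold=thr, criterion=crit)
            print(f"criterion={crit:+.1f} threshold={thr:.1f}  "
                  f"P(conflict)={p:.2f}  mean steps={rt:.0f}")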

  12. A descriptive study of sexual homicide in Canada: implications for police investigation.

    PubMed

    Beauregard, Eric; Martineau, Melissa

    2013-12-01

    Few empirical studies have been conducted that examine the phenomenon of sexual homicide, and among these studies, many have been limited by small sample size. Although interesting and informative, these studies may not be representative of the greater phenomenon of sexual murder and may be subject to sampling bias that could have significant effects on results. The current study aims to provide a descriptive analysis of the largest sample of sexual homicide cases across Canada in the past 62 years. In doing so, the study aims to examine offender and victim characteristics, victim targeting and access, and modus operandi. Findings show that cases of sexual homicide and sexual murderers included in the current study differ in many aspects from the portrait of the sexual murderer and his or her crime depicted in previous studies. The authors' results may prove useful to the police officers responsible for the investigation of these crimes.

  13. More Reasons to be Straightforward: Findings and Norms for Two Scales Relevant to Social Anxiety

    PubMed Central

    Rodebaugh, Thomas L.; Heimberg, Richard G.; Brown, Patrick J.; Fernandez, Katya C.; Blanco, Carlos; Schneier, Franklin R.; Liebowitz, Michael R.

    2011-01-01

    The validity of both the Social Interaction Anxiety Scale and Brief Fear of Negative Evaluation scale has been well-supported, yet the scales have a small number of reverse-scored items that may detract from the validity of their total scores. The current study investigates two characteristics of participants that may be associated with compromised validity of these items: higher age and lower levels of education. In community and clinical samples, the validity of each scale's reverse-scored items was moderated by age, years of education, or both. The straightforward items did not show this pattern. To encourage the use of the straightforward items of these scales, we provide normative data from the same samples as well as two large student samples. We contend that although response bias can be a substantial problem, the reverse-scored questions of these scales do not solve that problem and instead decrease overall validity. PMID:21388781

  14. Photometric redshift requirements for lens galaxies in galaxy-galaxy lensing analyses

    NASA Astrophysics Data System (ADS)

    Nakajima, R.; Mandelbaum, R.; Seljak, U.; Cohn, J. D.; Reyes, R.; Cool, R.

    2012-03-01

    Weak gravitational lensing is a valuable probe of galaxy formation and cosmology. Here we quantify the effects of using photometric redshifts (photo-z) in galaxy-galaxy lensing, for both sources and lenses, both for the immediate goal of using galaxies with photo-z as lenses in the Sloan Digital Sky Survey (SDSS) and as a demonstration of methodology for large, upcoming weak lensing surveys that will by necessity be dominated by lens samples with photo-z. We calculate the bias in the lensing mass calibration as well as consequences for absolute magnitude (i.e. k-corrections) and stellar mass estimates for a large sample of SDSS Data Release 8 (DR8) galaxies. The redshifts are obtained with the template-based photo-z code ZEBRA on the SDSS DR8 ugriz photometry. We assemble and characterize the calibration samples (˜9000 spectroscopic redshifts from four surveys) to obtain photometric redshift errors and lensing biases corresponding to our full SDSS DR8 lens and source catalogues. Our tests of the calibration sample also highlight the impact of observing conditions in the imaging survey when the spectroscopic calibration covers a small fraction of its footprint; atypical imaging conditions in calibration fields can lead to incorrect conclusions regarding the photo-z of the full survey. For the SDSS DR8 catalogue, we find σΔz/(1+z)= 0.096 and 0.113 for the lens and source catalogues, with flux limits of r= 21 and 21.8, respectively. The photo-z bias and scatter is a function of photo-z and template types, which we exploit to apply photo-z quality cuts. By using photo-z rather than spectroscopy for lenses, dim blue galaxies and L* galaxies up to z˜ 0.4 can be used as lenses, thus expanding into unexplored areas of parameter space. We also explore the systematic uncertainty in the lensing signal calibration when using source photo-z, and both lens and source photo-z; given the size of existing training samples, we can constrain the lensing signal calibration (and therefore the normalization of the surface mass density) to within 2 and 4 per cent, respectively.

  15. Easy and accurate reconstruction of whole HIV genomes from short-read sequence data with shiver

    PubMed Central

    Blanquart, François; Golubchik, Tanya; Gall, Astrid; Bakker, Margreet; Bezemer, Daniela; Croucher, Nicholas J; Hall, Matthew; Hillebregt, Mariska; Ratmann, Oliver; Albert, Jan; Bannert, Norbert; Fellay, Jacques; Fransen, Katrien; Gourlay, Annabelle; Grabowski, M Kate; Gunsenheimer-Bartmeyer, Barbara; Günthard, Huldrych F; Kivelä, Pia; Kouyos, Roger; Laeyendecker, Oliver; Liitsola, Kirsi; Meyer, Laurence; Porter, Kholoud; Ristola, Matti; van Sighem, Ard; Cornelissen, Marion; Kellam, Paul; Reiss, Peter

    2018-01-01

    Abstract Studying the evolution of viruses and their molecular epidemiology relies on accurate viral sequence data, so that small differences between similar viruses can be meaningfully interpreted. Despite its higher throughput and more detailed minority variant data, next-generation sequencing has yet to be widely adopted for HIV. The difficulty of accurately reconstructing the consensus sequence of a quasispecies from reads (short fragments of DNA) in the presence of large between- and within-host diversity, including frequent indels, may have presented a barrier. In particular, mapping (aligning) reads to a reference sequence leads to biased loss of information; this bias can distort epidemiological and evolutionary conclusions. De novo assembly avoids this bias by aligning the reads to themselves, producing a set of sequences called contigs. However contigs provide only a partial summary of the reads, misassembly may result in their having an incorrect structure, and no information is available at parts of the genome where contigs could not be assembled. To address these problems we developed the tool shiver to pre-process reads for quality and contamination, then map them to a reference tailored to the sample using corrected contigs supplemented with the user’s choice of existing reference sequences. Run with two commands per sample, it can easily be used for large heterogeneous data sets. We used shiver to reconstruct the consensus sequence and minority variant information from paired-end short-read whole-genome data produced with the Illumina platform, for sixty-five existing publicly available samples and fifty new samples. We show the systematic superiority of mapping to shiver’s constructed reference compared with mapping the same reads to the closest of 3,249 real references: median values of 13 bases called differently and more accurately, 0 bases called differently and less accurately, and 205 bases of missing sequence recovered. We also successfully applied shiver to whole-genome samples of Hepatitis C Virus and Respiratory Syncytial Virus. shiver is publicly available from https://github.com/ChrisHIV/shiver. PMID:29876136

  16. Biases in Total Precipitable Water Vapor Climatologies from Atmospheric Infrared Sounder and Advanced Microwave Scanning Radiometer

    NASA Technical Reports Server (NTRS)

    Fetzer, Eric J.; Lambrigtsen, Bjorn H.; Eldering, Annmarie; Aumann, Hartmut H.; Chahine, Moustafa T.

    2006-01-01

    We examine differences in total precipitable water vapor (PWV) from the Atmospheric Infrared Sounder (AIRS) and the Advanced Microwave Scanning Radiometer (AMSR-E) experiments sharing the Aqua spacecraft platform. Both systems provide estimates of PWV over water surfaces. We compare AIRS and AMSR-E PWV to constrain AIRS retrieval uncertainties as functions of AIRS-retrieved infrared cloud fraction. PWV differences between the two instruments vary only weakly with infrared cloud fraction up to about 70%. Maps of AIRS-AMSR-E PWV differences vary with location and season. Observational biases, when both instruments observe identical scenes, are generally less than 5%. Exceptions are in cold air outbreaks where AIRS is biased moist by 10-20% or 10-60% (depending on retrieval processing) and at high latitudes in winter where AIRS is dry by 5-10%. Sampling biases, from different sampling characteristics of AIRS and AMSR-E, vary in sign and magnitude. AIRS sampling is dry by up to 30% in most high-latitude regions but moist by 5-15% in subtropical stratus cloud belts. Over the northwest Pacific, AIRS samples conditions more moist than AMSR-E by as much as 60%. We hypothesize that both wet and dry sampling biases are due to the effects of clouds on the AIRS retrieval methodology. The sign and magnitude of these biases depend upon the types of cloud present and on the relationship between clouds and PWV. These results for PWV imply that climatologies of height-resolved water vapor from AIRS must take into consideration local meteorological processes affecting AIRS sampling.

  17. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  18. Subtle In-Scanner Motion Biases Automated Measurement of Brain Anatomy From In Vivo MRI

    PubMed Central

    Alexander-Bloch, Aaron; Clasen, Liv; Stockman, Michael; Ronan, Lisa; Lalonde, Francois; Giedd, Jay; Raznahan, Armin

    2016-01-01

    While the potential for small amounts of motion in functional magnetic resonance imaging (fMRI) scans to bias the results of functional neuroimaging studies is well appreciated, the impact of in-scanner motion on morphological analysis of structural MRI is relatively under-studied. Even among “good quality” structural scans, there may be systematic effects of motion on measures of brain morphometry. In the present study, the subjects’ tendency to move during fMRI scans, acquired in the same scanning sessions as their structural scans, yielded a reliable, continuous estimate of in-scanner motion. Using this approach within a sample of 127 children, adolescents, and young adults, significant relationships were found between this measure and estimates of cortical gray matter volume and mean curvature, as well as trend-level relationships with cortical thickness. Specifically, cortical volume and thickness decreased with greater motion, and mean curvature increased. These effects of subtle motion were anatomically heterogeneous, were present across different automated imaging pipelines, showed convergent validity with effects of frank motion assessed in a separate sample of 274 scans, and could be demonstrated in both pediatric and adult populations. Thus, using different motion assays in two large non-overlapping sets of structural MRI scans, convergent evidence showed that in-scanner motion—even at levels which do not manifest in visible motion artifact—can lead to systematic and regionally specific biases in anatomical estimation. These findings have special relevance to structural neuroimaging in developmental and clinical datasets, and inform ongoing efforts to optimize neuroanatomical analysis of existing and future structural MRI datasets in non-sedated humans. PMID:27004471

  19. Biases in Time-Averaged Field and Paleosecular Variation Studies

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Constable, C.

    2009-12-01

    Challenges to constructing time-averaged field (TAF) and paleosecular variation (PSV) models of Earth’s magnetic field over million year time scales are the uneven geographical and temporal distribution of paleomagnetic data and the absence of full vector records of the magnetic field variability at any given site. Recent improvements in paleomagnetic data sets now allow regional assessment of the biases introduced by irregular temporal sampling and the absence of full vector information. We investigate these effects over the past few Myr for regions with large paleomagnetic data sets, where the TAF and/or PSV have been of previous interest (e.g., significant departures of the TAF from the field predicted by a geocentric axial dipole). We calculate the effects of excluding paleointensity data from TAF calculations, and find these to be small. For example, at Hawaii, we find that for the past 50 ka, estimates of the TAF direction are minimally affected if only paleodirectional data versus the full paleofield vector are used. We use resampling techniques to investigate biases incurred by the uneven temporal distribution. Key to the latter issue is temporal information on a site-by-site basis. At Hawaii, resampling of the paleodirectional data onto a uniform temporal distribution, assuming no error in the site ages, reduces the magnitude of the inclination anomaly for the Brunhes, Gauss and Matuyama epochs. However inclusion of age errors in the sampling procedure leads to TAF estimates that are close to those reported for the original data sets. We discuss the implications of our results for global field models.

  20. Investigating the (cost-) effectiveness of attention bias modification (ABM) for outpatients with major depressive disorder (MDD): a randomized controlled trial protocol.

    PubMed

    Ferrari, Gina R A; Becker, Eni S; Smit, Filip; Rinck, Mike; Spijker, Jan

    2016-11-03

    Despite the range of available, evidence-based treatment options for Major Depressive Disorder (MDD), the rather low response and remission rates suggest that treatment is not optimal, yet. Computerized attention bias modification (ABM) trainings may have the potential to be provided as cost-effective intervention as adjunct to usual care (UC), by speeding up recovery and bringing more patients into remission. Research suggests, that a selective attention for negative information contributes to development and maintenance of depression and that reducing this negative bias might be of therapeutic value. Previous ABM studies in depression, however, have been limited by small sample sizes, lack of long-term follow-up measures or focus on sub-clinical samples. This study aims at evaluating the long-term (cost-) effectiveness of internet-based ABM, as add-on treatment to UC in adult outpatients with MDD, in a specialized mental health care setting. This study presents a double-blind randomized controlled trial in two parallel groups with follow-ups at 1, 6, and 12 months, combined with an economic evaluation. One hundred twenty six patients, diagnosed with MDD, who are registered for specialized outpatient services at a mental health care organization in the Netherlands, are randomized into either a positive training (towards positive and away from negative stimuli) or a sham training, as control condition (continuous attentional bias assessment). Patients complete eight training sessions (seven at home) during a period of two weeks (four weekly sessions). Primary outcome measures are change in attentional bias (pre- to post-test), mood response to stress (at post-test) and long-term effects on depressive symptoms (up to 1-year follow-up). Secondary outcome measures include rumination, resilience, positive and negative affect, and transfer to other cognitive measures (i.e., attentional bias for verbal stimuli, cognitive control, positive mental imagery), as well as quality of life and costs. This is the first study investigating the long-term effects of ABM in adult outpatients with MDD, alongside an economic evaluation. Next to exploring the mechanism underlying ABM effects, this study provides first insight into the effects of combining ABM and UC and the potential implementation of ABM in clinical practice. Trialregister.nl, NTR5285 . Registered 20 July 2015.

  1. Sampling of temporal networks: Methods and biases

    NASA Astrophysics Data System (ADS)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure, and thus caution is necessary when generalizing results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths, and epidemic spread. We find that some biases are common across a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling method be problem oriented, to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
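
    As a toy illustration of the kind of comparison the paper performs, the sketch below subsamples a synthetic temporal contact list by uniform node sampling (the strategy reported to perform best in most scenarios) and compares one simple statistic, mean link activity, before and after subsampling. The event-list format and all numbers are assumptions made for the example, not the paper's data or code.

    ```python
    import random
    from collections import Counter

    random.seed(1)

    # synthetic temporal network: contact events stored as (node_u, node_v, timestamp)
    events = [(random.randrange(100), random.randrange(100), t)
              for t in range(1000) for _ in range(5)]

    def uniform_node_sample(events, keep_fraction):
        """Keep a uniform random subset of nodes and only the events among them."""
        nodes = {u for u, v, t in events} | {v for u, v, t in events}
        kept = set(random.sample(sorted(nodes), int(keep_fraction * len(nodes))))
        return [(u, v, t) for u, v, t in events if u in kept and v in kept]

    def mean_link_activity(events):
        """Average number of events per distinct undirected link."""
        counts = Counter(frozenset((u, v)) for u, v, t in events if u != v)
        return sum(counts.values()) / len(counts)

    sub = uniform_node_sample(events, keep_fraction=0.5)
    print("full network      :", round(mean_link_activity(events), 2))
    print("50% of nodes kept :", round(mean_link_activity(sub), 2))   # gauge the bias in this statistic
    ```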

  2. Assessment of cognitive bias in decision-making and leadership styles among critical care nurses: a mixed methods study.

    PubMed

    Lean Keng, Soon; AlQudah, Hani Nawaf Ibrahim

    2017-02-01

    To raise awareness of critical care nurses' cognitive bias in decision-making, its relationship with leadership styles, and its impact on care delivery. The relationship between critical care nurses' decision-making and leadership styles in hospitals has been widely studied, but the influence of cognitive bias on decision-making and leadership styles in critical care environments remains poorly understood, particularly in Jordan. The study used a two-phase mixed methods sequential explanatory design and grounded theory, set in the critical care unit of Prince Hamza Hospital, Jordan. Participants were recruited by convenience sampling in Phase 1 (quantitative, n = 96) and purposive sampling in Phase 2 (qualitative, n = 20). A pilot-tested quantitative survey of 96 critical care nurses was conducted in 2012, followed by qualitative in-depth interviews, informed by the quantitative results, with 20 critical care nurses in 2013. Quantitative data were analysed with descriptive statistics and simple linear regression; qualitative data were analysed thematically (constant comparison). The quantitative phase found correlations between rationality and cognitive bias, rationality and task-oriented leadership styles, cognitive bias and democratic communication styles, and cognitive bias and task-oriented leadership styles. The qualitative phase identified 'being competent', 'organizational structures', 'feeling self-confident' and 'being supported' in the work environment as key factors influencing critical care nurses' cognitive bias in decision-making and leadership styles. Cognitive bias in decision-making and leadership styles had a two-way impact (strengthening and weakening) on critical care nurses' practice performance. There is a need to heighten critical care nurses' consciousness of cognitive bias in decision-making and leadership styles and its impact, and to develop organization-level strategies to increase non-biased decision-making. © 2016 John Wiley & Sons Ltd.

  3. Randomized controlled trial of attention bias modification in a racially diverse, socially anxious, alcohol dependent sample.

    PubMed

    Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P

    2016-12-01

    Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Randomized Controlled Trial of Attention Bias Modification in a Racially Diverse, Socially Anxious, Alcohol Dependent Sample

    PubMed Central

    Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.

    2016-01-01

    Objective Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918

  5. A “Scientific Diversity” Intervention to Reduce Gender Bias in a Sample of Life Scientists

    PubMed Central

    Moss-Racusin, Corinne A.; van der Toorn, Jojanneke; Dovidio, John F.; Brescoll, Victoria L.; Graham, Mark J.; Handelsman, Jo

    2016-01-01

    Mounting experimental evidence suggests that subtle gender biases favoring men contribute to the underrepresentation of women in science, technology, engineering, and mathematics (STEM), including many subfields of the life sciences. However, there are relatively few evaluations of diversity interventions designed to reduce gender biases within the STEM community. Because gender biases distort the meritocratic evaluation and advancement of students, interventions targeting instructors’ biases are particularly needed. We evaluated one such intervention, a workshop called “Scientific Diversity” that was consistent with an established framework guiding the development of diversity interventions designed to reduce biases and was administered to a sample of life science instructors (N = 126) at several sessions of the National Academies Summer Institute for Undergraduate Education held nationwide. Evidence emerged indicating the efficacy of the “Scientific Diversity” workshop, such that participants were more aware of gender bias, expressed less gender bias, and were more willing to engage in actions to reduce gender bias 2 weeks after participating in the intervention compared with 2 weeks before the intervention. Implications for diversity interventions aimed at reducing gender bias and broadening the participation of women in the life sciences are discussed. PMID:27496360

  6. Phylogenetic versus functional signals in the evolution of form-function relationships in terrestrial vision.

    PubMed

    Motani, Ryosuke; Schmitz, Lars

    2011-08-01

    Phylogeny is deeply pertinent to evolutionary studies. Traits that perform a body function are expected to be strongly influenced by the physical "requirements" of that function. We investigated whether such traits exhibit phylogenetic signals and, if so, how phylogenetic noise biases the quantification of form-function relationships. A form-function system that is strongly influenced by physics, namely the relationship between eye morphology and visual optics in amniotes, was used. We quantified the correlation between form (i.e., eye morphology) and function (i.e., ocular optics) while varying the level of phylogenetic bias removal by adjusting Pagel's λ. Ocular soft-tissue dimensions exhibited the highest correlation with ocular optics when 1% of the phylogenetic bias expected from Brownian motion was removed (i.e., λ = 0.01); the corresponding value for hard-tissue data was 8%. A small degree of phylogenetic bias therefore exists in the morphology despite the stringent functional constraints. We also devised a phylogenetically informed discriminant analysis and recorded the effects of phylogenetic bias on this method using the same data. Use of proper λ values during phylogenetic bias removal reduced misidentification rates in the resulting classifications when prior probabilities were assumed to be equal. Even a small degree of phylogenetic bias affected the classification resulting from phylogenetically informed discriminant analysis. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.

  7. Forest inventory and stratified estimation: a cautionary note

    Treesearch

    John Coulston

    2008-01-01

    The Forest Inventory and Analysis (FIA) Program uses stratified estimation techniques to produce estimates of forest attributes. Stratification must be unbiased and stratification procedures should be examined to identify any potential bias. This note explains simple techniques for identifying potential bias, discriminating between sample bias and stratification bias,...

  8. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. To address the first question, on temporal sampling, we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K. P., C. G. Constable, and C. L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.

  9. SEMIPARAMETRIC ADDITIVE RISKS REGRESSION FOR TWO-STAGE DESIGN SURVIVAL STUDIES

    PubMed Central

    Li, Gang; Wu, Tong Tong

    2011-01-01

    In this article we study a semiparametric additive risks model (McKeague and Sasieni (1994)) for two-stage design survival data where accurate information is available only on second stage subjects, a subset of the first stage study. We derive two-stage estimators by combining data from both stages. Large sample inferences are developed. As a by-product, we also obtain asymptotic properties of the single stage estimators of McKeague and Sasieni (1994) when the semiparametric additive risks model is misspecified. The proposed two-stage estimators are shown to be asymptotically more efficient than the second stage estimators. They also demonstrate smaller bias and variance for finite samples. The developed methods are illustrated using small intestine cancer data from the SEER (Surveillance, Epidemiology, and End Results) Program. PMID:21931467

  10. The effectiveness of interventions in workplace health promotion as to maintain the working capacity of health care personal.

    PubMed

    Buchberger, Barbara; Heymann, Romy; Huppertz, Hendrik; Friepörtner, Katharina; Pomorin, Natalie; Wasem, Jürgen

    2011-01-01

    The increasing proportion of elderly people, with their respective care requirements, within the total population stands in contrast to an aging workforce and staff reductions in the field of health care, where employees are exposed to high load factors. Health promotion interventions may be one way to improve work situations and behavior. A systematic literature search was conducted in 32 databases, limited to English and German publications since 1990. Moreover, internet searches were performed and the reference lists of identified articles were scanned. The selection of literature was done by two reviewers independently, according to inclusion and exclusion criteria. Data extraction and tables of evidence were verified by a second expert, as was the assessment of risk of bias by means of the Cochrane Collaboration's tool. We identified eleven intervention studies and two systematic reviews. There were three randomized controlled trials (RCTs) and one controlled trial without randomization (CCT) on the improvement of physical health, four RCTs and two CCTs on the improvement of psychological health, and one RCT on both. Study duration ranged from four weeks to two years, and the number of participants included ranged from 20 to 345, with a median of 56. Interventions and populations were predominantly heterogeneous. In three studies, interventions for the improvement of physical health resulted in fewer complaints and increased strength and flexibility, with statistically significant differences between groups. Regarding psychological health, interventions led to significantly decreased intake of analgesics, better stress management, better coping with workload, improved communication skills, and advanced training. Taking into consideration the small to very small sample sizes and other methodological flaws, such as a high potential for bias and poor quality of reporting, the validity of the results has to be considered limited. Due to the heterogeneity of the health interventions, of the study populations with differing job specializations, and of the study durations and follow-up periods, a comparison of results would not be meaningful. Further research is necessary, with larger sample sizes, sufficient study duration and follow-up, a lower risk of bias, consideration of relevant quality criteria, and better reporting in publications.

  11. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is less serious.
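
    The linear-regime relation stated above, that the relative cosmic variance of a galaxy sample is the galaxy bias times the dark matter cosmic variance, is simple enough to express directly. In the sketch below the numerical inputs are placeholders only; in practice σ_DM and the bias come from the paper's fitting function, tables, or software tool, none of which are reproduced here.

    ```python
    def relative_cosmic_variance(sigma_dm, galaxy_bias):
        """Linear-regime relation from the abstract: sigma_gal = b * sigma_dm.
        Both inputs must be supplied from the paper's fitting functions or
        tables (or a halo occupation model); the values below are placeholders."""
        return galaxy_bias * sigma_dm

    # placeholder inputs for a GOODS-like field at mean redshift ~2, bin size 0.5
    sigma_dm_field = 0.10   # assumed dark-matter relative cosmic variance
    bias_massive = 3.8      # assumed bias for a massive galaxy sample

    print(f"relative cosmic variance ~ {relative_cosmic_variance(sigma_dm_field, bias_massive):.0%}")
    ```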

  12. Influence of growth conditions on exchange bias of NiMn-based spin valves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wienecke, Anja; Kruppe, Rahel; Rissing, Lutz

    2015-05-07

    As shown in previous investigations, a correlation between a NiMn-based spin valve's thermal stability and its inherent exchange bias exists, even if the blocking temperature of the antiferromagnet is clearly above the heating temperature and the reason for thermal degradation is mainly diffusion and not the loss of exchange bias. Samples with high exchange bias are thermally more stable than samples with low exchange bias. Those structures promoting a high exchange bias are seemingly the same ones suppressing thermally induced diffusion processes (A. Wienecke and L. Rissing, “Relationship between thermal stability and layer-stack/structure of NiMn-based GMR systems,” in IEEE Transaction on Magnetic Conference (EMSA 2014)). Many investigations were carried out on the influence of the sputtering parameters as well as the layer thickness on the magnetoresistive effect. The influence of these parameters on the exchange bias and on the sample's thermal stability, respectively, was hardly taken into account. The investigation described here concentrates on the latter issue. The focus lies on the influence of the sputtering parameters and layer thicknesses of the “starting layers” in the stack and of the layers forming the (synthetic) antiferromagnet. This paper includes a guideline for the evaluated sputtering conditions and layer thicknesses to realize a high exchange bias and presumably good thermal stability for NiMn-based spin valves with a synthetic antiferromagnet.

  13. Methodological approaches in analysing observational data: A practical example on how to address clustering and selection bias.

    PubMed

    Trutschel, Diana; Palm, Rebecca; Holle, Bernhard; Simon, Michael

    2017-11-01

    Because not every scientific question on effectiveness can be answered with randomised controlled trials, research methods that minimise bias in observational studies are required. Two major concerns influence the internal validity of effect estimates: selection bias and clustering. Hence, to reduce the bias of the effect estimates, more sophisticated statistical methods are needed. Our aim is to introduce statistical approaches such as propensity score matching and mixed models into representative real-world analyses; the implementation in the statistical software R is presented so that the results can be reproduced. We perform a two-level analytic strategy to address the problems of bias and clustering: (i) generalised models with different abilities to adjust for dependencies are used to analyse binary data and (ii) genetic matching and covariate adjustment methods are used to adjust for selection bias. We then analyse the data from two population samples: the sample produced by the matching method and the full sample. The different analysis methods present different results but still point in the same direction. In our example, the estimated probability of receiving a case conference is higher in the treatment group than in the control group. Both strategies, genetic matching and covariate adjustment, have their limitations but complement each other to provide the whole picture. The statistical approaches were feasible for reducing bias but were nevertheless limited by the sample used. For each study and obtained sample, the pros and cons of the different methods have to be weighed. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
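
    The paper implements its analyses in R; as a language-consistent illustration of one of the ideas it introduces, the sketch below estimates propensity scores with a logistic model, performs 1:1 nearest-neighbour matching on the score (with replacement), and compares the naive and matched outcome differences on simulated data. The genetic matching and mixed models used in the paper are not reproduced, and every variable name here is hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(42)

    # simulated observational data: covariates influence both treatment and outcome
    n = 2000
    x = rng.normal(size=(n, 3))
    treat = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1]))))
    outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * treat + x[:, 0] - 0.3 * x[:, 2]))))

    # 1) propensity scores from a logistic model
    ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

    # 2) 1:1 nearest-neighbour matching of treated units to controls on the score
    treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched_control = control[idx.ravel()]

    # 3) outcome comparison: unadjusted full sample vs propensity-matched sample
    print("naive difference  :", round(outcome[treated].mean() - outcome[control].mean(), 3))
    print("matched difference:", round(outcome[treated].mean() - outcome[matched_control].mean(), 3))
    ```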

  14. The second Southern African Bird Atlas Project: Causes and consequences of geographical sampling bias.

    PubMed

    Hugo, Sanet; Altwegg, Res

    2017-09-01

    Using the Southern African Bird Atlas Project (SABAP2) as a case study, we examine the possible determinants of spatial bias in volunteer sampling effort and how well such biased data represent environmental gradients across the area covered by the atlas. For each province in South Africa, we used generalized linear mixed models to determine the combination of variables that explains spatial variation in sampling effort (number of visits per 5' × 5' grid cell, or "pentad"). The explanatory variables were distance to major roads and to exceptional birding locations or "sampling hubs"; percentage cover of protected, urban, and cultivated area; and the climate variables mean annual precipitation, winter temperatures, and summer temperatures. Further, we used the climate variables and plant biomes to define subsets of pentads representing environmental zones across South Africa, Lesotho, and Swaziland. For each environmental zone, we quantified sampling intensity, and we assessed sampling completeness with species accumulation curves fitted to the asymptotic Lomolino model. Sampling effort was highest close to sampling hubs, major roads, urban areas, and protected areas. Cultivated area and the climate variables were less important. Further, environmental zones were not evenly represented by the current data, and the zones varied in the amount of sampling required to represent the species that are present. SABAP2 volunteers' preferences in birding locations cause spatial bias in the dataset that should be taken into account when analyzing these data. Large parts of South Africa remain underrepresented, which may restrict the kind of ecological questions that can be addressed. However, sampling bias may be reduced by directing volunteers toward undersampled regions while taking volunteer preferences into account.
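
    The sampling-effort model described above can be sketched in simplified form as a Poisson regression of visits per pentad on accessibility covariates. The published analysis used generalized linear mixed models with additional covariates and random effects, which are omitted here; the data frame, covariate names, and coefficients below are all invented for illustration.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)

    # hypothetical pentad-level data
    n = 500
    pentads = pd.DataFrame({
        "dist_road_km": rng.exponential(20, n),      # distance to nearest major road
        "dist_hub_km": rng.exponential(100, n),      # distance to nearest birding hub
        "prop_protected": rng.beta(1, 8, n),         # fraction of protected area
    })
    rate = np.exp(2.0 - 0.03 * pentads.dist_road_km
                  - 0.01 * pentads.dist_hub_km + 1.5 * pentads.prop_protected)
    pentads["visits"] = rng.poisson(rate)

    # Poisson regression of sampling effort (visits per pentad) on the covariates
    fit = smf.glm("visits ~ dist_road_km + dist_hub_km + prop_protected",
                  data=pentads, family=sm.families.Poisson()).fit()
    print(fit.params)
    ```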

  15. Structural nested mean models for assessing time-varying effect moderation.

    PubMed

    Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A

    2010-03-01

    This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: the first is a proposed two-stage regression estimator; the second is Robins' G-estimator. We present the results of a small simulation study that begins to shed light on the small- versus large-sample performance of the estimators and on the bias-variance trade-off between them. The methodology is illustrated using longitudinal data from a depression study.

  16. Chewing Lice of Swan Geese (Anser cygnoides): New Host-Parasite Associations.

    PubMed

    Choi, Chang-Yong; Takekawa, John Y; Prosser, Diann J; Smith, Lacy M; Ely, Craig R; Fox, Anthony D; Cao, Lei; Wang, Xin; Batbayar, Nyambayar; Natsagdorj, Tseveenmayadag; Xiao, Xiangming

    2016-10-01

    Chewing lice (Phthiraptera) that parasitize the globally threatened swan goose Anser cygnoides have been long recognized since the early 19th century, but those records were probably biased towards sampling of captive or domestic geese due to the small population size and limited distribution of its wild hosts. To better understand the lice species parasitizing swan geese that are endemic to East Asia, we collected chewing lice from 14 wild geese caught at 3 lakes in northeastern Mongolia. The lice were morphologically identified as 16 Trinoton anserinum (Fabricius, 1805), 11 Ornithobius domesticus Arnold, 2005, and 1 Anaticola anseris (Linnaeus, 1758). These species are known from other geese and swans, but all of them were new to the swan goose. This result also indicates no overlap in lice species between older records and our findings from wild birds. Thus, ectoparasites collected from domestic or captive animals may provide biased information on the occurrence, prevalence, host selection, and host-ectoparasite interactions from those on wild hosts.

  17. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems on its own. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
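
    A rough sense of how such a double-penalized estimator works can be conveyed with a short Newton-type iteration that combines Firth's bias-reducing score adjustment for logistic regression with a ridge penalty. This is a sketch of the general mechanics, not the authors' estimator or code; the example data are a tiny quasi-separated set chosen so that ordinary maximum likelihood would diverge.

    ```python
    import numpy as np

    def double_penalized_logit(X, y, ridge=0.1, n_iter=50, tol=1e-8):
        """Logistic regression with Firth's adjusted score plus a ridge penalty
        (a sketch of a 'double penalized' estimator; illustrative only)."""
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(n_iter):
            pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
            W = pi * (1.0 - pi)
            XtWX = X.T @ (W[:, None] * X)
            # leverages h_i of the weighted hat matrix W^1/2 X (X'WX)^-1 X' W^1/2
            A = X * np.sqrt(W)[:, None]
            h = np.einsum("ij,jk,ik->i", A, np.linalg.inv(XtWX), A)
            # Firth-adjusted score minus the gradient of the ridge penalty
            score = X.T @ (y - pi + h * (0.5 - pi)) - ridge * beta
            step = np.linalg.solve(XtWX + ridge * np.eye(p), score)
            beta += step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    # quasi-separated toy data: all observations with x = 1 have y = 1
    X = np.column_stack([np.ones(8), [0, 0, 0, 0, 1, 1, 1, 1.0]])
    y = np.array([0, 0, 0, 1, 1, 1, 1, 1.0])
    print(double_penalized_logit(X, y, ridge=0.05))   # finite intercept and slope
    ```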

  18. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems on its own. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.

  19. Vector network analyzer ferromagnetic resonance spectrometer with field differential detection

    NASA Astrophysics Data System (ADS)

    Tamaru, S.; Tsunegi, S.; Kubota, H.; Yuasa, S.

    2018-05-01

    This work presents a vector network analyzer ferromagnetic resonance (VNA-FMR) spectrometer with field differential detection. This technique differentiates the S-parameter by applying a small binary modulation field in addition to the DC bias field to the sample. By setting the modulation frequency sufficiently high, slow sensitivity fluctuations of the VNA, i.e., low-frequency components of the trace noise, which limit the signal-to-noise ratio of the conventional VNA-FMR spectrometer, can be effectively removed, resulting in a very clean FMR signal. This paper presents the details of the hardware implementation and measurement sequence as well as the data processing and analysis algorithms tailored for the FMR spectrum obtained with this technique. Because the VNA measures a complex S-parameter, it is possible to estimate the Gilbert damping parameter from the slope of the phase variation of the S-parameter with respect to the bias field. We show that this algorithm is more robust against noise than the conventional algorithm based on the linewidth.

  20. Chewing lice of swan geese (Anser cygnoides): New host-parasite associations

    USGS Publications Warehouse

    Choi, Chang-Yong; Takekawa, John Y.; Prosser, Diann J.; Smith, Lacy M.; Ely, Craig R.; Fox, Anthony D.; Cao, Lei; Wang, Xin; Batbayar, Nyambayar; Natsagdorj, Tseveenmayadag; Xiao, Xiangming

    2016-01-01

    Chewing lice (Phthiraptera) that parasitize the globally threatened swan goose Anser cygnoides have been long recognized since the early 19th century, but those records were probably biased towards sampling of captive or domestic geese due to the small population size and limited distribution of its wild hosts. To better understand the lice species parasitizing swan geese that are endemic to East Asia, we collected chewing lice from 14 wild geese caught at 3 lakes in northeastern Mongolia. The lice were morphologically identified as 16 Trinoton anserinum (Fabricius, 1805), 11 Ornithobius domesticus Arnold, 2005, and 1 Anaticola anseris (Linnaeus, 1758). These species are known from other geese and swans, but all of them were new to the swan goose. This result also indicates no overlap in lice species between older records and our findings from wild birds. Thus, ectoparasites collected from domestic or captive animals may provide biased information on the occurrence, prevalence, host selection, and host-ectoparasite interactions from those on wild hosts.

  1. Chewing Lice of Swan Geese (Anser cygnoides): New Host-Parasite Associations

    PubMed Central

    Choi, Chang-Yong; Takekawa, John Y.; Prosser, Diann J.; Smith, Lacy M.; Ely, Craig R.; Fox, Anthony D.; Cao, Lei; Wang, Xin; Batbayar, Nyambayar; Natsagdorj, Tseveenmayadag; Xiao, Xiangming

    2016-01-01

    Chewing lice (Phthiraptera) that parasitize the globally threatened swan goose Anser cygnoides have been long recognized since the early 19th century, but those records were probably biased towards sampling of captive or domestic geese due to the small population size and limited distribution of its wild hosts. To better understand the lice species parasitizing swan geese that are endemic to East Asia, we collected chewing lice from 14 wild geese caught at 3 lakes in northeastern Mongolia. The lice were morphologically identified as 16 Trinoton anserinum (Fabricius, 1805), 11 Ornithobius domesticus Arnold, 2005, and 1 Anaticola anseris (Linnaeus, 1758). These species are known from other geese and swans, but all of them were new to the swan goose. This result also indicates no overlap in lice species between older records and our findings from wild birds. Thus, ectoparasites collected from domestic or captive animals may provide biased information on the occurrence, prevalence, host selection, and host-ectoparasite interactions from those on wild hosts. PMID:27853128

  2. Pruritus: an underrecognized symptom of small-fiber neuropathies.

    PubMed

    Brenaut, Emilie; Marcorelles, Pascale; Genestet, Steeve; Ménard, Dominique; Misery, Laurent

    2015-02-01

    Small-fiber neuropathies (SFN) are diseases of small nerve fibers that are characterized by autonomic and sensory symptoms. We sought to evaluate sensory symptoms, especially pruritus, in patients with SFN. A questionnaire was given to patients with SFN. In all, 41 patients responded to the questionnaire (71.9% response rate). The most frequent sensory symptoms were burning (77.5%), pain (72.5%), heat sensations (70.2%), and numbness (67.5%). Pruritus was present in 68.3% of patients. It appeared most often in the evening, and was localized to the limbs in a distal-to-proximal gradient, although the back was the most frequent location (64%). Exacerbating factors were fatigue, xerosis, sweating, hot temperature, and stress. Cold water was an alleviating factor. Recall bias associated with filling out the questionnaire, relatively small sample size, and the uncontrolled, retrospective nature of the study were limitations. Pruritus occurs frequently in patients with SFN and could be recognized as a possible presenting symptom, especially if there are other sensory or autonomic symptoms. Copyright © 2014 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  3. Biases in determining the diet of jumbo squid Dosidicus gigas (d'Orbigny 1835) (Cephalopoda: Ommastrephidae) off southern-central Chile (34°S-40°S)

    NASA Astrophysics Data System (ADS)

    Ibáñez, Christian M.; Arancibia, Hugo; Cubillos, Luis A.

    2008-12-01

    The diet of jumbo squid (Dosidicus gigas) off southern-central Chile is described to examine potential biases in the determination of their main prey. Specimens were collected from catches using different fishing gear (jigging, trawl and purse-seine), from July 2003 to January 2004, and from December 2005 to October 2006. The stomach contents were analyzed in terms of frequency of occurrence, number, and weight of prey items, and the diet composition was analyzed using Detrended Correspondence Analysis. In the industrial purse-seine fleet for jack mackerel (Trachurus murphyi), the dominant prey of D. gigas was T. murphyi. In the industrial mid-trawl fishery for Patagonian grenadier (Macruronus magellanicus), the dominant species in the diet of D. gigas was M. magellanicus. Similarly, Chilean hake (Merluccius gayi) was the main prey in the diet of D. gigas obtained in the industrial trawl fishery for Chilean hake; and, in both artisanal fisheries (purse-seine for small pelagics and jigging), small pelagic fish and D. gigas were the main prey in the stomach contents of D. gigas. Cannibalism in D. gigas varied between different fleets and probably is related to stress behavior during fishing. The Detrended Correspondence Analysis ordination showed that the main prey in the diet of D. gigas is associated with the target species of the respective fishery. Consequently, biases are associated with fishing gear, leading to an overestimate of the occurrence of the target species in the diet. We recommend analyzing samples from jigging taken at the same time and place where the trawl and purse-seine fleets are operating to avoid this problem, and the application of new tools like stable isotope, heavy metal, and fatty acid signature analyses.

  4. Evaluation of Equations for Predicting 24-Hour Urinary Sodium Excretion from Casual Urine Samples in Asian Adults.

    PubMed

    Whitton, Clare; Gay, Gibson Ming Wei; Lim, Raymond Boon Tar; Tan, Linda Wei Lin; Lim, Wei-Yen; van Dam, Rob M

    2016-08-01

    The collection of 24-h urine samples for the estimation of sodium intake is burdensome, and the utility of spot urine samples in Southeast Asian populations is unclear. We aimed to assess the validity of prediction equations with the use of spot urine concentrations. A sample of 144 Singapore residents of Chinese, Malay, and Indian ethnicity aged 18-79 y were recruited from the Singapore Health 2 Study conducted in 2014. Participants collected urine for 24 h in multiple small bottles on a single day. To determine the optimal collection time for a spot urine sample, a 1-mL sample was taken from a random bottle collected in the morning, afternoon, and evening. Published equations and a newly derived equation were used to predict 24-h sodium excretion from spot urine samples. The mean ± SD 24-h sodium excretion was 125 ± 53.4 mmol/d, which is equivalent to 7.2 ± 3.1 g salt. Bland-Altman plots showed good agreement at the group level between estimated and actual 24-h sodium excretion, with biases for the morning period of -3.5 mmol (95% CI: -14.8, 7.8 mmol; new equation) and 1.46 mmol (95% CI: -10.0, 13.0 mmol; Intersalt equation). A larger bias of 25.7 mmol (95% CI: 12.2, 39.3 mmol) was observed for the Tanaka equation in the morning period. The prediction accuracy did not differ significantly for spot urine samples collected at different times of the day or at a random time of day (P = 0.11-0.76). This study suggests that the application of both our own newly derived equation and the Intersalt equation to spot urine concentrations may be useful in predicting group means for 24-h sodium excretion in urban Asian populations. © 2016 American Society for Nutrition.
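
    The agreement analysis reported above follows the usual Bland-Altman recipe: the bias is the mean difference between predicted and measured excretion, and the limits of agreement are the bias ± 1.96 SD of the differences. The sketch below applies that recipe to synthetic values; the numbers are invented and the study's prediction equations are not reproduced.

    ```python
    import numpy as np

    def bland_altman(measured, predicted):
        """Mean bias and approximate 95% limits of agreement between two methods."""
        diff = np.asarray(predicted) - np.asarray(measured)
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)

    # synthetic 24-h sodium excretion (mmol/d): measured vs predicted from spot urine
    rng = np.random.default_rng(3)
    measured = rng.normal(125, 53, 144)
    predicted = measured + rng.normal(-3.5, 40, 144)   # small negative bias, wide scatter

    bias, (lo, hi) = bland_altman(measured, predicted)
    print(f"bias = {bias:.1f} mmol/d, 95% limits of agreement = ({lo:.1f}, {hi:.1f})")
    ```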

  5. Running Performance, VO2max, and Running Economy: The Widespread Issue of Endogenous Selection Bias.

    PubMed

    Borgen, Nicolai T

    2018-05-01

    Studies in sport and exercise medicine routinely use samples of highly trained individuals in order to understand what characterizes elite endurance performance, such as running economy and maximal oxygen uptake (VO2max). However, it is not well understood in the literature that using such samples most certainly leads to biased findings and accordingly potentially erroneous conclusions because of endogenous selection bias. In this paper, I review the current literature on running economy and VO2max, and discuss the literature in light of endogenous selection bias. I demonstrate that the results in a large part of the literature may be misleading, and provide some practical suggestions as to how future studies may alleviate endogenous selection bias.
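
    Endogenous selection bias of the kind described here is a collider problem: selecting a sample on a variable that depends on both traits of interest induces an association between them even when none exists in the population. The toy simulation below illustrates that mechanism with made-up standardized variables; it is not the paper's data or analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 100_000

    vo2max = rng.normal(size=n)        # standardized VO2max
    economy = rng.normal(size=n)       # standardized running economy, independent of VO2max
    performance = vo2max + economy + rng.normal(scale=0.5, size=n)

    elite = performance > np.quantile(performance, 0.99)   # study only the top 1%

    print("population correlation:", round(np.corrcoef(vo2max, economy)[0, 1], 3))
    print("elite-only correlation:", round(np.corrcoef(vo2max[elite], economy[elite])[0, 1], 3))
    ```

    In the full population the correlation is essentially zero, whereas within the selected elite subsample a clear negative correlation appears purely as an artifact of the selection.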

  6. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation

    NASA Astrophysics Data System (ADS)

    Peter, Emanuel K.

    2017-12-01

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.
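
    The path-variable and the three algorithms described here are specific to the paper and are not reproduced below. As a generic stand-in that conveys what an adaptive, history-dependent bias does, the sketch simulates overdamped Langevin dynamics on a one-dimensional double well and deposits small Gaussian bias potentials along the trajectory (metadynamics-style), which gradually pushes the walker over the barrier; all parameters are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def force(x):                       # 1-D double well V(x) = (x^2 - 1)^2, barrier at x = 0
        return -4.0 * x * (x**2 - 1.0)

    # history-dependent bias as a sum of deposited Gaussians (generic adaptive bias,
    # not the paper's path-variable method)
    centers, height, width = [], 0.05, 0.2

    def bias_force(x):
        c = np.asarray(centers)
        if c.size == 0:
            return 0.0
        return np.sum(height * (x - c) / width**2 * np.exp(-(x - c) ** 2 / (2 * width**2)))

    x, dt, gamma, kT = -1.0, 1e-3, 1.0, 0.1
    trajectory = []
    for step in range(100_000):
        noise = np.sqrt(2.0 * kT * dt / gamma) * rng.normal()
        x += dt / gamma * (force(x) + bias_force(x)) + noise   # overdamped Langevin step
        if step % 250 == 0:
            centers.append(x)                                  # deposit a new Gaussian
        trajectory.append(x)

    print("fraction of time in the right-hand well:", round(np.mean(np.array(trajectory) > 0), 2))
    ```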

  7. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation.

    PubMed

    Peter, Emanuel K

    2017-12-07

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.

  8. Modification of cognitive biases related to posttraumatic stress: A systematic review and research agenda.

    PubMed

    Woud, Marcella L; Verwoerd, Johan; Krans, Julie

    2017-06-01

    Cognitive models of Posttraumatic Stress Disorder (PTSD) postulate that cognitive biases in attention, interpretation, and memory represent key factors involved in the onset and maintenance of PTSD. Developments in experimental research demonstrate that it may be possible to manipulate such biases by means of Cognitive Bias Modification (CBM). In the present paper, we summarize studies assessing cognitive biases in posttraumatic stress to serve as a theoretical and methodological background. However, our main aim was to provide an overview of the scientific literature on CBM in (analogue) posttraumatic stress. Results of our systematic literature review showed that most CBM studies targeted attentional and interpretation biases (attention: five studies; interpretation: three studies), and one study modified memory biases. Overall, results showed that CBM can indeed modify cognitive biases and affect (analog) trauma symptoms in a training-congruent manner. Interpretation bias procedures seemed effective in analog samples, and memory bias training showed preliminary success in a clinical PTSD sample. Studies of attention bias modification provided more mixed results. This heterogeneous picture may be explained by differences in the type of population or variations in the CBM procedure. Therefore, we sketched a detailed research agenda targeting the challenges for CBM in posttraumatic stress. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Optimal weighting in fNL constraints from large scale structure in an idealised case

    NASA Astrophysics Data System (ADS)

    Slosar, Anže

    2009-03-01

    We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around a fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for the weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable, and that a simple division into two equal samples is considerably suboptimal when the sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.

  10. Psychophysics of Remembering: To Bias or Not to Bias?

    ERIC Educational Resources Information Center

    White, K. Geoffrey; Wixted, John T.

    2010-01-01

    Delayed matching to sample is typically a two-alternative forced-choice procedure with two sample stimuli. In this task the effects of varying the probability of reinforcers for correct choices and the resulting receiver operating characteristic are symmetrical. A version of the task where a sample is present on some trials and absent on others is…

  11. Electrochemical force microscopy

    DOEpatents

    Kalinin, Sergei V.; Jesse, Stephen; Collins, Liam F.; Rodriguez, Brian J.

    2017-01-10

    A system and method for electrochemical force microscopy are provided. The system and method are based on a multidimensional detection scheme that is sensitive to forces experienced by a biased electrode in a solution. The multidimensional approach allows separation of fast processes, such as double layer charging and charge relaxation, from slow processes, such as diffusion and faradaic reactions, as well as capturing the bias dependence of the response. The time-resolved and bias-dependent measurements can also allow probing of both linear (small bias range) and non-linear (large bias range) electrochemical regimes and potentially the de-convolution of charge dynamics and diffusion processes from steric effects and electrochemical reactivity.

  12. Equilibrium dynamics of the sub-Ohmic spin-boson model under bias

    NASA Astrophysics Data System (ADS)

    Zheng, Da-Chuan; Tong, Ning-Hua

    2017-06-01

    Using the bosonic numerical renormalization group method, we studied the equilibrium dynamical correlation function C(ω) of the spin operator σ_z for the biased sub-Ohmic spin-boson model. The small-ω behavior C(ω) ∝ ω^s is found to be universal and independent of the bias ε and the coupling strength α (except at the quantum critical point α = α_c and ε = 0). Our NRG data also show C(ω) ∝ χ²ω^s for a wide range of parameters, including the biased strong coupling regime (ε …

  13. Exploring Attribution Theory and Bias

    ERIC Educational Resources Information Center

    Robinson, Jessica A.

    2017-01-01

    Courses: This activity can be used in a wide range of classes, including interpersonal communication, introduction to communication, and small group communication. Objectives: After completing this activity, students should be able to: (1) define attribution theory, personality attribution, situational attribution, and attribution bias; (2)…

  14. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.

  15. Further validation of the MMPI-2 and MMPI-2-RF Response Bias Scale: findings from disability and criminal forensic settings.

    PubMed

    Wygant, Dustin B; Sellbom, Martin; Gervais, Roger O; Ben-Porath, Yossef S; Stafford, Kathleen P; Freeman, David B; Heilbronner, Robert L

    2010-12-01

    The present study extends the validation of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) Response Bias Scale (RBS; R. O. Gervais, Y. S. Ben-Porath, D. B. Wygant, & P. Green, 2007) in separate forensic samples composed of disability claimants and criminal defendants. Using cognitive symptom validity tests as response bias indicators, the RBS exhibited large effect sizes (Cohen's ds = 1.24 and 1.48) in detecting cognitive response bias in the disability and criminal forensic samples, respectively. The scale also added incremental prediction to the traditional MMPI-2 and the MMPI-2-RF overreporting validity scales in the disability sample and exhibited excellent specificity with acceptable sensitivity at cutoffs ranging from 90T to 120T. The results of this study indicate that the RBS can add uniquely to the existing MMPI-2 and MMPI-2-RF validity scales in detecting symptom exaggeration associated with cognitive response bias.

  16. Bias due to Preanalytical Dilution of Rodent Serum for Biochemical Analysis on the Siemens Dimension Xpand Plus

    PubMed Central

    Johns, Jennifer L.; Moorhead, Kaitlin A.; Hu, Jing; Moorhead, Roberta C.

    2018-01-01

    Clinical pathology testing of rodents is often challenging due to insufficient sample volume. One solution in clinical veterinary and exploratory research environments is dilution of samples prior to analysis. However, published information on the impact of preanalytical sample dilution on rodent biochemical data is incomplete. The objective of this study was to evaluate the effects of preanalytical sample dilution on biochemical analysis of mouse and rat serum samples utilizing the Siemens Dimension Xpand Plus. Rats were obtained from end-of-study research projects. Mice were obtained from sentinel testing programs. For both, whole blood was collected via terminal cardiocentesis into empty tubes and serum was harvested. Biochemical parameters were measured on fresh and thawed frozen samples run straight and at dilution factors of 2–10. Dilutions were performed manually, utilizing either ultrapure water or enzyme diluent per manufacturer recommendations. All diluted samples were generated directly from the undiluted sample. Preanalytical dilution caused clinically unacceptable bias in most analytes at dilution factors of four and above. Dilution-induced bias in total calcium, creatinine, total bilirubin, and uric acid was considered unacceptable with any degree of dilution, based on the more conservative of two definitions of acceptability. Dilution often caused electrolyte values to fall below the assay range, precluding evaluation of bias. Dilution-induced bias occurred in most biochemical parameters to varying degrees and may render dilution unacceptable in exploratory research and clinical veterinary environments. Additionally, differences between results obtained at different dilution factors may confound statistical comparisons in research settings. Comparison of data obtained at a single dilution factor is highly recommended. PMID:29497614

  17. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  18. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    PubMed

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to modeling resource selection is easily implemented using common statistical tools and promises to provide deeper insight into the movement ecology of animals.
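
    The general recipe described above, pairing each observed step with sampled "available" control locations and fitting a case-control (conditional) logistic regression that includes both the resource covariate and a distance term, can be sketched on simulated data. The data-generating scheme, covariate names, and coefficients below are invented for illustration, and the model is fit with statsmodels' ConditionalLogit; this is not the elk analysis from the paper.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.discrete.conditional_models import ConditionalLogit

    rng = np.random.default_rng(21)

    # hypothetical SSF data: each stratum holds 1 used point and 10 available points
    rows = []
    for stratum in range(300):
        resource = rng.normal(size=11)            # resource value at each candidate point
        distance = rng.exponential(1.0, size=11)  # step length to each candidate point
        score = np.exp(1.0 * resource - 1.5 * distance)   # selection favours resources, short steps
        used = rng.choice(11, p=score / score.sum())
        for i in range(11):
            rows.append({"stratum": stratum, "used": int(i == used),
                         "resource": resource[i], "distance": distance[i]})
    df = pd.DataFrame(rows)

    # conditional (case-control) logistic regression; the distance covariate stands in
    # for the resource-independent movement kernel and reduces bias in the selection term
    fit = ConditionalLogit(df["used"], df[["resource", "distance"]],
                           groups=df["stratum"]).fit()
    print(fit.params)   # should recover roughly (1.0, -1.5)
    ```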

  19. Expectancy bias in anxious samples

    PubMed Central

    Cabeleira, Cindy M.; Steinman, Shari A.; Burgess, Melissa M.; Bucks, Romola S.; MacLeod, Colin; Melo, Wilson; Teachman, Bethany A.

    2014-01-01

    While it is well documented that anxious individuals have negative expectations about the future, it is unclear what cognitive processes give rise to this expectancy bias. Two studies are reported that use the Expectancy Task, which is designed to assess expectancy bias and illuminate its basis. This task presents individuals with valenced scenarios (Positive Valence, Negative Valence, or Conflicting Valence), and then evaluates their tendency to expect subsequent future positive relative to negative events. The Expectancy Task was used with low and high trait anxious (Study 1: N = 32) and anxiety sensitive (Study 2: N = 138) individuals. Results suggest that in the context of physical concerns, both high anxious samples display a less positive expectancy bias. In the context of social concerns, high trait anxious individuals display a negative expectancy bias only when negatively valenced information was previously presented. Overall, this suggests that anxious individuals display a less positive expectancy bias, and that the processes that give rise to this bias may vary by type of situation (e.g., social or physical) or anxiety difficulty. PMID:24798678

  20. Reducing inherent biases introduced during DNA viral metagenome analyses of municipal wastewater

    EPA Science Inventory

    Metagenomics is a powerful tool for characterizing viral composition within environmental samples, but sample and molecular processing steps can bias the estimation of viral community structure. The objective of this study is to understand the inherent variability introduced when...

  1. Rater Perceptions of Bias Using the Multiple Mini-Interview Format: A Qualitative Study

    ERIC Educational Resources Information Center

    Alweis, Richard L.; Fitzpatrick, Caroline; Donato, Anthony A.

    2015-01-01

    Introduction: The Multiple Mini-Interview (MMI) format appears to mitigate individual rater biases. However, the format itself may introduce structural systematic bias, favoring extroverted personality types. This study aimed to gain a better understanding of these biases from the perspective of the interviewer. Methods: A sample of MMI…

  2. Toward a clearer portrayal of confounding bias in instrumental variable applications.

    PubMed

    Jackson, John W; Swanson, Sonja A

    2015-07-01

    Recommendations for reporting instrumental variable analyses often include presenting the balance of covariates across levels of the proposed instrument and levels of the treatment. However, such presentation can be misleading as relatively small imbalances among covariates across levels of the instrument can result in greater bias because of bias amplification. We introduce bias plots and bias component plots as alternative tools for understanding biases in instrumental variable analyses. Using previously published data on proposed preference-based, geography-based, and distance-based instruments, we demonstrate why presenting covariate balance alone can be problematic, and how bias component plots can provide more accurate context for bias from omitting a covariate from an instrumental variable versus non-instrumental variable analysis. These plots can also provide relevant comparisons of different proposed instruments considered in the same data. Adaptable code is provided for creating the plots.
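
    The plots themselves are not reproduced here, but the underlying comparison can be approximated as follows (a hedged sketch, not the authors' published code): contrast each covariate's mean difference across treatment levels with its mean difference across instrument levels scaled by the first-stage strength, reflecting how a weak instrument amplifies small covariate imbalances. The data and scaling below are illustrative assumptions.

        import numpy as np
        import matplotlib.pyplot as plt

        def imbalances(covariates, treatment, instrument):
            # Mean covariate differences across treatment and across the
            # proposed instrument; instrument-level differences are divided
            # by the first-stage difference in treatment (bias-amplification
            # intuition; see the paper for exact bias-component definitions).
            d_trt = covariates[treatment == 1].mean(0) - covariates[treatment == 0].mean(0)
            d_iv = covariates[instrument == 1].mean(0) - covariates[instrument == 0].mean(0)
            first_stage = treatment[instrument == 1].mean() - treatment[instrument == 0].mean()
            return d_trt, d_iv / first_stage

        rng = np.random.default_rng(2)
        Z = rng.integers(0, 2, 2000)                         # proposed instrument
        C = rng.normal(0, 1, (2000, 3)) + 0.1 * Z[:, None]   # mild imbalance
        T = (0.4 * Z + 0.3 * C[:, 0] + rng.normal(0, 1, 2000) > 0.5).astype(int)

        d_trt, d_iv_scaled = imbalances(C, T, Z)
        x = np.arange(3)
        plt.bar(x - 0.2, d_trt, width=0.4, label="across treatment")
        plt.bar(x + 0.2, d_iv_scaled, width=0.4, label="across instrument (scaled)")
        plt.xlabel("covariate"); plt.ylabel("mean difference"); plt.legend()
        plt.show()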

  3. The impact of non-response bias due to sampling in public health studies: A comparison of voluntary versus mandatory recruitment in a Dutch national survey on adolescent health.

    PubMed

    Cheung, Kei Long; Ten Klooster, Peter M; Smit, Cees; de Vries, Hein; Pieterse, Marcel E

    2017-03-23

    In public health monitoring of young people it is critical to understand the effects of selective non-response, in particular when a controversial topic is involved like substance abuse or sexual behaviour. Research that is dependent upon voluntary subject participation is particularly vulnerable to sampling bias. As respondents whose participation is hardest to elicit on a voluntary basis are also more likely to report risk behaviour, this potentially leads to underestimation of risk factor prevalence. Inviting adolescents to participate in a home-sent postal survey is a typical voluntary recruitment strategy with high non-response, as opposed to mandatory participation during school time. This study examines the extent to which prevalence estimates of adolescent health-related characteristics are biased due to different sampling methods, and whether this also biases within-subject analyses. Cross-sectional datasets collected in 2011 in Twente and IJsselland, two similar and adjacent regions in the Netherlands, were used. In total, 9360 youngsters in a mandatory sample (Twente) and 1952 youngsters in a voluntary sample (IJsselland) participated in the study. To test whether the samples differed on health-related variables, we conducted both univariate and multivariable logistic regression analyses controlling for any demographic difference between the samples. Additional multivariable logistic regressions were conducted to examine moderating effects of sampling method on associations between health-related variables. As expected, females, older individuals, as well as individuals with higher education levels, were over-represented in the voluntary sample, compared to the mandatory sample. Respondents in the voluntary sample tended to smoke less, consume less alcohol (ever, lifetime, and past four weeks), have better mental health, have better subjective health status, have more positive school experiences and have less sexual intercourse than respondents in the mandatory sample. No moderating effects were found for sampling method on associations between variables. This is one of the first studies to provide strong evidence that voluntary recruitment may lead to a strong non-response bias in health-related prevalence estimates in adolescents, as compared to mandatory recruitment. The resulting underestimation in prevalence of health behaviours and well-being measures appeared large, up to a four-fold lower proportion for self-reported alcohol consumption. Correlations between variables, though, appeared to be insensitive to sampling bias.

  4. Associations among selective attention, memory bias, cognitive errors and symptoms of anxiety in youth.

    PubMed

    Watts, Sarah E; Weems, Carl F

    2006-12-01

    The purpose of this study was to examine the linkages among selective attention, memory bias, cognitive errors, and anxiety problems by testing a model of the interrelations among these cognitive variables and childhood anxiety disorder symptoms. A community sample of 81 youth (38 females and 43 males) aged 9-17 years and their parents completed measures of the child's anxiety disorder symptoms. Youth completed assessments measuring selective attention, memory bias, and cognitive errors. Results indicated that selective attention, memory bias, and cognitive errors were each correlated with childhood anxiety problems and provide support for a cognitive model of anxiety which posits that these three biases are associated with childhood anxiety problems. Only limited support for significant interrelations among selective attention, memory bias, and cognitive errors was found. Finally, results point towards an effective strategy for moving the assessment of selective attention to younger and community samples of youth.

  5. Effectiveness of tranexamic acid for decreasing bleeding in prostate surgery: a systematic review and meta-analysis

    PubMed Central

    Garcia-Perdomo, Herney Andres

    2017-01-01

    Introduction The objective of this study was to determine the effectiveness of tranexamic acid in decreasing bleeding in patients undergoing prostate surgery. Material and methods All clinical trials were included, without language restrictions. The inclusion criteria were as follows: men over 18 years of age who underwent prostate surgery (transurethral, prostate adenectomy, and radical prostatectomy) and received tranexamic acid prior to prostate surgery as a preventive measure for perioperative hemorrhage. Prophylactic tranexamic acid was compared with no intervention or placebo. The primary outcomes were as follows: 1) intraoperative blood loss and 2) the need for red blood cell transfusion. A systematic search was performed in MEDLINE, EMBASE, CENTRAL and LILACS. Other sources were used to discover published and unpublished literature. The statistical analysis was performed in Review Manager v.5.3. Results Four studies were included, with a total of 436 patients. Three of the four studies had small sample sizes. There was a low risk of attrition bias and reporting bias; the risk of selection bias, performance bias, and detection bias was unclear. A mean difference (MD) of -174.49 [95% CI (-248.43 to -100.56)] was found for perioperative blood loss (the primary outcome). At the end of the procedure, the hemoglobin concentration had a MD of -1.19 [95% CI (-4.37 to 1.99)]. Conclusions Tranexamic acid is effective at preventing perioperative blood loss compared with placebo in patients undergoing transurethral resection of the prostate (TURP). However, this treatment was effective neither at preventing the need for transfusions nor at increasing hemoglobin values at the end of the procedure. PMID:29732210

  6. Belief-bias reasoning in non-clinical delusion-prone individuals.

    PubMed

    Anandakumar, T; Connaughton, E; Coltheart, M; Langdon, R

    2017-03-01

    It has been proposed that people with delusions have difficulty inhibiting beliefs (i.e., "doxastic inhibition") so as to reason about them as if they might not be true. We used a continuity approach to test this proposal in non-clinical adults scoring high and low in psychometrically assessed delusion-proneness. High delusion-prone individuals were expected to show greater difficulty than low delusion-prone individuals on "conflict" items of a "belief-bias" reasoning task (i.e. when required to reason logically about statements that conflicted with reality), but not on "non-conflict" items. Twenty high delusion-prone and twenty low delusion-prone participants (according to the Peters et al. Delusions Inventory) completed a belief-bias reasoning task and tests of IQ, working memory and general inhibition (Excluded Letter Fluency, Stroop and Hayling Sentence Completion). High delusion-prone individuals showed greater difficulty than low delusion-prone individuals on the Stroop and Excluded Letter Fluency tests of inhibition, but no greater difficulty on the conflict versus non-conflict items of the belief-bias task. They did, however, make significantly more errors overall on the belief-bias task, despite controlling for IQ, working memory and general inhibitory control. The study had a relatively small sample size and used non-clinical participants to test a theory of cognitive processing in individuals with clinically diagnosed delusions. Results failed to support a role for doxastic inhibitory failure in non-clinical delusion-prone individuals. These individuals did, however, show difficulty with conditional reasoning about statements that may or may not conflict with reality, independent of any general cognitive or inhibitory deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Estimating bias in causes of death ascertainment in the Finnish Randomized Study of Screening for Prostate Cancer.

    PubMed

    Kilpeläinen, Tuomas P; Mäkinen, Tuukka; Karhunen, Pekka J; Aro, Jussi; Lahtela, Jorma; Taari, Kimmo; Talala, Kirsi; Tammela, Teuvo L J; Auvinen, Anssi

    2016-12-01

    Precise cause of death (CoD) ascertainment is crucial in any cancer screening trial to avoid bias from misclassification due to excessive recording of diagnosed cancer as a CoD in death certificates instead of non-cancer disease that actually caused death. We estimated whether there was bias in CoD determination between screening (SA) and control arms (CA) in a population-based prostate cancer (PCa) screening trial. Our trial is the largest component of the European Randomized Study of Screening for Prostate Cancer with more than 80,000 men. Randomly selected deaths in men with PCa (N=442/2568 cases, 17.2%) were reviewed by an independent CoD committee. Median follow-up was 16.8 years in both arms. Overdiagnosis of PCa was present in the SA as the risk ratio for PCa incidence was 1.19 (95% confidence interval (CI) 1.14-1.24). The hazard ratio (HR) for PCa mortality was 0.94 (95%CI 0.82-1.08) in favor of the SA. Agreement with official CoD registry was 94.6% (κ=0.88) in the SA and 95.4% (κ=0.91) in the CA. Altogether 14 PCa deaths were estimated as false-positive in both arms and exclusion of these resulted in HR 0.92 (95% CI 0.80-1.06). A small differential misclassification bias in ascertainment of CoD was present, most likely due to attribution bias (overdiagnosis in the SA). Maximum precision in CoD ascertainment can only be achieved with independent review of all deaths in the diseased population. However, this is cumbersome and expensive and may provide little benefit compared to random sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Growing cell-phone population and noncoverage bias in traditional random digit dial telephone health surveys.

    PubMed

    Lee, Sunghee; Brick, J Michael; Brown, E Richard; Grant, David

    2010-08-01

    Examine the effect of including cell-phone numbers in a traditional landline random digit dial (RDD) telephone survey. The 2007 California Health Interview Survey (CHIS). CHIS 2007 is an RDD telephone survey supplementing a landline sample in California with a sample of cell-only (CO) adults. We examined the degree of bias due to exclusion of CO populations and compared a series of demographic and health-related characteristics by telephone usage. When adjusted for noncoverage in the landline sample through weighting, the potential noncoverage bias due to excluding CO adults in landline telephone surveys is diminished. Both CO adults and adults who have both landline and cell phones but mostly use cell phones appear different from other telephone usage groups. Controlling for demographic differences did not attenuate the significant distinctiveness of cell-mostly adults. While careful weighting can mitigate noncoverage bias in landline telephone surveys, the rapid growth of cell-phone population and their distinctive characteristics suggest it is important to include a cell-phone sample. Moreover, the threat of noncoverage bias in telephone health survey estimates could mislead policy makers with possibly serious consequences for their ability to address important health policy issues.

  10. Estimation and modeling of electrofishing capture efficiency for fishes in wadeable warmwater streams

    USGS Publications Warehouse

    Price, A.; Peterson, James T.

    2010-01-01

    Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
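
    The adjustment recommended here amounts to dividing raw catch by an estimated capture efficiency; a minimal sketch using a simple two-event mark-recapture efficiency estimate with hypothetical numbers (the study itself fits conditional multinomial capture-recapture models with species and habitat covariates):

        def adjusted_abundance(catch, marked, recaptured):
            # Capture efficiency estimated from a mark-recapture event;
            # adjusted abundance = raw catch / estimated efficiency.
            p_hat = recaptured / marked
            return catch / p_hat, p_hat

        # Hypothetical single-pass electrofishing data for one species and site
        n_hat, p_hat = adjusted_abundance(catch=42, marked=60, recaptured=21)
        print(f"capture efficiency ~ {p_hat:.2f}, adjusted abundance ~ {n_hat:.0f}")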

  11. Comparison of three point-of-care blood glucose meters for use in adult and juvenile alpacas.

    PubMed

    Tennent-Brown, Brett S; Koenig, Amie; Williamson, Lisa H; Boston, Raymond C

    2011-08-01

    To compare the performance of 3 point-of-care glucose meters in adult and juvenile alpacas with that of a laboratory-based analyzer. Evaluation study. 35 adult alpacas and 21 juvenile alpacas. Whole blood samples obtained via jugular venipuncture were tested with all 3 point-of-care glucose meters; plasma samples were also tested with 1 of those meters. Glucose concentrations determined by use of the point-of-care meters were compared with results from the laboratory-based analyzer. Plasma glucose concentrations determined by use of the laboratory-based analyzer ranged from 36 to 693 mg/dL. Over the entire range of glucose concentrations tested, the Lin concordance correlation coefficient (agreement) was significant and excellent for all comparisons. Concordance decreased for 1 glucometer when testing whole blood samples over a narrower range of glucose concentrations (50 to 200 mg/dL). Bias was typically small (< 10 mg/dL) for 3 of the 4 comparisons but considerable for 1 meter with the use of whole blood. The limits of agreement were wide for all comparisons over the entire range of glucose concentrations tested but decreased to within acceptable limits when the narrower glucose range (50 to 200 mg/dL) was analyzed for 3 of the comparisons. For samples with a PCV < 25%, bias and the limits of agreement were greater for one of the meters tested. Discrepancies between point-of-care glucose meters and reference techniques can be considerable in alpacas, emphasizing the importance of assessing individual meter performance in a target population.
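
    Two of the agreement quantities reported here are straightforward to compute from paired meter and analyzer readings: Lin's concordance correlation coefficient, and the Bland-Altman bias with 95% limits of agreement. The paired arrays below are hypothetical.

        import numpy as np

        def lin_ccc(x, y):
            # Lin's concordance correlation coefficient for paired readings.
            sxy = np.cov(x, y, bias=True)[0, 1]
            return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        def bland_altman(x, y):
            # Bias (mean difference) and 95% limits of agreement.
            d = x - y
            half_width = 1.96 * d.std(ddof=1)
            return d.mean(), (d.mean() - half_width, d.mean() + half_width)

        # Hypothetical paired readings: point-of-care meter vs. laboratory analyzer
        lab = np.array([36.0, 80.0, 120.0, 150.0, 200.0, 350.0, 693.0])
        meter = lab + np.random.default_rng(3).normal(2.0, 8.0, lab.size)
        print(lin_ccc(meter, lab), bland_altman(meter, lab))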

  12. Sampling Considerations for Designing Aedes aegypti (Diptera: Culicidae) Oviposition Studies in Iquitos, Peru: Substrate Preference, Diurnal Periodicity, and Gonotrophic Cycle Length

    PubMed Central

    WONG, JACKLYN; ASTETE, HELVIO; MORRISON, AMY C.; SCOTT, THOMAS W.

    2011-01-01

    When devising methods to sample Aedes aegypti (L.) eggs from naturally-occurring containers to investigate selective oviposition, failure to take into account certain aspects of Ae. aegypti behavior can bias study inferences. In Iquitos, Peru, we tested three assumptions related to designing Ae. aegypti oviposition field studies, as follows: 1) lining containers with paper as an oviposition substrate does not affect oviposition; 2) diurnal egg-laying activity peaks in the late afternoon or early evening, and there is little oviposition during midday; and 3) the gonotrophic cycle length of wild females averages from 3 to 4 d. When wild females were presented with containers lined and unlined with paper toweling, the presence of paper increased oviposition in plastic and metal containers, but had no effect in cement containers. Recording the number of eggs laid by Ae. aegypti every 2 h throughout the day delineated a bimodal diurnal oviposition pattern, with a small morning peak, decreased activity during midday, and a predominant peak in the late afternoon and evening from 16:00 to 20:00 h. Daily monitoring of captive individual F0 females revealed that the gonotrophic cycle length was typically 3–4 d for the Iquitos population. These findings will be used to adjust field study design to 1) account for sampling eggs using paper toweling, and 2) determine the time of day and number of days over which to sample Ae. aegypti eggs. We explored how failure to consider these behaviors could potentially bias field assessments of oviposition preferences. PMID:21337947

  13. Population variability complicates the accurate detection of climate change responses.

    PubMed

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. © 2016 John Wiley & Sons Ltd.

  14. Improving the chi-squared approximation for bivariate normal tolerance regions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.

    1993-01-01

    Let X be a two-dimensional random variable distributed according to N₂(μ, Σ) and let X̄ and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability β and a level of confidence γ, we seek a number c, depending only on N, β, and γ, such that the ellipsoid R = {x : (x - X̄)'S⁻¹(x - X̄) ≤ c} is a tolerance region of content β and level γ; i.e., R has probability γ of containing at least 100β percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute -- a multiple of the ratio of certain chi-squared percentage points -- is badly biased for small N. For the bivariate normal case, most of the bias can be removed by simple adjustment using a factor A which depends on β and γ. This paper provides values of A for various β and γ so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte-Carlo simulation and simple regression modelling can be used to improve an existing approximation.
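
    The Monte-Carlo side of such an approach can be sketched as follows (an illustrative sketch, not the paper's procedure): because c is invariant to μ and Σ, one can simulate from the standard bivariate normal, estimate for a candidate c the probability that the sample ellipsoid contains at least β of the true distribution, and search for the smallest c that attains level γ. The sample counts below are kept small for speed and are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        N, beta, gamma = 10, 0.90, 0.95
        grid = rng.standard_normal((20000, 2))    # draws from the true N2(0, I)

        def coverage(c, reps=500):
            # Fraction of repeated samples whose ellipsoid contains at least
            # beta of the true distribution (content estimated on the grid).
            hits = 0
            for _ in range(reps):
                x = rng.standard_normal((N, 2))
                xbar, S = x.mean(0), np.cov(x, rowvar=False)
                d = grid - xbar
                m = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
                hits += (m <= c).mean() >= beta
            return hits / reps

        lo, hi = 1.0, 60.0                        # crude bisection for c
        for _ in range(12):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if coverage(mid) < gamma else (lo, mid)
        print("c ~", round(hi, 2))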

  15. Liver Gene Expression Profiles of Rats Treated with Clofibric Acid

    PubMed Central

    Michel, Cécile; Desdouets, Chantal; Sacre-Salem, Béatrice; Gautier, Jean-Charles; Roberts, Ruth; Boitier, Eric

    2003-01-01

    Clofibric acid (CLO) is a peroxisome proliferator (PP) that acts through the peroxisome proliferator activated receptor α, leading to hepatocarcinogenesis in rodents. CLO-induced hepatocarcinogenesis is a multi-step process, first transforming normal liver cells into foci. The combination of laser capture microdissection (LCM) and genomics has the potential to provide expression profiles from such small cell clusters, giving an opportunity to understand the process of cancer development in response to PPs. To our knowledge, this is the first evaluation of the impact of the successive steps of LCM procedure on gene expression profiling by comparing profiles from LCM samples to those obtained with non-microdissected liver samples collected after a 1 month CLO treatment in the rat. We showed that hematoxylin and eosin (H&E) staining and laser microdissection itself do not impact on RNA quality. However, the overall process of the LCM procedure affects the RNA quality, resulting in a bias in the gene profiles. Nonetheless, this bias did not prevent accurate determination of a CLO-specific molecular signature. Thus, gene-profiling analysis of microdissected foci, identified by H&E staining may provide insight into the mechanisms underlying non-genotoxic hepatocarcinogenesis in the rat by allowing identification of specific genes that are regulated by CLO in early pre-neoplastic foci. PMID:14633594

  16. Liver gene expression profiles of rats treated with clofibric acid: comparison of whole liver and laser capture microdissected liver.

    PubMed

    Michel, Cécile; Desdouets, Chantal; Sacre-Salem, Béatrice; Gautier, Jean-Charles; Roberts, Ruth; Boitier, Eric

    2003-12-01

    Clofibric acid (CLO) is a peroxisome proliferator (PP) that acts through the peroxisome proliferator activated receptor alpha, leading to hepatocarcinogenesis in rodents. CLO-induced hepatocarcinogenesis is a multi-step process, first transforming normal liver cells into foci. The combination of laser capture microdissection (LCM) and genomics has the potential to provide expression profiles from such small cell clusters, giving an opportunity to understand the process of cancer development in response to PPs. To our knowledge, this is the first evaluation of the impact of the successive steps of LCM procedure on gene expression profiling by comparing profiles from LCM samples to those obtained with non-microdissected liver samples collected after a 1 month CLO treatment in the rat. We showed that hematoxylin and eosin (H&E) staining and laser microdissection itself do not impact on RNA quality. However, the overall process of the LCM procedure affects the RNA quality, resulting in a bias in the gene profiles. Nonetheless, this bias did not prevent accurate determination of a CLO-specific molecular signature. Thus, gene-profiling analysis of microdissected foci, identified by H&E staining may provide insight into the mechanisms underlying non-genotoxic hepatocarcinogenesis in the rat by allowing identification of specific genes that are regulated by CLO in early pre-neoplastic foci.

  17. Is Small Still Beautiful for the Strengths and Difficulties Questionnaire? Novel Findings Using Exploratory Structural Equation Modeling.

    PubMed

    Garrido, Luis Eduardo; Barrada, Juan Ramón; Aguasvivas, José Armando; Martínez-Molina, Agustín; Arias, Víctor B; Golino, Hudson F; Legaz, Eva; Ferrís, Gloria; Rojo-Moreno, Luis

    2018-06-01

    During the present decade a large body of research has employed confirmatory factor analysis (CFA) to evaluate the factor structure of the Strengths and Difficulties Questionnaire (SDQ) across multiple languages and cultures. However, because CFA can produce strongly biased estimations when the population cross-loadings differ meaningfully from zero, it may not be the most appropriate framework to model the SDQ responses. With this in mind, the current study sought to assess the factorial structure of the SDQ using the more flexible exploratory structural equation modeling approach. Using a large-scale Spanish sample composed of 67,253 youths aged between 10 and 18 years (M = 14.16, SD = 1.07), the results showed that CFA provided a severely biased and overly optimistic assessment of the underlying structure of the SDQ. In contrast, exploratory structural equation modeling revealed a generally weak factorial structure, including questionable indicators with large cross-loadings, multiple error correlations, and significant wording variance. A subsequent Monte Carlo study showed that sample sizes greater than 4,000 would be needed to adequately recover the SDQ loading structure. The findings from this study prevent recommending the SDQ as a screening tool and suggest caution when interpreting previous results in the literature based on CFA modeling.

  18. Uncertainty Estimation for the Determination of Ni, Pb and Al in Natural Water Samples by SPE-ICP-OES

    NASA Astrophysics Data System (ADS)

    Ghorbani, A.; Farahani, M. Mahmoodi; Rabbani, M.; Aflaki, F.; Waqifhosain, Syed

    2008-01-01

    In this paper we propose uncertainty estimation for the analytical results we obtained from the determination of Ni, Pb and Al by solid-phase extraction and inductively coupled plasma optical emission spectrometry (SPE-ICP-OES). The procedure is based on the retention of analytes in the form of 8-hydroxyquinoline (8-HQ) complexes on a mini column of XAD-4 resin and subsequent elution with nitric acid. The influence of various analytical parameters, including the amount of solid phase, pH, elution factors (concentration and volume of eluting solution), volume of sample solution, and amount of ligand, on the extraction efficiency of the analytes was investigated. To estimate the uncertainty of the analytical results obtained, we propose assessing trueness by employing spiked samples. Two types of bias are calculated in the assessment of trueness: a proportional bias and a constant bias. We applied a nested design for calculating the proportional bias and the Youden method to calculate the constant bias. The proportional bias is calculated from spiked samples: the concentration found is plotted against the concentration added, and the slope of the standard addition curve is an estimate of the method recovery. The estimated average recovery of the method in Karaj river water is (1.004±0.0085) for Ni, (0.999±0.010) for Pb and (0.987±0.008) for Al.
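
    The proportional-bias (recovery) estimate described here reduces to the slope of a found-versus-added regression on spiked samples; a minimal sketch with hypothetical concentrations (the Youden constant-bias step, which varies the sample amount, is not shown):

        import numpy as np

        # Hypothetical spiked-sample data: concentration added vs. found (µg/L)
        added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
        found = np.array([0.8, 5.9, 10.7, 20.9, 40.4])

        # Slope of the standard-addition line estimates the method recovery
        # (proportional bias); the intercept gives a crude constant offset.
        slope, intercept = np.polyfit(added, found, 1)
        print(f"recovery ~ {slope:.3f}, offset ~ {intercept:.2f} µg/L")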

  19. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
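
    A rough sketch of the chunk-elimination idea (not the authors' E-RFE implementation): rank genes by the absolute weights of a linear SVM and let the entropy of the weight distribution set how many low-ranked genes are dropped per iteration -- large chunks while the weights look uniform, single genes once a few weights dominate. The chunk-size rule and synthetic data below are illustrative assumptions, and in a real analysis the ranking must be repeated inside each cross-validation fold to avoid selection bias.

        import numpy as np
        from sklearn.svm import LinearSVC

        def entropy_rfe(X, y, target=20, bins=20):
            keep = np.arange(X.shape[1])
            while keep.size > target:
                w = np.abs(LinearSVC(dual=False).fit(X[:, keep], y).coef_).ravel()
                p, _ = np.histogram(w, bins=bins)
                p = p[p > 0] / p.sum()
                h = -(p * np.log(p)).sum() / np.log(bins)   # normalized entropy
                # Drop a chunk proportional to the entropy, at least one gene.
                n_drop = max(1, int(h * 0.5 * (keep.size - target)))
                keep = keep[np.argsort(w)[n_drop:]]         # shed lowest-weight genes
            return keep

        rng = np.random.default_rng(5)
        X = rng.standard_normal((60, 500))                  # 60 arrays x 500 genes
        y = rng.integers(0, 2, 60)
        X[:, :5] += 1.5 * y[:, None]                        # 5 informative genes
        print(entropy_rfe(X, y))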

  20. Selection bias in studies of human reproduction-longevity trade-offs.

    PubMed

    Helle, Samuli

    2017-12-13

    A shorter lifespan as a potential cost of high reproductive effort in humans has intrigued researchers for more than a century. However, the results have been inconclusive so far and despite strong theoretical expectations we do not currently have compelling evidence for the longevity costs of reproduction. Using Monte Carlo simulation, it is shown here that a common practice in human reproduction-longevity studies using historical data (the most relevant data sources for this question), the omission of women who died prior to menopausal age from the analysis, results in severe underestimation of the potential underlying trade-off between reproduction and lifespan. In other words, assuming that such a trade-off is expressed also during reproductive years, the strength of the trade-off between reproduction and lifespan is progressively weakened when women dying during reproductive ages are sequentially and non-randomly excluded from the analysis. In the case of small sample sizes (e.g., a few hundred observations), this selection bias, by reducing statistical power, may even partly explain the null results commonly found in this field. Future studies in this field should thus apply statistical approaches that account for or avoid selection bias in order to recover reliable effect size estimates between reproduction and longevity. © 2017 The Author(s).
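
    The mechanism is easy to reproduce in a toy Monte Carlo simulation (a sketch under assumed parameter values, not the paper's simulation design): impose a true longevity cost per birth, then drop women who die before age 50 and compare the estimated slopes.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 10000
        births = rng.poisson(4, n)
        # Assumed true trade-off: each birth costs 2 years of life on average.
        lifespan = 75 - 2.0 * births + rng.normal(0, 15, n)

        slope_all = np.polyfit(births, lifespan, 1)[0]
        survivors = lifespan >= 50                 # omit pre-menopausal deaths
        slope_sel = np.polyfit(births[survivors], lifespan[survivors], 1)[0]
        print(f"all women: {slope_all:.2f} yr/birth, survivors only: {slope_sel:.2f} yr/birth")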

  1. Complete mitochondrial genome sequence of Urechis caupo, a representative of the phylum Echiura

    PubMed Central

    Boore, Jeffrey L

    2004-01-01

    Background Mitochondria contain small genomes that are physically separate from those of nuclei. Their comparison serves as a model system for understanding the processes of genome evolution. Although hundreds of these genome sequences have been reported, the taxonomic sampling is highly biased toward vertebrates and arthropods, with many whole phyla remaining unstudied. This is the first description of a complete mitochondrial genome sequence of a representative of the phylum Echiura, that of the fat innkeeper worm, Urechis caupo. Results This mtDNA is 15,113 nts in length and 62% A+T. It contains the 37 genes that are typical for animal mtDNAs in an arrangement somewhat similar to that of annelid worms. All genes are encoded by the same DNA strand which is rich in A and C relative to the opposite strand. Codons ending with the dinucleotide GG are more frequent than would be expected from apparent mutational biases. The largest non-coding region is only 282 nts long, is 71% A+T, and has potential for secondary structures. Conclusions Urechis caupo mtDNA shares many features with those of the few studied annelids, including the common usage of ATG start codons, unusual among animal mtDNAs, as well as gene arrangements, tRNA structures, and codon usage biases. PMID:15369601

  2. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
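
    A minimal illustration of the spectral-slope estimation problem on irregular samples (a sketch, not the evaluation framework of the paper): generate Brown noise (β = 2), keep an irregular subset of points, and fit the log-log slope of the Lomb-Scargle periodogram; consistent with the results above, the recovered slope will typically come out low.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(7)
        t_full = np.arange(5000, dtype=float)
        x_full = np.cumsum(rng.standard_normal(5000))    # Brown noise, true beta = 2

        keep = np.sort(rng.choice(5000, size=500, replace=False))  # irregular subset
        t, x = t_full[keep], x_full[keep] - x_full[keep].mean()

        freqs = 2 * np.pi * np.logspace(-3.3, -1.3, 60)  # angular frequencies
        power = lombscargle(t, x, freqs)
        beta_hat = -np.polyfit(np.log10(freqs), np.log10(power), 1)[0]
        print(f"estimated spectral slope ~ {beta_hat:.2f} (true value 2)")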

  3. Clinical and psychopathological features associated with treatment-emergent mania in bipolar-II depressed outpatients exposed to antidepressants.

    PubMed

    Fornaro, Michele; Anastasia, Annalisa; Monaco, Francesco; Novello, Stefano; Fusco, Andrea; Iasevoli, Felice; De Berardis, Domenico; Veronese, Nicola; Solmi, Marco; de Bartolomeis, Andrea

    2018-07-01

    Treatment-emergent affective switch (TEAS), including treatment-emergent mania (TEM), carries a significant burden in the clinical management of bipolar depression, whereas the use of antidepressants raises efficacy, safety and tolerability concerns. The present study assesses the prevalence and clinical correlates of TEM in a selected sample of Bipolar Disorder (BD) Type-II (BD-II) acute depression outpatients. Post-hoc analysis of the clinical and psychopathological features associated with TEM among 91 BD-II depressed outpatients exposed to antidepressants. Second-generation antipsychotics (SGA) (p = .005), lithium (p ≤ .001), cyclothymic/irritable/hyperthymic temperaments (p ≤ .001, p = .001, and p = .003, respectively), rapid-cycling (p = .005) and depressive mixed features (p = .003) differed between TEM+ cases and TEM- controls. Upon multinomial logistic regression, the accounted psychopathological features correctly classified as much as 88.6% of TEM+ cases (35/91 of the overall sample, or 38.46%), yet not statistically significantly [Exp(B) = .032; p = ns]. Specifically, lithium [B = -2.385; p = .001] and SGAs [B = -2.354; p = .002] predicted lower rates of TEM+, in contrast to the number of lifetime previous psychiatric hospitalizations [B = 2.380; p = .002], whereas mixed features did not [B = 1.267; p = ns]. Limitations include the post-hoc design, the lack of a systematic pharmacological history record, the possibility of recall bias and Berkson's bias, the permissive operational criterion for TEM, and the relatively small sample size. Cyclothymic temperament and mixed depression discriminated TEM+ from TEM- cases, although only lithium and the SGAs reliably predicted TEM+/- grouping. Larger-sampled/powered longitudinal replication studies are warranted to allow firm conclusions on the matter, ideally contributing to the identification of clear-cut sub-phenotypes of BD towards patient-tailored pharmacotherapy. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Galaxy And Mass Assembly (GAMA): colour- and luminosity-dependent clustering from calibrated photometric redshifts

    NASA Astrophysics Data System (ADS)

    Christodoulou, L.; Eminian, C.; Loveday, J.; Norberg, P.; Baldry, I. K.; Hurley, P. D.; Driver, S. P.; Bamford, S. P.; Hopkins, A. M.; Liske, J.; Peacock, J. A.; Bland-Hawthorn, J.; Brough, S.; Cameron, E.; Conselice, C. J.; Croom, S. M.; Frenk, C. S.; Gunawardhana, M.; Jones, D. H.; Kelvin, L. S.; Kuijken, K.; Nichol, R. C.; Parkinson, H.; Pimbblet, K. A.; Popescu, C. C.; Prescott, M.; Robotham, A. S. G.; Sharp, R. G.; Sutherland, W. J.; Taylor, E. N.; Thomas, D.; Tuffs, R. J.; van Kampen, E.; Wijesinghe, D.

    2012-09-01

    We measure the two-point angular correlation function of a sample of 4,289,223 galaxies with r < 19.4 mag from the Sloan Digital Sky Survey (SDSS) as a function of photometric redshift, absolute magnitude and colour down to M_r - 5 log h = -14 mag. Photometric redshifts are estimated from ugriz model magnitudes and two Petrosian radii using the artificial neural network package ANNz, taking advantage of the Galaxy And Mass Assembly (GAMA) spectroscopic sample as our training set. These photometric redshifts are then used to determine absolute magnitudes and colours. For all our samples, we estimate the underlying redshift and absolute magnitude distributions using Monte Carlo resampling. These redshift distributions are used in Limber's equation to obtain spatial correlation function parameters from power-law fits to the angular correlation function. We confirm an increase in clustering strength for sub-L* red galaxies compared with ~L* red galaxies at small scales in all redshift bins, whereas for the blue population the correlation length is almost independent of luminosity for ~L* galaxies and fainter. A linear relation between relative bias and log luminosity is found to hold down to luminosities L ~ 0.03L*. We find that the redshift dependence of the bias of the L* population can be described by the passive evolution model of Tegmark & Peebles. A visual inspection of a random sample from our r < 19.4 sample of SDSS galaxies reveals that about 10 per cent are spurious, with a higher contamination rate towards very faint absolute magnitudes due to over-deblended nearby galaxies. We correct for this contamination in our clustering analysis.

  5. Association between attention bias to threat and anxiety symptoms in children and adolescents.

    PubMed

    Abend, Rany; de Voogd, Leone; Salemink, Elske; Wiers, Reinout W; Pérez-Edgar, Koraly; Fitzgerald, Amanda; White, Lauren K; Salum, Giovanni A; He, Jie; Silverman, Wendy K; Pettit, Jeremy W; Pine, Daniel S; Bar-Haim, Yair

    2018-03-01

    Considerable research links threat-related attention biases to anxiety symptoms in adults, whereas extant findings on threat biases in youth are limited and mixed. Inconsistent findings may arise due to substantial methodological variability and limited sample sizes, emphasizing the need for systematic research on large samples. The aim of this report is to examine the association between threat bias and pediatric anxiety symptoms using standardized measures in a large, international, multi-site youth sample. A total of 1,291 children and adolescents from seven research sites worldwide completed a standardized attention bias assessment task (dot-probe task) and a child anxiety symptoms measure (Screen for Child Anxiety Related Emotional Disorders). Using a dimensional approach to symptomatology, we conducted regression analyses predicting overall, and disorder-specific, anxiety symptoms severity, based on threat bias scores. Threat bias correlated positively with overall anxiety symptoms severity (β = 0.078, P = .004). Furthermore, threat bias was positively associated specifically with social anxiety (β = 0.072, P = .008) and school phobia (β = 0.076, P = .006) symptoms severity, but not with panic, generalized anxiety, or separation anxiety symptoms. These associations were not moderated by age or gender. These findings indicate associations between threat bias and pediatric anxiety symptoms, and suggest that vigilance to external threats manifests more prominently in symptoms of social anxiety and school phobia, regardless of age and gender. These findings point to the role of attention bias to threat in anxiety, with implications for translational clinical research. The significance of applying standardized methods in multi-site collaborations for overcoming challenges inherent to clinical research is discussed. © 2017 Wiley Periodicals, Inc.

  6. Halo assembly bias and the tidal anisotropy of the local halo environment

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Hahn, Oliver; Sheth, Ravi K.

    2018-05-01

    We study the role of the local tidal environment in determining the assembly bias of dark matter haloes. Previous results suggest that the anisotropy of a halo's environment (i.e. whether it lies in a filament or in a more isotropic region) can play a significant role in determining the eventual mass and age of the halo. We statistically isolate this effect, using correlations between the large-scale and small-scale environments of simulated haloes at z = 0 with masses in the range 10^11.6 ≲ (m/h^-1 M⊙) ≲ 10^14.9. We probe the large-scale environment, using a novel halo-by-halo estimator of linear bias. For the small-scale environment, we identify a variable α_R that captures the tidal anisotropy in a region of radius R = 4R_200b around the halo and correlates strongly with halo bias at fixed mass. Segregating haloes by α_R reveals two distinct populations. Haloes in highly isotropic local environments (α_R ≲ 0.2) behave as expected from the simplest, spherically averaged analytical models of structure formation, showing a negative correlation between their concentration and large-scale bias at all masses. In contrast, haloes in anisotropic, filament-like environments (α_R ≳ 0.5) tend to show a positive correlation between bias and concentration at any mass. Our multiscale analysis cleanly demonstrates how the overall assembly bias trend across halo mass emerges as an average over these different halo populations, and provides valuable insights towards building analytical models that correctly incorporate assembly bias. We also discuss potential implications for the nature and detectability of galaxy assembly bias.

  7. Particle-in-cell study of the ion-to-electron sheath transition

    DOE PAGES

    Scheiner, Brett; Baalrud, Scott D.; Hopkins, Matthew M.; ...

    2016-08-09

    The form of a sheath near a small electrode, with bias changing from below to above the plasma potential, is studied using 2D particle-in-cell simulations. When the electrode is biased within T_e/2e below the plasma potential, the electron velocity distribution functions (EVDFs) exhibit a loss-cone type truncation due to fast electrons overcoming the small potential difference between the electrode and plasma. No sheath is present in this regime, and the plasma remains quasineutral up to the electrode. The EVDF truncation leads to presheath-like density and flow velocity gradients. Once the bias exceeds the plasma potential, an electron sheath is present. In this case, the truncation-driven behavior persists, but is accompanied by a shift in the maximum value of the EVDF that is not present in the negative bias cases. In conclusion, the flow moment has significant contributions from both the flow shift of the EVDF maximum and the loss-cone truncation.

  8. Not all numbers are equal: preferences and biases among children and adults when generating random sequences.

    PubMed

    Towse, John N; Loetscher, Tobias; Brugger, Peter

    2014-01-01

    We investigate the number preferences of children and adults when generating random digit sequences. Previous research has shown convincingly that adults prefer smaller numbers when randomly choosing between responses 1-6. We analyze randomization choices made by both children and adults, considering a range of experimental studies and task configurations. Children - most of whom are between 8 and 11 years - show a preference for relatively large numbers when choosing numbers 1-10. Adults show a preference for small numbers with the same response set. We report a modest association between children's age and numerical bias. However, children also exhibit a small number bias with a smaller response set available, and they show a preference specifically for the numbers 1-3 across many datasets. We argue that number space demonstrates both continuities (numbers 1-3 have a distinct status) and change (a developmentally emerging bias toward the left side of representational space or lower numbers).

  9. Statin treatment and functional outcome after ischemic stroke: case-control and meta-analysis.

    PubMed

    Biffi, Alessandro; Devan, William J; Anderson, Christopher D; Cortellini, Lynelle; Furie, Karen L; Rosand, Jonathan; Rost, Natalia S

    2011-05-01

    Multiple studies suggest that statin use before acute ischemic stroke is associated with improved functional outcome. However, available evidence is conflicting, and several published reports are limited by small sample sizes. We therefore investigated the effect of antecedent use of statins on stroke outcome by performing a meta-analysis of all results from published studies as well as our own unpublished data. We performed a systematic literature search and meta-analysis of studies investigating the association between prestroke statin use and clinical outcome and included additional data from 126 prestroke statin users and 767 nonusers enrolled at our institution. A total of 12 studies, comprising 2013 statin users and 9682 nonusers, was meta-analyzed using a random effects model. We also meta-analyzed results for individual Trial of ORG 10172 in Acute Stroke Treatment stroke subtypes to determine whether the effect of statin use differed across subtypes using the Breslow-Day test. Meta-analysis of all available data identified an association between prestroke statin use and improved functional outcome (OR, 1.62; 95% CI, 1.39 to 1.88), but we uncovered evidence of publication bias. The effect of statin use on functional outcome was found to be larger for small vessel strokes compared with other subtypes (Breslow-Day P=0.008). Antecedent use of statins is associated with improved outcome in patients with acute ischemic stroke. This association appears to be stronger in patients with small vessel stroke subtype. However, evidence of publication bias in the existing literature suggests these findings should be interpreted with caution.
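
    The pooling behind a summary like OR 1.62 (95% CI, 1.39 to 1.88) is a standard inverse-variance random-effects calculation on the log odds ratio scale; a sketch with hypothetical study-level estimates (DerSimonian-Laird between-study variance):

        import numpy as np

        def random_effects_or(log_or, se):
            # DerSimonian-Laird random-effects pooling of study log odds ratios.
            w = 1 / se**2
            fixed = np.sum(w * log_or) / np.sum(w)
            Q = np.sum(w * (log_or - fixed) ** 2)
            tau2 = max(0.0, (Q - (len(log_or) - 1)) /
                       (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_re = 1 / (se**2 + tau2)
            pooled = np.sum(w_re * log_or) / np.sum(w_re)
            se_p = np.sqrt(1 / np.sum(w_re))
            return np.exp([pooled, pooled - 1.96 * se_p, pooled + 1.96 * se_p])

        # Hypothetical study-level odds ratios and standard errors (log scale)
        log_or = np.log(np.array([1.4, 1.9, 1.2, 2.1, 1.6]))
        se = np.array([0.20, 0.25, 0.15, 0.30, 0.22])
        print(random_effects_or(log_or, se))   # pooled OR, lower CI, upper CI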

  10. Statin Treatment and Functional Outcome after Ischemic Stroke: Case-control and Meta-analysis

    PubMed Central

    Biffi, A; Devan, WJ; Anderson, CD; Cortellini, L; Furie, KL; Rosand, J; Rost, NS

    2011-01-01

    Background and Purpose Multiple studies suggest that statin use prior to acute ischemic stroke (AIS) is associated with improved functional outcome. However, available evidence is conflicting, and several published reports are limited by small sample sizes. We therefore investigated the effect of antecedent use of statins on stroke outcome by performing a meta-analysis of all results from published studies as well as our own unpublished data. Methods We performed a systematic literature search and meta-analysis of studies investigating the association between pre-stroke statin use and clinical outcome, and included additional data from 126 pre-stroke statin users and 767 non-users enrolled at our Institution. A total of 12 studies, comprising 2013 statin users and 9682 non-users, were meta-analyzed using a random effects model. We also meta-analyzed results for individual TOAST stroke subtypes to determine whether the effect of statin use differed across subtypes, using the Breslow-Day (BD) test. Results Meta-analysis of all available data identified an association between pre-stroke statin use and improved functional outcome (Odds Ratio = 1.62, 95% Confidence Interval: 1.39-1.88), but we uncovered evidence of publication bias. The effect of statin use on functional outcome was found to be larger for small vessel strokes compared to other subtypes (BD p = 0.008). Conclusions Antecedent use of statins is associated with improved outcome in AIS patients. This association appears to be stronger in patients with small vessel stroke subtype. However, evidence of publication bias in the existing literature suggests these findings should be interpreted with caution. PMID:21415396

  11. Adaptive enhanced sampling by force-biasing using neural networks

    NASA Astrophysics Data System (ADS)

    Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.

    2018-04-01

    A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
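
    The core replacement described here -- a smooth regressor in place of per-bin force estimates -- can be sketched with a generic multilayer perceptron (this is not the authors' implementation and omits the adaptive, self-regularizing machinery; the collective-variable data below are synthetic):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(8)

        # Hypothetical accumulated data: collective-variable values and noisy
        # instantaneous generalized-force samples from a running simulation.
        cv = rng.uniform(-np.pi, np.pi, 2000)
        mean_force = np.sin(cv) + 0.5 * np.sin(2 * cv)
        force_samples = mean_force + rng.normal(0, 1.0, cv.size)

        # Fit a smooth, continuous estimate of the mean force; the network can
        # be evaluated anywhere (including sparsely sampled regions) and its
        # negative prediction applied as the biasing force.
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
        net.fit(cv.reshape(-1, 1), force_samples)
        grid = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
        bias_force = -net.predict(grid)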

  12. Differences in Preschool Children's Conceptual Strategies When Thinking about Animate Entities and Artifacts.

    ERIC Educational Resources Information Center

    Blanchet, Nicole; Dunham, Philip J.; Dunham, Frances

    2001-01-01

    Preschoolers viewed stimulus sets comprising a sample picture and three types of matches and were asked to choose a match that "went with" each sample. Children's choices indicated that a shift occurs between 3 and 4 years of age from a taxonomic bias to a thematic bias. Animate sample stimuli enhanced children's tendency to adopt…

  13. Experimental measurement of the plasma conductivity of Z93 and Z93P thermal control paint

    NASA Technical Reports Server (NTRS)

    Hillard, G. Barry

    1993-01-01

    Two samples each of Z93 and Z93P thermal control paint were exposed to a simulated space environment in a plasma chamber. The samples were biased through a series of voltages ranging from -200 volts to +300 volts, and electron and ion currents were measured. By comparing the currents to those of pure metal samples of the same size and shape, the conductivity of the samples was calculated. Measured conductivity was dependent on the bias potential in all cases. For Z93P, conductivity was approximately constant over much of the bias range, and we find a value of 0.5 micro-mhos per square meter for both electron and ion current. For Z93, the dependence on bias was much more pronounced, but the conductivity was approximately one order of magnitude larger. In addition to presenting these results, this report documents all of the experimental data as well as the statistical analyses performed.

  14. Threat-Related Attention Bias Variability and Posttraumatic Stress.

    PubMed

    Naim, Reut; Abend, Rany; Wald, Ilan; Eldar, Sharon; Levi, Ofir; Fruchter, Eyal; Ginat, Karen; Halpern, Pinchas; Sipos, Maurice L; Adler, Amy B; Bliese, Paul D; Quartana, Phillip J; Pine, Daniel S; Bar-Haim, Yair

    2015-12-01

    Threat monitoring facilitates survival by allowing one to efficiently and accurately detect potential threats. Traumatic events can disrupt healthy threat monitoring, inducing biased and unstable threat-related attention deployment. Recent research suggests that greater attention bias variability, that is, attention fluctuations alternating toward and away from threat, occurs in participants with PTSD relative to healthy comparison subjects who were either exposed or not exposed to traumatic events. The current study extends findings on attention bias variability in PTSD. Previous measurement of attention bias variability was refined by employing a moving average technique. Analyses were conducted across seven independent data sets; in each, data on attention bias variability were collected by using variants of the dot-probe task. Trauma-related and anxiety symptoms were evaluated across samples by using structured psychiatric interviews and widely used self-report questionnaires, as specified for each sample. Analyses revealed consistent evidence of greater attention bias variability in patients with PTSD following various types of traumatic events than in healthy participants, participants with social anxiety disorder, and participants with acute stress disorder. Moreover, threat-related, and not positive, attention bias variability was correlated with PTSD severity. These findings carry possibilities for using attention bias variability as a specific cognitive marker of PTSD and for tailoring protocols for attention bias modification for this disorder.
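
    As a rough illustration of a moving-average attention-bias-variability score, the sketch below pairs congruent and incongruent dot-probe reaction times, averages the trial-level bias in sliding windows, and normalises the spread of those window means by the overall mean reaction time. The window size, pairing, and normalisation are simplifying assumptions, not the study's exact protocol:

    ```python
    import numpy as np

    def attention_bias_variability(rt_congruent, rt_incongruent, window=10):
        """Moving-average attention bias variability (ABV), illustrative only."""
        bias = np.asarray(rt_incongruent) - np.asarray(rt_congruent)   # per-trial bias
        n_windows = len(bias) - window + 1
        window_means = np.array([bias[i:i + window].mean() for i in range(n_windows)])
        mean_rt = np.concatenate([rt_congruent, rt_incongruent]).mean()
        return window_means.std() / mean_rt

    # Hypothetical reaction times (ms) from one participant
    rng = np.random.default_rng(1)
    rt_c = rng.normal(520, 40, 60)
    rt_i = rng.normal(540, 60, 60)
    print(round(attention_bias_variability(rt_c, rt_i), 4))
    ```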

  15. Directional asymmetry of pelvic vestiges in threespine stickleback.

    PubMed

    Bell, Michael A; Khalef, Victoria; Travis, Matthew P

    2007-03-15

    Extensive reduction of the size and complexity of the pelvic skeleton (i.e., pelvic reduction) has evolved repeatedly in Gasterosteus aculeatus. Asymmetrical pelvic vestiges tend to be larger on the left side (i.e., left biased) in populations studied previously. Loss of Pitx1 expression is associated with pelvic reduction in G. aculeatus, and pelvic reduction maps to the Pitx1 locus. Pitx1 knockouts in mice have reduced hind limbs, but the left limb is larger. Thus left-biased directional asymmetry of stickleback pelvic vestiges may indicate the involvement of Pitx1 in pelvic reduction. We examined 6,356 specimens from 27 Cook Inlet populations of G. aculeatus with extensive pelvic reduction. Samples from 20 populations exhibit the left bias in asymmetrical pelvic vestiges expected if Pitx1 is involved, and three have a slight, non-significant left bias. However, samples from three populations have a significant right bias, and one large sample from another population has equal frequencies of specimens with larger vestiges on the left or right side. A sample of fossil threespine stickleback also has significantly left-biased pelvic vestiges. These results suggest that silencing of Pitx1, or of the developmental pathway in which it functions in the pelvis, is the usual cause of pelvic reduction in most Cook Inlet populations of G. aculeatus, and that it caused pelvic reduction at least 10 million years ago in a stickleback population. A different developmental genetic mechanism is implicated for the three populations with right-biased pelvic vestiges and for the population without directional asymmetry. (c) 2006 Wiley-Liss, Inc.

  16. Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality

    PubMed Central

    Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel

    2016-01-01

    Many people living in low- and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991–2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA). PMID:27468328
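
    For intuition, here is a minimal sketch of a survey-weighted proportion with a simple Taylor-linearised, single-stage design-based variance. It ignores stratification, clustering, and the space-time smoothing of the paper, and all data are synthetic stand-ins rather than DHS records:

    ```python
    import numpy as np

    def weighted_proportion(deaths, weights):
        """Hajek-style weighted proportion with a linearised variance estimate.

        `deaths` is a 0/1 indicator per sampled child and `weights` the survey
        design weights; stratification and clustering are ignored for brevity.
        """
        deaths = np.asarray(deaths, float)
        weights = np.asarray(weights, float)
        p = np.sum(weights * deaths) / np.sum(weights)
        z = weights * (deaths - p) / np.sum(weights)     # linearised contributions
        n = len(deaths)
        var = n / (n - 1) * np.sum(z**2)                 # with-replacement approximation
        return p, np.sqrt(var)

    rng = np.random.default_rng(2)
    y = rng.binomial(1, 0.08, 500)          # hypothetical under-five death indicators
    w = rng.uniform(0.5, 2.0, 500)          # hypothetical design weights
    print(weighted_proportion(y, w))
    ```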

  17. The effect of antenatal education in small classes on obstetric and psycho-social outcomes - a systematic review.

    PubMed

    Brixval, Carina Sjöberg; Axelsen, Solveig Forberg; Lauemøller, Stine Glenstrup; Andersen, Stig Krøger; Due, Pernille; Koushede, Vibeke

    2015-02-28

    The aims of antenatal education are broad and encompass outcomes related to pregnancy, birth, and parenthood. Both the form and content of antenatal education have changed over time without evidence of effects on relevant outcomes. The effect of antenatal education delivered in small groups may differ from that of other formats due to, for example, group dynamics. The objective of this systematic review is to assess the effects of antenatal education in small groups on obstetric as well as psycho-social outcomes. Bibliographic databases (Medline, EMBASE, CENTRAL, CINAHL, Web of Science, and PsycINFO) were searched. We included randomized and quasi-randomized trials irrespective of language, publication year, publication type, and publication status. Only trials carried out in the Western world were considered in this review. Studies were assessed for bias using the Cochrane risk of bias tool. Results are presented as structured summaries of the included trials and as forest plots. We identified 5,708 records, of which 17 studies met the inclusion criteria. Studies varied greatly in the content of the experimental and control conditions. Each outcome was reported in only one or a few trials, leading to limited or uncertain confidence in effect estimates. Given the heterogeneity in interventions and outcomes, and the high risk of bias across studies, we are unable to draw definitive conclusions as to the impact of small-group antenatal education on obstetric and psycho-social outcomes. Insufficient evidence exists as to whether antenatal education in small classes is effective with regard to obstetric and psycho-social outcomes. We recommend updating this review once well-conducted randomized controlled trials with a low risk of bias become available. PROSPERO CRD42013004319.

  18. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
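
    A minimal delete-one jackknife sketch for the variance of a ratio estimate under simple random sampling; the report itself concerns complex samples and replicate weights, which this toy version does not attempt, and the data are hypothetical:

    ```python
    import numpy as np

    def jackknife_ratio_variance(y, x):
        """Delete-one jackknife variance of the ratio estimator R = sum(y)/sum(x)."""
        y, x = np.asarray(y, float), np.asarray(x, float)
        n = len(y)
        r_full = y.sum() / x.sum()
        r_loo = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])
        var = (n - 1) / n * np.sum((r_loo - r_loo.mean())**2)
        return r_full, var

    rng = np.random.default_rng(3)
    x = rng.gamma(5, 2, 200)                 # hypothetical denominators
    y = 0.4 * x + rng.normal(0, 1, 200)      # hypothetical numerators
    r, v = jackknife_ratio_variance(y, x)
    print(f"R = {r:.3f}, jackknife SE = {np.sqrt(v):.4f}")
    ```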

  19. A cautionary note on substituting spatial subunits for repeated temporal sampling in studies of site occupancy

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.

    2009-01-01

    1. Assessing the probability that a given site is occupied by a species of interest is important to resource managers, as well as metapopulation or landscape ecologists. Managers require accurate estimates of the state of the system, in order to make informed decisions. Models that yield estimates of occupancy, while accounting for imperfect detection, have proven useful by removing a potentially important source of bias. To account for detection probability, multiple independent searches per site for the species are required, under the assumption that the species is available for detection during each search of an occupied site. 2. We demonstrate that when multiple samples per site are defined by searching different locations within a site, absence of the species from a subset of these spatial subunits induces estimation bias when locations are exhaustively assessed or sampled without replacement. 3. We further demonstrate that this bias can be removed by choosing sampling locations with replacement, or if the species is highly mobile over a short period of time. 4. Resampling an existing data set does not mitigate bias due to exhaustive assessment of locations or sampling without replacement. 5. Synthesis and applications. Selecting sampling locations for presence/absence surveys with replacement is practical in most cases. Such an adjustment to field methods will prevent one source of bias, and therefore produce more robust statistical inferences about species occupancy. This will in turn permit managers to make resource decisions based on better knowledge of the state of the system.
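
    A small Monte Carlo sketch of the mechanism described above: with illustrative numbers (4 subunits, 2 of them occupied, 2 searches, perfect detection within an occupied subunit), the probability that an occupied site yields an all-miss detection history differs between subunits chosen with and without replacement, which is what biases a model that assumes independent, identically distributed searches:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def p_all_misses(n_subunits=4, n_occupied=2, k_searches=2,
                     replace=True, n_sims=100_000):
        """Monte Carlo probability that every search of an occupied site misses the
        species when searches target subunits chosen with or without replacement
        (illustrative parameters, perfect detection within an occupied subunit)."""
        occupied = np.arange(n_occupied)            # subunits 0..n_occupied-1 hold the species
        misses = 0
        for _ in range(n_sims):
            picks = rng.choice(n_subunits, size=k_searches, replace=replace)
            if not np.isin(picks, occupied).any():
                misses += 1
        return misses / n_sims

    print("with replacement   :", p_all_misses(replace=True))    # ~0.25
    print("without replacement:", p_all_misses(replace=False))   # ~0.17
    ```

    An occupancy model that assumes the with-replacement (independent-search) form would therefore misread part of the without-replacement detection pattern, which is the bias the note warns about.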

  20. High–frequency cluster radio galaxies: Luminosity functions and implications for SZE–selected cluster samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Nikhel; Saro, A.; Mohr, J. J.

    We study the overdensity of point sources in the direction of X-ray-selected galaxy clusters from the meta-catalogue of X-ray-detected clusters of galaxies (MCXC; ⟨z⟩ = 0.14) at South Pole Telescope (SPT) and Sydney University Molonglo Sky Survey (SUMSS) frequencies. Flux densities at 95, 150 and 220 GHz are extracted from the 2500 deg² SPT-SZ survey maps at the locations of SUMSS sources, producing a multifrequency catalogue of radio galaxies. In the direction of massive galaxy clusters, the radio galaxy flux densities at 95 and 150 GHz are biased low by the cluster Sunyaev–Zel’dovich Effect (SZE) signal, which is negative at these frequencies. We employ a cluster SZE model to remove the expected flux bias and then study these corrected source catalogues. We find that the high-frequency radio galaxies are centrally concentrated within the clusters and that their luminosity functions (LFs) exhibit amplitudes that are characteristically an order of magnitude lower than the cluster LF at 843 MHz. We use the 150 GHz LF to estimate the impact of cluster radio galaxies on an SPT-SZ like survey. The radio galaxy flux typically produces a small bias on the SZE signal and has negligible impact on the observed scatter in the SZE mass–observable relation. If we assume there is no redshift evolution in the radio galaxy LF, then 1.8 ± 0.7 per cent of the clusters with detection significance ξ ≥ 4.5 would be lost from the sample; allowing for redshift evolution of the form (1 + z)^2.5 increases the incompleteness to 5.6 ± 1.0 per cent. Improved constraints on the evolution of the cluster radio galaxy LF require a larger cluster sample extending to higher redshift.

  2. Identifying influencers from sampled social networks

    NASA Astrophysics Data System (ADS)

    Tsugawa, Sho; Kimura, Kazuma

    2018-10-01

    Identifying influencers who can spread information to many other individuals from a social network is a fundamental research task in the network science research field. Several measures for identifying influencers have been proposed, and the effectiveness of these influence measures has been evaluated for the case where the complete social network structure is known. However, it is difficult in practice to obtain the complete structure of a social network because of missing data, false data, or node/link sampling from the social network. In this paper, we investigate the effects of node sampling from a social network on the effectiveness of influence measures at identifying influencers. Our experimental results show that the negative effect of biased sampling, such as sample edge count, on the identification of influencers is generally small. For social media networks, we can identify influencers whose influence is comparable with that of those identified from the complete social networks by sampling only 10%-30% of the networks. Moreover, our results also suggest the possible benefit of network sampling in the identification of influencers. Our results show that, for some networks, nodes with higher influence can be discovered from sampled social networks than from complete social networks.
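
    A hedged sketch of the kind of comparison described above, using degree as a simple stand-in for an influence measure on a synthetic scale-free graph (networkx; graph size, sample fraction, and k are all illustrative): rank nodes inside a random node sample, then check how influential those nodes are in the full network.

    ```python
    import random
    import networkx as nx

    def influence_ratio(graph, sample_fraction=0.3, k=20, seed=0):
        """Compare the full-network degree ('influence' proxy) of the top-k nodes
        ranked inside a random node sample against the true top-k nodes."""
        rng = random.Random(seed)
        n_sample = int(sample_fraction * graph.number_of_nodes())
        sampled = rng.sample(list(graph.nodes()), n_sample)
        sub = graph.subgraph(sampled)
        top_sub = [n for n, _ in sorted(sub.degree, key=lambda t: -t[1])[:k]]
        top_full = [n for n, _ in sorted(graph.degree, key=lambda t: -t[1])[:k]]
        mean_deg = lambda nodes: sum(dict(graph.degree(nodes)).values()) / len(nodes)
        return mean_deg(top_sub) / mean_deg(top_full)   # near 1 = comparable influence

    g = nx.barabasi_albert_graph(5000, 3, seed=1)       # stand-in social graph
    print(round(influence_ratio(g, sample_fraction=0.3, k=20), 2))
    ```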

  3. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
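
    A hedged sketch of one common k-nearest-neighbour density ratio estimator for covariate-shift weights (not necessarily the exact estimator or the cross-validated model-selection procedure of the paper): for each labelled training point, take the distance to its k-th nearest training neighbour and count unlabelled test points inside that radius.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_density_ratio_weights(x_train, x_test, k=10):
        """k-NN importance weights w(x) ~ p_test(x) / p_train(x), illustrative only."""
        nn_train = NearestNeighbors(n_neighbors=k + 1).fit(x_train)   # +1 skips the point itself
        radii = nn_train.kneighbors(x_train)[0][:, -1]
        nn_test = NearestNeighbors().fit(x_test)
        counts = np.array([
            nn_test.radius_neighbors(x_train[i:i + 1], radius=radii[i])[1][0].size
            for i in range(len(x_train))
        ])
        return (len(x_train) / len(x_test)) * counts / k

    # Hypothetical 1-D covariate shift: the training set under-samples large x
    rng = np.random.default_rng(5)
    x_tr = rng.normal(0.0, 1.0, (2000, 1))
    x_te = rng.normal(0.5, 1.2, (5000, 1))
    w = knn_density_ratio_weights(x_tr, x_te, k=20)
    print(w.mean(), w[x_tr[:, 0] > 1.5].mean())   # weights are larger in the under-sampled tail
    ```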

  4. Single electron counting using a dual MCP assembly

    NASA Astrophysics Data System (ADS)

    Yang, Yuzhen; Liu, Shulin; Zhao, Tianchi; Yan, Baojun; Wang, Peiliang; Yu, Yang; Lei, Xiangcui; Yang, Luping; Wen, Kaile; Qi, Ming; Heng, Yuekun

    2016-09-01

    The gain, pulse height resolution and peak-to-valley ratio of single electrons detected by using a Chevron configured Microchannel Plate (MCP) assembly are studied. The two MCPs are separated by a 280 μm gap and are biased by four electrodes. The purpose of the study is to determine the optimum bias voltage arrangements for single electron counting. By comparing the results of various bias voltage combinations, we conclude that good performance for the electron counting can be achieved by operating the MCP assembly in saturation mode. In addition, by applying a small reverse bias voltage across the gap while adjusting the bias voltages of the MCPs, optimum performance of electron counting can be obtained.

  5. [Attributable risk of co-morbid substance use disorder in poor observance to pharmacological treatment and the occurrence of relapse in schizophrenia].

    PubMed

    Ameller, A; Gorwood, P

    2015-04-01

    There are numerous risk factors involved in poor (incomplete) compliance with pharmacological treatment, and in the associated relapse risk, for patients with schizophrenia. Comorbid substance use disorders are considered among the most important ones, although how much their presence increases the risk of poorer observance (and the higher risk of relapse) has not yet been assessed. This measure would be important, especially if the published literature on the topic provides sufficient material to perform a meta-analysis and to assess potential biases such as those related to publication time (new studies are easier to publish when positive) or sample size (small samples might drive the global positive conclusion). A PubMed(®) search was made, screening the following terms between 1996 and August 2014: "Addiction AND (Observance OR Adherence) AND schizophrenia AND (French OR English [Language])" and "(Substance Abuse OR substance dependance) AND Outcome AND schizophrenia AND (French OR English [Language])". Studies were included if they described two patient groups (schizophrenia with and without a present substance use disorder) and assessed the studied outcome. MetaWin(®) version 2 was used for the meta-analysis; publication time bias was assessed with a non-parametric correlation and sample size bias with normal quantile plots. An attributable risk was also computed, on the basis of the odds ratio derived from the meta-analysis and the prevalence of the analyzed trait (associated substance use disorder). Eight studies could be included in the meta-analysis, showing that the presence of a substance use disorder significantly increases the risk of poor observance of pharmacological treatment (OR=2.18 [1.84-2.58]), with no significant bias detected, either linked to time (rho=0.287, P=0.490) or to sample size (Kendall's Tau=-0.286, P=0.322). The related attributable risk is 18.50%. Only three studies could be used for the meta-analysis of the risk of relapse associated with the presence of substance use disorders. The corresponding odds ratio is 1.52 [1.19-1.94] and the attributable risk is 31.20%, but the search for biases could not be performed because of the small number of studies. These results shed light on the importance of comorbid substance use disorder in explaining the poor observance frequently observed in patients with schizophrenia. Indeed, having an associated substance use disorder doubles the risk of poor compliance with pharmacological treatment, with this comorbidity accounting for roughly a fifth of the overall risk. Although the number of available studies does not allow definite conclusions, the meta-analysis of prospective studies focusing this time on the risk of relapse requiring hospitalization also supports a significant role of the associated substance use disorder. These results argue in favor of developing specific strategies to better treat patients with dual diagnoses, i.e. schizophrenia and substance use disorder. Copyright © 2015 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
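
    The attributable-risk arithmetic can be sketched with Levin's population attributable fraction, treating the pooled odds ratio as an approximate relative risk; the 20% comorbidity prevalence below is an illustrative assumption, since the abstract does not state the value used:

    ```python
    def population_attributable_fraction(odds_ratio, prevalence):
        """Levin's formula, treating the odds ratio as an approximate relative
        risk; `prevalence` is that of the exposure (comorbid substance use)."""
        return prevalence * (odds_ratio - 1) / (1 + prevalence * (odds_ratio - 1))

    # Pooled OR from the abstract; the 20% prevalence is an illustrative assumption.
    print(f"{population_attributable_fraction(2.18, 0.20):.1%}")   # roughly 19%
    ```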

  6. Attention Bias toward Threat in Pediatric Anxiety Disorders

    ERIC Educational Resources Information Center

    Roy, Amy Krain; Vasa, Roma A.; Bruck, Maggie; Mogg, Karin; Bradley, Brendan P.; Sweeney, Michael; Bergman, R. Lindsey; McClure-Tone, Erin B.; Pine, Daniel S.

    2008-01-01

    Attention bias towards threat faces is examined for a large sample of anxiety-disordered youths using a visual probe task. The results showed that anxious youths exhibited a selective bias towards threat due to perturbation in the neural mechanisms that control vigilance.

  7. Origin of tensile strength of a woven sample cut in bias directions

    PubMed Central

    Pan, Ning; Kovar, Radko; Dolatabadi, Mehdi Kamali; Wang, Ping; Zhang, Diantang; Sun, Ying; Chen, Li

    2015-01-01

    Textile fabrics are highly anisotropic, so their mechanical properties, including strengths, are a function of direction. An extreme case is when a woven fabric sample is cut in such a way that the bias angle, and hence the tension loading direction, is around 45° relative to the principal directions. Then, once loaded, no yarn in the sample is held at both ends, so the yarns have to build up their internal tension entirely via yarn–yarn friction at the interlacing points. The overall strength of such a sample is a result of contributions from the yarns being pulled out and those broken during the process, and thus becomes a function of the bias direction angle θ, sample width W and length L, along with other factors known to affect fabric strength tested in principal directions. Furthermore, in such a bias sample, when the major parameters, e.g. the sample width W, change, not only do the resultant strengths differ, but the strength-generating mechanisms (or failure types) also vary. This is an interesting problem and is analysed in this study. More specifically, the issues examined in this paper include the exact mechanisms and details of how each interlacing point imparts the frictional constraint for a yarn to acquire tension to the level of its strength when neither yarn end is actively held by the testing grips; the theoretical expression of the critical yarn length for a yarn to be able to break rather than be pulled out, as a function of the related factors; and the general relations between the tensile strength of such a bias sample and its structural properties. Finally, theoretical predictions are compared with our experimental data. PMID:26064655

  8. Revisiting the child health-wealth nexus.

    PubMed

    Fakir, Adnan M S

    2016-12-01

    The causal link between a household's economic standing and child health is known to suffer from endogeneity. While past studies have found the causal link to be small, albeit statistically significant, this paper estimates the causal effect to investigate whether the effect of income, after controlling for the endogeneity, remains small in the long run. By correcting for the bias, and knowing the bias direction, one can also infer the underlying backward effect. This paper uses an instrumental variables two-stage least squares estimation on the Young Lives 2009 cross-sectional dataset from Andhra Pradesh, India, to understand the aforementioned relationship. The selected measure of household economic standing differentially affects the estimation. There is a significant positive effect of both short-run household expenditure and long-run household wealth on child stunting, with the latter having a larger impact. The backward link running from child health to household income is likely an inverse association in our sample, with lower child health inducing higher earnings. While higher average community education improved child health, increased community entertainment expenditure is found to have a negative effect. While policies geared towards improving household wealth will decrease child stunting in the long run, maternal education and the community play an equally reinforcing role in improving child health and are perhaps faster routes to achieving the goal of better child health in the short run.
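
    A minimal two-stage least squares sketch with a single instrument, written directly with numpy; the variables, instrument, and data below are hypothetical stand-ins rather than the Young Lives measures:

    ```python
    import numpy as np

    def two_stage_least_squares(y, x_endog, z_instrument, controls=None):
        """Manual 2SLS: regress the endogenous regressor on the instrument
        (plus controls), then regress the outcome on the fitted values."""
        n = len(y)
        ones = np.ones((n, 1))
        c = np.empty((n, 0)) if controls is None else np.asarray(controls)
        # Stage 1: endogenous regressor ~ instrument + controls
        z_mat = np.hstack([ones, z_instrument.reshape(-1, 1), c])
        x_hat = z_mat @ np.linalg.lstsq(z_mat, x_endog, rcond=None)[0]
        # Stage 2: outcome ~ fitted regressor + controls
        x_mat = np.hstack([ones, x_hat.reshape(-1, 1), c])
        beta = np.linalg.lstsq(x_mat, y, rcond=None)[0]
        return beta[1]                     # coefficient on the instrumented regressor

    # Hypothetical data: wealth is endogenous to child height-for-age (haz)
    rng = np.random.default_rng(6)
    n = 3000
    u = rng.normal(size=n)                 # unobserved confounder
    z = rng.normal(size=n)                 # instrument (e.g. an exogenous shock)
    wealth = 0.8 * z + 0.5 * u + rng.normal(size=n)
    haz = 0.3 * wealth - 0.7 * u + rng.normal(size=n)
    print(two_stage_least_squares(haz, wealth, z))   # close to the true 0.3
    ```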

  9. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
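
    A hedged sketch of Harrell-style bootstrap optimism correction of the C statistic for a logistic model, on synthetic data with few events per variable; scikit-learn supplies the model and the AUC, and nothing here reproduces the paper's clinical data sets:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def optimism_corrected_auc(x, y, n_boot=200, seed=0):
        """Bootstrap optimism correction of the apparent C statistic (AUC)."""
        rng = np.random.default_rng(seed)
        model = LogisticRegression(max_iter=1000).fit(x, y)
        apparent = roc_auc_score(y, model.predict_proba(x)[:, 1])
        optimism = []
        n = len(y)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                         # bootstrap resample
            mb = LogisticRegression(max_iter=1000).fit(x[idx], y[idx])
            auc_boot = roc_auc_score(y[idx], mb.predict_proba(x[idx])[:, 1])
            auc_orig = roc_auc_score(y, mb.predict_proba(x)[:, 1])
            optimism.append(auc_boot - auc_orig)                # optimism of this refit
        return apparent - np.mean(optimism)

    # Small hypothetical data set with few events per variable
    rng = np.random.default_rng(7)
    x = rng.normal(size=(120, 10))
    y = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])), 120)
    print(round(optimism_corrected_auc(x, y), 3))
    ```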

  10. Counting glomeruli and podocytes: rationale and methodologies

    PubMed Central

    Puelles, Victor G.; Bertram, John F.

    2015-01-01

    Purpose of review There is currently much interest in the numbers of both glomeruli and podocytes. This interest stems from greater understanding of the effects of suboptimal fetal events on nephron endowment, the associations between low nephron number and chronic cardiovascular and kidney disease in adults, and the emergence of the podocyte depletion hypothesis. Recent findings Obtaining accurate and precise estimates of glomerular and podocyte number has proven surprisingly difficult. When whole kidneys or large tissue samples are available, design-based stereological methods are considered gold-standard because they are based on principles that negate systematic bias. However, these methods are often tedious and time-consuming, and oftentimes inapplicable when dealing with small samples such as biopsies. Therefore, novel methods suitable for small tissue samples, and innovative approaches to facilitate high-throughput measurements, such as magnetic resonance imaging (MRI) to estimate glomerular number and flow cytometry to estimate podocyte number, have recently been described. Summary This review describes current gold-standard methods for estimating glomerular and podocyte number, as well as methods developed in the past 3 years. We are now better placed than ever before to accurately and precisely estimate glomerular and podocyte number, and to examine relationships between these measurements and kidney health and disease. PMID:25887899

  11. Thickness dependent exchange bias in martensitic epitaxial Ni-Mn-Sn thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behler, Anna (Department of Physics, Institute for Solid State Physics, Dresden University of Technology, 01062 Dresden); Teichert, Niclas

    2013-12-15

    A thickness dependent exchange bias in the low temperature martensitic state of epitaxial Ni-Mn-Sn thin films is found. The effect can be retained down to very small thicknesses. For a Ni₅₀Mn₃₂Sn₁₈ thin film, which does not undergo a martensitic transformation, no exchange bias is observed. Our results suggest that a significant interplay between ferromagnetic and antiferromagnetic regions, which is the origin of exchange bias, is only present in the martensite. The finding is supported by ab initio calculations showing that the antiferromagnetic order is stabilized in the martensitic phase.

  12. Biased Metropolis Sampling for Rugged Free Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.

    2003-11-01

    Metropolis simulations of all-atom models of peptides (i.e. small proteins) are considered. Inspired by the funnel picture of Bryngelson and Wolynes, a transformation of the updating probabilities of the dihedral angles is defined, which uses probability densities from a higher temperature to improve the algorithmic performance at a lower temperature. The method is suitable for canonical as well as for generalized ensemble simulations. A simple approximation to the full transformation is tested at room temperature for Met-Enkephalin in vacuum. Integrated autocorrelation times are found to be reduced by factors close to two, and a similar improvement due to generalized ensemble methods enters multiplicatively.

  13. Beam test results of the BTeV silicon pixel detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabriele Chiodini et al.

    2000-09-28

    The authors describe the results of the BTeV silicon pixel detector beam test. The pixel detectors under test used samples of the first two generations of Fermilab pixel readout chips, FPIX0 and FPIX1 (indium bump-bonded to ATLAS sensor prototypes). The spatial resolution achieved using analog charge information is excellent for a large range of track inclinations. The resolution is still very good using only 2-bit charge information. A relatively small dependence of the resolution on bias voltage is observed. The resolution is observed to depend dramatically on the discriminator threshold, and it deteriorates rapidly for thresholds above 4000 e⁻.

  14. Smart phone, smart science: how the use of smartphones can revolutionize research in cognitive science.

    PubMed

    Dufau, Stephane; Duñabeitia, Jon Andoni; Moret-Tatay, Carmen; McGonigal, Aileen; Peeters, David; Alario, F-Xavier; Balota, David A; Brysbaert, Marc; Carreiras, Manuel; Ferrand, Ludovic; Ktori, Maria; Perea, Manuel; Rastle, Kathy; Sasburg, Olivier; Yap, Melvin J; Ziegler, Johannes C; Grainger, Jonathan

    2011-01-01

    Investigating human cognitive faculties such as language, attention, and memory most often relies on testing small and homogeneous groups of volunteers coming to research facilities where they are asked to participate in behavioral experiments. We show that this limitation and sampling bias can be overcome by using smartphone technology to collect data in cognitive science experiments from thousands of subjects from all over the world. This mass coordinated use of smartphones creates a novel and powerful scientific "instrument" that yields the data necessary to test universal theories of cognition. This increase in power represents a potential revolution in cognitive science.

  15. Racial and Ethnic Bias in Test Construction. Final Report.

    ERIC Educational Resources Information Center

    Green, Donald Ross

    To determine if tryout samples typically used for item selection contribute to test bias against minority groups, item analyses were made of the California Achievement Tests using seven subgroups of the standardization sample: Northern White Suburban, Northern Black Urban, Southern White Suburban, Southern Black Rural, Southern White Rural,…

  16. Racial and Ethnic Bias in Test Construction.

    ERIC Educational Resources Information Center

    Green, Donald Ross

    To determine if tryout samples typically used for item selection contribute to test bias against minority groups, item analyses were made of the California Achievement Tests using seven sub-groups of the standardization sample: Northern White Suburban, Northern Black Urban, Southern White Suburban, Southern Black Rural, Southern White Rural,…

  17. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    PubMed

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the results with finite element and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more pronounced at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy that is convenient for clinicians to use.

  18. Investigation of Particle Sampling Bias in the Shear Flow Field Downstream of a Backward Facing Step

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Kjelgaard, Scott O.; Hepner, Timothy E.

    1990-01-01

    The flow field about a backward facing step was investigated to determine the characteristics of particle sampling bias in the various flow phenomena. The investigation used the velocity:data rate correlation coefficient as a measure of statistical dependence, and thus of the degree of velocity bias. While the investigation found negligible dependence within the free stream region, increased dependence was found within the boundary and shear layers. Full classic correction techniques over-compensated the data, since the dependence was weak even in the boundary layer and shear regions. The paper emphasizes the necessity of determining the degree of particle sampling bias for each measurement ensemble rather than using generalized assumptions to correct the data. Further, it recommends that the velocity:data rate correlation coefficient become a standard statistical calculation in the analysis of all laser velocimeter data.
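
    A hedged sketch of the recommended diagnostic: correlate each velocity sample with a local estimate of the data rate, here taken as the number of arrivals divided by the time they span (a simplifying assumption about how "data rate" is operationalised). The synthetic data mimic a flow where faster fluid sweeps more particles through the probe volume, the classic source of velocity bias:

    ```python
    import numpy as np

    def velocity_data_rate_correlation(arrival_times, velocities, window=20):
        """Pearson correlation between measured velocities and the local data
        rate, estimated from the time spanned by `window` neighbouring arrivals."""
        t = np.asarray(arrival_times)
        v = np.asarray(velocities)
        half = window // 2
        idx = np.arange(half, len(t) - half)
        local_rate = window / (t[idx + half] - t[idx - half])   # arrivals per unit time
        return np.corrcoef(v[idx], local_rate)[0, 1]

    # Hypothetical ensemble with a coherent, slowly varying flow velocity
    rng = np.random.default_rng(8)
    n = 4000
    flow = 30.0 + 8.0 * np.sin(np.linspace(0, 20 * np.pi, n))   # coherent fluctuation
    v = flow + rng.normal(0.0, 1.0, n)                          # individual particle velocities
    gaps = rng.exponential(1.0 / (0.05 * flow))                 # arrival rate tied to flow speed
    t = np.cumsum(gaps)
    print(round(velocity_data_rate_correlation(t, v), 3))       # clearly positive here
    ```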

  19. Misclassification bias in areal estimates

    Treesearch

    Raymond L. Czaplewski

    1992-01-01

    In addition to thematic maps, remote sensing provides estimates of area in different thematic categories. Areal estimates are frequently used for resource inventories, management planning, and assessment analyses. Misclassification causes bias in these statistical areal estimates. For example, if a small percentage of a common cover type is misclassified as a rare...
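
    A minimal sketch of the standard error-matrix adjustment of mapped areas: reallocate each mapped class area across reference classes using row-normalised confusion-matrix proportions. This is a generic direct estimator, not necessarily the estimator of this paper, and the counts and areas are illustrative; it also assumes the accuracy-assessment sample within each map class is roughly representative of that class:

    ```python
    import numpy as np

    # Confusion matrix from an accuracy-assessment sample: rows = map class,
    # columns = reference class (counts are illustrative).
    confusion = np.array([
        [430,  20,  10],      # common cover type
        [ 25, 300,  15],
        [ 15,  10,  75],      # rare cover type, inflated by misclassified commons
    ])
    mapped_area_ha = np.array([52000.0, 31000.0, 7000.0])   # pixel-counted map areas

    # Row-normalise to estimate P(reference class | map class), then reallocate
    # each mapped area across reference classes and sum by column.
    row_props = confusion / confusion.sum(axis=1, keepdims=True)
    adjusted_area_ha = mapped_area_ha @ row_props

    print("mapped   :", mapped_area_ha)
    print("adjusted :", np.round(adjusted_area_ha))
    ```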

  20. Biased and unbiased strategies to identify biologically active small molecules.

    PubMed

    Abet, Valentina; Mariani, Angelica; Truscott, Fiona R; Britton, Sébastien; Rodriguez, Raphaël

    2014-08-15

    Small molecules are central players in chemical biology studies. They promote the perturbation of cellular processes underlying diseases and enable the identification of biological targets that can be validated for therapeutic intervention. Small molecules have been shown to accurately tune a single function of pluripotent proteins in a reversible manner with exceptional temporal resolution. The identification of molecular probes and drugs remains a worthy challenge that can be addressed by the use of biased and unbiased strategies. Hypothesis-driven methodologies employ a known biological target to synthesize complementary hits, while discovery-driven strategies offer the additional means of identifying previously unanticipated biological targets. This review article provides a general overview of recent synthetic frameworks that gave rise to an impressive arsenal of biologically active small molecules with unprecedented cellular mechanisms. Copyright © 2014. Published by Elsevier Ltd.
